Composable Part-Based Manipulation

Weiyu Liu1, Jiayuan Mao2, Joy Hsu1, Tucker Hermans3,4, Animesh Garg3,5, Jiajun Wu1
1Stanford  2MIT  3NVIDIA  4University of Utah  5Georgia Tech

Abstract: In this paper, we propose composable part-based manipulation (CPM), a novel approach that leverages object-part decomposition and part-part correspondences to improve learning and generalization of robotic manipulation skills. By considering the functional correspondences between object parts, we conceptualize functional actions, such as pouring and constrained placing, as combinations of different correspondence constraints. CPM comprises a collection of composable diffusion models, where each model captures a different inter-object correspondence. These diffusion models can generate parameters for manipulation skills based on the specific object parts. Leveraging part-based correspondences coupled with the task decomposition into distinct constraints enables strong generalization to novel objects and object categories. We validate our approach in both simulated and real-world scenarios, demonstrating its effectiveness in achieving robust and generalized manipulation capabilities. For videos and additional results, see our website: https://cpmcorl2023.github.io/.

Keywords: Manipulation, Part Decomposition, Diffusion Model

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Figure 1: CPM composes part-based diffusion models to predict target object poses directly from point clouds. In this example, we show that the "pouring" action is decomposed into three part-based correspondences (⟨align, rim, rim⟩, ⟨facing-up, handle, body⟩, ⟨tilt, body, body⟩), which generalize manipulation across object categories and from simulation to the real world. Train: pour from glass, pan, and bowl to bowl in simulation. Test: pour from mug to bowl in the real world.

1 Introduction

Compositionality provides appealing benefits in robotic manipulation, as it enables efficient learning, reasoning, and planning. Prior works have extensively studied the decomposition of scenes into objects and their relationships [1, 2, 3], as well as the division of long-horizon plans into primitive skills [3, 4], in order to navigate complex environments and devise long-horizon plans. In this paper, we present a different view of compositionality by considering object-part decomposition based on functionality (e.g., rim, handle, body), and leverage such decomposition to improve the learning of geometric and physical relationships for robot manipulation.

In the context of language descriptions of objects, part names not only describe the geometric shapes of the parts but also capture their functional affordances. For instance, as depicted in Figure 1, for the action of "pouring", the rims define the boundary for alignment between the objects, the body of the pouring vessel should be tilted for the action, and its handle provides a constraint on the direction the object should face when pouring. Leveraging this knowledge of part affordances, we posit that a family of functional actions, such as pouring and constrained placing, can be conceptualized as a combination of functional correspondences between object parts. Modeling actions using such a decomposition yields two important generalizations. First, it enables action generalization to novel instances from the same object category. Second and more importantly, it facilitates generalization to unseen object categories.
For example, after learning part affordances for the "pouring" action, our robot trained on "pour from bowls" and "... pans" can generalize to "pour from mugs", with no additional training necessary for manipulation with the new object category.

Motivated by these insights, we present composable part-based manipulation (CPM). CPM comprises a collection of diffusion models, where each model captures the correspondence between parts of different objects. These conditional diffusion models take the geometry of the object parts as input and generate parameters for manipulation skills, such as the starting and ending poses of a bowl during the pouring action. Specifically, each model outputs a distribution of feasible trajectories that satisfy a particular correspondence. After learning a collection of composable diffusion models, we represent actions as combinations of part-part correspondences. During inference, we leverage the composition of primitive diffusion models to sample trajectories that adhere to all the part correspondences. This approach improves generalization to novel object categories over models that do not reason about both parts and composable correspondence constraints.

In summary, this paper makes two key contributions. First, we propose composable part-based manipulation, which models manipulation actions as a composition of part-part correspondences between objects. Second, we develop diffusion models trained to capture primitive functional correspondences that can be flexibly recombined during inference. CPM achieves strong generalization across various dimensions, including novel object instances and object categories. We validate the efficacy of CPM on both PyBullet-based simulations and real-robot experiments.

2 Related Work

Object representations for manipulation. Prior works use segmentations of common object parts (e.g., blades, lids, and handles) for manipulating articulated objects [5, 6, 7, 8] as well as for transfer to novel objects [9, 10]. A common approach that has been shown effective across different manipulation domains [11, 12, 13] first predicts which part of an object the robot should focus on (e.g., the handle), and then predicts an action relative to the part. Closely related is visual affordance detection [14, 15, 16], which segments objects into different functional regions, such as graspable parts and support surfaces of objects. These functional regions can be shared by more distinct objects, and can be useful for generalizing task-oriented grasping between object categories [17, 18]. Keypoints are another representation that is robust to large intra-category shape variation and topology changes [19]. Each keypoint set can provide essential pose information, which is lacking in previous segmentation approaches, to support tasks such as hanging mugs on pegs by their handles. The initial supervised approach [19] has been extended to methods that discover keypoints from interactions [20, 21] and from unlabeled videos [22]. Recently, implicit object representations have been used to provide correspondence between any point within the same object category, generalizing across 6-DoF pose changes [23, 24, 25].
Large pretrained vision models also support the development of object representations; recent works leverage these models to significantly reduce domain-specific training data, showing strong results for open-vocabulary part segmentation [26], few-shot affordance segmentation [27], and one-shot pose estimation on any novel object from the same category [28]. Despite this huge progress, we still lack object representations that support strong generalization of manipulation to new object categories. We focus on tackling this problem.

Learning interactions of objects. Works in robotics have established the importance of modeling interactions of objects. Recent approaches directly work on 3D observations, without relying on known object models. Learning spatial relations between objects enables the picking and placing of objects at specific locations [1, 29, 30, 2, 31], such as placing an object in the middle drawer, stacking objects, and setting the table. These relations can be extended to represent the logical state of the world to support planning for long-horizon tasks [3, 32, 33]. Other works focus on learning lower-level interactions between objects, such as placing an object stably on a messy tabletop and pushing an object using a tool [34, 35]. For example, O2O-Afford [34] correlates feature maps extracted from two objects using a point convolution and outputs a point-wise interaction heatmap.
sha1_base64="qT+6EftEtJ4oZA+aT07HQ481bfE=">AAACy3icfVFdb9MwFHXC1ygf6+CRF0NVaUhVlUwI9jhAAl5AnVi3SU2JHNdprfkjsm8QxeSRX8Uv4afwhp1l0rohrpTo6Jx7fK6vi0pwC0nyO4pv3Lx1+87W3d69+w8ebvd3Hh1bXRvKplQLbU4LYpngik2Bg2CnlWFEFoKdFGdvg37ylRnLtTqCdcXmkiwVLzkl4Km8/2uYscpyodUXtwvPm9xlhXSfm5yP8KemN8TLPIMVA9IbBuGoa3jdhP+7ZtT2lJ4E9g0AHOgsr4iBxvOZJLCiRHjX5cMvvMH53/ALMZweRhjhNoX7FC58Qt4fJOOkLXwdpB0YoK4mef9PttC0lkwBFcTaWZpUMHd+XE4F84G1ZRWhZ2TJZh4qIpmdu3bJDR56ZoFLbfynALfsZYcj0tq1LHxnuLe9qgXyX9qshnJ/7riqamCKngeVtcCgcXgxvOCGURBrDwg13M+K6YoYQsG/60ZKITfu4EIWaC1s0/O7Sq9u5jo43hunL8d7hy8GB2+6rW2hJ+gZ2kUpeoUO0Ac0QVNEo6fR+2gSHcYfYxt/j3+ct8ZR53mMNir++Rey7uGE</latexit>✏✓,tilt<latexit sha1_base64="+fqA0FQehuFsG/PLoRhstfX2gEk=">AAAC0XicfVFdaxNBFJ1dv2r8ivroy2AIVghht4jtY1UQX5RKm6SQjcvsZDYZOh/Lzl0xDAPiq7/Kn+FP8c2Z7RaaVrywy+Gce+bcuVNUghtIkt9RfOPmrdt3du727t1/8PBR//GTqdFNTdmEaqHr04IYJrhiE+Ag2GlVMyILwWbF2bugz76y2nCtTmBTsYUkK8VLTgl4Ku//GmasMlxo9cXuwkuX26yQ9tjlfIQ/ud4Qr/IM1gxIbxiEk67hjQv/927U9pSeBPYNACzoLK9IDc7zmSSwpkR41+XDL7zB+d/wCzGcHkYY4TaFgy0J5WqV5U3lXN4fJOOkLXwdpB0YoK6O8v6fbKlpI5kCKogx8zSpYGH90JwK5mMbwypCz8iKzT1URDKzsO2qHR56ZolLXftPAW7Zyw5LpDEbWfjOcHtzVQvkv7R5A+XBwnJVNcAUPQ8qG4FB4/BueMlrRkFsPCC05n5WTNekJhT8626lFHLrDjZkgdbCuJ7fVXp1M9fBdG+cvh7vfX41OHzbbW0HPUPP0S5K0T46RB/QEZogGr2IPkbTaBYfx5v4e/zjvDWOOs9TtFXxz78VnOQL</latexit>✏✓,facingup<latexit sha1_base64="6tEI7fXhWSYCbdrI/2hVg4+2IOM=">AAACaXicbVDLbhMxFHWGVwmvFDYINhZRpCJV0UyFaJcFJMQKFdG0RZkw8jh3Eqt+jOw7iMjyt/EdfAArJFizw5POgrRcydbROefea5+ylsJhmn7vJdeu37h5a+t2/87de/cfDLYfnjjTWA4TbqSxZyVzIIWGCQqUcFZbYKqUcFqev2n10y9gnTD6GFc1zBRbaFEJzjBSxeDTiOZQOyGN/ux38HkofF4q/zEUYpe+D/0RXRQ5LgFZhK1y3DlehfZ+G3ajqYoUwldE9GjyomYWQygGw3ScroteBVkHhqSro2LwJ58b3ijQyCVzbpqlNc58HCa4hNDPGwc14+dsAdMINVPgZn4dQaCjyMxpZWw8Guma/bfDM+XcSpXRqRgu3WWtJf+nTRusDmZe6LpB0PxiUdVIioa2edK5sMBRriJg3Ir4VsqXzDKOMfWNLaXa+INvd6Ex0oV+zCq7nMxVcLI3zl6O9z68GB6+7lLbIk/JM7JDMrJPDsk7ckQmhJNv5Af5RX73fibbyePkyYU16XU9j8hGJcO/jtS9nw==</latexit>ftopart<latexit sha1_base64="+X11N6DDJgkTslNumghcvbcIAB8=">AAACzHicfVFda9swFJW9j3bZV9Y+7kU0BDoIwS5j62PXQelTabemLcSZkRU5EZUlI12PBaHX/ar9kf2UvU1yXWjasQs2h3Pu0bm6KmrBDSTJ7yh+9PjJ043NZ73nL16+et1/s3VhVKMpm1AllL4qiGGCSzYBDoJd1ZqRqhDssrj+HPTL70wbruQ5rGo2q8hC8pJTAp7K+7+GGasNF0p+s7vwzuU2Kyr71eV8hE9cb4gXeQZLBqQ3DMJ51/DJhf+RG7U9pSeB/QAACyrLa6LBeT6rCCwpEd519/Bbb3D+N/xWDKeHEUa4TeFgieAL6VzeHyTjpC38EKQdGKCuTvP+n2yuaFMxCVQQY6ZpUsPM+nk5FcwnNobVhF6TBZt6KEnFzMy2W3Z46Jk5LpX2nwTcsncdllTGrKrCd4aLm/taIP+lTRso92eWy7oBJulNUNkIDAqHJ8NzrhkFsfKAUM39rJguiSYU/MOupRTV2h1syAKlhHE9v6v0/mYegou9cfphvHf2fnBw2G1tE71FO2gXpegjOkDH6BRNEI12ouPoLPoSn8QQ29jdtMZR59lGaxX//Aug+uHc</latexit>✏✓,align<latexit sha1_base64="e/VTTwH/614p3YtdKa7KcUjTzLg=">AAACEnicbVDLSgNBEJz1GeMr6tHLYBA8hd0gKp4CXjxGMA9IljA7mSRD5rHM9AphyS94EvRbvIlXf8BP8eZssgeT2NBQVHVT3RXFglvw/W9vbX1jc2u7sFPc3ds/OCwdHTetTgxlDaqFNu2IWCa4Yg3gIFg7NozISLBWNL7L9NYTM5Zr9QiTmIWSDBUfcEogo7o2kb1S2a/4s8KrIMhBGeVV75V+un1NE8kUUEGs7QR+DGFKDHAq2LTYTSyLCR2TIes4qIhkNkxnt07xuWP6eKCNawV4xv7dSIm0diIjNykJjOyylpH/aZ0EBjdhylWcAFN0bjRIBAaNs8dxnxtGQUwcINRwdyumI2IIBRfPgkskF35IMy/QWthp0WUVLCezCprVSnBVqT5clmu3eWoFdIrO0AUK0DWqoXtURw1E0Qg9o1f05r14796H9zkfXfPynRO0UN7XL377ntQ=</latexit>XPoint Cloud TransformerDiffusion Transformer Pose Encoder<latexit 
sha1_base64="JFIyIqOEkSL96w1PNKK6ZpOrrrc=">AAACKnicbVDLSsNAFJ34rPUVdaebYBEUpCRdqMuqG5cV7AOaECbTSTt2JhNmJkoJAb/GlaAf4NIvcFfc+hPunLRd2NYDwz2cey9n7gliSqSy7aGxsLi0vLJaWCuub2xubZs7uw3JE4FwHXHKRSuAElMS4boiiuJWLDBkAcXNoH+d95sPWEjCozs1iLHHYDciIUFQack3992Apa3MT/Nam9TL7PQ+y3yzZJftEax54kxIqXoSBo8f7nvNN3/cDkcJw5FCFErZduxYeSkUiiCKs6KbSBxD1Idd3NY0ggxLLx3dkFlHWulYIRf6RcoaqX83UsikHLBATzKoenK2l4v/9dqJCi+8lERxonCExkZhQi3FrTwQq0MERooONIFIEP1XC/WggEjp2KZcAjZ1Q5p7Kc6pzIo6K2c2mXnSqJSds3LlVod2BcYogANwCI6BA85BFdyAGqgDBJ7AM3gFb8aL8WkMja/x6IIx2dkDUzC+fwHKJqxO</latexit>XPA,j<latexit sha1_base64="wrnhrhzab79ObCCMxCN6TuVT3zc=">AAACKnicbVDLSsNAFJ34rPUVdaebYBEUpCRdqMuiIC4j2AekIUymk3boTCbMTIQSAn6NK0G/wbUrd8WtP+HOSduFbT0w3MO593LmnjChRCrbHhlLyyura+uljfLm1vbOrrm335Q8FQg3EKdctEMoMSUxbiiiKG4nAkMWUtwKBzdFv/WIhSQ8flDDBPsM9mISEQSVlgLzsBOyrJ0HWVHdab3Nzwd5HpgVu2qPYS0SZ0oq9bOPd1eUPDcwfzpdjlKGY4UolNJz7ET5GRSKIIrzcieVOIFoAHvY0zSGDEs/G9+QWyda6VoRF/rFyhqrfzcyyKQcslBPMqj6cr5XiP/1vFRFV35G4iRVOEYToyilluJWEYjVJQIjRYeaQCSI/quF+lBApHRsMy4hm7khK7wU51TmZZ2VM5/MImnWqs5FtXavQ7sGE5TAETgGp8ABl6AO7oALGgCBJ/AMXsGb8WJ8GiPjazK6ZEx3DsAMjO9fO/Sr7A==</latexit>XPF,k<latexit sha1_base64="3u1nRu5P5VJ+x+NSxU2Dim9fHLs=">AAACLHicbVC7TsMwFHXKq5RXeGwwRCAkEKhKOgBjBQsDQ0EUkNpQOa7bmthxZN8gVVEWvoYJCb6FBSHExj+w4bQMFDiS5eNz79XxPUHMmQbXfbEKY+MTk1PF6dLM7Nz8gr24dKFlogitE8mlugqwppxFtA4MOL2KFcUi4PQyCI/y+uUtVZrJ6Bz6MfUF7kaswwgGI7XstWYg0vPsOt2C7ayV5q+auW/CbNfLWvaGW3YHcP4S75tsVCsr4fvJzlmtZX8225IkgkZAONa64bkx+ClWwAinWamZaBpjEuIubRgaYUG1nw62yJxNo7SdjlTmROAM1J8TKRZa90VgOgWGnv5dy8X/ao0EOgd+yqI4ARqRoVEn4Q5IJ4/EaTNFCfC+IZgoZv7qkB5WmIAJbsQlECM7pLkXSMl1VjJZeb+T+UsuKmVvr1w5NaEdoiGKaBWtoy3koX1URceohuqIoDt0jx7Rk/VgPVuv1tuwtWB9zyyjEVgfX0ooq8Y=</latexit>T(t)Pjk,1<latexit sha1_base64="30ibTJ7MVRmCwncRUkcebQ8ruUw=">AAACS3icdVDLSgMxFM3UV62v+ti5CUpBUcpMFyq4KbpxIVKlVaHWkklTjU0mQ3JHKMN8jF/jStClKz/CleLCTNuFrXoh5OScezm5xw8FN+C6r05mbHxicio7nZuZnZtfyC8unRsVacpqVAmlL31imOABqwEHwS5DzYj0BbvwO4epfnHPtOEqqEI3ZA1JbgLe5pSApZr5/QK+8mVcTa7jDdhMmnH6qtj7rpNse0nuf/EkaebX3aLbK/wbeAOwXi6tdN6Ot84qzfzHVUvRSLIAqCDG1D03hEZMNHAqmDWLDAsJ7ZAbVrcwIJKZRtxbMsEFy7RwW2l7AsA99udETKQxXenbTkng1oxqKfmXVo+gvdeIeRBGwALaN2pHAoPCaWK4xTWjILoWEKq5/Sumt0QTCjbXIRdfDu0Qp16glDBJzmbljSbzG5yXit5OsXRqQztA/cqiVbSGNpCHdlEZHaEKqiGKHtAjekYvzpPz7nw6X/3WjDOYWUZDlZn4BpW9t04=</latexit>T(t)Pjk,N<latexit sha1_base64="T37olucuzTaUf0zP+2cBB5KiAz4=">AAACaHicfVBNTxsxFHS2tNDQj6U9VIiLBUWiEop2OUCPiF56qoIgEClJV17nhbix1yv7LVJk+W/1yKW/oveeKrW3VuKGN+FAoOqTLI9n3vPYk5dSWEyS743o0dLjJ8srT5urz56/eBmvvTqzujIcOlxLbbo5syBFAR0UKKFbGmAql3CeTz7U+vklGCt0cYrTEgaKXRRiJDjDQGVxd5v2c+VO/We3g+985upTO+xfJn439c3/yZ98sw+lFTJc5Po4BmS7tX7iM+GzeCtpJbOiD0F6C7YO3/75+u1y9W87i6/7Q80rBQVyyaztpUmJA8cMCi4hWFUWSsYn7AJ6ARZMgR24WQKebgdmSEfahFUgnbF3JxxT1k5VHjoVw7G9r9Xkv7RehaP3AyeKskIo+NxoVEmKmtZx0qEwwFFOA2DciPBWysfMMI4h9AWXXC38wdVeqLW0vhmySu8n8xCc7bXS/dbecQjtiMxrhWyQTbJDUnJADslH0iYdwskV+UF+kd+Nn1EcvYnW561R43bmNVmoaPMGE4TB+w==</latexit>✏✓,Si<latexit sha1_base64="v5wU0bmUU3WRz92nFB9wmicl/wk=">AAACLHicbVDLSgMxFM3UV62vUZe6CBZBQcqMiLoU3bisaKvQ1iGTphqax5DcEcowG7/GlaDf4kbErf/gzvSxsNYDgcM593JyT5wIbiEI3r3C1PTM7FxxvrSwuLS84q+u1a1ODWU1qoU2NzGxTHDFasBBsJvEMCJjwa7j7lnfv35gxnKtrqCXsJYkd4p3OCXgpMjfbLLEcqHVbbYDu3mUNWOZXeYR38NhHvnloBIMgCdJOCJlNEI18r+bbU1TyRRQQaxthEECrYwY4FSwvNRMLUsI7ZI71nBUEclsKxtckeNtp7RxRxv3FOCB+nsjI9LanozdpCRwb/96ffE/r5FC57iVcZWkwBQdBnVSgUHjfiW4zQ2jIHqOEGq4+yum98QQCq64sZRYjt2Q9bNAa2Hzkusq/NvMJKnvV8LDyv7FQfnkdNRaEW2gLbSDQnSETtA5qqIaougRPaEX9Oo9e2/eh/c5HC14o511NAbv6wfEqKjP</latexit>✏(t)Si,1<latexit 
sha1_base64="CqQquVftToqWZTkc4IA4qIyj4ms=">AAACLHicbVDLSgMxFM3Ud31VXeoiWIQKUmaKqEvRjSupaFVo65BJ0zY0jyG5I5RhNn6NK0G/xY2IW//BneljYdUDgcM593JyTxQLbsH337zc1PTM7Nz8Qn5xaXlltbC2fm11YiirUS20uY2IZYIrVgMOgt3GhhEZCXYT9U4H/s09M5ZrdQX9mDUl6Sje5pSAk8LCVoPFlgut7tIS7GZh2ohkepmFfA+fZ2Gh6Jf9IfBfEoxJEY1RDQtfjZamiWQKqCDW1gM/hmZKDHAqWJZvJJbFhPZIh9UdVUQy20yHV2R4xykt3NbGPQV4qP7cSIm0ti8jNykJdO1vbyD+59UTaB81U67iBJiio6B2IjBoPKgEt7hhFETfEUINd3/FtEsMoeCKm0iJ5MQN6SALtBY2y7uugt/N/CXXlXJwUK5c7BePT8atzaNNtI1KKECH6BidoSqqIYoe0CN6Ri/ek/fqvXsfo9GcN97ZQBPwPr8B9SSo7A==</latexit>✏(t)Si,N<latexit sha1_base64="mhJ+AVftmU1m0Dn1ijHEV2CsyMw=">AAADE3ichVJNj9MwEHXCxy7lqwtHLhZVpUWqqmSFWI4LSIgTKmK7u1JTIsd1WlPHjuIJorL8GzghwW/hhrjyA/gp3LDTrLTdghgpyei9efPG42Sl4Bqi6FcQXrl67frO7o3OzVu379zt7t070aquKBtTJVR1lhHNBJdsDBwEOysrRopMsNNs+cLzpx9YpbmSx7Aq2bQgc8lzTgk4KN0Lwn7CSs2Fku/MPjyyqUmywry1KR/g17bTx/M0gQUD0ul74rgteGb9+6UdNDW5A4F9BAADKklLUoF1eFIQWFAinOpi83OtV/7H/Jz2/f0QA9z4cDBE8Llct9hwibZd/jHGyH3fD5bWpt1eNIyawNtJ3CY91MYo7f5OZorWBZNABdF6EkclTI07NaeCOcNas5LQJZmziUslKZiemuayLO47ZIZzVblHAm7QiwpDCq1XReYq/dz6MufBv3GTGvKnU8NlWQOTdG2U1wKDwv7m8YxXjIJYuYTQirtZMV2QilBw/8eGS1ZsnMF4L1BKaNtxu4ovb2Y7OTkYxk+GB28e946et1vbRQ/QQ7SPYnSIjtArNEJjRAMefAq+BF/Dz+G38Hv4Y10aBq3mPtqI8OcfT4T9fQ==</latexit>T(t)Pj,k<latexit sha1_base64="kOps49F//byAFz3unoCUCeIPHac=">AAAC73ichVFNb9NAEF2bj5bwFeDIZUUUqUhRZFcIOBYQiBMqomkrxcFab9bJKvvh7o4R0cq/gxviyk/ixO/gxq7rSk2DxEi2nt7Mmzc7U1SCW0iSX1F87fqNmzu7t3q379y9d7//4OGx1bWhbEK10Oa0IJYJrtgEOAh2WhlGZCHYSbF6E/InX5ixXKsjWFdsJslC8ZJTAp7K+7+HGassF1p9dnvwtMldVkj3qcn5CH9oekO8yDNYMiC9YUgcdQWvmvB/14zamtKTwL4CgAOd5RUx0Hg+kwSWlAivutz8QhuU/zG/SIf+YYgRbn04OCL4QvkWmx7JlkfeHyTjpA28DdIODFAXh3n/TzbXtJZMARXE2mmaVDBz/kmcCuYNa8sqQldkwaYeKiKZnbn2EA0eemaOS238pwC37GWFI9LatSx8ZZjbXs0F8l+5aQ3ly5njqqqBKXpuVNYCg8bhqnjODaMg1h4QarifFdMlMYSCv/2GSyE33uCCF2gtbNPzu0qvbmYbHO+P0+fj/Y/PBgevu63tosfoCdpDKXqBDtB7dIgmiEZvo1UEUR2fxd/i7/GP89I46jSP0EbEP/8Cm1LwBg==</latexit>T(0)AF“Pour”Sampled Target PosesSegmented PartsInitial SceneExecution<latexit sha1_base64="hmc/Ol2wByB0lLttonACgh+OJ/I=">AAADHXichVLfb9MwEHbCj40wWAePvFhUlYZUVcmENh4HSIgnVMS6TWpK5LhOa+rYUXxBVFb+EJ6Q4G/hDfGK9qfsDTvNpHUb4qQ4p++7u+/u7LQQXEMYnnn+rdt37m5s3gvubz14uN3ZeXSsVVVSNqJKqPI0JZoJLtkIOAh2WpSM5KlgJ+niteNPPrNScyWPYFmwSU5mkmecErBQsuNt9WJWaC6U/Gh24VmdmDjNzYc64X38rg56eJbEMGdAgp4jjtqAl7U739T9JiazILAvAGBAxUlBSqgtHucE5pQIm3W5+EWuy/yP+AXt6rsm+rjR4WCI4DO5KrGmEt6g8o8+hvb/qb+wEUU7Y9LphoOwMXzdiVqni1obJp3zeKpolTMJVBCtx1FYwMTY+TkVrA7iSrOC0AWZsbF1JcmZnpjm2mrcs8gUZ6q0nwTcoJczDMm1XuapjXQD6KucA2/ixhVkLyaGy6ICJulKKKsEBoXdG8BTXjIKYmkdQktue8V0TkpCwb6UNZU0X5vBOC1QSug6sLuKrm7munO8N4j2B3vvn3cPX7Vb20RP0FO0iyJ0gA7RWzREI0Q97X31vns//G/+T/+X/3sV6nttzmO0Zv6fv1BDAWA=</latexit>p✓<latexit sha1_base64="pTqE88oby3YNTSrDuoIS2e/D9pA=">AAADJXichVLfixMxEM6uv871V0998yVYCieUsnuI+ngqiE9S8Xp30K1LNk3b2GwSNrNiCfvH+CTo3+KbCD75d/hmst3C9e7Egc0O3zcz38wkuRbcQBz/CsJLl69cvbZzPbpx89btO53du0dGVSVlI6qEKk9yYpjgko2Ag2AnumSkyAU7zpcvPX/8kZWGK3kIK80mBZlLPuOUgIOy3eB+L2XacKHke7sHj+rMpnlh39UZ7+M3ddTD8yyFBQMS9Txx2AY8r/35qu43MTMHAvsEABZUmmlSQu3wtCCwoES4rNPFN7k+8z/iG9rX9030caPDwRLB53JdYkslvkDlH30M3f9Df+kj9GZIN61e8KzTjQdxY/i8k7ROF7U2zDp/0qmiVcEkUEGMGSexhol1e+BUsDpKK8M0oUsyZ2PnSlIwM7HN9dW455ApnqnSfRJwg57OsKQwZlXkLtIPYs5yHryIG1cwezaxXOoKmKRroVklMCjs3wKe8pJRECvnEFpy1yumC1ISCu7FbKnkxdYM1muBUsLUkdtVcnYz552j/UHyZLD/9nH34EW7tR30AD1EeyhBT9EBeo2GaIRoYIPPwdfgW/gl/B7+CH+uQ8OgzbmHtiz8/RfuEgRC</latexit>g(a) System Overview(b) Composable Part-Based Diffusion Models(c) Point-Cloud Diffusion Transformer<latexit 
sha1_base64="pqldO+U6c+9Jdc5t/BwlGU7Q3+I=">AAADb3ichVJNb9NAEF3HfJTw0RQOHJDQiihSK4XIrhBwLCAhTiiIpq0Up9Z6s06WrL0r7xgRrfzz+BH8Bk5IcODGruNC07RiJHtHM+/NmxlNogTXEATfvJZ/7fqNm1u32rfv3L233dm5f6RlWVA2olLI4iQhmgmesxFwEOxEFYxkiWDHyeKNyx9/ZoXmMj+EpWKTjMxynnJKwIbiHe+0FzGluZD5qdmFvSo2UZKZj1XM+/h91e7hWRzBnAFp91zisAG8qtz/bdWvMakNAvsCAAZkFCtSQGXjUUZgTomwrPPFz7iO+R/xs7Sr75ro41qHgyGCz/JViTWV4BKVK/oY2vdTf+EQ6u+Qdlw15xtl4Wm4SbsS8k877nSDQVAb3nTCxumixoZx53c0lbTMWA5UEK3HYaBgYuxGORXMapaaKUIXZMbG1s1JxvTE1IdQ4Z6NTHEqC/vlgOvoeYYhmdbLLLFI17q+mHPBy3LjEtKXE8NzVQLL6UooLQUGid1V4SkvGAWxtA6hBbe9YjonBaFgb29NJcnWZjBOC6QUut5VeHEzm87R/iB8Ptj/8Kx78LrZ2hZ6hJ6gXRSiF+gAvUNDNELU++p99356v1o//If+Yx+voC2v4TxAa+bv/QGPux3u</latexit>T(t1)AFFigure 2: (a) Given a task, the partial point clouds of the anchor and function objects, and their parts extractedfrom a learned segmentation model gφ, we sample a sequence of transformations from a learned distribution pθto parameterize the function object’s trajectory. (b) CPM can be generalized to novel object categories becauseit decomposes each action to a collection of functional correspondences between object parts. To sample thetarget transformations that satisfy all functional correspondences, CPM combines the noise predictions from acollection of primitive diffusion models at inference time. (c) Each primitive diffusion model learns a target posedistribution that satisfies a particular part-part correspondence, based on the point clouds of the object parts.Functionals defined on top of object-wise signed distance functions can also represent constraints oninteractions between objects such as contact and containment [ 36]. Flow-based methods can alsolearn static relations between objects [ 37] as well as tool use [ 38], directly from point clouds. A maindifference between our work and these methods is that we bridge the modeling of interactions andobject representations through object-part decomposition and learned part-part correspondences, andenjoy empirically validated improvement in generalization.Composable diffusion models. A set of recent works have investigated the potential of diffusionmodels in robotics [ 39,40,41,42,43,44,45,46,2,47]. Research demonstrates that diffusionmodels can generate multimodal distributions over actions [ 41] and can handle spatial ambiguities insymmetric objects [ 2]. In image domains, prior work has shown a connection between conditionaldiffusion models and energy-based models, and proposed techniques to generate images by combiningdiffusion noises for different language conditions [ 48]. Recent work provides a more principled wayto sample from individually trained models using MCMC [ 49]. Another approach combines diffusionmodels by using additional trained adapters for generating faces [ 50]. CPM combines both lines ofwork to propose composable diffusion models for robotic manipulation. In doing so we must addresstwo challenges of adapting diffusion models to (1) output poses instead of pixels and (2) combineactions in different part frames, while retaining generalization to different distributions.3 Composable Part-Based ManipulationIn this work, our goal is to model functional actions involving an anchor object Athat remains staticand a function object Fthat is being actively manipulated. Shown in Fig. 
3.1 Action as Part-Based Functional Correspondences

Composable part-based manipulation (CPM) models each action M as a composition of functional correspondences between object parts. We formalize the symbolic representation of each correspondence C ∈ C_M as ⟨S_i, P_{A,j}, P_{F,k}⟩, where C_M is the set of correspondences for M, S_i is a spatial relation, and P_{A,j} and P_{F,k} are two parts of the anchor and the function objects, respectively. Consider the example of pouring from a mug to a bowl, as depicted in Fig. 1. This "pour" action contains the following three correspondences: ⟨align, rim(mug), rim(bowl)⟩, ⟨tilt, body(mug), body(bowl)⟩, and ⟨facing-up, handle(mug), body(bowl)⟩.

The task of predicting robot motion can be cast as the task of finding a robot trajectory that simultaneously satisfies all the part-based functional correspondences. Instead of manually specifying these constraints given object point clouds and their poses, we propose to learn a neural network g_φ to recognize the functional parts of objects based on their point clouds and another learned generative model p_θ to parameterize a distribution of T. Using g_φ, we can extract point clouds for a given part, for example g_φ(X_F, P_{F,k}) = X_{P_{F,k}}. Learning to recognize functional parts can be treated as a per-point part segmentation problem and has been studied extensively in prior work [14, 15, 16, 27, 51]. Therefore, we focus on the second component, which enables the robot to learn manipulation trajectories of objects based on the recognized parts.
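As an illustration of this symbolic decomposition, a minimal sketch (with hypothetical field and variable names, not the paper's actual data format) of storing an action as its set of ⟨S_i, P_{A,j}, P_{F,k}⟩ tuples could look like the following; the "pour" entries mirror the example above.

```python
from typing import NamedTuple

class Correspondence(NamedTuple):
    relation: str     # spatial relation S_i
    anchor_part: str  # part P_{A,j} of the anchor object
    func_part: str    # part P_{F,k} of the function object

# The "pour" action from Sec. 3.1, written as its three correspondences.
# Each tuple selects one primitive diffusion model and the two part point
# clouds it is conditioned on.
POUR = [
    Correspondence("align", "rim(mug)", "rim(bowl)"),
    Correspondence("tilt", "body(mug)", "body(bowl)"),
    Correspondence("facing-up", "handle(mug)", "body(bowl)"),
]
```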
3.2 Generative Modeling of Functional Correspondences with Diffusion Models

For each functional correspondence tuple ⟨S_i, P_{A,j}, P_{F,k}⟩, we learn a generative distribution p_{θ,S_i}(T_{P_jk} | X_{P_{A,j}}, X_{P_{F,k}}). Here T_{P_jk} denotes the relative transformations T_{P_{A,j} P_{F,k}}.† We use a point-cloud-conditioned diffusion model to parameterize this distribution. In particular, each primitive diffusion denoising model ε_{θ,S_i} takes in the current diffusion time step t, the two part point clouds X_{P_{A,j}} and X_{P_{F,k}}, and the noisy transformations T_{P_jk} as input, and predicts the noise over T_{P_jk}. As illustrated in Fig. 2(c), the model is based on a transformer encoder. First, we encode the point clouds of the two parts separately using a point cloud transformer [52]. Then we encode each transformation using a trained MLP. We input the point cloud and transformation encodings, together with the diffusion time step t, to the transformer encoder. The output of the transformer encoder is the predicted noise over the transformations T_{P_jk}. We provide details of the architecture in Appendix A.

†Similar to the definition of the object frame, the part frames {P_{A,j}} and {P_{F,k}} are centered at the centroids of the respective point clouds X_{P_{A,j}} and X_{P_{F,k}} and have the same orientation as the world frame.

During training, we optimize the following loss for a randomly sampled diffusion time step t and random Gaussian noise ε sampled from a multivariate Gaussian distribution:

$$\mathcal{L}_{\mathrm{MSE}} = \Big\| \varepsilon - \varepsilon_{\theta,S_i}\Big(\sqrt{1-\beta_t}\,\mathcal{T}^{(0)}_{P_{jk}} + \sqrt{\beta_t}\,\varepsilon \;\Big|\; X_{P_{A,j}}, X_{P_{F,k}}, t\Big) \Big\|_2^2,$$

where T^(0)_{P_jk} is the target transformations to predict and β_t is the diffusion noise schedule [53]. The added noise and the predicted noise are both in the tangent space of SE(3). We build on the technique introduced for the SE(3) Denoising Score Matching (DSM) model [40], but use the Denoising Diffusion Probabilistic Model (DDPM) [53] for more stable training. In practice, we first compute the exponential map of the transformations and then apply the noise. This can be viewed as predicting the score function for an exponential energy function of SE(3) poses.
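A minimal sketch of this training objective in PyTorch is shown below, assuming the poses are already expressed as 6-D tangent-space vectors and that `eps_model` stands in for the primitive diffusion transformer ε_{θ,S_i}; the actual implementation additionally handles the SE(3) exponential map and the architecture described in Appendix A.

```python
import torch

def ddpm_training_loss(eps_model, pose_tangent, pc_anchor, pc_func, betas):
    """One DDPM training step on pose parameters expressed in the tangent
    space of SE(3) (a 6-D vector per pose), following the MSE loss above.

    eps_model(noisy_pose, pc_anchor, pc_func, t) -> predicted noise, same shape
    pose_tangent: (B, N, 6) clean target poses T^(0) in tangent coordinates
    betas:        (T,) torch tensor, diffusion noise schedule
    """
    B = pose_tangent.shape[0]
    t = torch.randint(0, len(betas), (B,))                # random time step per sample
    beta_t = betas[t].view(B, 1, 1)
    eps = torch.randn_like(pose_tangent)                  # Gaussian noise in tangent space
    noisy = (1.0 - beta_t).sqrt() * pose_tangent + beta_t.sqrt() * eps
    eps_pred = eps_model(noisy, pc_anchor, pc_func, t)    # conditioned on part point clouds
    return torch.nn.functional.mse_loss(eps_pred, eps)

# e.g., with a stand-in noise predictor:
# loss = ddpm_training_loss(lambda x, a, f, t: torch.zeros_like(x),
#                           torch.randn(4, 2, 6), None, None,
#                           torch.linspace(1e-4, 0.02, 200))
```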
3.3 Inference-Time Composition of Diffusion Models

One of the key features of diffusion models is their compositionality. That is, if we have a set of diffusion models, each trained for one specific type of functional correspondence, we can combine their predicted noises at inference time to generate a trajectory that adheres to all functional correspondences, as illustrated in Fig. 2(b). Since each diffusion model implicitly parameterizes an energy-based model, p_{θ,S_i}(T | ·) ∝ exp(−E_{θ,S_i}(T | ·)), through its noise prediction [48, 49], sampling from the composition of the diffusion models corresponds to sampling from the "intersection" of the distributions for the individual functional correspondences, or formally, from $\prod_{C \in \mathcal{C}_M} p_{\theta,S_i}(\mathcal{T} \mid \cdot)$.

In particular, during inference, starting from T^(T)_{AF} randomly sampled from a standard Gaussian distribution, given the set of constraints C_M, we iteratively update the pose prediction by

$$\mathcal{T}^{(t-1)}_{AF} = \frac{1}{\sqrt{\alpha_t}}\left(\mathcal{T}^{(t)}_{AF} - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}} \sum_{C \in \mathcal{C}_M} \varepsilon_{\theta,S_i}\Big(f_{\mathrm{topart}}\big(\mathcal{T}^{(t)}_{AF}\big) \;\Big|\; X_{P_{A,j}}, X_{P_{F,k}}, t\Big)\right) + \sigma_t\,\varepsilon,$$

where T is the number of diffusion steps, α_t = 1 − β_t is the denoise schedule, $\bar{\alpha}_t = \prod_{i=1}^{t} \alpha_i$ is the cumulative denoise schedule, σ_t is a fixed sampling-time noise schedule, and ε is a randomly sampled Gaussian noise. The differentiable operation f_topart takes T^(t)_{AF} and transforms it into the part frame P_jk by (T_{A P_{A,j}})^{-1} T^(t)_{AF} T_{F P_{F,k}}, which is the frame each individual diffusion model is trained in.
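The composed reverse-diffusion update can be sketched as follows, again under the simplified 6-D tangent-space representation and reusing the Correspondence tuples from the Sec. 3.1 sketch; `models` maps each relation to its primitive noise predictor, and `to_part_frame` is an identity placeholder for f_topart.

```python
import torch

@torch.no_grad()
def compose_and_sample(models, correspondences, part_clouds, alphas, sigmas,
                       n_poses=2, to_part_frame=lambda pose, corr, parts: pose):
    """Sample T_AF by summing noise predictions from all primitive models,
    following the composed DDPM update of Sec. 3.3 (tangent-space sketch)."""
    alpha_bars = torch.cumprod(alphas, dim=0)
    pose = torch.randn(1, n_poses, 6)                        # T_AF^(T) ~ N(0, I)
    for t in reversed(range(len(alphas))):
        eps_sum = torch.zeros_like(pose)
        for c in correspondences:                            # one model per <S_i, P_A,j, P_F,k>
            pc_a, pc_f = part_clouds[c.anchor_part], part_clouds[c.func_part]
            pose_part = to_part_frame(pose, c, part_clouds)  # stands in for f_topart(T_AF^(t))
            eps_sum += models[c.relation](pose_part, pc_a, pc_f, t)
        pose = (pose - (1 - alphas[t]) / (1 - alpha_bars[t]).sqrt() * eps_sum) / alphas[t].sqrt()
        if t > 0:
            pose += sigmas[t] * torch.randn_like(pose)       # sampling-time noise
    return pose
```

The sampled relative pose can then be converted to world-frame poses for execution using the relation T_W = T_{WA} T_{AF} (T_{WF})^{-1} from the beginning of Sec. 3.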
Figure 3: We generate task demonstrations using the PartNet and ShapeNetSem datasets for the "pouring" and "safe placing" tasks. We create demonstrations for a variety of function and anchor object combinations.

4 Data Collection

We demonstrate CPM on the "pouring" and "safe placing" tasks. These two tasks require different functional affordances. The pouring action pours from an anchor object to a target object, and requires alignment of rims, collision avoidance of the handle and the container body, and body tilt. The safe-placing action places a sharp function object into an anchor object, and requires head containment for safety, the tip touching the bottom, and a body-body placement constraint. To validate our approach, we collect 4522 successful demonstrations for pouring and 2836 successful demonstrations for safe placing. To generate the demonstrations, we first source 13 categories of 3D objects from PartNet [54] and the subset of ShapeNetSem [55] objects categorized in the Acronym dataset [56]. We then extract aligned parts either from segmentations and category-level canonical poses in PartNet or from manually labeled 3D keypoints for ShapeNetSem objects. We procedurally generate parameters of the actions from the aligned parts (as illustrated in Fig. 3), simulate the interactions by tracing the trajectories defined by the parameters, and render RGB-D images using multiple cameras set up in the simulator. Details of the dataset are presented in Appendix C.

5 Experiments

The following section showcases the performance of CPM in comparison to baselines and other variants of our method in simulation. In particular, we evaluate in two important generalization settings: 1) generalization to novel object instances from seen object categories, and 2) generalization to object instances from unseen object categories. We then discuss the deployment of CPM trained in simulation on a real robot.

5.1 Experimental Setup

We evaluate all methods in the PyBullet physics simulator [57]. To isolate the problem of predicting target transformations T from other components of the system (e.g., grasp sampling and motion planning), we actuate the center of mass of the function object F. We report average task completion scores from 1500 trials, where 0 indicates failure and 100 indicates success, with credit assigned for partial completion. The score is computed based on model-based classifiers designed for each task. To test generalization to novel objects from seen categories, we randomly split the data for each task M into 80% training and 20% testing. To test generalization to unseen object categories, we conduct a separate experiment for each target category of the function objects, where we withhold data involving the target category and train on the remaining data. Details of the evaluation are discussed in Appendix D. We present results with binary success as the metric in Appendix E.

Table 1: CPM demonstrates strong generalization to novel instances of objects within seen categories.

Model               | Pouring | Safe Placing
Transformer-BC      | 19.21   | 37.11
TAX-Pose            | 21.71   | 76.97
PC-DDPM             | 75.83   | 51.55
Part-Aware PC-DDPM  | 75.28   | 42.68
CPM (ours)          | 80.00   | 70.99

Table 2: CPM demonstrates strong generalization to function objects from unseen object categories.

                    |           Pouring            |      Safe Placing
Model               | Bowl   Glass   Mug    Pan    | Fork   Pen    Scissors
Transformer-BC      | 10.23  20.91   6.06   32.04  | 26.15  31.93  26.44
TAX-Pose            | 23.32   3.82   8.64   46.14  | 50.90  67.60  36.80
PC-DDPM             | 63.02  75.95  71.39   64.39  | 40.60  46.63  32.34
Part-Aware PC-DDPM  | 58.98  72.11  67.11   66.17  | 39.76  48.04  28.15
CPM (ours)          | 79.32  81.44  77.57   62.13  | 55.94  59.45  63.35
5.2 Compared Methods

Baselines. We compare CPM with four main baselines. The first is Transformer-BC, which uses a multimodal transformer encoder-decoder from prior work [30] to condition on point clouds of the objects and autoregressively predict target transformations. The second baseline is based on TAX-Pose [37], which predicts relative poses between two objects from point-wise soft correspondences. The third is PC-DDPM; similar to recent work [40, 47], a conditional denoising diffusion probabilistic model [53] is trained to predict target transformations based on the input point clouds of both the function and the anchor objects. The fourth baseline is Part-Aware PC-DDPM, which takes in both the point clouds of the objects and per-point segmentation masks that indicate object parts. We discuss the baseline implementations in detail in Appendix B.

CPM variants. We evaluate several variants of our model. The first is DDPM with a 6D rotation representation instead of SE(3). This variant of CPM learns different diffusion models for different parts; however, it does not compose pose predictions in different part frames. This model is directly adapted from existing composable diffusion models for image generation [48, 49]. The second is DDPM with training-time composition; this model jointly trains all primitive diffusion models by composing their noise predictions at training time. The last group consists of the individual primitive diffusion models, which use single DDPM models corresponding to different part-part correspondences, without any composition.

5.3 Simulation Results

Comparisons to baselines. We evaluate CPM's generalization capability in two settings. First, Table 1 shows a comparison of generalization to novel objects from seen categories. Overall, our model achieves strong performance on both the "pouring" and "safe placing" tasks. We note that TAX-Pose struggles with pouring, which requires modeling multimodal actions, because the method extracts a single relative pose estimate from a fixed set of correspondences. The autoregressive Transformer-BC is also not enough to capture the full distribution of the pouring action. We note that although Part-Aware PC-DDPM leverages the same part segmentation as CPM, it fails to achieve stronger performance than the PC-DDPM baseline, which only uses the object point clouds as input. We attribute this to its potential overfitting to the part segmentations within the training data. By contrast, CPM is able to effectively leverage part segmentations by learning primitive diffusion models and composing them at inference time. Our model shows substantial improvements in the "safe placing" task compared to other diffusion-based methods, largely because each part constraint significantly restricts the target pose distribution in this task. For instance, the constraint that requires the tip of the function object to touch the bottom of the anchor object effectively constrains the target pose.

Our second set of experiments assesses the model's capacity to generalize to unseen object categories, thereby highlighting the efficacy of part-based correspondences. Results can be found in Table 2. Remarkably, CPM demonstrates its capability to generalize across object categories for both tasks in a zero-shot manner. CPM's performance dips slightly for pans, as the rims of pans are significantly larger than the rims encountered during training (for example, those of bowls and mugs). As a comparison, all baselines fall short in consistently generalizing to new categories for both tasks. TAX-Pose is not able to maintain strong performance for safe placing when generalizing to more geometrically complicated objects, including scissors and forks. Our method is robust to changes in local geometry and overall topology by leveraging compositions of part-based correspondences.

Table 3: We ablate the contributions of CPM on the ability to generalize to novel categories of objects.

Target Pose Rep    | Part Frames | Composition | Pouring | Safe Placing
6D Rot + 3D Trans  | No          | Inf-time    | 71.22   | 68.77
SE(3)              | Yes         | Train-time  | 69.89   | 48.46
SE(3)              | Yes         | Inf-time    | 75.11   | 59.58

Table 4: We explore the effect of composition, comparing to individual diffusion models, in generalization across both "pouring" and "safe placing" tasks. *We note that for the align and facing-up evaluation, a small percentage of examples were removed as they do not contain the involved parts in the partial object point clouds.

Pouring:
  ⟨align, rim, rim⟩          70.05*
  ⟨facing-up, handle, body⟩  16.42*
  ⟨tilt, body, body⟩         68.69
  CPM                        75.11

Safe Placing:
  ⟨contain, head, body⟩      41.22
  ⟨touch, tip, bottom⟩       9.34
  ⟨place, body, body⟩        39.86
  CPM                        59.58

Ablation. First, we assess the significance of our SE(3) encoding, part-frame-based transformation, and inference-time composition within the context of generalizing to unseen categories of objects. As depicted in Table 3, our full CPM with part frames and inference-time composition shows superior performance compared to the model trained with training-time composition. This verifies the importance of our designs to support part-based composition and generalization. Compared to the variant based on the 6D Rotation + 3D Translation encoding, CPM yields better performance on the pouring task, a scenario where the rotation of the function object plays a pivotal role. On the safe placing task, which involves less rotation of objects, we observe a more comparable performance with our model. These results highlight the importance of the SE(3) diffusion model in rotation prediction.

Second, we compare the performance of composed part-based diffusion models with the performance of the primitive diffusion models. As shown in Table 4, the composed model outperforms individual diffusion models, showing the efficacy of our composition paradigm. In addition, these results show the importance of different part-based constraints for the given tasks. In the "pouring" task, align and tilt strongly constrain the target pose for the function object, while for the "safe placing" task, the contain and place constraints are more salient. Fig. 4 provides a qualitative visualization by showcasing the part-conditioned distribution associated with each individual diffusion model for various constraints, as well as the corresponding composed distribution. The quantitative performance of the contain and place primitive models for these tasks aligns with this qualitative comparison, as they have learned distributions that are close to the composed model. The CPM paradigm allows us to train each primitive diffusion model independently, encouraging each model to concentrate on distinct functional affordances, thus enabling them to learn and generalize to diverse distributions of samples. During inference, the composition of distributions learned by individual models enables CPM to find solutions that satisfy all correspondence constraints.

Figure 4: We illustrate the learned distribution of each primitive diffusion model, which generates diverse samples conforming to the specified constraints, as well as the distribution from the combined full CPM model. The highest-ranked sample is highlighted.
5.4 Real-World Transfer

Finally, we show a real-world robot manipulation experiment for the "pouring" task, highlighting the transferability of CPM to real-world manipulation. In this setting, we use the primitive diffusion models trained on simulation data with function objects of glasses, pans, and bowls, and zero-shot transfer to mugs in the real-world experiment. Our setup includes a Franka Emika robot mounted in a tabletop environment. To conduct pouring, we perform plane segmentation and k-means clustering to extract object point clouds from the scene point cloud captured by two calibrated Azure Kinect RGB-D cameras. Next, we apply a pre-trained point transformer (PT) model [58] for part segmentation. The segmentation model is trained on simulation data only. We then apply CPM trained in simulation for the pouring task. To execute the trajectory, we use Contact-GraspNet [59] to sample robot grasps on the function object and an Operational Space Controller [60] with impedance from Deoxys [60] to follow a sequence of end-effector pose waypoints computed from the target transformations. Figure 5 shows our real-world setup and example trajectories predicted by CPM on unseen mugs with different shapes and sizes.

Figure 5: We show sampled frames from trajectories of CPM's policy. The model is trained only on demonstrations with pans, bowls, and wine glasses in simulation and generalizes to mugs in the real world.
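As a rough outline of this pipeline (our sketch, not the authors' code), the snippet below wires the stages together; `segment_parts`, `cpm_predict_poses`, `sample_grasp`, and `follow_waypoints` are hypothetical wrappers for the pre-trained point transformer, CPM, Contact-GraspNet, and the Deoxys OSC controller, and the plane-removal step is a crude stand-in that assumes a horizontal table of known height.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_object_clouds(scene_points: np.ndarray, table_height: float, n_objects: int = 2):
    """Crude stand-in for plane segmentation + k-means clustering: drop points
    near the (assumed horizontal) table plane, then cluster the remaining
    points into per-object point clouds."""
    above_table = scene_points[scene_points[:, 2] > table_height + 0.01]
    labels = KMeans(n_clusters=n_objects, n_init=10).fit_predict(above_table)
    return [above_table[labels == i] for i in range(n_objects)]

def pouring_pipeline(scene_points, table_height,
                     segment_parts, cpm_predict_poses, sample_grasp, follow_waypoints):
    """All four callables are hypothetical wrappers injected by the caller."""
    # 1. Perception: object point clouds, then part segmentation.
    mug_cloud, bowl_cloud = extract_object_clouds(scene_points, table_height)
    parts = segment_parts(mug_cloud, bowl_cloud)           # e.g., point transformer wrapper
    # 2. CPM: sample start/end poses satisfying all part correspondences.
    start_pose, end_pose = cpm_predict_poses(parts, task="pour")
    # 3. Execution: grasp the function object and follow interpolated waypoints.
    grasp = sample_grasp(mug_cloud)                        # e.g., Contact-GraspNet wrapper
    follow_waypoints(grasp, [start_pose, end_pose])        # e.g., Deoxys OSC controller
```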
6 Limitations and Conclusion

We introduced composable part-based manipulation (CPM), an approach that leverages object-part decomposition and part-part correspondences for robotic manipulation. We show that representing actions as combinations of constraints between object parts enables strong generalization. Through the composition of primitive diffusion models, we gain generalization capabilities across novel instances of objects as well as unseen object categories, in simulation and in real-world robot experiments.

In this paper, we focus on manipulation tasks involving two objects. Extending CPM to learn skills involving more objects would be important future work, in particular for manipulating piles or stacks of objects. Second, we parameterize each manipulation action by the starting and ending poses. Extending the transformer-based diffusion model to output more waypoints to parameterize longer trajectories is important for a potentially wider range of tasks. In addition, CPM does not model temporal constraints over the trajectory. One possible extension is to learn trajectory samplers for temporal constraints and trajectories with loops. CPM assumes external part segmentations. Although many categories can be segmented by off-the-shelf computer vision models [26], extending the system to jointly learn or finetune part segmentation is important. Finally, composing a larger number of diffusion models may require more efficient sampling techniques such as [61]. We provide an extended discussion of CPM's assumptions in Appendix F and suggest directions for future research.

Acknowledgments

We extend our gratitude to the members of the NVIDIA Seattle Robotics Lab, the RAIL research lab at Georgia Tech, and the Stanford Vision and Learning Lab for insightful discussions. This work is in part supported by NSF grants 2214177 and 2211258, AFOSR grants FA9550-22-1-0249 and FA9550-23-1-0127, ONR MURI grant N00014-22-1-2740, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the MIT-IBM Watson AI Lab, the MIT Quest for Intelligence, the Center for Brains, Minds, and Machines (CBMM, funded by NSF STC award CCF-1231216), Analog Devices, JPMC, and Salesforce. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of our sponsors.

References

[1] C. Paxton, C. Xie, T. Hermans, and D. Fox. Predicting Stable Configurations for Semantic Placement of Novel Objects. In CoRL, 2021.
[2] W. Liu, Y. Du, T. Hermans, S. Chernova, and C. Paxton. StructDiffusion: Language-Guided Creation of Physically-Valid Structures using Unseen Objects. In RSS, 2023.
[3] Y. Huang, A. Conkey, and T. Hermans. Planning for Multi-Object Manipulation with Graph Neural Network Relational Classifiers. In ICRA, 2023.
[4] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated Task and Motion Planning. Annual Review of Control, Robotics, and Autonomous Systems, 4:265–293, 2021.
[5] K. Mo, L. J. Guibas, M. Mukadam, A. Gupta, and S. Tulsiani. Where2Act: From Pixels to Actions for Articulated 3D Objects. In CVPR, 2021.
[6] Z. Xu, Z. He, and S. Song. UMPNet: Universal Manipulation Policy Network for Articulated Objects. RA-L, 2022.
[7] R. Wu, Y. Zhao, K. Mo, Z. Guo, Y. Wang, T. Wu, Q. Fan, X. Chen, L. Guibas, and H. Dong. VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects. In ICLR, 2021.
[8] H. Geng, H. Xu, C. Zhao, C. Xu, L. Yi, S. Huang, and H. Wang. GAPartNet: Cross-Category Domain-Generalizable Object Perception and Manipulation via Generalizable and Actionable Parts. In CVPR, 2023.
[9] J. Aleotti and S. Caselli. Manipulation Planning of Similar Objects by Part Correspondence. In ICRA, 2011.
[10] N. Vahrenkamp, L. Westkamp, N. Yamanobe, E. E. Aksoy, and T. Asfour. Part-based Grasp Planning for Familiar Objects. In IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2016.
[11] P. Parashar, J. Vakil, S. Powers, and C. Paxton. Spatial-Language Attention Policies for Efficient Robot Learning. arXiv:2304.11235, 2023.
[12] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding Language with Visual Affordances over Unstructured Data. In ICRA, 2023.
[13] E. Valassakis, G. Papagiannis, N. Di Palo, and E. Johns. Demonstrate Once, Imitate Immediately (DOME): Learning Visual Servoing for One-Shot Imitation Learning. In IROS, 2022.
[14] A. Myers, C. L. Teo, C. Fermüller, and Y. Aloimonos. Affordance Detection of Tool Parts from Geometric Features. In ICRA, 2015.
[15] T.-T. Do, A. Nguyen, and I. Reid. AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection. In ICRA, 2018.
[16] S. Deng, X. Xu, C. Wu, K. Chen, and K. Jia. 3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding. In CVPR, 2021.
[17] W. Liu, A. Daruna, and S. Chernova. CAGE: Context-Aware Grasping Engine. In ICRA, 2020.
[18] P. Ardón, È. Pairet, R. P. Petrick, S. Ramamoorthy, and K. S. Lohan. Learning Grasp Affordance Reasoning through Semantic Relations. In IROS, 2019.
[19] L. Manuelli, W. Gao, P. Florence, and R. Tedrake. kPAM: KeyPoint Affordances for Category-Level Robotic Manipulation. In ISRR, 2022.
[20] Z. Qin, K. Fang, Y. Zhu, L. Fei-Fei, and S. Savarese. KETO: Learning Keypoint Representations for Tool Manipulation. In ICRA, 2020.
[21] D. Turpin, L. Wang, S. Tsogkas, S. Dickinson, and A. Garg. GIFT: Generalizable Interaction-aware Functional Tool Affordances without Labels. In RSS, 2021.
[22] L. Manuelli, Y. Li, P. Florence, and R. Tedrake. Keypoints into the Future: Self-Supervised Correspondence in Model-Based Reinforcement Learning. In CoRL, 2020.
[23] A. Simeonov, Y. Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V. Sitzmann. Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation. In ICRA, 2022.
[24] E. Chun, Y. Du, A. Simeonov, T. Lozano-Perez, and L. Kaelbling. Local Neural Descriptor Fields: Locally Conditioned Object Representations for Manipulation. In ICRA, 2023.
[25] J.-S. Ha, D. Driess, and M. Toussaint. Deep Visual Constraints: Neural Implicit Models for Manipulation Planning from Visual Input. RA-L, 7(4):10857–10864, 2022.
[26] M. Liu, Y. Zhu, H. Cai, S. Han, Z. Ling, F. Porikli, and H. Su. PartSLIP: Low-Shot Part Segmentation for 3D Point Clouds via Pretrained Image-Language Models. In CVPR, 2023.
[27] D. Hadjivelichkov, S. Zwane, L. Agapito, M. P. Deisenroth, and D. Kanoulas. One-Shot Transfer of Affordance Regions? AffCorrs! In CoRL, 2022.
[28] W. Goodwin, I. Havoutis, and I. Posner. You Only Look at One: Category-Level Object Representations for Pose Estimation From a Single Example. In CoRL, 2022.
[29] W. Yuan, C. Paxton, K. Desingh, and D. Fox. SORNet: Spatial Object-Centric Representations for Sequential Manipulation. In CoRL, 2021.
[30] W. Liu, C. Paxton, T. Hermans, and D. Fox. StructFormer: Learning Spatial Structure for Language-Guided Semantic Rearrangement of Novel Objects. In ICRA, 2022.
[31] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and Where Pathways for Robotic Manipulation. In CoRL, 2021.
[32] T. Silver, R. Chitnis, J. Tenenbaum, L. P. Kaelbling, and T. Lozano-Pérez. Learning Symbolic Operators for Task and Motion Planning. In IROS, 2021.
[33] K. Kase, C. Paxton, H. Mazhar, T. Ogata, and D. Fox. Transferable Task Execution from Pixels through Deep Planning Domain Learning. In ICRA, 2020.
[34] K. Mo, Y. Qin, F. Xiang, H. Su, and L. Guibas. O2O-Afford: Annotation-Free Large-Scale Object-Object Affordance Learning. In CoRL, 2022.
[35] J. Liang and A. Boularias. Learning Category-Level Manipulation Tasks from Point Clouds with Dynamic Graph CNNs. In ICRA, 2023.
[36] D. Driess, J.-S. Ha, M. Toussaint, and R. Tedrake. Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning. In CoRL, 2022.
[37] C. Pan, B. Okorn, H. Zhang, B. Eisner, and D. Held. TAX-Pose: Task-Specific Cross-Pose Estimation for Robot Manipulation. In CoRL, 2022.
[38] D. Seita, Y. Wang, S. J. Shetty, E. Y. Li, Z. Erickson, and D. Held. ToolFlowNet: Robotic Manipulation with Tools via Predicting Tool Flow from Point Clouds. In CoRL, 2022.
[39] M. Janner, Y. Du, J. Tenenbaum, and S. Levine. Planning with Diffusion for Flexible Behavior Synthesis. In ICML, 2022.
[40] J. Urain, N. Funk, J. Peters, and G. Chalvatzaki. SE(3)-DiffusionFields: Learning Smooth Cost Functions for Joint Grasp and Motion Optimization through Diffusion. In ICRA, 2023.
[41] S. Huang, Z. Wang, P. Li, B. Jia, T. Liu, Y. Zhu, W. Liang, and S.-C. Zhu. Diffusion-Based Generation, Optimization, and Planning in 3D Scenes. In CVPR, 2023.
[42] I. Kapelyukh, V. Vosylius, and E. Johns. DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics. RA-L, 2023.
[43] A. Ajay, Y. Du, A. Gupta, J. Tenenbaum, T. Jaakkola, and P. Agrawal. Is Conditional Generative Modeling all you need for Decision-Making? In ICLR, 2023.
[44] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh, C. Tan, J. Peralta, B. Ichter, et al. Scaling Robot Learning with Semantically Imagined Experience. arXiv:2302.11550, 2023.
[45] U. A. Mishra and Y. Chen. ReorientDiff: Diffusion Model based Reorientation for Object Manipulation. arXiv:2303.12700, 2023.
[46] C. Higuera, B. Boots, and M. Mukadam. Learning to Read Braille: Bridging the Tactile Reality Gap with Diffusion Models. arXiv:2304.01182, 2023.
[47] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion Policy: Visuomotor Policy Learning via Action Diffusion. In RSS, 2023.
[48] N. Liu, S. Li, Y. Du, A. Torralba, and J. B. Tenenbaum. Compositional Visual Generation with Composable Diffusion Models. In ECCV, 2022.
[49] Y. Du, C. Durkan, R. Strudel, J. B. Tenenbaum, S. Dieleman, R. Fergus, J. Sohl-Dickstein, A. Doucet, and W. Grathwohl. Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC. In ICML, 2023.
[50] Z. Huang, K. C. Chan, Y. Jiang, and Z. Liu. Collaborative Diffusion for Multi-Modal Face Generation and Editing. In CVPR, 2023.
[51] R. Xu, F.-J. Chu, C. Tang, W. Liu, and P. A. Vela. An Affordance Keypoint Detection Network for Robot Manipulation. RA-L, 6(2):2870–2877, 2021.
[52] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu. PCT: Point Cloud Transformer. Computational Visual Media, 2021.
[53] J. Ho, A. Jain, and P. Abbeel. Denoising Diffusion Probabilistic Models. In NeurIPS, 2020.
[54] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su. PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding. In CVPR, 2019.
[55] M. Savva, A. X. Chang, and P. Hanrahan. Semantically-Enriched 3D Models for Common-sense Knowledge. In CVPRW, 2015.
[56] C. Eppner, A. Mousavian, and D. Fox. ACRONYM: A Large-Scale Grasp Dataset Based on Simulation. In ICRA, 2021.
[57] E. Coumans and Y. Bai. PyBullet, a Python Module for Physics Simulation in Robotics, Games and Machine Learning, 2017.
[58] H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V. Koltun. Point Transformer. In ICCV, 2021.
[59] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes. In ICRA, 2021.
[60] Y. Zhu, A. Joshi, P. Stone, and Y. Zhu. VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors. In CoRL, 2022.
[61] Q. Zhang and Y. Chen. Fast Sampling of Diffusion Models with Exponential Integrator. In ICLR, 2022.
A Network Architecture

For each functional correspondence ⟨S_i, P_{A,j}, P_{F,k}⟩, we aim to learn a generative distribution p_{θ,S_i}(T_{P_jk} | X_{P_{A,j}}, X_{P_{F,k}}). Here we discuss the network architecture of the primitive diffusion model ε_{θ,S_i} that learns to estimate this generative distribution. We leverage modality-specific encoders to convert the multimodal inputs into latent tokens that are later processed by a transformer network.

Object encoder. Given part point clouds X_{P_{A,j}} and X_{P_{F,k}}, we use a learned encoder h_p to encode each part separately as h_p(X_{P_{A,j}}) and h_p(X_{P_{F,k}}). This encoder is built on the Point Cloud Transformer (PCT) [52].

Diffusion encodings. Since the goal transformations T_{P_jk} = {T_{P_jk,n}} for n = 1, ..., N are iteratively refined by the diffusion model and need to be fed back to the model during inference, we use an MLP to encode each goal transformation separately as h_T(T_{P_jk,n}). To compute the time-dependent Gaussian posterior for reverse diffusion, we obtain a latent code for t using a sinusoidal embedding h_time(t).

Positional encoding. We use a learned position embedding h_pos(l) to indicate the position index l of the part point clouds and poses in the input sequences to the subsequent transformer.

Diffusion Transformer. The diffusion model predicts the goal poses T^(0)_{P_jk} starting from the last time step of the reverse diffusion process, T^(T)_{P_jk} ∼ N(0, I), which is sampled from a multivariate normal distribution with independent components. We use a transformer encoder as the backbone for the diffusion model ε_{θ,S_i}({T^(t)_{P_jk,n}}_{n=1}^{N} | X_{P_{A,j}}, X_{P_{F,k}}, t), which predicts the time-dependent noise {ε^(t)_1, ..., ε^(t)_N}. We obtain the transformer input for the parts χ and the target poses τ as

$$\chi^{(t)}_A = [\,h_p(X_{P_{A,j}});\, h_{\mathrm{pos}}(0);\, h_{\mathrm{time}}(t)\,]$$
$$\chi^{(t)}_F = [\,h_p(X_{P_{F,k}});\, h_{\mathrm{pos}}(1);\, h_{\mathrm{time}}(t)\,]$$
$$\tau^{(t)}_n = [\,h_T(\mathcal{T}^{(t)}_{P_{jk},n});\, h_{\mathrm{pos}}(n-2);\, h_{\mathrm{time}}(t)\,]$$

where [;] denotes concatenation along the feature dimension. The model takes in the sequence {χ^(t)_A, χ^(t)_F, τ^(t)_1, ..., τ^(t)_N} and predicts {ε^(t)_1, ..., ε^(t)_N} for the object poses.

Parameters. We provide network and training parameters in Table A1.

Table A1: Model Parameters

Parameter                               | Value
Number of P_A,j and P_F,k points        | 512
PCT point cloud encoder h_p out dim     | 200
Position embedding h_pos                | learned embedding
Position embedding h_pos dim            | 16
Time embedding h_time                   | Sinusoidal
Time embedding h_time dim               | 40
Pose encoder h_T out dim                | 200
Transformer number of layers            | 4
Transformer number of heads             | 4
Transformer hidden dim                  | 128
Transformer dropout                     | 0.0
Diffusion steps T                       | 200
Diffusion noise schedule β_t            | Linear
Start value β_0                         | 0.0001
End value β_T                           | 0.02
Loss                                    | Huber
Epochs                                  | 2000
Optimizer                               | Adam
Learning rate                           | 1e-4
Gradient clip value                     | 1.0
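The token construction above can be sketched as follows; this is our simplification in PyTorch, with stand-in encoders for h_time and h_T, dimensions loosely following Table A1, part features assumed to be precomputed by the PCT encoder h_p, and pose tokens placed after the two part tokens.

```python
import torch
import torch.nn as nn

class TokenBuilder(nn.Module):
    """Builds the transformer input sequence [chi_A, chi_F, tau_1, ..., tau_N]."""
    def __init__(self, pc_dim=200, pose_dim=200, pos_dim=16, time_dim=40, n_poses=2):
        super().__init__()
        self.pos_emb = nn.Embedding(2 + n_poses, pos_dim)      # learned position embedding h_pos
        self.time_mlp = nn.Sequential(nn.Linear(1, time_dim))  # stand-in for the sinusoidal h_time
        self.pose_enc = nn.Sequential(nn.Linear(6, pose_dim))  # stand-in for the pose MLP h_T
        self.n_poses = n_poses

    def forward(self, feat_a, feat_f, noisy_poses, t):
        # feat_a, feat_f: (B, pc_dim) part features from the PCT encoder h_p (assumed precomputed)
        # noisy_poses:    (B, N, 6) noisy transformations in tangent coordinates
        B = feat_a.shape[0]
        time = self.time_mlp(t.float().view(B, 1))                       # (B, time_dim)
        def token(feat, idx):
            pos = self.pos_emb(torch.full((B,), idx, dtype=torch.long))  # (B, pos_dim)
            return torch.cat([feat, pos, time], dim=-1)                  # concat on feature dim
        tokens = [token(feat_a, 0), token(feat_f, 1)]
        for n in range(self.n_poses):
            tokens.append(token(self.pose_enc(noisy_poses[:, n]), 2 + n))
        return torch.stack(tokens, dim=1)  # (B, 2 + N, pc_dim + pos_dim + time_dim)
```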
This baseline shares most of the network architecture asPC-DDPM except that point cloud encoder now encodes [XA;IA]∈RNX×(3+NI)and[XF;IF]∈RNX×(3+NI).C Dataset DetailsIn total, we collected 4522 successful demonstrations for pouring and 2836 successful demonstrationsfor safe placing. For each experiment, we use a subset of these demonstrations for training the models,and the remaining data for initializing the simulation. We provide a breakdown of the dataset inTable A2. Because the expert policies do not have 100% success rate, the models will only be trainedon the successful demonstrations. Below we discuss our data collection process in details.Sourcing 3D objects. We source a wide variety of 3D objects from PartNet [ 54] and the subset ofShapeNetSem [ 55] objects categorized in the Acronym dataset [ 56]. We use 13 object categoriesto investigate generalization, including mug,pan,bowl ,wine glass ,knife ,can opener ,scissors ,screwdriver ,fork,spoon ,marker ,pen, and flashlight . Some object categories are reused for differenttasks; for example, mug is used as an anchor for safe placing but also as an object for pouring.Extracting aligned parts. Our generative diffusion models uses part segmentations of objects tolearn primitive diffusion models. For 3D objects from PartNet, we use the segmentations provided in‡Code from https://github.com/r-pad/taxpose.14Table A2: Simulation and Demonstration DataTask Object Source Number of Simulations Number of Success DemonstrationsSafe PlacingPen PartNet 1000 568Fork PartNet 1000 390ScrewDriver PartNet 1000 145Spoon PartNet 1000 410Knife Acronym 1000 496Scissors PartNet 1000 354Flashlight PartNet 1000 141CanOpener PartNet 1000 101Marker PartNet 1000 231PouringMug PartNet 2000 1051WineGlass PartNet 2000 1542Bowl Acronym 2000 776Pan PartNet 2000 1153the dataset. For 3D objects from ShapeNetSem, we first label 3D keypoints, then from the labeledkeypoints, we procedurally extract parts. As ShapeNet provides canonical poses for 3D models, wecan also align the extracted functional parts for each object category.Simulating trajectories and rendering. We simulate the robot-object interactions by tracing thetrajectories defined by the parameters. We first use multiple cameras to render RGB-D images, whichyield realistic object point clouds. We then map the functional parts to the point clouds with thecorrect transformation and scaling. Finally, we obtain point cloud segments of each affordance part.Because these parts are extracted from the rendered point clouds, they can be incomplete, whichincreases the robustness of our method and helps transferability to real-world settings.D Evaluation DetailsIn Section 5, we report task completion scores. For each experiment, we randomly draw 100 samplesfrom the withheld testing data to initialize simulation for evaluation. This procedure ensures that theaction can be successfully performed for the pair of anchor and function objects. To systematicallyevaluate multimodal actions (e.g, pouring from different directions), we sample from each model 5times and simulate the predicted actions. We repeat each experiment with 3 different random seeds,resulting in a total of 1500 trials.The task score indicates task completion between failure (0) and success (100), with credits assignedfor partial completion. The score is computed based on model-based classifiers designed for eachtask. 
Now we describe how the score is computed in more detail:•Pouring: we first use PyBullet’s collision test to check whether the function object and anchorobject will ever interpenetrate during the execution of the action by rigidly transformingthe function object to the predicted poses. If the objects interpenetrate, we assign a scoreof zero because the action cannot be realistically executed. Then we simulate the pouringaction, and use the percentage of particles successfully transferred from the function objectto the anchor object as the partial score.•Safe Placing: similar to pouring, we check interpenetration for the start pose of the placementaction. If the objects interpenetrate, we assign a score of zero. Then we simulate theplacement action until contact between the anchor and function object. If the orientationof the function object is incorrect (e.g., the blade of the knife is outside of the container),we assign a score of zero. If the orientation is correct, the percentage of the trajectoryparameterized by the predicted transformations that is successfully executed is used as thepartial score.15E Additional ResultsBesides reporting the task completion scores, we include additional task success rates in Table A3 andTable A4. For pouring, a trial is considered successful if there is no interpenetration between objectsand 70% of particles are successfully transferred. For safe placing, a successful trial requires nointerpenetration at the predicted start pose for the function object, correct orientation of the functionobject, and 70% of the predicting trajectory being successfully executed without collision betweenobjects. We observe similar trends as the results presented in Section 5.Table A3: CPM shows strong generalization to novel instances of objects within seen categories.Model Pouring Safe PlacingTransformer-BC 17.53 ±3.13 33.27 ±2.14TAX-Pose 21.33 ±0.58 74.00±1.00PC-DDPM 70.67 ±1.27 48.73 ±2.97Part-Aware PC-DDPM 73.60 ±2.60 36.53 ±2.20CPM (ours) 76.87±1.70 68.87±2.25Table A4: CPM demonstrates strong generalization to function objects from unseen object categories.ModelPouring Safe PlacingBowl Glass Mug Pan Fork Pen ScissorsTransformer-BC 10.00 ±2.51 19.20 ±0.92 5.80 ±1.93 29.33 ±2.04 24.00 ±1.51 27.33 ±2.34 18.47 ±2.93TAX-Pose 21.00 ±1.00 3.00 ±1.00 8.00 ±1.00 42.67 ±2.08 47.67 ±3.21 62.67±4.04 33.33±1.15PC-DDPM 56.53 ±2.00 70.67 ±3.06 68.67 ±4.31 59.93 ±2.80 38.00 ±3.83 43.47 ±1.68 28.47 ±0.83Part-Aware PC-DDPM 54.87 ±2.10 68.33 ±2.97 65.20 ±4.61 62.00±3.56 28.67±1.68 42.40 ±3.12 17.67 ±2.70CPM (ours) 76.40±1.78 78.93 ±3.14 76.00 ±5.26 54.67±1.50 53.93±2.91 56.53±2.04 62.07±1.72F AssumptionsDuring training, our method assumes 1) a description of the manipulation skill as a set of partcorrespondences, 2) access to the dataset of successful trajectories, and 3) access to part segmentationsfor objects in the dataset. During testing, our method assumes the part segmentations for objectsbeing manipulated. We contend that these assumptions align with our current focus. Nonetheless,subsequent research should aim to address them.First, the description of manipulation skills is in symbolic text, e.g., pouring from mugs to bowlscontains three constraints. They can be easily annotated by humans as there is no need to specifyany continuous parameters or mathematical formulas. An interesting future direction is to leveragelarge language models to more efficiently extract constraints. 
CPM then learns the grounding of these constraints from data.
Second, we assume access to successful manipulation trajectories. That is, we do not assume any additional annotations, such as programs for generating these trajectories. The key focus of the paper is to improve the data efficiency of learning such skills, in particular for generalization across categories. An important future direction is to improve the data efficiency of this method and learn from noisy human demonstrations.
Finally, relying on external part segmentation is limiting, but 2D or 3D part segmentation models are generally available for many object categories [15, 16, 26]. An exciting future direction is to extend the current framework to automatically discover functional part segmentations leveraging manipulation data.
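Returning to the first assumption, a skill description of this kind is lightweight to author. The snippet below is a hypothetical illustration of how such symbolic part-correspondence constraints might be written down; the constraint vocabulary and part names are made up for the example and are not the paper's annotation schema.

```python
# Hypothetical symbolic skill descriptions as sets of part correspondences.
# The constraint types and part names here are illustrative placeholders only.
SKILLS = {
    "pour": [
        # (constraint type, part of function object, part of anchor object)
        ("align", "rim", "rim"),
        ("tilt", "body", "body"),
    ],
    "safe_place": [
        ("insert", "blade", "opening"),
        ("upright", "handle", "body"),
    ],
}


def constraints_for(skill: str):
    """Look up the part-correspondence constraints that define a skill."""
    return SKILLS[skill]


print(constraints_for("pour"))
```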
2qKBwyLnln | Policy Stitching: Learning Transferable Robot PoliciesPingcheng Jian1Easop Lee1Zachary Bell2Michael M. Zavlanos1Boyuan Chen11Duke University2Air Force Research Laboratorygeneralroboticslab.com/PolicyStitchingAbstract: Training robots with reinforcement learning (RL) typically involvesheavy interactions with the environment, and the acquired skills are often sensi-tive to changes in task environments and robot kinematics. Transfer RL aims toleverage previous knowledge to accelerate learning of new tasks or new body con-figurations. However, existing methods struggle to generalize to novel robot-taskcombinations and scale to realistic tasks due to complex architecture design orstrong regularization that limits the capacity of the learned policy. We proposePolicy Stitching, a novel framework that facilitates robot transfer learning for novelcombinations of robots and tasks. Our key idea is to apply modular policy designand align the latent representations between the modular interfaces. Our methodallows direct stitching of the robot and task modules trained separately to forma new policy for fast adaptation. Our simulated and real-world experiments onvarious 3D manipulation tasks demonstrate the superior zero-shot and few-shottransfer learning performances of our method.Keywords: robot transfer learning, policy stitchingTask ModuleTask ModuleRobot ModuleRobot ModuleTask ModuleRobot ModuleTask ModuleRobot ModuleOriginal PoliciesPolicy Stitching(A)Target States:(B)Seed: 101Seed: 103Seed: 104Seed: 102Target States:(B)Seed: 101Seed: 103Seed: 104Seed: 102Policy StitchingOriginal Policies(A)(B)Fig. 1: Policy Stitching. (A) Our framework facilitates robot transfer learning among novel combinations ofrobots and tasks by decoupling and stitching robot and task modules. (B) Motivation example: A robot arm istrained to reach goals in four different target regions using the modular policy. Results from separate trainingruns with different random seeds (101-104) show misaligned latent representations.1 IntroductionRobots are typically trained to excel at specific tasks such as relocating objects or navigating topredetermined locations. However, such robots need to be retrained from scratch when faced with newtasks or body changes. In contrast, humans demonstrate remarkable capabilities [1] to continuouslyacquire new skills by drawing on past experiences. Even under physical constraints imposed byinjuries, we can still rapidly adapt to perform new tasks. Despite significant advancements in roboticsand machine learning, robots still cannot generalize their experience across a wide range of tasks andbody configurations.Model-based robot learning, in particular self-modeling or self-identification learning [ 2,3,4,5,6,7,8,9,10], aims to learn a predictive model of the robot’s kinematics and dynamics and thenemploy this model for various downstream tasks through model predictive control. However, the7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.learning process needs to be separated into two stages of robot-specific and task-specific learning.On the other hand, model-free reinforcement learning trains policies end to end, but it often haslimited transfer learning performance [ 11]. Existing efforts regularize one large policy network formulti-task learning by learning routing connections to reuse part of the network weights [ 12] orassigning task-specific sub-networks [ 13]. 
However, the capacity of the large policy network growsexponentially as the number of tasks increases.We introduce Policy Stitching (PS) , a model-free learning framework for knowledge transfer amongnovel robot and task combinations through modular policy design and transferable representationlearning (Fig.1 (A)). We explicitly decouple robot-specific representation (e.g., robot kinematicsand dynamics) and task-specific representation (e.g., object states) in our policy design to enablethe reassembly of both modules for new robot-task combinations. For instance, given one policytrained for a 3-DoF manipulator to pick up a cube and another policy trained for a 5-DoFmanipulator to pick up a stick , if we would like to have the 3-DoF manipulator pick up a stick now(i.e., +), our method can directly take the robot module in the first policy and stitch it with thetask module in the second policy.While the modular design allows direct stitching, the reassembled policy may not work at all. Wefind that this is because the output representation from one neural network module does not alignwith the desired input representation of another module, particularly when modules are trained ondifferent tasks and robot body configurations. Past work in supervised learning [ 14,15] has madesimilar observations. As a motivating example, in Fig.1(B), we show that, even under the same taskand robot setup, latent representations from RL policies trained with different random seeds do notalign with each other. More interestingly, they exhibit similar isometric transformations as shown inrecent work on supervised learning [ 14,15]. See Appendix C.1 for additional details of the isometrictransformation phenomenon.To this end, we further propose to generalize the latent representation alignment techniques fromsupervised learning to reinforcement learning for robot transfer learning. The key idea is to enforcetransformation invariances by projecting the intermediate representations into the same latent coordi-nate system. Unlike supervised learning, RL does not come with human labels to help select anchorcoordinates. We propose to use unsupervised clustering of target states to help resolve this gap. Ourmethod produces aligned representations for effective policy stitching. In summary, our contributionsare three-fold:•Policy Stitching , a model-free reinforcement learning framework for robot transfer learningamong novel robot and task combinations.•Modular Policy Design for robot-specific and task-specific module reassembly. Represen-tation Alignment to learn transferable latent space for direct module stitching.•Demonstration of the clear advantages of our method in both zero-shot and few-shot transferlearning through simulated and physical 3D manipulation tasks .2 Related WorkRobot Transfer Learning. Transferring robot policies across various environment dynamics ornovel tasks is still an open challenge [ 16,17,18,19]. Past efforts have proposed to transfer differentcomponents in reinforcement learning framework, such as value functions [ 20,21,22], rewards [ 23],experience samples [ 24], policies [ 25,26,27], parameters [ 28,29,30], and features [ 31]. In contrast,our method transfers structured network modules of a policy network across novel tasks or novelrobot body configurations. Our modular design is similar to previous work [ 27], but we do not limitthe capacity of the latent space to avoid overfitting. 
Instead, we enforce invariances among the learnedlatent representations of different modules through feature alignment. Moreover, our work surpassesprevious accomplishments in 2D tasks by demonstrating 3D manipulation skills, both in simulationand the real world.2Fig. 2: Method Overview. Our modular policy design enables the task module to process task-specific statessuch as object and environmental states and the robot module to process robot-specific states such as kinematicsand dynamics. We generalize relative representations to model-free RL setup to align latent representations fromseparately trained policies. With both modular design and latent representation alignment, our method allowsdirect stitching of unseen combination of task and robot module for effective and efficient transfer.Meta Learning. Meta-learning [ 32,33,34,35,36,37] aims to achieve fast adaptation on a newtask based on past learning experiences. Recent research has proposed generating better parameterinitialization for learning new tasks [ 30,38,39,40] or using memory-augmented neural networks[41,42,43] to assimilate new data swiftly without forgetting the previous knowledge. Anothercategory of methods [ 44,45,46,47,48,49] proposes to use a hypernetwork to generate the parametersof policy networks for different tasks. Our work also aims at fast adaptation to novel robot and taskcombinations, but our method reuses structured components in policy networks.Compositional Reinforcement Learning. Functional compositional RL has been used in zero-shottransfer learning [ 27], multi-task learning [ 12], and lifelong learning [ 13]. Some approaches learnthe structure of the modules [ 50,51,12,13], but these trained modules can only be functional in alarge modular system and cannot work when stitched with new modules. Another method [ 27] triesto make the network module reusable, but the module alignment problem prevents it from workingon 3D tasks. Our work can directly stitch policy modules to improve zero-shot and few-shot learningperformances in simulated and physical 3D manipulation tasks.3 Method: Policy StitchingPolicy Stitching focuses on enabling effective knowledge transfer between different tasks or robotbodies. Our method consists of two main components: the modular policy design and transferablerepresentations achieved through latent space alignment. Our framework is designed to be compatiblewith various model-free RL algorithms.3.1 Modular Policy DesignWe propose modular policy design to decompose the policy into two distinct modules focusing onrobot specific and task specific information. Our design provides a straightforward ingredient formodule reuse and stitching. Consider an environment, E, with a robot rand a presented task k, weformulate the problem as a Markov Decision Process with an observed state sEand action aE. Wedenote the learning policy as πErk(aE|sE)parameterized by a function φErk(sE). We utilize themodel-free RL formulation, specifically the Soft Actor-Critic (SAC) [52] algorithm.We decompose the state sE, the policy function φErk(sE), and the Q-function into a robot-specificmodule and a task-specific module (Fig. 2). The state sEis decomposed into a robot-specific statesE,Rand a task-specific state sE,T. The robot-specific state sE,Ronly consists of the joint anglesof the robot, while the task-specific state sE,Tincludes task information at hand such as the currentposition and orientation, linear and angular velocity, and goal position of the object. 
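A minimal sketch of this state split is shown below; the dimensions and field ordering (7 joint angles plus a 16-D task state holding object position, orientation, velocities, and goal) are hypothetical and stand in for whatever the environment's observation actually contains.

```python
# Sketch of splitting a flat observation into robot-specific and task-specific states.
# The dimensions and field layout below are hypothetical; the actual layout follows
# the environment's observation definition.
import numpy as np


def split_state(obs: np.ndarray, n_joints: int = 7) -> tuple[np.ndarray, np.ndarray]:
    """Return (robot_state, task_state) from a flat observation vector.

    robot_state: joint angles only.
    task_state: e.g. object position (3), orientation quaternion (4),
                linear/angular velocity (6), and goal position (3).
    """
    robot_state = obs[:n_joints]
    task_state = obs[n_joints:]
    return robot_state, task_state


obs = np.random.randn(7 + 16)   # 7 joints + 16-D task information (hypothetical)
s_robot, s_task = split_state(obs)
```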
Similarly, thepolicy function φErk(sE)is also decomposed into a task-specific module gkto encode the input taskstates, and a robot-specific module frto implicitly capture robot kinematics and dynamics from inputrobot states. We follow the same design to decompose the Q-function into task-specific module qk3and robot-specific module hr. Note that this decomposition is similar to previous work [ 27] withgeneralization on both actor and critic network. Formally, our policy function and Q-function can bere-written as follows:φErk(sE) =φErk(sE,T, sE,R) =fr(gk(sE,T), sE,R), (1)QErk(sE, aE) =QErk(sE,T, sE,R, aE) =hr(qk(sE,T), sE,R, aE). (2)For two policy networks fr1(gk1(sE,T), sE,R)andfr2(gk2(sE,T), sE,R), we define Policy Stitch-ing as constructing another policy network fr2(gk1(sE,T), sE,R)by initializing the task modulewith parameters from gk1and initializing the robot module with parameters from fr2. The Q-functionis stitched in similar way.We name the modules after their main functionalities. Such modular design does not completelyseparate the information processing from each module. Since we train the entire policy end to end, thegradients flow in between the two modules during backpropagation. However, by explicitly enforcingtheir main functionalities through our modular design, we can still obtain effective transfer learningby stitching modules to leverage previously acquired knowledge. After the stitching, we can eitheruse the new policy for zero-shot execution or perform few-shot learning for fast adaptation.3.2 Transferable Representations through Latent Space AlignmentOur modular policy design allows for direct stitching, but a simple stitching approach may not yieldoptimal performance. Previous research [ 27] attributes the issue as overfitting to a particular robotand task, and has proposed to use dropout and very small bottleneck dimensions. However, thecapacity of the policy is largely limited to simple 2D tasks and low-dimensional task states. Instead,we identify the fundamental issue as the lack of alignment enforcement of the latent embeddingspace at the output/input interface of two modules. Even under the same robot-task combination, thelatent embedding vectors between the robot and task modules from different training seeds exhibit analmost isometric transformation relationship (Fig.1(B)). While similar observation has been made insupervised learning [ 15,53], we have identified and addressed this issue in the context of RL anddemonstrate the effectiveness in knowledge transfer across novel robot-task combinations.We propose to generalize Relative Representations [ 15] in supervised learning to model-free RL. Un-like other techniques in supervised learning [ 54,14,55], Relative Representations does not introduceadditional learnable parameters. The key idea is to project all latent representations at the interfacebetween two modules to a shared coordinate system among multiple policy networks. Through thisinvariance enforcement of transformation, we can establish a consistent latent representation of thetask-state that aligns with the desired latent input of the robot module.However, the original Relative Representations relies on the ground-truth labels in the superviseddataset to provide the anchor points for building the latent coordinate system. Our setup in RL doesnot come with such labels. The anchor points are analogous to the concept of basis in a linear system,hence should be as dissimilar from each other as possible. 
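To make Eq. (1) and the stitching operation concrete, the sketch below composes a task module and a robot module in PyTorch and swaps modules between two separately trained policies. It is a simplification: the widths are arbitrary and the actor is deterministic here, whereas the paper trains stochastic SAC policies and stitches the Q-function analogously.

```python
# Minimal sketch of the modular actor of Eq. (1) and of policy stitching (illustrative).
import torch
import torch.nn as nn


class TaskModule(nn.Module):
    """g_k: embeds the task-specific state into a latent vector."""
    def __init__(self, task_dim: int, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(task_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, s_task):
        return self.net(s_task)


class RobotModule(nn.Module):
    """f_r: maps the task latent plus the robot state to an action."""
    def __init__(self, latent_dim: int, robot_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + robot_dim, 256), nn.ReLU(),
                                 nn.Linear(256, action_dim), nn.Tanh())

    def forward(self, z_task, s_robot):
        return self.net(torch.cat([z_task, s_robot], dim=-1))


class ModularPolicy(nn.Module):
    """phi(s) = f_r(g_k(s_task), s_robot)."""
    def __init__(self, task_module: TaskModule, robot_module: RobotModule):
        super().__init__()
        self.task_module = task_module
        self.robot_module = robot_module

    def forward(self, s_task, s_robot):
        return self.robot_module(self.task_module(s_task), s_robot)


def stitch(policy_a: ModularPolicy, policy_b: ModularPolicy) -> ModularPolicy:
    """Reuse the task module of policy A with the robot module of policy B."""
    return ModularPolicy(policy_a.task_module, policy_b.robot_module)


# Two policies trained separately (weights random here for the demo), then stitched.
p1 = ModularPolicy(TaskModule(task_dim=16), RobotModule(128, robot_dim=7, action_dim=4))
p2 = ModularPolicy(TaskModule(task_dim=16), RobotModule(128, robot_dim=5, action_dim=4))
stitched = stitch(p1, p2)
action = stitched(torch.randn(1, 16), torch.randn(1, 5))
```

The benefit of this design is that either module can be swapped independently; what remains is choosing the anchor states that define a shared latent coordinate system at the module interface, without supervision.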
To overcome this challenge, we first collecta task state set Sby rolling out two trained naive policies in their own training environments. Wethen perform unsupervised clustering with k-means [ 56] on these task states to select an anchor set A.Specifically, we select kanchor states that are closest to the centroids of the clusters. The number ofclusters kdepends on the dimension of the latent representation.We want to represent every embedded task state es(i)=gks(i)with respect to the embeddedanchor states ea(j)=gka(i), where a(j)∈Aandgkis the task-specific module that embedsthe task states. To this end, we capture the relationship between an embedded task state es(i)andan embedded anchor state ea(j)using a similarity function sim:Rd×Rd→R, which calculatesa similarity score r= sim ( es(i),ea(j)). Given the anchor set a(1), . . . , a(|A|), the transferablerepresentation of an input task state s(i)∈Sis calculated by:rs=simes(i),ea(1)),simes(i),ea(2)), . . . , simes(i),ea(|A|)), (3)which is a vector of length |A|, and each element is the similarity score between the embedded taskstate and an embedded anchor state. We intentionally choose the cosine similarity as our similarity4Fig. 3: Simulation Experimental Setup : From left to right, we show different task and robot configurations:standard table, slippery table, rough terrain with small rocks, cube object, stick object, 7-DoF Franka EmikaPanda arm, third joint locked, and fifth joint locked.measure because it is invariant to reflection, rotation ,and rescaling. Although it is not invariantto vector translation, we add a normalization layer before calculating cosine similarity to mitigatethis issue. In this way, our proposed Relative Representation for RL projects latent representationsfrom different policies into a common coordinate system. Therefore, it overcomes the isometrictransformation issue of the original latent representations and makes them better aligned and invariant.3.3 Implementation DetailsWe challenge our methods in sparse reward setup and use hindsight experience replay [ 57] toencourage exploration. Both the task and robot modules are represented as MLPs [ 58] with fourlayers and 256 hidden dimensions. The last layer of the task module and the first layer of the robotmodule have a dimension of 128, thus the dimension of the latent representation and the selectedanchor set are also 128(k= 128 ). Detailed network structures of our method and other baselines canbe found in Appendix A. Each of our policy is trained on 7threads of the AMD EPYC 7513 CPUand1NVIDIA GeForce RTX 3090 GPU. Each training epoch consists of 35,000steps.4 ExperimentsWe aim to evaluate the performance of Policy Stitching (PS) in both zero-shot and few-shot transfersetups. Furthermore, we generalize our experiments to a physical robot setup to demonstrate thepractical applicability and real-world performance. Finally, we provide quantitative and qualitativeanalysis to understand the role of the learned transferable representations in effective policy stitching.4.1 Simulation Experiment Setup and BaselinesWe modify the panda-gym environment [ 59] to simulate 5manipulation task environments and 3distinct robots with varying kinematics (Fig.3). The tasks consist of 3push scenarios, where therobot must push a cube to the goal position on surfaces with very different friction properties, and 2pick tasks that involve pick and place task of objects with different shapes. 
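The anchor selection and the relative representation of Eq. (3) can be sketched in a few lines; the use of scikit-learn's k-means and the explicit normalization below are tooling assumptions rather than the authors' exact pipeline, and the task-module embedding is replaced by a random projection for the demo.

```python
# Sketch of anchor selection via k-means and the relative representation of Eq. (3).
import numpy as np
from sklearn.cluster import KMeans


def select_anchors(task_states: np.ndarray, k: int = 128) -> np.ndarray:
    """Pick the k task states closest to the k-means centroids."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(task_states)
    anchors = []
    for c in km.cluster_centers_:
        anchors.append(task_states[np.argmin(np.linalg.norm(task_states - c, axis=1))])
    return np.stack(anchors)  # (k, state_dim)


def relative_representation(e_state: np.ndarray, e_anchors: np.ndarray) -> np.ndarray:
    """Cosine similarities between an embedded state and each embedded anchor."""
    def normalize(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)
    return normalize(e_anchors) @ normalize(e_state)  # (k,)


# Usage with a stand-in embedding g_k (here a fixed random projection).
rng = np.random.default_rng(0)
states = rng.normal(size=(5000, 16))        # rolled-out task states
anchors = select_anchors(states, k=8)       # small k just for the demo
W = rng.normal(size=(16, 128))              # stand-in for the task module g_k
r = relative_representation(states[0] @ W, anchors @ W)   # shape (8,)
```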
In zero-shot evaluation,the stitched policy is directly tested without fine-tuning and each experiment consists of 200testtrajectories, which are repeated 5times to report the mean and standard deviation. In few-shotevaluation, the stitched policy first interacts with the environment for additional 10epochs to storedata in the replay buffer and is then fine-tuned for a few epochs with SAC. We train with 3randomseeds to report the mean and standard deviation.Robot-Task Combination. We evaluate PS on 8 robot-task combinations to cover policy stitchingin both similar and dissimilar setup. We list all combinations as the titles of sub-figures in Fig.4.For instance, “t: Pi1-R1” represents a task module trained in the Pick1 task with Robot1, while “r:Pu1-R2” represents a robot module trained in the Push1 task with Robot2. These modules are thenstitched together to form a new combination that has not been jointly trained together.Baselines. We compare our method to the approach of Devin et al. [27], which uses a similarmodular policy but with small bottleneck and dropout as regularization to alleviate the modulesmisalignment issue. In the few-shot test, we also include an additional comparison to increase thebottleneck dimension of the Devin et al. [27]’s method to the same dimension as our method, sincethe original bottleneck dimension is too small to capture necessary information for complex 3Dtasks. Furthermore, we provide an ablation study with our modular policy design but no latent5Touching Rate (%) Success Rate (%)PS PS(Ablation) Devin et al. Plain PS PS(Ablation) Devin et al. PlainE1 73.0±1.475 .8±4.444.8±3.1 0 .0±0.026 .9±3.613.9±3.5 11 .7±2.6 6 .5±2.5E299 .9±0.293.5±2.0 78 .3±5.4 17 .9±2.724 .4±1.5 6.8±0.7 10 .0±0.9 9 .3±2.1E3 90.8±2.395 .6±0.882.6±1.6 0 .0±0.016 .5±1.9 7.2±1.5 9 .8±2.5 8 .4±3.7E495 .4±1.988.7±1.1 49 .6±1.1 1 .6±1.0 13 .2±1.114 .9±1.011.8±2.4 9 .0±1.4E514 .7±2.7 9.6±1.6 8 .6±0.3 2 .8±0.9 2 .1±0.4 4.0±1.9 2.9±1.0 3 .8±1.6E6 55.6±2.480 .8±3.430.5±4.2 0 .1±0.2 8 .8±1.911 .6±2.010.5±2.1 9 .0±0.7E741 .8±1.713.1±2.9 9 .8±1.3 0 .0±0.04.7±1.6 3.6±0.8 3 .0±1.6 3 .2±1.4E8 18.4±1.318 .8±2.613.6±1.1 0 .0±0.05.3±1.7 3.0±0.7 3 .1±0.5 3 .0±0.7Tab. 1: Success rates and touching rates of zero-shot transferrepresentation alignment to study its importance. We also build a Plain baseline which is an MLPthat takes in all the states at the first layer and has no modular structure. When transferring to noveltask-robot combination, we split the Plain MLP network into two parts (i.e.,top-half and bottom-halfas in Appendix Fig. 8(c)) and perform the same reassembling operation as the modular networks.Specifically, when performing the stitching operation of the Plain baseline, the top half of a Plainnetwork is stitched to the bottom half of another Plain network. This operation that preserves asub-network from the old task and adds a new sub-network for the new task has been widely used inother works [60, 61, 62, 63, 64]. See Appendix A for architecture details.Metrics. We use success rate (task completion within a fixed number of steps) and touching rate(contact with the object during the task) as our evaluation metrics. In challenging transfer scenarios(e.g., a robot trained for a pushing task is required to perform a picking task), all methods may exhibitlow success rates. Therefore, we use touching rate as a more informative metric to indicate whetherthe robot is engaging in meaningful interactions rather than arbitrary movements. 
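Both metrics reduce to simple ratios over per-episode logs. The sketch below reflects our reading of the definitions above and assumes each rollout records a success flag and a contact flag; it is not the evaluation code used in the paper.

```python
# Sketch of the two evaluation metrics: success rate and touching rate.
from dataclasses import dataclass


@dataclass
class EpisodeLog:
    reached_goal: bool     # task completed within the fixed step budget
    touched_object: bool   # the robot contacted the object at least once


def success_rate(logs: list[EpisodeLog]) -> float:
    return 100.0 * sum(ep.reached_goal for ep in logs) / len(logs)


def touching_rate(logs: list[EpisodeLog]) -> float:
    return 100.0 * sum(ep.touched_object for ep in logs) / len(logs)


logs = [EpisodeLog(True, True), EpisodeLog(False, True), EpisodeLog(False, False)]
print(success_rate(logs), touching_rate(logs))   # ~33.3, ~66.7
```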
Effective touchingbehavior can result in improved exploration for few-shot transfer learning, since touching serves as apreliminary behavior for picking or pushing.4.1.1 Results: Zero-Shot Transfer in SimulationE1-E3 are easier robot-task combinations than E4-E8 since the unseen setup is closer to their originalcombinations. As shown in Tab.1, for E1-E3, PS achieves significantly better or comparable resultsthan Devin et al. [27] and the PS(Ablation) method, suggesting the latent representation alignmentserves as the fundamental step to enable effective module stitching. Our method also outperformsPlain MLP policy, which indicates both modular policy design and latent representation alignmentare beneficial.For more challenging cases (E4-E8) where both task and robot configurations are drastically different,though all methods do not exhibit strong success rates, PS demonstrates notably higher touchingrates. This implies more meaningful interactions that can potentially lead to a higher success rateFig. 4: Success rates of the few-shot transfer learning in simulation.6and faster adaption in our few-shot experiments. Qualitatively, Plain baseline shows limited attemptsto complete the task with frequent aimless arm swinging. Though PS(Ablation) and Devin et al.[27] methods show higher touching rates than the Plain baseline, they lack consistent attempts tomove the target object to the goal after touching. In contrast, our PS method makes the robot attemptto push the object to the goal in most experiments. We show such qualitative comparisons in oursupplementary video.4.1.2 Results: Few-shot Transfer in SimulationFig. 5: Real World Setup .In Fig.4, PS achieves the highest or comparable success rates. No-tably, in E2 and E4, while other methods tend to get stuck at localminima and reach a plateau at sub-optimal success rates, PS con-verges to significantly higher rates. In some experiments such as E1,E4, E5 and E7, PS transfers much faster and achieve higher successrates during the early phase of fine-tuning. This high transfer effi-ciency supports our hypothesis in the zero-shot experiment that the high touching rates achieved byPS can facilitate transfer learning through meaningful interactions. Even in challenging scenarioswhere all methods struggle to achieve a high success rate, PS can quickly adapt based on limitedinteractions. Furthermore, while some methods may achieve comparable results in a few instances,PS is the only method that demonstrates consistent satisfactory performance across all scenarios,suggesting that PS serves as a stable transfer learning framework.4.2 Real World ExperimentFirst, we aim to evaluate the zero-shot transfer performance of PS in the real world by directlytransferring the simulated policies to novel robot-task combinations. Moreover, we are interested inaccessing the feasibility of continuously improving such policies given limited real-world interactions.Fig. 6: Few-Shot Results . Solidcurves show the fine-tuning insimulation. Dotted curves showthe further fine-tuning in the realworld.Setup. In our real-world experiment (Fig.5), we include a Robot1as a 6-DoF UR5 arm, a Robot2 as the same UR5 arm but with thefifth joint being locked, a Task1 as pushing a cube to a goal position,and a Task2 as pushing a cylinder. The friction coefficient betweenthe object and the table is 0.16 for Task 1 and 0.72 for Task 2. Wefirst train Robot1-Task2 and Robot2-Task1 pairs in simulation. Wethen create Task1-Robot1 through policy stitching. 
For the few-shotexperiments, we first fine-tune the policy for 50 epochs in simulationand test it in the real world. Following this, we then allow thepolicy to interact with the physical world for an additional 6 epochs(5 hours). We use an Intel RealSense D435i to detect the ArUcomarkers [ 65] attached to the object and the goal position on the table.Both objects are 3D printed. The cylinder has a sandpaper at itsbottom to increase the friction coefficient. All evaluation results are based on 20 testing instances.Zero-Shot Results. PS achieves a success rate of 40% and a touching rate of 100% in zero-shottransfer. In comparison, both the PS(Ablation) and the Plain baseline have 0% success rate. ThePS(Ablation) achieves a 75% touching rate, while the Plain baseline cannot even touch the object.The superior performances of PS indicate the great potential of policy stitching to generalize to novelrobot-task combinations in real-world settings.Few-Shot Results. As shown by the solid curves in Fig.6, after the first stage of fine-tuning insimulation, PS achieves the highest success rate of about 90% in simulation, surpassing both thePS(Ablation) and Plain methods ( ∼70%). When this policy is directly transferred to the real UR5arm, PS still achieves the highest success rate at 75%, followed by the PS(Ablation) at 50% and thePlain method at 45%.These results indicate that the modular design with relative representation achieves a promisingsuccess rate on a new task in both simulation and the real world after a few epochs of fine-tuning. Due7Fig. 7: Latent Space Anal-ysis. We train robots with 3different kinematics for thereaching task and visualizetheir latent space colorizedby 4 groups of task states,similar to Fig.1. (A) Visu-alization of the 2D princi-pal components of the la-tent representations. (B)The average pairwise cosinedistances and L2 distancesbetween each pair of thestitched latent states.to the sim2real gap, the success rates of all methods drop to some extent. However, after 6 epochs offine-tuning on the real robot platform shown as the dotted curves, PS results in a final success rateof 90%. Although the success rates of both baselines (i.e., 60% and 55%) are improved after thefine-tuning, there is still a significant gap with PS. Through both zero-shot and few-shot experiments,we ensure that our method performs effectively, efficiently, and reliably in physical scenarios.4.3 Analysis: Latent Space at Module InterfaceTo understand how our latent space alignment benefits effective policy stitching by learning transfer-able representations, we provide further quantitative and qualitative analysis. Since the dimension ofthe latent embedding at the task-robot module interface is 128, we cannot directly visualize the latentrepresentations. We apply the Principal Component Analysis (PCA) [ 66] to reduce the dimension ofthe representation to 2. We train robots with 3 different kinematics on a reaching task with and withoutour latent space alignment. We then colorize the principal components based on their task states andgroup them into four directions as in our motivation example (Fig.1(C)). As shown in Fig.7(A), thelatent spaces of all policies trained with PS remain consistent across various training environments,while others without latent space alignment still show approximate isometric transformations. 
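The 2-D projection used for this analysis can be reproduced with standard PCA. The sketch below is illustrative and assumes the interface activations have already been collected for each policy; random data stands in for the real latents.

```python
# Sketch of the 2-D PCA projection of module-interface latents used in the analysis.
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
latents = {f"seed_{s}": rng.normal(size=(500, 128)) for s in (101, 102)}  # stand-in data
labels = rng.integers(0, 4, size=500)   # 4 groups of target states used for coloring

fig, axes = plt.subplots(1, len(latents), figsize=(8, 4))
for ax, (name, z) in zip(axes, latents.items()):
    z2 = PCA(n_components=2).fit_transform(z)   # fit PCA per policy, as in the figure
    ax.scatter(z2[:, 0], z2[:, 1], c=labels, s=4, cmap="tab10")
    ax.set_title(name)
plt.show()
```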
Theconsistent latent space explains the effectiveness of our policy stitching method.To account for potential information loss during PCA visualization, we also provide quantitativeanalysis. In Fig.7(B), we calculate the pairwise cosine distances and L2 distances (Appendix C.2)between the raw high-dimensional latent states of different stitched policies. The pairwise distancesare significantly smaller when the latent representations are aligned. PS achieves an average cosinedistance of 0.0055 and L2 distance of 0.240, while ablation has a much higher cosine distance of 0.865and L2 distance of 1.275. This analysis indicates that the latent representations of normal modularnetworks differ greatly from each other, and the latent representation alignment can successfullyreduce such differences and facilitate the learning of transferable representations for policy stitching.5 Conclusion, Limitation and Future WorkIn this work, we propose Policy Stitching, a novel framework to transfer robot learning policy to novelrobot-task combinations. Through both simulated and real-world 3D robot manipulation experiments,we demonstrate the significant benefits of our modular policy design and latent space alignment in PS.PS paves a promising direction for effective and efficient robot policy transfer under the end-to-endmodel-free RL framework.One limitation is that our current method of selecting anchor states based on clusters of test states maynot generalize to scenarios with high-dimensional state representations, such as images. An excitingfuture direction is to study self-supervised methods to disentangle latent features for anchor selections.This can further help generalize PS to more complex tasks and reward settings. Another limitationis that our adapted relative representation method still cannot enforce strict invariance of the latentfeatures, hence requires some level of fine-tuning. It will be interesting to explore alternative methodsfor aligning modules of networks without the need for anchor states. We also plan to explore variousrobot platforms with different morphology to generalize PS to more diverse platforms.8AcknowledgmentsThis work is supported in part by ARL under awards W911NF2320182 and W911NF2220113, byAFOSR under award #FA9550-19-1-0169, and by NSF under award CNS-1932011.References[1]S. J. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on knowledge anddata engineering , 22(10):1345–1359, 2010.[2]J. Bongard, V . Zykov, and H. Lipson. Resilient machines through continuous self-modeling.Science , 314(5802):1118–1121, 2006.[3]A. S. Polydoros and L. Nalpantidis. Survey of model-based reinforcement learning: Applicationson robotics. Journal of Intelligent & Robotic Systems , 86(2):153–173, 2017.[4]F. Ebert, C. Finn, S. Dasari, A. Xie, A. Lee, and S. Levine. Visual foresight: Model-baseddeep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568 ,2018.[5]R. Kwiatkowski and H. Lipson. Task-agnostic self-modeling machines. Science Robotics , 4(26):eaau9354, 2019.[6]B. Chen, Y . Hu, L. Li, S. Cummings, and H. Lipson. Smile like you mean it: Driving animatronicrobotic face with learned models. In 2021 IEEE International Conference on Robotics andAutomation (ICRA) , pages 2739–2746. IEEE, 2021.[7]K. Hang, W. G. Bircher, A. S. Morgan, and A. M. Dollar. Manipulation for self-identification,and self-identification for better manipulation. Science robotics , 6(54):eabe1321, 2021.[8]B. Chen, R. Kwiatkowski, C. V ondrick, and H. Lipson. 
Fully body visual self-modeling ofrobot morphologies. Science Robotics , 7(68):eabn1944, 2022.[9]R. Kwiatkowski, Y . Hu, B. Chen, and H. Lipson. On the origins of self-modeling. arXiv preprintarXiv:2209.02010 , 2022.[10] Y . Hu, B. Chen, and H. Lipson. Egocentric visual self-modeling for legged robot locomotion.arXiv preprint arXiv:2207.03386 , 2022.[11] J. Hua, L. Zeng, G. Li, and Z. Ju. Learning for a robot: Deep reinforcement learning, imitationlearning, transfer learning. Sensors , 21(4):1278, 2021.[12] R. Yang, H. Xu, Y . Wu, and X. Wang. Multi-task reinforcement learning with soft modulariza-tion. Advances in Neural Information Processing Systems , 33:4767–4777, 2020.[13] J. A. Mendez, H. van Seijen, and E. Eaton. Modular lifelong reinforcement learning via neuralcomposition. arXiv preprint arXiv:2207.00429 , 2022.[14] M. Gygli, J. Uijlings, and V . Ferrari. Towards reusable network components by learningcompatible representations. In Proceedings of the AAAI Conference on Artificial Intelligence ,volume 35, pages 7620–7629, 2021.[15] L. Moschella, V . Maiorca, M. Fumero, A. Norelli, F. Locatello, and E. Rodol `a. Relativerepresentations enable zero-shot latent space communication. arXiv preprint arXiv:2209.15430 ,2022.[16] M. E. Taylor and P. Stone. Transfer learning for reinforcement learning domains: A survey.Journal of Machine Learning Research , 10(7), 2009.[17] Z. Zhu, K. Lin, and J. Zhou. Transfer learning in deep reinforcement learning: A survey. arXivpreprint arXiv:2009.07888 , 2020.9[18] S. Mohanty, J. Poonganam, A. Gaidon, A. Kolobov, B. Wulfe, D. Chakraborty, G. ˇSemetulskis,J. Schapke, J. Kubilius, J. Pa ˇsukonis, et al. Measuring sample efficiency and generalizationin reinforcement learning benchmarks: Neurips 2020 procgen benchmark. arXiv preprintarXiv:2103.15332 , 2021.[19] P. Jian, C. Yang, D. Guo, H. Liu, and F. Sun. Adversarial skill learning for robust manipulation.In2021 IEEE International Conference on Robotics and Automation (ICRA) , pages 2555–2561.IEEE, 2021.[20] A. Tirinzoni, R. Rodriguez Sanchez, and M. Restelli. Transfer of value functions via variationalmethods. Advances in Neural Information Processing Systems , 31, 2018.[21] Y . Zhang and M. M. Zavlanos. Transfer reinforcement learning under unobserved contextualinformation. In 2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems(ICCPS) , pages 75–86. IEEE, 2020.[22] C. Liu, Y . Zhang, Y . Shen, and M. M. Zavlanos. Learning without knowing: Unobservedcontext in continuous transfer reinforcement learning. In Learning for Dynamics and Control ,pages 791–802. PMLR, 2021.[23] G. Konidaris and A. Barto. Autonomous shaping: Knowledge transfer in reinforcement learning.InProceedings of the 23rd international conference on Machine learning , pages 489–496, 2006.[24] A. Lazaric, M. Restelli, and A. Bonarini. Transfer of samples in batch reinforcement learning.InProceedings of the 25th international conference on Machine learning , pages 544–551, 2008.[25] F. Fern ́andez and M. Veloso. Probabilistic policy reuse in a reinforcement learning agent. InProceedings of the fifth international joint conference on Autonomous agents and multiagentsystems , pages 720–727, 2006.[26] G. D. Konidaris and A. G. Barto. Building portable options: Skill transfer in reinforcementlearning. In Ijcai, volume 7, pages 895–900, 2007.[27] C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine. Learning modular neural networkpolicies for multi-task and multi-robot transfer. 
In 2017 IEEE international conference onrobotics and automation (ICRA) , pages 2169–2176. IEEE, 2017.[28] F. Doshi-Velez and G. Konidaris. Hidden parameter markov decision processes: A semipara-metric regression approach for discovering latent task parametrizations. In IJCAI: proceedingsof the conference , volume 2016, page 1432. NIH Public Access, 2016.[29] T. W. Killian, S. Daulton, G. Konidaris, and F. Doshi-Velez. Robust and efficient transferlearning with hidden parameter markov decision processes. Advances in neural informationprocessing systems , 30, 2017.[30] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deepnetworks. In International conference on machine learning , pages 1126–1135. PMLR, 2017.[31] A. Barreto, W. Dabney, R. Munos, J. J. Hunt, T. Schaul, H. P. van Hasselt, and D. Silver.Successor features for transfer in reinforcement learning. Advances in neural informationprocessing systems , 30, 2017.[32] J. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how tolearn: the meta-meta-... hook . PhD thesis, Technische Universit ̈at M ̈unchen, 1987.[33] J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrentnetworks. Neural Computation , 4(1):131–139, 1992.[34] J. Schmidhuber. A neural network that embeds its own meta-levels. In IEEE InternationalConference on Neural Networks , pages 407–412. IEEE, 1993.10[35] S. Thrun and L. Pratt. Learning to learn . Springer Science & Business Media, 2012.[36] B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learnand think like people. Behavioral and brain sciences , 40:e253, 2017.[37] J. Vanschoren. Meta-learning: A survey. arXiv preprint arXiv:1810.03548 , 2018.[38] C. Finn, K. Xu, and S. Levine. Probabilistic model-agnostic meta-learning. Advances in neuralinformation processing systems , 31, 2018.[39] C. Finn, A. Rajeswaran, S. Kakade, and S. Levine. Online meta-learning. In InternationalConference on Machine Learning , pages 1920–1930. PMLR, 2019.[40] A. Nagabandi, I. Clavera, S. Liu, R. S. Fearing, P. Abbeel, S. Levine, and C. Finn. Learning toadapt in dynamic, real-world environments through meta-reinforcement learning. arXiv preprintarXiv:1803.11347 , 2018.[41] A. Graves, G. Wayne, and I. Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401 ,2014.[42] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning withmemory-augmented neural networks. In International conference on machine learning , pages1842–1850. PMLR, 2016.[43] T. Munkhdalai and H. Yu. Meta networks. In International conference on machine learning ,pages 2554–2563. PMLR, 2017.[44] Y . Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel. Rl2: Fast reinforce-ment learning via slow reinforcement learning. arXiv preprint arXiv:1611.02779 , 2016.[45] D. Zhao, J. von Oswald, S. Kobayashi, J. Sacramento, and B. F. Grewe. Meta-learning viahypernetworks. 4th Workshop on Meta-Learning at NeurIPS 2020, Vancouver, Canada , 2020.[46] D. Ha, A. Dai, and Q. V . Le. Hypernetworks. arXiv preprint arXiv:1609.09106 , 2016.[47] J. V on Oswald, C. Henning, J. Sacramento, and B. F. Grewe. Continual learning with hypernet-works. arXiv preprint arXiv:1906.00695 , 2019.[48] J. Beck, M. T. Jackson, R. Vuorio, and S. Whiteson. Hypernetworks in meta-reinforcementlearning. arXiv preprint arXiv:2210.11348 , 2022.[49] Z. Xian, S. Lal, H.-Y . Tung, E. A. Platanios, and K. Fragkiadaki. 
Hyperdynamics: Meta-learningobject and agent dynamics with hypernetworks. arXiv preprint arXiv:2103.09439 , 2021.[50] A. Goyal, A. Lamb, J. Hoffmann, S. Sodhani, S. Levine, Y . Bengio, and B. Sch ̈olkopf. Recurrentindependent mechanisms. arXiv preprint arXiv:1909.10893 , 2019.[51] S. Mittal, A. Lamb, A. Goyal, V . V oleti, M. Shanahan, G. Lajoie, M. Mozer, and Y . Bengio.Learning to combine top-down and bottom-up signals in recurrent neural networks with attentionover modules. In International Conference on Machine Learning , pages 6972–6986. PMLR,2020.[52] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropydeep reinforcement learning with a stochastic actor. In International conference on machinelearning , pages 1861–1870. PMLR, 2018.[53] C. Olah. Visualizing representations. http://colah.github.io/posts/2015-01-Visualizing-Representations/ , 2015. Accessed: Month Day, Year.11[54] K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivarianceand equivalence. In Proceedings of the IEEE conference on computer vision and patternrecognition , pages 991–999, 2015.[55] T. Hofmann, B. Sch ̈olkopf, and A. J. Smola. Kernel methods in machine learning. The annalsof statistics , 36(3):1171–1220, 2008.[56] J. A. Hartigan and M. A. Wong. Algorithm as 136: A k-means clustering algorithm. Journal ofthe royal statistical society. series c (applied statistics) , 28(1):100–108, 1979.[57] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin,O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. Advances in neural informationprocessing systems , 30, 2017.[58] S. Haykin. Neural networks: a comprehensive foundation . Prentice Hall PTR, 1998.[59] Q. Gallou ́edec, N. Cazin, E. Dellandr ́ea, and L. Chen. panda-gym: Open-Source Goal-Conditioned Environments for Robotic Learning. 4th Robot Learning Workshop: Self-Supervised and Lifelong Learning at NeurIPS , 2021.[60] J.-T. Huang, J. Li, D. Yu, L. Deng, and Y . Gong. Cross-language knowledge transfer usingmultilingual deep neural network with shared hidden layers. In 2013 IEEE internationalconference on acoustics, speech and signal processing , pages 7304–7308. IEEE, 2013.[61] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level imagerepresentations using convolutional neural networks. In Proceedings of the IEEE conference oncomputer vision and pattern recognition , pages 1717–1724, 2014.[62] M. Long, H. Zhu, J. Wang, and M. I. Jordan. Unsupervised domain adaptation with residualtransfer networks. Advances in neural information processing systems , 29, 2016.[63] F. Yang, C. Yang, H. Liu, and F. Sun. Evaluations of the gap between supervised and reinforce-ment lifelong learning on robotic manipulation tasks. In Conference on Robot Learning , pages547–556. PMLR, 2022.[64] J. Yoon, S. Kim, E. Yang, and S. J. Hwang. Scalable and order-robust continual learning withadditive parameter decomposition. arXiv preprint arXiv:1902.09432 , 2019.[65] S. Garrido-Jurado, R. Mu ̃noz-Salinas, F. J. Madrid-Cuevas, and M. Mar ́ın-Jim ́enez. Aruco:a minimal library for augmented reality applications based on opencv. Journal of Real-TimeImage Processing , 9(2):399–406, 2014. doi:10.1007/s11554-013-0320-4. URL https://doi.org/10.1007/s11554-013-0320-4 .[66] K. Pearson. On lines and planes of closest fit to systems of points in space. PhilosophicalMagazine , 2(11):559–572, 1901. 
doi:10.1080/14786440109462720.12AppendixA Network Structure(a)PS (b)PS(Ablation) (c)Plain Baseline(d)Devin et al. [27] (e)Devin et al. [27] (Large)Fig. 8: Detailed network structure of PS, PS(Ablation), Devin et al., Devin et al.(Large) and Plain Baselinemethods.13B Policy Stitching AlgorithmAlgorithm 1 Zero-shot Transfer with Policy StitchingTrain Plain Policies in Source Environments:Train Plain policy 1 πr1k1with SAC in the environment Er1k1.Train Plain policy 2 πr2k2with SAC in the environment Er2k2.Collect Anchor States:Roll out πr1k1inEr1k1to gather a set of task states S1Roll out πr2k2inEr2k2to gather a set of task states S2task state set S← {S1,S2}K-means center set C←k−means (S)Select anchor set A={a(i)}which are task states closest to the K-means center set C.Train Modular Policies in Source Environments:Initialize modular policy 1 φEr1k1withgk1for its task module and fr1for its robot module.Initialize modular policy 2 φEr2k2withgk2for its task module and fr2for its robot module.foreach iteration doforeach environment step doEmbed task state est←gk1(st)Embed anchor state ea(i)←gk1(a(i))Calculate the relative representation rstaccording to equation (3)action t∼fr1(rst)st+1∼pr1k1(st+1|st, action t)D ← D ∪ { (st, action t, r(st, action t),st+1)}end forforeach gradient step doupdate gk1andfr1with SAC algorithmend forend forforeach iteration doforeach environment step doEmbed task state est←gk2(st)Embed anchor state ea(i)←gk2(a(i))Calculate the relative representation rstaccording to equation (3)action t∼fr2(rst)st+1∼pr2k2(st+1|st, action t)D ← D ∪ { (st, action t, r(st, action t),st+1)}end forforeach gradient step doupdate gk2andfr2with SAC algorithmend forend forPolicy Stitching:Initialize the task module with parameters from gk1.Initialize the robot module with parameters from fr2.Construct a stitched policy φEr2k1(sE) =fr2(gk1(sE,T), sE,R).Test the Stitched Policy in the Target Environment:Roll out the stitched policy φEr2k1(sE).Calculate the success rate and touching rate.14Algorithm 2 Few-shot Transfer Learning with Policy StitchingTrain Plain Policies in Source Environments:Train Plain policy 1 πr1k1with SAC in the environment Er1k1.Train Plain policy 2 πr2k2with SAC in the environment Er2k2.Collect Anchor States:Roll out two trained policies in their own training environments to gather a set of task states Sduring these interaction processes.K-means center set C←k−means (S)Select anchor set Awhich are task states closest to the K-means center set C.Train Modular Policies in Source Environments:Initialize modular policy 1 φEr1k1withgk1for its task module and fr1for its robot module.Initialize modular policy 2 φEr2k2withgk2for its task module and fr2for its robot module.foreach iteration doforeach environment step doEmbed task state est←gk1(st)Embed anchor state ea(i)←gk1(a(i))Calculate the relative representation rstaccording to equation (3)action t∼fr1(rst)st+1∼pr1k1(st+1|st, action t)D ← D ∪ { (st, action t, r(st, action t),st+1)}end forforeach gradient step doupdate gk1andfr1with SAC algorithmend forend forforeach iteration doforeach environment step doEmbed task state est←gk2(st)Embed anchor state ea(i)←gk2(a(i))Calculate the relative representation rstaccording to equation (3)action t∼fr2(rst)st+1∼pr2k2(st+1|st, action t)D ← D ∪ { (st, action t, r(st, action t),st+1)}end forforeach gradient step doupdate gk2andfr2with SAC algorithmend forend forPolicy Stitching and Q-function Stitching:Initialize the policy task module with parameters from gk1.Initialize the policy robot 
module with parameters from fr2.Construct a stitched policy φEr2k1(sE) =fr2(gk1(sE,T), sE,R).Initialize the Q-function task module with parameters from qk1.Initialize the Q-function robot module with parameters from hr2.Construct a stitched Q-function QEr2k1(sE, aE) =hr2(qk1(sE,T), sE,R, aE).Few-Shot Transfer Learning:Fine-tune the stitched policy φEr2k1(sE)and the stitched Q-function QEr2k1(sE, aE)with SACin the target environment Er2k1.15(a)Small modular network (b)Medium modular network (c)Large modular networkFig. 9: The detailed architectures of the modular networks with three different interface dimensions.C Analysis of the Module InterfaceWe carry out additional analysis of the latent representations across different sizes of modularnetworks. We build three different sizes of modular networks, each interface dimension being 3D,16D, and 128D as shown in Fig.9. The transferable representation is added to these networks as shownin Fig.8a. We construct six networks ( 3different sizes, with and without relative representation) toperform the reaching task as described in Fig.1.C.1 Visualization of the latent representations at the modules interfaceFor the small networks with 3D interfaces, we plot the 3D latent representations directly. For themedium and large networks with 16D and 128D interfaces, we use PCA [ 66] to reduce the dimensionto2D. Fig.10 shows the visualization of the interface of the six networks trained with differentrandom seeds. The isometric transformation relationship is shown for the PS(Ablation) methodacross all sizes of modular networks, and with the help of transferable presentation, PS achieves nearinvariance. Similarly, Fig.11 shows the interface of the networks trained with different robot types.The transferable representation achieves near invariance to isometric transformations across all typesof robots.PCA is an information lossy compression process. The PCA method only guarantees identicaloutput results when the input data sets are identical. When the input data sets are similar but notidentical, the output results may vary considerably. In our experiments using PCA for visualization,we have observed that the PCA results of most interfaces with transferable representations are similar.However, in rare cases, we have noticed significant differences in the PCA results. As shown in Fig.12,the original 3D latent states have very similar distributions across the four different runs, but afterthe dimension reduction to 2D with PCA, the visualization results show isometric transformations.Moreover, in the case of small modular networks with 3D interfaces, achieving a high success rateof approximately 100% often requires a considerable amount of training time. Occasionally, thenetwork may converge to a local minimum with a success rate of around 90%. When it converges toa local minimum, the latent representation at its interface typically differs from those that converge tothe global minimum. PCA only provides an intuitive idea of the behavior at module interface, thus,we accompany these visualizations with quantitative analysis.16Fig. 10: Latent Space Visualization Train each policy network four times with four different random seeds(101-104). Without transferable representation, the latent representations at the interfaces have an approximateisometric transformation relationship. With relative representation, these latent representations are isometricallysimilar.Fig. 
11: Latent Space Visualization Train each policy network for the reaching task with three different types ofrobots as shown in Figure 3. With relative representation, the latent representations at the module interfaces arenearly the same across different environments. Without it, they have an approximately isometric transformationrelationship.C.2 Quantitative analysis of the latent representations at the modules interfaceTo measure the similarity between two different latent representations, we use cosine and L2 pairwisedistances. We compute the pairwise distance between two latent task states derived from the sameinput state. By considering a dataset of input states, we calculate the mean of the pairwise distancesacross all input states, obtaining the average pairwise distance between two modular networks.Given an input task state set SE,T, the average pairwise cosine distance and L2 distance are definedas ̄dcos=|SE,T|Xi=11−SCg1ksiE,T, g2ksiE,T/|SE,T|, (4) ̄dL2=|SE,T |Xi=1dL2g1ksiE,T, g2ksiE,T/|SE,T|, (5)17Fig. 12: Limitation of visualization with PCA. Raw latent distributions are very similar to each other in the 3Dspace, but after compression to 2D with PCA, the visualization results are quite different.(a)Small modular network(b)Medium modular network(c)Large modular networkFig. 13: Cosine and L2 distances of different networks with and without transferable representation for differentrandom seeds.where SC(a,b) =ab∥a∥∥b∥is the cosine similarity and dL2(p, q) =∥p−q∥is the L2 distance.Fig.13 shows the average pairwise distances of modular networks trained with four different randomseeds. We calculate the distances for different sizes of networks shown in Figure 9 . We also calculatethe mean and standard deviations of the data in Fig.9 and present them in Tab. 2. The results showthat the transferable representation largely reduces the average pairwise distances of the latent spacesbetween different training runs.18We also train the modular networks in different environments and calculate the pairwise distances atthe interfaces. Specifically, we train the policy networks on the reaching task with different robotsshown in Figure 3. The average pairwise distances are shown in Figure 14 and we calculate themean values and standard deviations in Tab.3. These quantitative results show that the relativerepresentation makes the module interfaces much more similar to each other when trained in differentenvironments.cosine distance L2 distancePS 0.0363±0.0319 0 .289±0.141PS(Ablation) 1.106±0.434 1 .426±0.322(a)Small modular networkcosine distance L2 distancePS 0.00231 ±0.00073 0 .1633±0.0294PS(Ablation) 1.054±0.218 1 .442±0.151(b)Meidum modular networkcosine distance L2 distancePS 0.013±0.007 1 .051±0.368PS(Ablation) 0.753±0.081 8 .386±0.687(c)Large modular networkTab. 2: Mean and standard deviation values of the average pairwise distances between trainings with fourdifferent random seeds (101-104)19(a)Small modular network(b)Medium modular network(c)Large modular networkFig. 14: Cosine and L2 distances of different networks with and without transferable representation for differentrobot setup.cosine distance L2 distancePS 0.0082±0.0035 0 .149±0.032PS(Ablation) 0.633±0.153 1 .066±0.165(a)Small modular networkcosine distance L2 distancePS 0.0055±0.0029 0 .240±0.075PS(Ablation) 0.865±0.367 1 .275±0.311(b)Medium modular networkcosine distance L2 distancePS 0.0071±0.0026 0 .743±0.152PS(Ablation) 0.760±0.094 8 .822±0.448(c)Large modular networkTab. 
3: Mean and standard deviation values of the average pairwise distances between trainings with three different types of robot kinematics
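The interface-similarity metrics in Eqs. (4) and (5) can be computed with a few lines of NumPy. The sketch below is illustrative only (function and variable names are not taken from the paper): it averages the pairwise cosine and L2 distances between the latent task states that two modules produce for the same set of probe states.

```python
import numpy as np

def average_pairwise_distances(z1: np.ndarray, z2: np.ndarray):
    """Average cosine and L2 distances between two sets of latent task states.

    z1, z2: arrays of shape (N, D); row i is the latent state produced by
    module 1 / module 2 for the same input state s_i. Implements Eqs. (4)-(5):
    d_cos = mean_i (1 - cos(z1_i, z2_i)),  d_L2 = mean_i ||z1_i - z2_i||.
    """
    eps = 1e-12  # guards against zero-norm vectors
    cos_sim = np.sum(z1 * z2, axis=1) / (
        np.linalg.norm(z1, axis=1) * np.linalg.norm(z2, axis=1) + eps
    )
    d_cos = np.mean(1.0 - cos_sim)
    d_l2 = np.mean(np.linalg.norm(z1 - z2, axis=1))
    return d_cos, d_l2

# Illustrative example: a 3D interface probed with 1000 input states.
rng = np.random.default_rng(0)
zA = rng.normal(size=(1000, 3))
zB = zA + 0.05 * rng.normal(size=(1000, 3))  # nearly identical interfaces
print(average_pairwise_distances(zA, zB))
```

In this synthetic example the two latent sets are nearly identical, so both distances come out close to zero, mirroring the near-invariance reported for the transferable representation.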
-HFJuX1uqs | Act3D: 3D Feature Field Transformers forMulti-Task Robotic ManipulationTheophile Gervet∗,1Zhou Xian∗,2Nikolaos Gkanatsios2Katerina Fragkiadaki11Machine Learning Department2Robotics InstituteSchool of Computer ScienceCarnegie Mellon University{tgervet, xianz1, ngkanats, katef }@cs.cmu.eduact3d.github.ioAbstract: 3D perceptual representations are well suited for robot manipulationas they easily encode occlusions and simplify spatial reasoning. Many manipu-lation tasks require high spatial precision in end-effector pose prediction, whichtypically demands high-resolution 3D feature grids that are computationally ex-pensive to process. As a result, most manipulation policies operate directly in 2D,foregoing 3D inductive biases. In this paper, we introduce Act3D, a manipula-tion policy transformer that represents the robot’s workspace using a 3D featurefield with adaptive resolutions dependent on the task at hand. The model lifts2D pre-trained features to 3D using sensed depth, and attends to them to com-pute features for sampled 3D points. It samples 3D point grids in a coarse tofine manner, featurizes them using relative-position attention, and selects whereto focus the next round of point sampling. In this way, it efficiently computes 3Daction maps of high spatial resolution. Act3D sets a new state-of-the-art in RL-Bench, an established manipulation benchmark, where it achieves 10% absoluteimprovement over the previous SOTA 2D multi-view policy on 74 RLBench tasksand 22% absolute improvement with 3x less compute over the previous SOTA3D policy. We quantify the importance of relative spatial attention, large-scalevision-language pre-trained 2D backbones, and weight tying across coarse-to-fineattentions in ablative experiments. Code and videos are available at our projectsite: https://act3d.github.io/ .Keywords: Learning from Demonstrations, Manipulation, Transformers1 IntroductionSolutions to many robot manipulation tasks can be modeled as a sequence of 6-DoF end-effectorposes (3D position and orientation). Many recent methods train neural manipulation policies topredict 3D end-effector pose sequences directly from 2D images using supervision from demon-strations [1, 2, 3, 4, 5, 6]. These methods are typically sample inefficient: they often require manytrajectories to handle minor scene changes at test time and cannot easily generalize across cameraviewpoints and environments, as mentioned in the respective papers and shown in our experiments.For a robot policy to generalize under translations, rotations, or camera view changes, it needs tobe spatially equivariant [7], that is, to map 3D translations and rotations of the input visual sceneto similar 3D translations and rotations for the robot’s end-effector. Spatial equivariance requirespredicting 3D end-effector locations through 2D or 3D action maps, depending on the action spaceconsidered, instead of regressing action locations from holistic scene or image features. Transporter∗Equal contribution7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Act3D is a language-conditioned robot action transformer that learns 3D scene featurefields of arbitrary spatial resolution via recurrent coarse-to-fine 3D point sampling and featurizationusing relative-position attentions. Act3D featurizes multi-view RGB images with a pre-trained 2DCLIP backbone and lifts them in 3D using sensed depth. 
It predicts 3D location of the end-effectorusing classification of the 3D points of the robot’s workspace, which preserves spatial equivarianceof the scene to action mapping.networks [8] introduced a spatial equivariant architecture for 4-DoF robot manipulation: they re-project RGB-D input images to a top-down image and predict robot end-effector 2D translationsthrough a top-down 2D action map. They showed better generalization with fewer training demon-strations than prior works. However, they are limited to top-down 2D worlds and 4-DoF manipula-tion tasks. This begs the question: how can we extend spatial equivariance in action prediction togeneral 6-DoF manipulation?Developing spatially equivariant 6-DOF manipulation policies requires predicting 3D action mapsby classifying 3D points in the robot’s workspace as candidates for future 3D locations for therobot’s end-effector. Predicting high-resolution 3D action maps, necessary for fine-grained manip-ulation tasks, poses a computational challenge over their 2D counterparts due to the extra spatialdimension. V oxelizing the robot’s 3D workspace and featurizing the 3D voxels at high resolution iscomputationally demanding [9]. The next end-effector pose might be anywhere in free space, whichprevents the use of sparse 3D convolutions [10, 11] to selectively featurize only part of the 3D freespace. To address this, recent work of PerAct [1] featurizes 3D voxels using the latent set bottle-necked self-attention operation of Perceiver [12], whose complexity is linear to the number of voxelsas opposed to quadratic, as the all-to-all self attention operations. However, it gives up on spatialdisentanglement of features due to the latent set bottleneck. Other methods avoid featurizing pointsin 3D free space altogether and instead regress an offset for the robot’s 3D locations from a detected2D image contact point [2, 13, 14], which again does not fully comply with spatial equivariance.In this paper, we introduce Act3D, a language-conditioned transformer for multi-task 6 DoF robotmanipulation that predicts continuous resolution 3D action maps through adaptive 3D spatial com-putation. Act3D represents the scene as a continuous 3D feature field. It computes a scene-levelphysical 3D feature cloud by lifting features of 2D foundational models from one or more viewsusing sensed depth. It learns a 3D feature field of arbitrary spatial resolution via recurrent coarse-to-fine 3D point sampling and featurization. At each iteration, the model samples 3D points in thewhole workspace and featurizes them using relative spatial cross-attention [15] to the physical 3Dfeature cloud. Act3D predicts 3D end-effector locations by scoring 3D point features, and thenregresses the 3D orientation and opening of the end-effector. At inference time, we can trade-offcompute for higher spatial precision and task performance by sampling more 3D points in free spacethan the model ever saw at training time.We test Act3D in RLBench [16], an established benchmark for learning diverse robot manipulationpolicies from demonstrations. We set a new state-of-the-art in the benchmark in both single-taskand multi-task settings. 
Specifically, we achieve a 10% absolute improvement over prior SOTA onthe single-task setting introduced by HiveFormer [2] with 74 tasks and a 22% absolute improvementover prior SOTA in the multi-task setting introduced by PerAct [1] with 18 tasks and 249 variations.2We also validate our approach on a Franka Panda with a multi-task agent trained from scratch on 8real-world tasks with a total of just 100 demonstrations (see Figure 2). In thorough ablations, weshow the importance of the design choices of our architecture, specifically, relative spatial attention,large-scale vision-language pre-trained 2D backbones, high resolution featurization and weight tyingacross coarse-to-fine attentions.In summary, our contributions are: 1.A novel neural policy architecture for language-conditionedmulti-task 6-DoF manipulation that both reasons directly in 3D and preserves locality of computa-tion in 3D, using iterative coarse-to-fine translation-invariant attention. 2.Strong empirical resultson a range of simulated and real-world tasks, outperforming the previous SOTA 2D and 3D methodson RLBench by large absolute margins, and generalizing well to novel camera placements at testtime. 3.Thorough ablations that quantify the contribution of high-resolution features, tied attentionweights, pre-trained 2D features, and relative position attention design choices.2 Related WorkLearning robot manipulation from demonstrations Many recent work train multi-task manip-ulation policies that leverage Transformer architectures [1, 2, 3, 5, 17, 18] to predict robot actionsfrom video input and language instructions. End-to-end image-to-action policy models, such as RT-1 [5], GATO [18], BC-Z [19], and InstructRL [3], directly predict 6-DoF end-effector poses from2D video and language inputs. They require many thousands of demonstrations to learn spatial rea-soning and generalize to new scene arrangements and environments. Transporter networks [8] andtheir subsequent variants [20, 21, 22] formulate 4-DoF end-effector pose prediction as pixel classi-fication in 2D overhead images. Thanks to the spatial equivariance of their architecture, their modeldramatically increased sample efficiency over previous methods that regress end-effector poses byaggregating global scene features. However, they are limited to top-down 2D planar worlds withsimple pick-and-place primitives. 3D policy models of C2F-ARM [4] and PerAct [1] voxelize therobot’s workspace and are trained to detect the 3D voxel that contains the next end-effector key-pose. Spatially precise 3D pose prediction requires the 3D voxel grid to be high resolution, whichcomes at a high computational cost. C2F-ARM [4] uses a coarse-to-fine voxelization to handlecomputational complexity, while PerAct [1] uses Perceiver’s latent bottleneck [12] to avoid voxel-to-voxel self-attention operations. Act3D avoids 3D voxelization altogether and instead representsthe scene as a continuous resolution 3D feature field. It samples 3D points in the empty workspaceand featurizes them using cross-attentions to the physical 3D point features.Feature pre-training for robot manipulation Many 2D policy architectures bootstrap learningfrom demonstrations from frozen or finetuned 2D image backbones [23, 24, 19, 25] to increaseexperience data sample efficiency. Pretrained vision-language backbones can enable generalizationto new instructions, objects, and scenes [26, 21]. In contrast, SOTA 3D policy models are typicallytrained from scratch from colored point clouds input [1, 4, 27]. 
Act3D uses CLIP pre-trained 2Dbackbones [28] to featurize 2D image views and lifts the 2D features in 3D using depth [29, 30]. Weshow that 2D feature pretraining gives a considerable performance boost over training from scratch.Relative attention layers Relative attentions have shown improved performance in many 2D vi-sual understanding tasks and language tasks [31, 32]. Rotary embeddings [33] implement relativeattention efficiently by casting it as an inner-product in an extended position feature space. In 3D,relative attention is imperative as the coordinate system is arbitrary. 3D relative attentions have beenused before in 3D Transformer architectures for object detection and point labelling [34, 35]. Weshow in Section 4 that relative attentions significantly boost performance of our model.3 3D Feature Field Transformers for Multi-Task Robot ManipulationThe architecture of Act3D is shown in Figure 1. It is a policy transformer that, at a given timestept, predicts a 6-DoF end-effector pose from one or more RGB-D images, a language instruction,3and proprioception information regarding the robot’s current end-effector pose. Following priorwork [36, 1, 2, 3], instead of predicting an end-effector pose at each timestep, we extract a set ofkeyposes that capture bottleneck end-effector poses in a demonstration. A pose is a keypose if (1)the end-effector changes state (something is grasped or released) or (2) velocities approach nearzero (a common occurrence when entering pre-grasp poses or entering a new phase of a task). Theprediction problem then boils down to predicting the next (best) keypose action given the currentobservation. At inference time, Act3D iteratively predicts the next best keypose and reaches it witha sampling-based motion planner, following previous works [1, 2].We assume access to a dataset of ndemonstration trajectories. Each demonstration is a sequence ofobservations O={o1, o2, .., o t}paired with continuous actions A={a1, a2, .., a t}and, optionally,a language instruction lthat describes the task. Each observation otconsists of RGB-D images fromone or more camera views; more details are in Appendix 7.2. An action atconsists of the 3Dposition and 3D orientation (represented as a quaternion) of the robot’s end-effector, its binary openor closed state, and whether the motion planner needs to avoid collisions to reach the pose:a={apos∈R3, arot∈H, aopen∈ {0,1}, acol∈ {0,1}}Next, we describe the model’s architecture in detail.Visual and language encoder Our visual encoder maps multi-view RGB-D images into a multi-scale 3D scene feature cloud. We use a large-scale pre-trained 2D feature extractor followed by afeature pyramid network [37] to extract multi-scale visual tokens for each camera view. Our inputis RGB-D, so each pixel is associated with a depth value. We “lift” the extracted 2D feature vectorsto 3D using the pinhole camera equation and the camera intrinsics, based on their average depth.The language encoder featurizes instructions with a large-scale pre-trained language encoder. Weuse the CLIP ResNet50 [28] visual encoder and language encoders to exploit their common vision-language feature space for interpreting instructions and referential grounding. 
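As a concrete illustration of the 2D-to-3D lifting step described above, the following is a minimal NumPy sketch under simplified assumptions: per-pixel rather than per-token depth, output in the camera frame, and illustrative intrinsics values. A multi-view setup would additionally average depth over the pixels of each visual token and transform points into a common frame with the camera extrinsics.

```python
import numpy as np

def lift_features_to_3d(feat, depth, K):
    """Lift a 2D feature map to a 3D feature cloud with the pinhole model.

    feat:  (H, W, C) feature map (e.g., visual tokens from a 2D backbone).
    depth: (H, W) depth in meters, aligned with `feat`.
    K:     (3, 3) camera intrinsics [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    Returns points (H*W, 3) in the camera frame and features (H*W, C).
    """
    H, W, C = feat.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points, feat.reshape(-1, C)

# Dummy example: a 32x32 coarse feature map with illustrative intrinsics.
K = np.array([[500.0, 0.0, 16.0], [0.0, 500.0, 16.0], [0.0, 0.0, 1.0]])
pts, f = lift_features_to_3d(np.random.rand(32, 32, 60), np.full((32, 32), 1.0), K)
```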
Our pre-trained visualand language encoders are frozen, not finetuned, during training of Act3D.Iterative 3D point sampling and featurization Our key idea is to estimate high resolution 3Daction maps by learning 3D perceptual representations of free space with arbitrary spatial resolution,via recurrent coarse-to-fine 3D point sampling and featurization. 3D point candidates (which we willcall ghost points) are sampled, featurized and scored iteratively through relative cross-attention [15]to the physical 3D scene feature cloud, lifted from 2D feature maps of the input image views. We firstsample coarsely across the entire workspace, then finely in the vicinity of the ghost point selectedas the focus of attention in the previous iteration, as shown in Figure 1. The coarsest ghost pointsattend to a global coarse scene feature cloud, whereas finer ghost points attend to a local fine scenefeature cloud.Relative 3D cross-attentions We featurize each of the 3D ghost points and a parametric query(used to select via inner-product one of the ghost points as the next best end-effector position in thedecoder) independently through cross-attentions to the multi-scale 3D scene feature cloud, languagetokens, and proprioception. Featurizing ghost points independently, without self-attentions to oneanother, enables sampling more ghost points at inference time to improve performance, as we showin Section 4. Our cross-attentions use relative 3D position information and are implemented effi-ciently with rotary positional embeddings [15]. The absolute locations of our 3D points are neverused in our featurization, and attentions only depend on the relative locations of two features.Decoding actions We score ghost point tokens via inner product with the parametric query toselect one as the next best end-effector position apos. We then regress the end-effector orientationarotand opening aopen, as well as whether the motion planner needs to avoid collisions to reach theposeacol, from the last iteration parametric query with a 2-layer multi-layer perceptron (MLP).4Figure 2: Tasks. We conduct experiments on 92 simulated tasks in RLBench [16] (only 10 shown),and 8 real-world tasks (only 5 shown).Training Act3D is trained supervised from input-action tuples from a dataset of manipulationdemonstrations. These tuples are composed of RGB-D observations, language goals, and keyposeactions {(o1, l1, k1),(o2, l2, k2), ...}. During training, we randomly sample a tuple and superviseAct3D to predict the keypose action kgiven the observation and goal (o, l). We supervise positionprediction aposat every round of coarse-to-fine with a softmax cross-entropy loss over ghost points,rotation prediction arotwith a MSE loss on the quaternion prediction, and binary end-effector open-ingaopen and whether the planner needs to avoid collisions acolwith binary cross-entropy losses.Implementation details We use three ghost point sampling stages: first uniformly across the entireworkspace (roughly 1meter cube), then uniformly in a 16centimeter diameter ball, and finally in a4centimeter diameter ball. The coarsest ghost points attend to a global coarse scene feature cloud(32x32xncam coarse visual tokens) whereas finer ghost points attend to a local fine scene featurecloud (the closest 32x32xncamout of the total 128x128xncamfine visual tokens). During training,we sample 1000 ghost points in total split equally across the three stages. 
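The coarse-to-fine sampling procedure can be sketched as follows. This is a simplified stand-in (the names and the hand-written scoring function are illustrative): in the actual model, candidates are featurized with relative cross-attention to the scene feature cloud and scored against the learned parametric query rather than by an analytic function.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_in_ball(center, diameter, n):
    """Uniformly sample n 3D points inside a ball of the given diameter."""
    d = rng.normal(size=(n, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)              # random directions
    r = (diameter / 2.0) * rng.uniform(size=(n, 1)) ** (1 / 3)  # uniform in volume
    return center + r * d

def sample_ghost_points(workspace_min, workspace_max, n_per_stage, score_fn):
    """Coarse-to-fine ghost-point sampling: workspace -> 16 cm ball -> 4 cm ball.

    score_fn(points) stands in for featurizing candidates and scoring them
    against the parametric query.
    """
    # Stage 1: uniform over the whole workspace (roughly a 1 m cube).
    pts = rng.uniform(workspace_min, workspace_max, size=(n_per_stage, 3))
    best = pts[np.argmax(score_fn(pts))]
    # Stages 2 and 3: refine around the previously selected point.
    for diameter in (0.16, 0.04):
        pts = sample_in_ball(best, diameter, n_per_stage)
        best = pts[np.argmax(score_fn(pts))]
    return best  # predicted end-effector position

# Dummy score favoring points near a hypothetical target location.
target = np.array([0.4, 0.1, 0.3])
score = lambda p: -np.linalg.norm(p - target, axis=1)
print(sample_ghost_points(np.zeros(3), np.ones(3), 333, score))
```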
At inference time, wecan trade-off extra prediction precision and task performance for additional compute by samplingmore ghost points than the model ever saw at training time ( 10,000in our experiments). We’ll showin ablations in Section 4 that our framework is robust to these hyper-parameters but tying weightsacross sampling stages and relative 3D cross-attention are both crucial for generalization. We use abatch size 16 on a Nvidia 32GB V100 GPU for 200k steps (one day) for single-task experiments, anda batch size 48 on 8 Nvidia 32GB V100 GPUs for 600K steps (5 days) for language-conditionedmulti-task experiments. At test time, we call upon a low-level motion planner to reach predictedkeyposes. In simulation, we use native motion planner implementation provided in RLBench, whichis a sampling-based BiRRT [38] motion planner powered by Open Motion Planning Library (OMPL)[39] under the hood. For real-world experiments, we use the same BiRRT planner provided by theMoveIt! ROS package [40]. please, see Appendix 7.4 for more details.4 ExperimentsWe test Act3D in learning from demonstrations single-task and multi-task manipulation policiesin simulation and the real world. We conduct our simulated experiments in RLBench [16], anestablished simulation benchmark for learning manipulation policies, for the sake of reproducibilityand benchmarking. Our experiments aim to answer the following questions:5Figure 3: Single-task performance. On 74 RLBench tasks across 9 categories, Act3D reaches 83%success rate, an absolute improvement of 10% over InstructRL [3], prior SOTA in this setting.Figure 4: Multi-task performance. On 18 RLBench tasks with 249 variations, Act3D reaches 65%success rate, an absolute improvement of 22% over PerAct [1], prior SOTA in this setting.1.How does Act3D compare against SOTA 2D multiview and 3D manipulation policies in single-task and multi-task settings with varying number of training demonstrations?2.How does Act3D generalize across camera viewpoints compared to prior 2D multiview policies?3.How do design choices such as relative 3D attention, pre-trained 2D backbones, weight-tiedattention layers, and the number of coarse-to-fine sampling stages impact performance?4.1 Evaluation in simulationDatasets We test Act3D in RLbench in two settings: 1. Single-task manipulation policy learn-ing. We consider 74 tasks grouped into 9 categories proposed by HiveFormer [2]. Each task includesvariations which test generalization to novel arrangements of the same training objects. Each methodis trained with 100 demonstrations and evaluated on 500 unseen episodes. 2. Multi-task manipu-lation policy learning. We consider 18 tasks with 249 variations proposed by PerAct [1]. Each taskincludes 2-60 variations, which test generalization to new goal configurations that involve novel ob-ject colors, shapes, sizes, and categories. This is a more challenging setting. Each method is trainedwith 100 demonstrations per task split across variations, and evaluated on 500 unseen episodes pertask.Baselines We compare Act3D with the following state-of-the-art manipulation policy learningmethods: 1.InstructRL [3], a 2D policy that directly predicts 6 DoF poses from image and languageconditioning with a pre-trained vision-and-language backbone. 2.PerAct [1], a 3D policy thatvoxelizes the workspace and detects the next best voxel action through global self-attention. 
3.HiveFormer [2] and Auto- λ[13], hybrid methods that detect a contact point within an image input,then regress an offset from this contact point. We report numbers from the papers when available.Evaluation metric We evaluate policies by task completion success rate, the proportion of execu-tion trajectories that lead to goal conditions specified in language instructions.6Single-task and multi-task manipulation results We show single-task quantitative results of ourmodel and baselines in Figure 3. Act3D reaches 83% success rate, an absolute improvement of10% over InstructRL [3], prior SOTA in this setting , and consistently outperforms it across all9 categories of tasks. With only 10 demonstrations per task, Act3D is competitive with prior SOTAusing 100 demonstrations per task. Act3D outperforms 2D methods of InstructRL and Hiveformerbecause it reasons directly in 3D. For the same reason, it generalizes much better than them to novelcamera placements, as we show in Table 3.We show multi-task quantitative results of our model and PerAct in Figure 4. Act3D reaches 65%success rate, an absolute improvement of 22% over PerAct, prior SOTA in this setting, consistentlyoutperforming it across most tasks. With only 10 demonstrations per task, Act3D outperformsPerAct using 100 demonstrations per task. Note that Act3D also uses less than a third of PerAct’straining computation budget: PerAct was trained for 16 days on 8 Nvidia V100 GPUs while wetrain for 5 days on the same hardware. Act3D outperforms PerAct because its coarse-to-fine relativeattention based 3D featurization of the 3D workspace is more effective than the perceiver’s latentbottleneck attention in generating spatially disentangled features.4.2 Evaluation in real-worldTask # Train Successreach target 10 10/10duck in oven 15 6/10wipe coffee 15 7/10fruits in bowl 10 8/10stack cups 15 6/10transfer beans 15 5/10press handsan 10 10/10uncrew cap 10 8/10Table 1: Real-world tasks.In our real-world setup, we conduct experiments with a FrankaEmika Panda robot and a single Azure Kinect RGB-D sen-sor. We consider 8 tasks (Figure 2) that involve interactionswith multiple types of objects, spanning liquid, articulated ob-jects, and deformable objects. For each task, we collected10 to 15 kinesthetic demonstrations and trained a languaged-conditioned multi-task model with all of them. We report thesuccess rate on 10 episodes per task in Table 1. Act3D can cap-ture semantic knowledge in demonstration well and performsreasonably well on all tasks, even with a single camera input.One major failure case comes from noisy depth sensing: whenthe depth image is not accurate, the selected point results inimprecise action prediction. Leveraging multi-view input forerror correction could improve this, and we leave this for future work. For videos of the robotexecuting the tasks, please see our project website.4.3 AblationsWe ablate the impact of our design choices in Table 3. We perform most ablations in the single-tasksetting on 5 tasks: pick cup, put knife on chopping board, put money in safe, slide block to target,take umbrella out of stand. We ablate the choice of pre-trained 2D backbone in the multi-task settingwith all 18 tasks.Generalization across camera viewpoints: We vary camera viewpoints at test time for bothAct3D and HiveFormer [2]. The success rate drops to 20.4% for HiveFormer, a relative 77% drop,while Act3D achieves 74.2% success rate, a 24% relative drop. 
This shows detecting actions in 3Dmakes Act3D more robust to camera viewpoint changes than multiview 2D methods that regressoffsets.Weight-tying and coarse-to-fine sampling: All 3 stages of coarse-to-fine sampling are neces-sary: a model with only 2 stages of sampling and regressing an offset from the position selectedat the second stage suffers a 4.5% performance drop. Tying weights across stages and relative 3Dpositional embeddings are both crucial; we observed severe overfitting without, reflected in respec-tive 17.5% and 42.7% performance drops. Fine ghost point sampling stages should attend to localfine visual features with precise positions: all stages attending to global coarse features leads to a8.3% performance drop. Act3D can effectively trade off inference computation for performance:7Table 2: Ablations.Average success rate insingle-task setting (5 tasks)Core design choicesFull Act3D 98.1Only 2 stages of coarse-to-fine sampling 93.6No weight tying across stages 80.6Absolute 3D positional embeddings 55.4Attention to only global coarse visual features 89.8Only 1000 ghost points at inference time 93.2Viewpoint changesAct3D 74.2HiveFormer 20.4Multi-task setting (18 tasks)BackboneCLIP ResNet50 backbone 65.1ImageNet ResNet50 backbone 53.4sampling 10,000 ghost points, instead of the 1,000 the model was trained with, boosts performanceby 4.9%.Pre-training 2D features: We investigate the effect of the pre-trained 2D backbone in the multi-task setting where language instructions are most needed. A ResNet50 [28] backbone pre-trainedwith CLIP improves success rate by 8.7% over a ResNet50 backbone pre-trained on ImageNet.For additional ablations regarding augmentations and sensitivity to hyperparameters, please see theAppendix section 7.6. We found Random crops of RGB-D images to boost performance but yawrotation perturbations did not help. The model is robust to variations in hyperparameters such as thediameter of ghost point sampling balls or the number of points sampled during training.4.4 Limitations and future workOur framework currently has the following limitations: 1.Act3D is limited by the motion plannerused to connect predicted keyposes with straight trajectory segments. It does not handle manip-ulation of articulated object well, such as opening/closing doors, fridges, and ovens, where robottrajectories cannot be well approximated by few line segments. 2.Act3D does not utilize any de-composition of tasks into subtasks. A hierarchical framework that would predict language subgoalsfor subtasks [41, 42, 43] and feed those to our language-conditioned policy would allow better re-usability of skills across tasks. Addressing these limitations is a direct avenue for future work.5 ConclusionWe presented Act3D, a language-conditioned policy transformer that predicts continuous resolution3D action maps for multi-task robot manipulation. Act3D represents the scene using a continuousresolution 3D feature map, obtained by coarse-to-fine 3D point sampling and attention-based fea-turization. Act3D sets a new state-of-the-art in RLBench, an established robot manipulation bench-mark, and solves diverse manipulation tasks in the real world from a single RGB-D camera viewand a handful of demonstrations. 
Our ablations quantified the contribution of relative 3D attentions,2D feature pre-training, and weight tying during coarse-to-fine iterations.6 AcknowledgementsThis work is supported by Sony AI, NSF award No 1849287, DARPA Machine Common Sense, anAmazon faculty award, and an NSF CAREER award.8References[1] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In Conference on Robot Learning , pages 785–799. PMLR, 2023.[2] P.-L. Guhur, S. Chen, R. G. Pinel, M. Tapaswi, I. Laptev, and C. Schmid. Instruction-drivenhistory-aware policies for robotic manipulations. In Conference on Robot Learning , pages175–187. PMLR, 2023.[3] H. Liu, L. Lee, K. Lee, and P. Abbeel. Instruction-following agents with jointly pre-trainedvision-language models. arXiv preprint arXiv:2210.13431 , 2022.[4] S. James, K. Wada, T. Laidlow, and A. J. Davison. Coarse-to-fine q-attention: Efficient learningfor visual robotic manipulation via discretisation. In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition , pages 13739–13748, 2022.[5] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXivpreprint arXiv:2212.06817 , 2022.[6] M. Sieb, Z. Xian, A. Huang, O. Kroemer, and K. Fragkiadaki. Graph-structured visual imita-tion. In Conference on Robot Learning , pages 979–989. PMLR, 2020.[7] X. Zhu, D. Wang, O. Biza, G. Su, R. Walters, and R. Platt. Sample efficient grasp learningusing equivariant models, 2022.[8] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, et al. Transporter networks: Rearranging the visual world for roboticmanipulation. In Conference on Robot Learning , pages 726–747. PMLR, 2021.[9] H.-Y . F. Tung, Z. Xian, M. Prabhudesai, S. Lal, and K. Fragkiadaki. 3d-oes: Viewpoint-invariant object-factorized environment simulators. arXiv preprint arXiv:2011.06464 , 2020.[10] B. Graham. Sparse 3d convolutional neural networks, 2015.[11] C. Choy, J. Gwak, and S. Savarese. 4d spatio-temporal convnets: Minkowski convolutionalneural networks, 2019.[12] A. Jaegle, F. Gimeno, A. Brock, A. Zisserman, O. Vinyals, and J. Carreira. Perceiver: Generalperception with iterative attention, 2021.[13] S. Liu, S. James, A. J. Davison, and E. Johns. Auto-lambda: Disentangling dynamic taskrelationships. arXiv preprint arXiv:2202.03091 , 2022.[14] P. Parashar, J. Vakil, S. Powers, and C. Paxton. Spatial-language attention policies for efficientrobot learning. arXiv preprint arXiv:2304.11235 , 2023.[15] J. Su, Y . Lu, S. Pan, A. Murtadha, B. Wen, and Y . Liu. Roformer: Enhanced transformer withrotary position embedding. arXiv preprint arXiv:2104.09864 , 2021.[16] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. Rlbench: The robot learning benchmark &learning environment. IEEE Robotics and Automation Letters , 5(2):3019–3026, 2020.[17] N. M. Shafiullah, Z. Cui, A. A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodes with one stone. Advances in neural information processing systems , 35:22955–22968,2022.[18] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez,Y . Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv preprintarXiv:2205.06175 , 2022.9[19] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. 
Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learn-ing, pages 991–1002. PMLR, 2022.[20] D. Seita, P. Florence, J. Tompson, E. Coumans, V . Sindhwani, K. Goldberg, and A. Zeng.Learning to rearrange deformable cables, fabrics, and bags with goal-conditioned transporternetworks, 2021.[21] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipu-lation. In Conference on Robot Learning , pages 894–906. PMLR, 2022.[22] N. Gkanatsios, A. Jain, Z. Xian, Y . Zhang, C. Atkeson, and K. Fragkiadaki. Energy-based models as zero-shot planners for compositional scene rearrangement. arXiv preprintarXiv:2304.14391 , 2023.[23] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation, 2022.[24] S. Parisi, A. Rajeswaran, S. Purushwalkam, and A. Gupta. The unsurprising effectiveness ofpre-trained vision models for control, 2022.[25] L. Yen-Chen, A. Zeng, S. Song, P. Isola, and T.-Y . Lin. Learning to see before learning to act:Visual pre-training for manipulation, 2021.[26] A. Stone, T. Xiao, Y . Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich,F. Xia, C. Finn, and K. Hausman. Open-world object manipulation using pre-trained vision-language models, 2023.[27] Z. Xian, S. Lal, H.-Y . Tung, E. A. Platanios, and K. Fragkiadaki. Hyperdynamics: Meta-learning object and agent dynamics with hypernetworks. arXiv preprint arXiv:2103.09439 ,2021.[28] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models fromnatural language supervision, 2021.[29] H.-Y . F. Tung, R. Cheng, and K. Fragkiadaki. Learning spatial common sense with geometry-aware recurrent networks. In Proceedings of the IEEE/CVF Conference on Computer Visionand Pattern Recognition , pages 2595–2603, 2019.[30] A. W. Harley, S. K. Lakshmikanth, F. Li, X. Zhou, H.-Y . F. Tung, and K. Fragkiadaki. Learn-ing from unlabelled videos using contrastive predictive neural 3d mapping. arXiv preprintarXiv:1906.03764 , 2019.[31] P. Shaw, J. Uszkoreit, and A. Vaswani. Self-attention with relative position representations,2018.[32] Z. Liu, H. Hu, Y . Lin, Z. Yao, Z. Xie, Y . Wei, J. Ning, Y . Cao, Z. Zhang, L. Dong, F. Wei, andB. Guo. Swin transformer v2: Scaling up capacity and resolution, 2022.[33] J. Su, Y . Lu, S. Pan, A. Murtadha, B. Wen, and Y . Liu. Roformer: Enhanced transformer withrotary position embedding, 2022.[34] X. Wu, Y . Lao, L. Jiang, X. Liu, and H. Zhao. Point transformer v2: Grouped vector attentionand partition-based pooling, 2022.[35] Y .-Q. Yang, Y .-X. Guo, J.-Y . Xiong, Y . Liu, H. Pan, P.-S. Wang, X. Tong, and B. Guo. Swin3d:A pretrained transformer backbone for 3d indoor scene understanding, 2023.10[36] S. James and A. J. Davison. Q-attention: Enabling efficient learning for vision-based roboticmanipulation. IEEE Robotics and Automation Letters , 7(2):1612–1619, 2022.[37] T.-Y . Lin, P. Doll ́ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramidnetworks for object detection. In Proceedings of the IEEE conference on computer vision andpattern recognition , pages 2117–2125, 2017.[38] J. J. Kuffner and S. M. LaValle. Rrt-connect: An efficient approach to single-query pathplanning. In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conferenceon Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065) , volume 2, pages995–1001. 
IEEE, 2000.[39] I. A. Sucan, M. Moll, and L. E. Kavraki. The open motion planning library. IEEE Robotics &Automation Magazine , 19(4):72–82, 2012.[40] D. Coleman, I. Sucan, S. Chitta, and N. Correll. Reducing the barrier to entry of complexrobotic software: a moveit! case study. arXiv preprint arXiv:1404.3785 , 2014.[41] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in roboticaffordances. arXiv preprint arXiv:2204.01691 , 2022.[42] W. Huang, F. Xia, D. Shah, D. Driess, A. Zeng, Y . Lu, P. Florence, I. Mordatch, S. Levine,K. Hausman, et al. Grounded decoding: Guiding text generation with grounded models forrobot control. arXiv preprint arXiv:2303.00855 , 2023.[43] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural languageinstructions to feasible plans. arXiv preprint arXiv:2303.12153 , 2023.117 Appendix7.1 Real-world SetupFigure 5: Real-world setup.Figure 6: RLbench simulation setup.Our real-robot setup contains a Franka Pandarobotic arm equipped with a parallel jaw grip-per, as shown in Figure 5. We get RGB-D in-put from a single Azure Kinect sensor at a frontview at 30Hz. The image input is of resolu-tion1280×720, we crop and downsample itto256×256. We calibrate the extrinsics ofthe camera with respect to the robot base us-ing the easy handeye1ROS package. We ex-tract keyposes from demonstrations in the samewas as in simulation. Our real-world multi-taskpolicy is trained on 4 V100 GPUs for 3 days,and we run inference on a desktop with a singleRTX4090 GPU. For robot control, we use theopen-source frankapy2package to send real-time position-control commands to the robot.7.2 RLBench Simulation SetupTo ensure fair comparison with prior work, weusencam∈ {3,4}cameras for simulated ex-periments depending on the evaluation setting.In our single-task evaluation setting first pro-posed by HiveFormer [2], we use the same 3cameras they do {Oleft, Oright, Owrist}. In ourmulti-task evaluation setting first proposed byPerAct [1], we use the same 4 cameras they do{Ofront, Oleft, Oright, Owrist}.1https://github.com/IFL-CAMP/easy_handeye2https://github.com/iamlab-cmu/frankapy127.3 RLBench TasksFigure 7: PerAct [1] tasks. We adopt the multi-task multi-variation setting from PerAct [1] with 18tasks and 249 unique variations across object placement, color, size, category, count, and shape.We adapt the single-task setting of HiveFormer [2] with 74 tasks grouped into 9 categories accordingto their key challenges. The 9 task groups are defined as follows:• The Planning group contains tasks with multiple sub-goals (e.g. picking a basket ball andthen throwing the ball). The included tasks are: basketball in hoop, put rubbish in bin, meatoff grill, meat on grill, change channel, tv on, tower3, push buttons, stack wine.• The Tools group is a special case of planning where a robot must grasp an object to interactwith the target object. The included tasks are: slide block to target, reach and drag, takeframe off hanger, water plants, hang frame on hanger, scoop with spatula, place hanger onrack, move hanger, sweep to dustpan, take plate off colored dish rack, screw nail.• The Long term group requires more than 10 macro-steps to be completed. The includedtasks are: wipe desk, stack blocks, take shoes out of box, slide cabinet open and place cups.• The Rotation-invariant group can be solved without changes in the gripper rotation. 
Theincluded tasks are: reach target, push button, lamp on, lamp off, push buttons, pick and lift,take lid off saucepan.• The Motion planner group requires precise grasping. As observed in [81] such tasks oftenfail due to the motion planner. The included tasks are: toilet seat down, close laptop lid,open box, open drawer, close drawer, close box, phone on base, toilet seat up, put books onbookshelf.• The Multimodal group can have multiple possible trajectories to solve a task due to a largeaffordance area of the target object (e.g. the edge of a cup). The included tasks are: pickup cup, turn tap, lift numbered block, beat the buzz, stack cups.• The Precision group involves precise object manipulation. The included tasks are: take usbout of computer, play jenga, insert onto square peg, take umbrella out of umbrella stand,insert usb in computer, straighten rope, pick and lift small, put knife on chopping board,place shape in shape sorter, take toilet roll off stand, put umbrella in umbrella stand, setupcheckers.• The Screw group requires screwing an object. The included tasks are: turn oven on, changeclock, open window, open wine bottle.• The Visual Occlusion group involves tasks with large objects and thus there are occlusionsfrom certain views. The included tasks are: close microwave, close fridge, close grill, open13grill, unplug charger, press switch, take money out safe, open microwave, put money insafe, open door, close door, open fridge, open oven, plug charger in power supply7.4 Further Architecture DetailsRelative 3D cross-attentions We featurize each of the 3D ghost points and a parametric query(used to select via inner-product one of the ghost points as the next best end-effector position inthe decoder) independently through cross-attentions to the multi-scale 3D scene feature cloud, lan-guage tokens, and proprioception. Featurizing ghost points independently, without self-attentions toone another, enables sampling more ghost points at inference time to improve performance, as weshow in Section 4. Our cross-attentions use relative 3D position information and are implementedefficiently with rotary positional embeddings [15].Given a point p= (x, y, z )∈R3and its feature x∈Rd, the rotary position encoding function PEis defined as:PE(p,x) =M(p)x=M1...Md/6x (1)Mk=cosxθk−sinxθk 0 0 0 0sinxθkcosxθk 0 0 0 00 0 cos yθk−sinyθk 0 00 0 sin yθkcosyθk 0 00 0 0 0 cos zθk−sinzθk0 0 0 0 sin zθkcoszθk(2)where θk=1100006(k−1)/d,k∈ {1, .., d/ 6}. The dot product of two positionally encoded features isPE(pi,xi)TPE(pj,xj) =xTiM(pi)TM(pj)xj=xTiM(pj−pi)xj (3)which depends only on the relative positions of points piandpj.We extract two feature maps per 256x256 input image view: 32x32coarse visual tokens and128x128fine visual tokens. We use three ghost point sampling stages: first uniformly across theentire workspace (roughly 1meter cube), then uniformly in a 16centimeter diameter ball, and fi-nally in a 4centimeter diameter ball. The coarsest ghost points attend to a global coarse scenefeature cloud ( 32x32xncam coarse visual tokens) whereas finer ghost points attend to a local finescene feature cloud (the closest 32x32xncam out of the total 128x128xncam fine visual tokens).During training, we sample 1000 ghost points in total split equally across the three stages. 
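To complement Eqs. (1)–(3), here is a minimal NumPy sketch of the 3D rotary encoding together with a numerical check of its relative-position property; the pair ordering and frequency convention below are one plausible choice and may differ from the released implementation.

```python
import numpy as np

def rotary_encode_3d(p, x, base=10000.0):
    """3D rotary position encoding PE(p, x) as in Eqs. (1)-(2).

    The feature x (dimension d divisible by 6) is split into d/6 blocks of
    three 2D pairs; the pairs are rotated by angles x*theta_k, y*theta_k,
    z*theta_k with theta_k = base^(-6k/d). The block/pair layout is an
    assumption for illustration.
    """
    d = x.shape[-1]
    assert d % 6 == 0
    k = np.arange(d // 6)
    theta = base ** (-6.0 * k / d)                                          # (d/6,)
    angles = np.stack([p[0] * theta, p[1] * theta, p[2] * theta], axis=-1)  # (d/6, 3)
    pairs = x.reshape(d // 6, 3, 2)
    cos, sin = np.cos(angles), np.sin(angles)
    rotated = np.stack([cos * pairs[..., 0] - sin * pairs[..., 1],
                        sin * pairs[..., 0] + cos * pairs[..., 1]], axis=-1)
    return rotated.reshape(d)

# Relative property of Eq. (3): the dot product depends only on p_j - p_i,
# so translating both points by the same offset leaves it unchanged.
rng = np.random.default_rng(0)
xi, xj = rng.normal(size=12), rng.normal(size=12)
pi, pj = np.array([0.1, 0.2, 0.3]), np.array([0.4, 0.0, 0.5])
shift = np.array([1.0, -2.0, 0.7])
a = rotary_encode_3d(pi, xi) @ rotary_encode_3d(pj, xj)
b = rotary_encode_3d(pi + shift, xi) @ rotary_encode_3d(pj + shift, xj)
print(np.isclose(a, b))  # True
```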
At infer-ence time, we can trade-off extra prediction precision and task performance for additional computeby sampling more ghost points than the model ever saw at training time ( 10,000in our experiments).We’ll show in ablations in Section 4 that our framework is robust to these hyper-parameters but tyingweights across sampling stages and relative 3D cross-attention are both crucial for generalization.We use 2layers of cross-attention and an embedding size 60for single-task experiments and 120formulti-task experiments. Training samples are augmented with random crops of RGB-D images and±45.0yaw rotation perturbations (only in the real world as this degrades performance in simulationas we’ll show in Section 4). The cropping operation is performed on aligned RGB and depth framestogether, thus maintain pixel-level correspondence. We use a batch size 16 on a Nvidia 32GB V100GPU for 200k steps (one day) for single-task experiments, and a batch size 48 on 8 Nvidia 32GBV100 GPUs for 600K steps (5 days) for language-conditioned multi-task experiments. At test time,we call a low-level motion planner to reach predicted keyposes. In simulation, we use native mo-tion planner implementation provided in RLBench, which is a sampling-based BiRRT [38] motionplanner powered by Open Motion Planning Library (OMPL) [39] under the hood. For real-worldexperiments, we use the same BiRRT planner provided by the MoveIt! ROS package [40].14Figure 8: Scene Feature Cloud Generation . We encode each image independently with a pre-trained and frozen vision backbone to get multi-scale feature maps, pass these feature maps througha feature pyramid network and retain only two: a coarse feature map (at a granularity that lets ghostpoints attend to all tokens within GPU memory) and a fine feature map (as spatially precise asafforded by input images and the backbone). We lift visual tokens from these two feature maps foreach image to 3D scene feature clouds by averaging the positions of pixels in each 2D visual token.Figure 9: Iterative Ghost Point Sampling, Featurization, and Selection .7.5 High Precision ExperimentsIn this section, we further investigate the ability of Act3D to improve over existing 3D methodsthat voxelize the workspace for high-precision tasks. We compare two variants of Act3D againstPerAct [1] on three high-precision tasks in success rate. The first Act3D variant is the standardarchitecture used in the remainder of our experiments operating on 256x256input image views; thesecond operates on higher resolution 512x512input image views, from which it extracts four timesas many visual tokens with more precise 3D positions. This further tests the ability of Act3D toprovide high precision by processing higher-resolution RGB-D views at the cost of extra compute.15Figure 10: Iterative Ghost Point Sampling, Featurization, and Selection .Method insert peg sort shape screw nailPerAct 16 31 12Act3D (256x256) 29 34 31Act3D (512x512) 47 43 55Act3D improves over PerAct on high precision tasks and can further benefit from higher resolutionRGB-D images, at the cost of extra compute.7.6 Further ablationsAugmentations: Random crops of RGB-D images boost success rate by 6.5%, but yaw rotationperturbations drop it by 11.9%. This is in line with PerAct [1] results in RLBench.Hyperparameter sensitivity: Act3D is robust to variations in hyperparameters. Doubling the di-ameter of ghost point sampling balls from (16 cm, 4 cm) to (32 cm, 8 cm) drops success rate by1.5% and halving it to (8 cm, 2 cm) by 6.9%. 
Halving the total number of ghost points sampledfrom 1,000 to 500 drops success rate by 2.3% whereas doubling it to 2,000 increases success rate by0.3%. We use 1,000 ghost points in our experiments to allow training with a single GPU per task.16Table 3: Ablations.Average success rate inModel single-task setting (5 tasks)Core design choicesBest Act3D model (evaluated in Fig. 3) 98.1Only 2 stages of coarse-to-fine sampling:93.6full workspace, 16 cm ball, regress an offsetNo weight tying across stages 80.6Absolute 3D positional embeddings 55.4Attention to only global coarse visual features 89.8Only 1000 ghost points at inference time 93.2Viewpoint changesBest Act3D model (evaluated in Fig. 3) 74.2HiveFormer 20.4AugmentationsNo image augmentations 91.6With rotation augmentations 86.2Hyperparameter sensitivityDouble sampling ball diameters: 32 cm and 8 cm 96.6Halve sampling ball diameters: 8 cm and 2 cm 91.2500 ghost points at training time 95.82000 ghost points at training time (need 2 GPUs) 98.4Multi-task setting (18 tasks)BackboneCLIP ResNet50 backbone 65.1ImageNet ResNet50 backbone 53.4No backbone (raw RGB) 45.217 |
fvXFBCHVGn | Dynamic Multi-Team Racing: Competitive Drivingon 1/10-th Scale Vehicles via Learning in SimulationPeter Werner*,1,Tim Seyde*,1,Paul Drews2,Thomas Balch2,Wilko Schwarting ,Igor Gilitschenski3,Guy Rossman2,Sertac Karaman4,Daniela Rus11MIT CSAIL,2Toyota Research Institute,3UofT RI,4MIT LIDS,*equal contribution{wernerpe,tseyde }@mit.eduAbstract: Autonomous racing is a challenging task that requires vehicle han-dling at the dynamic limits of friction. While single-agent scenarios like TimeTrials are solved competitively with classical model-based or model-free feed-back control, multi-agent wheel-to-wheel racing poses several challenges includ-ing planning over unknown opponent intentions as well as negotiating interac-tions under dynamic constraints. We propose to address these challenges via alearning-based approach that effectively combines model-based techniques, mas-sively parallel simulation, and self-play reinforcement learning to enable zero-shot sim-to-real transfer of highly dynamic policies. We deploy our algorithm inwheel-to-wheel multi-agent races on scale hardware to demonstrate the efficacyof our approach. Further details and videos can be found on the project website:https://sites.google.com/view/dynmutr/home .Keywords: Multi-Agent RL, Autonomous Racing, Sim-to-Real TransferOadoOtargetOtrackOtrackObservationsπLL πHL vsFigure 1: We train multi-team racing agents in simulation and transfer them directly to hardware.Our approach leverages a hierarchical policy as a layer of abstraction between strategic planning andcontinuous control. A high-level categorical policy predicts goal distributions for a low-level policyto achieve, effectively projecting long-term decision making into a limited preview horizon.1 IntroductionAutonomous racing in a multi-vehicle team setting is a highly challenging task. It requires safelyhandling a vehicle at its dynamic limits while simultaneously reasoning about opponents and teamstrategy. These challenges push the limits of autonomous navigation and have, thus, attracted consid-erable academic interest. At the same time, solving autonomous racing promises to impact decision-making for real-world autonomous vehicles. The frequent interactions and driving at the limits ofcontrollability are reminiscent of certain near-accident scenarios which are absent from commonautonomous vehicle research datasets [1].The multi-agent learning community has focused largely on simulated challenges such as [2, 3,4]. We see multi-team racing as a step towards a simulation and hardware benchmark that is both7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.accessible and realizable without the use of proprietary hardware, due to the vibrant open-sourcecommunity revolving around scaled-car racing [5].In the considered multi-team racing task, two teams of two agents compete for a team win. Ateam’s rank is determined by its best-ranked team member at the end of a race. Solving this taskeffectively is challenging because it requires both long-term strategic planning (e.g. coordinationwith teammates or setting up overtaking maneuvers over multiple corners of the track) and fast,responsive short-term planning for swift and precise handling of the vehicle (e.g. to avoid collisionsor to counteract unexpected slippage).Most autonomous racing systems follow a highly structured pipeline similar to traditional drivingpipelines. This allows for fine-grained design of each component within perception, planning, andcontrol. 
In contrast, purely learning-based approaches to autonomous racing allow addressing fun-damental questions of embodied AI research such as the emergence of social behavior, embodiedlearning setup, and safe real-world policy transfer. However, these have mostly been tested in simu-lated environments, due to the difficulty of large-scale experience data collection on real hardware.Simulations present many ways to diversify and extend data collection. In particular, simulatingcompetitive multi-agent scenarios enables self-play with large agent populations and avoids poten-tial damage to hardware systems. Yet to this day, sim-to-real transfer in domains where neitherdriving at the limits nor wheel-to-wheel contact dynamics are well modeled remains a challenge.In this work, we demonstrate a sim-to-real approach to multi-team racing. We expand on successfulhierarchical policy architectures and leverage parallelized self-play in simulation to learn competi-tive agents with a variety of emergent behaviors, that transfer onto real hardware. Toward this aim,our work makes the following contributions:• A light-weight GPU-accelerated simulation environment for multi-team racing, augment-ing first principles-based with data-driven models to facilitate highly parallelized learning.• A hierarchical policy structure that serves as a layer of abstraction to disentangle long-termteam-centric strategic planning from short-term ego-centric continuous control.• Training in simulation with zero-shot transfer to scale race cars, demonstrating dynamicmaneuvers in highly interactive wheel-to-wheel multi-team racing scenarios on hardware.Specifically, we review related work in Section 2, followed by our approach for environment mod-eling and control in Sections 3 and 4, respectively. We describe details of our race car platform inSection 3.1, followed by results of transfer for wheel-to-wheel racing on hardware in Section 5.2 Related WorkThis work presents a platform that advances motion planning for autonomous racing, multi-agentreinforcement learning (MARL) applications, and practical approaches for dynamics modeling andsim-to-real transfer. Pushing the limits of autonomous vehicles through autonomous racing has seena recent surge of interest [6]. Investigated systems include both reduced scale platforms [5, 7, 8],full scale platforms [9], as well as arcade-style simulated systems [10, 11, 12].Motion Planning Motion planners can be grouped into three main categories. First, obstacleavoidance planners treat other agents (henceforth referred to as ado-agents , [13]) as moving obsta-cles without explicit consideration of interactions between agents and their objectives. This includescollision-free path identification via dynamic programming [14], filter-based prediction [15], rein-forcement learning (RL) [16], graph-based planning [17], and adapting a low-level optimizationeither via learned reward specifications [18] or trajectory warm-starts [19]. Second, game-theoreticplanners explicitly represent interaction strategies of ado agents and iterate until an equilibrium isreached. This includes formulations as sequential bi-matrix games [20] or iterative dynamic pro-gramming in belief space [21], with analysis commonly limited to 1-vs-1 scenarios. Iterative bestresponse for 6 vehicles is showcased in [22], with the prevalent drawback that game theoretic plan-ners introduce a trade-off between solution quality and real-time capabilities. 
Finally, learning-based2planners only implicitly reason about interactions from prior data and do not compute explicit adoplans. In [23], hand-crafted scenarios and rewards are combined with model-free RL to beat humanplayers in the game of Grand Turismo, while [11] learns a latent world-model that reasons aboutopponent behavior for competitive 1-vs-1 racing via self-play. The hierarchical approach in [12]enables a high-level policy to propose lane change sequences to a low-level controller in responseto ado behavior. Our approach leverages a hierarchical policy structure as a layer of abstraction be-tween team-centric strategic planning and ego-centric continuous control, learning efficient multi-agent interactions from experience and displaying emergent behavior via self-play. This furtherenables real-time inference when scaling to many-vs-many races on hardware.Multi-Agent Reinforcement Learning This work considers a mixed, cooperative-competitive,racing scenario, where teams receive a joint reward for their overall performance, with only verylimited regularization terms signaling individual contribution. Similar problems are extensivelystudied in the MARL literature [24, 25, 26, 27]. In particular, we consider the centralized trainingdecentralized execution (CTDE) paradigm with factored (action) value functions as described in[28, 29, 30]. These approaches attempt to learn individual payoff assignments between cooperatingagents. This enables decentralized execution by greedy action with respect to the individual payoff.Alternatively factored policies can be directly trained using actor-critic approaches such as ProximalPolicy Optimization [31]. We make use of CTDE to train the high-level policy with DecSARSA, anon-policy adaptation of decoupled Q-Networks [30], learning credit assignment for team members.Dynamics Modeling and Sim-to-Real Transfer To accelerate learning, we leverage massivelyparallel simulation and training running directly on the GPU [32]. A wide variety of dynamics mod-els are used in practice for autonomous vehicles. For example, the authors in [33] use a dynamicsingle-track bicycle model with Pacejka tire models [34] and model randomization for policy learn-ing. The same model formulation was used in [35] containing system identification of parametersfor the F1Tenth platform. In [36], physics-informed basis functions are combined with Bayesianregression to fit a dynamics model used for predictive control. For a comprehensive overview of dy-namics models for racing, see [6]. Finally, our work uses simulation-to-reality (sim-to-real) transfer.See [37, 38] for select surveys on sim-to-real approaches and [27] for MARL transfer onto hardware.3 Simulation and Dynamics ModelingOur approach leverages a highly parallelized, light-weight simulator for time-efficient behaviorlearning on GPUs. The simulator merges data-driven dynamics obtained through system identi-fication on hardware with simplified analytic models to generate high-fidelity interactive scenarios.3.1 Hardware PlatformWe employ the TRIKart hardware platform developed by the Toyota Research Institute [8], a vari-ation of the F1Tenth vehicle design [5]. Each vehicle has an Nvidia Jetson onboard computer tohandle communication with the VESC motor driver, steering servo, and the base station computer.The base station provides real-time control commands together with state estimates obtained froman OptiTrack motion capture system. 
We further outfit each vehicle with a Lexan cover augmentedwith ABS bars to mitigate damage during vehicle-to-vehicle contact scenarios.3.2 Dynamics ModelThe simulator leverages a switched dynamics model conditioned on the contact state of the system.Out of collision, we employ the kinematic bicycle model [39] augmented with data-driven models.During collisions, we employ the dynamic bicycle model [14]. Both variations are implemented aselementary tensor operations in PyTorch for rapid data generation [40], and described below.3AgentDynamic Bicycle ModelδWextKinematic Bicycle ModelδCollision ModelDatadriven ModelsWext ̇xCollision SwitchFigure 2: Left – 10th-scale TRIKart platform. Right – Block diagram of the switched dynamics.The simulator augments a kinematic bicycle model with data driven front slip angle and drive traindynamics learned from the real vehicle. If a collision event is detected, the dynamics are switchedtemporarily to a lower fidelity dynamic model that directly models the effects of external wrenches.0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0−0.6−0.4−0.20.00.20.40.000.050.100.150.200.25|αf−μTx|[rad]v[m/s]0.00.51.01.52.02.53.03.54.0usteer[rad]−0.6−0.4−0.20.00.20.40.6αf[rad]−0.8−0.6−0.4−0.20.00.20.40.60.8mean prediction+/- sigma epistemic−0.4−0.2 0.0 0.2 0.4 0.6050010001500200025003000Oversteer Understeerαfsign(usteer) [rad]# data points−0.2 −0.1 0.0 0.1 0.2050010001500200025003000αr[rad]Figure 3: A linear model with odd steering angle features is fit on hardware data using BLR tocapture model mismatches based on understeer. Left– Model predictions with epistemic uncertaintyused for automatic domain randomization. Right – Histograms of front, αf, and rear slip angle, αr.The front slip angle is scaled such that positive values correspond to understeer at the front axle,which covers around 87% of the recorded data.Out-of-contact model: We combine the kinematic bicycle model from [39] with feature-basedpredictors of the front tire slip angle as well as a data-driven model of the longitudinal dynamics(see Section 3.3). The dynamics of the kinematic model are then given by ̇x=fkin(x,u) =vcos(θ+β)vsin(θ+β)vtan(usteer+αf)√L2+(lrtan(usteer+αf))2Φlong(v,usteer,uacc),, (1)where x= [X,Y,θ,v]is the vehicle state, u= [usteer,uacc]are the controls, the center of mass slipangle is β=arctantan(usteer+αf)lrlr+lf,αfis the predicted front slip angle, and Φlongrepresentsthe longitudinal acceleration model. The data-driven augmentation yields a sufficiently high-fidelityrepresentation of the hardware dynamics, but does not explicitly model reaction forces. We iden-tify collisions based on rectangular hit boxes and simulate contact wrenches with a spring modelaccounting for penetration depth.In-contact model: When a collision event is detected, the simulation switches to the dynamicbicycle model from [14, 35]. The resulting contact wrenches are then explicitly propagated into thestate dynamics, switching back to the kinematic model after a pre-specified cool-down or if the rearslip angle falls below a threshold. A schematic of the switched dynamics is provided in Figure 2.3.3 System IdentificationThe parameter identification and data-driven model estimation proceed directly on the hardwaresystem. The steering commands are calibrated based on trajectories recorded with a motion capture4system under the condition of minimal tire slip, allowing for the steering angle to be inferred fromthe center of mass slip angle. 
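A minimal PyTorch sketch of the out-of-contact update in Eq. (1) is given below, with the data-driven front slip angle and longitudinal acceleration passed in as precomputed values. The wheelbase, time step, and example inputs are placeholders rather than identified TRIKart parameters.

```python
import torch

def kinematic_step(state, action, alpha_f, accel, l_f=0.15, l_r=0.15, dt=0.02):
    """One Euler step of the augmented kinematic bicycle model of Eq. (1).

    state:  (..., 4) tensor [X, Y, theta, v]; action: (..., 2) [u_steer, u_acc].
    alpha_f: predicted front slip angle from the data-driven model.
    accel:   longitudinal acceleration Phi_long(v, u_steer, u_acc).
    l_f, l_r, dt are illustrative values, not the identified parameters.
    """
    X, Y, theta, v = state.unbind(-1)
    u_steer, _ = action.unbind(-1)
    delta = u_steer + alpha_f                                   # effective steering angle
    beta = torch.atan(torch.tan(delta) * l_r / (l_r + l_f))     # center-of-mass slip angle
    L = l_f + l_r
    yaw_rate = v * torch.tan(delta) / torch.sqrt(L**2 + (l_r * torch.tan(delta))**2)
    dX = v * torch.cos(theta + beta)
    dY = v * torch.sin(theta + beta)
    return torch.stack(
        [X + dt * dX, Y + dt * dY, theta + dt * yaw_rate, v + dt * accel], dim=-1)

# Batched over 4096 hypothetical parallel simulation environments.
s = torch.zeros(4096, 4); s[:, 3] = 2.0                         # 2 m/s forward
a = torch.tensor([[0.2, 0.5]]).repeat(4096, 1)
print(kinematic_step(s, a, alpha_f=torch.zeros(4096), accel=torch.ones(4096)).shape)
```

Because the update is expressed as batched tensor operations, it can be evaluated for thousands of simulated vehicles in parallel on the GPU.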
While the resulting kinematic bicycle model provides a good initialapproximation of the dynamics, it does not capture higher-order effects relating to e.g., variabledrive-train acceleration or tire traction. To better model these effects, we collected a dataset oftrajectories exciting the full range of throttle and steering inputs, while avoiding the oversteer regimedue to the vehicles’ tendency to understeer when running at the dynamic limits on our track. SeeFigure 3. A longitudinal acceleration model is then trained to minimize the one-step prediction errorbased on velocity, steering, and throttle inputs. To model lateral deviations at higher speeds weestimate the slip angle of the front tire via Bayesian linear regression (BLR). Hereby we use simplehand crafted features that are odd in the steering angle to fit our data. We assume our measured slipstems from a linear model in the features with i.i.d. Gaussian measurement noise ε∼N(0,σ2n)anda spherical Gaussian prior on the model weights εw∼N(0,σ2pI). During training the dynamicsare randomized by drawing slip models from the posterior distribution and injecting noise into thereceived commands. See Appendix A and C.4 for details.4 Agent Structure and Training ProcessWe define multi-team racing as a finite-horizon decentralized MDP (decMDP) [41, 42] (brief reviewin Appendix D). We leverage a GPU-accelerated implementation of the dynamics described in theprevious section to train competitive multi-team racing agents. Our approach combines coarse high-level planning with fine-grained low-level control within a hierarchical policy structure trained viabilevel optimization. An overview of the bilevel learning procedure is illustrated in Figure 4.4.1 Hierarchical Control StructureCompetitive control in interactive settings requires both long-term strategic decision-making as wellas short-term responsive action selection. These objectives occur at different layers of abstraction,with low-level control conditioned on high-level decisions, thus lending themselves to bilevel op-timization procedures [43, 44, 45, 46]. Here, we leverage a hierarchical policy design [47, 48] toeffectively guide ego-centric low-level decision-making through team-centric high-level goals. Tothis end, we train a discrete high-level policy to suggest goal states, and a low-level policy to resolvelocal velocity and heading commands in order to achieve these goals, as illustrated in Figure 1.High-level control: Our formulation enables the high-level policy to propose goal states to the low-level controller. This introduces a layer of abstraction that helps to disentangle long-term strategicplanning from short-term control. Goal states are encoded by their relative position along the trackas well as their associated uncertainty. The uncertainty encodes goal importance with respect to thehigh-level strategy and enables the low-level policy to weight short-term gains against long-termconsiderations. We further leverage coarse high-level representations via categorical distributionsto encode mode-switching behavior, e.g., ”slow down and keep to the right” . This encouragesdiverse high-level strategies and can provide favorable exploration characteristics [49]. The high-level policy is obtained as an ε-greedy evaluation of a state action value function trained with De-coupled SARSA, an on-policy variation of Decoupled Q-Networks [30]. This leverages value de-composition across team members and enables Centralized Training with Decentralized Execution(CTDE) [28, 50]. 
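To illustrate the interface between the two levels, the sketch below decodes a flat categorical high-level action into a goal state, i.e., a position along the track plus an uncertainty that tells the low-level policy how strictly the goal should be tracked. The bin values and the factorization into longitudinal/lateral/uncertainty components are hypothetical; the paper describes the goal encoding only qualitatively.

```python
import numpy as np

# Hypothetical discretization; the paper does not specify the exact bins.
LONG_OFFSETS = np.array([2.0, 4.0, 6.0])   # goal distance along the track [m]
LAT_OFFSETS = np.array([-0.3, 0.0, 0.3])   # lateral offset from the center line [m]
GOAL_SIGMAS = np.array([0.2, 0.6])         # goal uncertainty / tracking strictness

def decode_high_level_action(action_idx: int):
    """Map a flat categorical high-level action to a goal (mode-switching behavior).

    Returns (longitudinal offset, lateral offset, sigma): the goal position in track
    coordinates plus an uncertainty weighting short-term gains vs. long-term strategy.
    """
    n_lat, n_sig = len(LAT_OFFSETS), len(GOAL_SIGMAS)
    i_long, rest = divmod(action_idx, n_lat * n_sig)
    i_lat, i_sig = divmod(rest, n_sig)
    return LONG_OFFSETS[i_long], LAT_OFFSETS[i_lat], GOAL_SIGMAS[i_sig]

# e.g. a "slow down and keep to the right" mode would correspond to a short
# longitudinal offset combined with a positive lateral bin.
print(decode_high_level_action(5))
```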
We further adopt Munchausen Value Iteration [51] for regularization.Low-level control: The low-level controller receives high-level goals with associated time budgets,predicting steering and velocity commands. This effectively augments short-horizon local reasoningwith long-horizon strategic planning. However, the goal states only act as soft constraints, enablingthe low-level controller to retain flexibility and prioritize local decision-making when necessary.The low-level policy is trained via PPO [52], with temporal difference (TD) learning of the valuesand Generalized Advantage Estimation (GAE) of the policy, under the primary objective of clear-ing high-level goals, while considering behavior smoothness, obstacle avoidance, and local trackgeometry. The strategic team decisions are thereby implicitly provided through high-level goals.5Figure 4: The hierarchical policy structure abstracts team-centric strategy from ego-centric con-trol. The high-level predicts discrete goal distributions and is trained jointly across team membersleveraging value decomposition with DecSARSA. The low-level learns continuous actions to satisfyindividual goal states with PPO. Training proceeds in parallel simulation via bilevel optimization.Hardware-level control: The steering and velocity commands sampled from the low-level policyserve as references to a hardware-level PD tracking controller. The PD controller generates steeringand acceleration commands, running at a faster rate to mitigate the effects of unmodeled dynamics.4.2 Training in SimulationThe hierarchical policy is trained via bilevel optimization in parallel simulation. Training consists oftwo phases, learning basic driving skills during pre-training and effective racing during fine-tuning.Pre-training: Initially, the low-level policy will have to learn rudimentary driving skills that en-able goal-reaching behavior. During this stage, the agent will only generate low quality learningsignals for the high-level policy that may induce representational collapse [53]. We therefore pre-train the low-level policy for 500 iterations with PPO on goals sampled from the initially uniformhigh-level policy, competing against Pure Pursuit opponents to simulate capable ado behavior.Fine-tuning: The bilevel optimization then alternates every 50 iterations. High-level trainingleverages DecSARSA to refine strategic behavior via goal prediction, while low-level training em-ploys PPO to align local controls with the adapted high-level goal distribution. During this stage,we sample opponents as a mixture from prior policy checkpoints to encourage competitive behaviorto emerge through self-play, and opponents steered by pure-pursuit control to prevent strategic col-lapse [54]. We consider both joint and independent high-level Q-functions for fine-tuning, referringto the two approaches as RL Team and RL Independent.Observations and rewards: The high-level policy observes ego state, rank information, and trackboundaries over a preview horizon. It further receives noisy ado odometry data within a view-limited neighborhood, which is processed by a multi-head attention encoder to extract ado relevancewith respect to the current ego state into a latent representation [55]. The low-level observationsadditionally contain the high-level goals, as in Figure 1. 
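Putting the two phases together, the training procedure described above can be summarized by the following sketch. The trainer and environment interfaces (collect, ppo_update, dec_sarsa_update, snapshot, sample_opponents) and the opponent-mixture probability are assumptions for illustration, not the authors' actual API.

```python
# Hypothetical trainer/environment interfaces; the paper's training code is not shown here.
def train_hierarchical_policy(low_level, high_level, env,
                              pretrain_iters=500, alternate_every=50, finetune_iters=5000):
    # Phase 1: pre-train the low-level policy with PPO on goals sampled from the
    # (initially uniform) high-level policy, racing against Pure Pursuit opponents.
    for _ in range(pretrain_iters):
        rollouts = env.collect(low_level, high_level, opponents="pure_pursuit")
        low_level.ppo_update(rollouts)

    # Phase 2: bilevel fine-tuning; the trained level switches every `alternate_every`
    # iterations, and opponents mix prior checkpoints (self-play) with Pure Pursuit
    # to prevent strategic collapse.
    checkpoints = [(high_level.snapshot(), low_level.snapshot())]
    for it in range(finetune_iters):
        opponents = env.sample_opponents(checkpoints, pure_pursuit_prob=0.3)  # mixture weight assumed
        rollouts = env.collect(low_level, high_level, opponents=opponents)
        if (it // alternate_every) % 2 == 0:
            high_level.dec_sarsa_update(rollouts)   # team-centric goals, CTDE credit assignment
        else:
            low_level.ppo_update(rollouts)          # goal-conditioned continuous control
        if it % alternate_every == 0:
            checkpoints.append((high_level.snapshot(), low_level.snapshot()))
    return high_level, low_level
```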
The high-level policy is primarily rewardedbased on team rank information, encouraging strategic behavior that improves positioning of theego team, and receives zero-sum overtaking reward and a small term encouraging ego progress.Hereby, the dominating term is chosen to be a sparse end-of-episode team-rank reward. The low-level policy is primarily rewarded for accomplishing high-level goals at their respective timings, withregularization objectives that modulate vehicle collisions, off-track driving, and control smoothness.Further details regarding the observation and reward structure are provided in Appendices B and C.5 ExperimentsWe evaluate the efficacy of our approach by first training policies in simulation and then transferringthem directly to the scale race car hardware. Policy learning is conducted on a single Nvidia RTX2080Ti with 2048 parallel environments taking roughly 8 hours. Policy inference is conducted on theCPU for all experiments with the high-level policy running at 0.5 Hz and the low-level policy runningat 20 Hz. The employed learning parameters and training details are summarized in Appendix C.6Figure 5: Evaluation in simulation shows consistent improvement of winrate and driving skill overthe course of training. We compare our RL Team agent to an egoistic variation with independenthigh-level learners and a pure pursuit baseline, achieving a winrate of 79% and 69%, respectively.The behavior profiles indicate how increases in winrate yield complex trade-offs between lower lap-times, risk-seeking, and risk-averse behavior (e.g. blocking increases laptime and may risk crashes).The results from fine-tuning in simulation are summarized in Figure 5. We evaluate winrates ofthe RL Team agent over the course of training against itself at convergence, a variation that usesindependent learning, and the pure pursuit baseline. We further quantify behavior characteristicsbased on races against the latter two agents. The winrate of RL Team steadily improves to reacharound 80% against pure pursuit and 70% against RL Independent. In particular, winrate frombehind approaches 60% against RL Independent, while laptime and time spent off-track are reduced.The interplay of other factors is more complex, as e.g. emergence of blocking will initially inducelarger deviation from the racing line in high-risk maneuvers that other agents may exploit, whileskillful blocking will reduce the likelihood of crashes and the need to re-overtake later on. Generally,we find high-level training to significantly improve performance, yield better positioning on thetrack, and enable smoother driving.We further evaluate our approach against two model predictive path integral control (MPPI) [56]agents, running collision avoidance based on an MPPI motion predictions for all 3 ado cars (see[57]), and against two RL Independent agents on hardware for 20 five-lap races each. The 20 racesare split into 3 starting configurations: Front – RL Team starting in 1stand 2nd(for 5 races), Back –RL Team starting in 3rdand 4thplace (for 5 races), and Random – randomized starting configurations(for 10 races). The maximum velocity is set to 3.5 [ms] for all hardware experiments. The resultsfrom the hardware evaluation are summarized in Figure 6. 
On hardware, we observed single-agentaverage laptimes across 10 laps of 6.8s for RL Team, 7.6s for RL Independent and 6.7s for MPPI.We found that RL Team was great at keeping its position once in front, for example by using the rearteam member to fend off opponents, resulting in a 92% winrate against MPPI and a 100% winrateagainst RL Independent, when starting in 1stplace aggregated over Front andRandom scenarios.We further observe that RL Independent agents display a more reckless driving style reflected in thenumber of collisions, crashes, and offtrack time, see Figure 6, which forces the RL Team agents tonegotiate an increased number of (near-)collision events as reflected by the behavioral differences inthe radar charts. We refer the reader to Appendix E for the un-normailzed data.Emergent behavior observed on hardware is showcased in Figure 7. The hierarchical policies learnmulti-car overtaking maneuvers, altruistic blocking where the rear team-member significantly slowsdown and attempts to block attacking opponents, and pitting maneuvers to jostle opponents out ofthe way, yielding successful sim-to-real transfer of competitive behaviors. The emergent teaming be-havior is facilitated by the centralized high-level controller that sets distinct goals for and distributesteam success to individual members, allowing for altruistic agents that sacrifice own performanceby blocking opponents to profit from the resulting increase in win-chance of their team member(Figure 7, middle). We refer the reader to Appendix F for further simulated examples.6 LimitationsWe briefly discuss limitations to our current approach, which point towards avenues for future work.Ado observation sensitivity : Agents retain sensitivity to their ado observations even when strategicinteractions are not likely. This was more pronounced during hardware experiments when coupled7Figure 6: Transfer to hardware shows strong performance of our RL Team agent compared to the RLIndependent and MPPI baselines, achieving winrates above 80% and 70%, respectively. We observeconsistent wins when starting in front and a strong ability to win from the back of the starting grid.Stripes indicate crash scenarios in which a pursuing agent did not finish the race, colored by winner.Pit ManeuverOvertakingAltruistic Blocking102310231023Figure 7: Three observed emergent behaviors. Overtaking : Agent 0 performs a double overtake ontwo ado agents. Altrusitic Blocking : Agent 1 stays back to block agents 2 and 3, allowing for agent0 to pull away. Pit Maneuver : Agent 1 pulls up behind agent 3 and performs a pitting maneuver.See also supplementary material for video sequences.with system delays and the sim-to-real gap of the dynamics model, sometimes causing exaggerateddeviations from the race line. Collisions between teammates and corner cutting : Teammatesoccasionally jostle for positions, and trade penalties for potential position gains by cutting corners,which could be refined by adapting the problem formulation. Dynamic limits : While this workbrings the vehicles to the limits of friction and demands corrections for understeer, the presentedpipeline does not allow for agents to act at the torque limits of the scaled cars. Future work couldinvestigate this dynamics regime while exploring methods to further reduce hardware system delays.7 ConclusionWe propose a novel light-weight simulator for learning dynamic multi-team wheel-to-wheel racingfor zero-shot sim-to-real transfer. 
The simulator combines classical analytic models with data-drivencomponents and is GPU-accelerated to enable massively parallel simulation. We further introduce ahierarchical control structure that disentangles high-level strategic planning from low-level continu-ous control. The high-level policy suggests goals to the low-level controller and is represented withcategorical variables to encode mode-switching behavior. The low-level policy predicts continuouscontrols and incorporates high-level guidance into its local decision making, but retains the ability todeviate if required. The high-level policy is obtained from centralized training across team membersand therefore provides strategic information beyond single-agent control.We train multi-agent racing policies in simulation and deploy onto the TRIKart scale race car plat-form without further adaptation. The transferred policies retain their efficacy and are able generatedynamic behavior on hardware when tested with a team of MPPI ado agents, including overtaking,blocking, and even pit maneuvers. Future work will be directed at further reducing the sim-to-realgap, while tackling hardware system engineering challenges in supporting more ado agents. Gener-ally, our results indicate great promise for combining parallelized simulation with hierarchical policyabstraction for achieving zero-shot sim-to-real transfer of dynamic multi-team racing behaviors.8AcknowledgmentsThis work was supported in part by Toyota Research Institute (TRI). This article solely reflects theopinions and conclusions of its authors and not TRI, Toyota, or any other entity. We thank themfor their support. We further thank Velin Dimitrov for assistance with hardware deployment andMarkus Wulfmeier for fruitful discussions on DecSARSA.References[1] M. Zhou, J. Luo, J. Villella, Y . Yang, D. Rusu, J. Miao, W. Zhang, M. Alban, I. Fadakar,Z. Chen, et al. Smarts: Scalable multi-agent reinforcement learning training school for au-tonomous driving. arXiv preprint arXiv:2010.09776 , 2020.[2] M. Samvelyan, T. Rashid, C. S. De Witt, G. Farquhar, N. Nardelli, T. G. Rudner, C.-M. Hung,P. H. Torr, J. Foerster, and S. Whiteson. The starcraft multi-agent challenge. arXiv preprintarXiv:1902.04043 , 2019.[3] K. Kurach, A. Raichuk, P. Stanczyk, M. Zajkac, O. Bachem, L. Espeholt, C. Riquelme, D. Vin-cent, M. Michalski, O. Bousquet, et al. Google research football: A novel reinforcement learn-ing environment. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 34,pages 4501–4510, 2020.[4] G. Papoudakis, F. Christianos, L. Sch ̈afer, and S. V . Albrecht. Benchmarking multi-agentdeep reinforcement learning algorithms in cooperative tasks. arXiv preprint arXiv:2006.07869 ,2020.[5] M. O’Kelly, H. Zheng, D. Karthik, and R. Mangharam. F1tenth: An open-source evalua-tion environment for continuous control and reinforcement learning. Proceedings of MachineLearning Research , 123, 2020.[6] J. Betz, H. Zheng, A. Liniger, U. Rosolia, P. Karle, M. Behl, V . Krovi, and R. Mangharam. Au-tonomous vehicles on the edge: A survey on autonomous vehicle racing. IEEE Open Journalof Intelligent Transportation Systems , 3:458–488, 2022.[7] O. So, P. Drews, T. Balch, V . Dimitrov, G. Rosman, and E. A. Theodorou. Mpogames: Efficientmultimodal partially observable dynamic games, 2022.[8] V . Dimitrov, P. Drews, T. Balch, X. Cui, A. A. Allaban, G. Rosman, and S. G. McGil. Trikart:Enabling human-centered and rapid design of shared control in adas. In H-MRS workshop ,2022.[9] J. Kabzan, M. 
de la Iglesia Valls, V . Reijgwart, H. F. C. Hendrikx, C. Ehmke, M. Prajapat,A. B ̈uhler, N. Gosala, M. Gupta, R. Sivanesan, A. Dhall, E. Chisari, N. Karnchanachari,S. Brits, M. Dangel, I. Sa, R. Dub ́e, A. Gawel, M. Pfeiffer, A. Liniger, J. Lygeros, and R. Sieg-wart. AMZ driverless: The full autonomous racing system. May 2019.[10] T. Seyde, W. Schwarting, S. Karaman, and D. Rus. Learning to plan optimistically:Uncertainty-guided deep exploration via latent model ensembles. arXiv preprint:2010.14641 ,2020.[11] W. Schwarting, T. Seyde, I. Gilitschenski, L. Liebenwein, R. Sander, S. Karaman, and D. Rus.Deep latent competition: Learning to race using visual control policies in latent space. arXivpreprint arXiv:2102.09812 , 2021.[12] R. S. Thakkar, A. S. Samyal, D. Fridovich-Keil, Z. Xu, and U. Topcu. Hierarchical control forcooperative teams in competitive autonomous racing. arXiv preprint arXiv:2204.13070 , 2022.[13] T.-N. dot com. Noh terminology, ado. https://db2.the-noh.com/edic/2010/02/ado.html . 2023.9[14] A. Liniger, A. Domahidi, and M. Morari. Optimization-based autonomous racing of 1:43scale RC cars. Optimal Control Applications and Methods , 36(5):628–647, sep 2015. ISSN01432087.[15] A. Raji, A. Liniger, A. Giove, A. Toschi, N. Musiu, D. Morra, M. Verucchi, D. Caporale,and M. Bertogna. Motion planning and control for multi vehicle autonomous racing at highspeeds. In 2022 IEEE 25th International Conference on Intelligent Transportation Systems(ITSC) , pages 2775–2782. IEEE, 2022.[16] B. Evans, H. A. Engelbrecht, and H. W. Jordaan. Learning the subsystem of local planningfor autonomous racing. In 2021 20th International Conference on Advanced Robotics (ICAR) ,pages 601–606. IEEE, 2021.[17] T. Stahl, A. Wischnewski, J. Betz, and M. Lienkamp. Multilayer graph-based trajectory plan-ning for race vehicles in dynamic scenarios. In 2019 IEEE Intelligent Transportation SystemsConference (ITSC) , pages 3149–3154. IEEE, 2019.[18] R. Reiter, J. Hoffmann, J. Boedecker, and M. Diehl. A hierarchical approach for strategicmotion planning in autonomous racing. arXiv preprint arXiv:2212.01607 , 2022.[19] S. He, J. Zeng, and K. Sreenath. Autonomous racing with multiple vehicles using a parallelizedoptimization with safety guarantee using control barrier functions. In 2022 International Con-ference on Robotics and Automation (ICRA) , pages 3444–3451. IEEE, 2022.[20] A. Liniger and J. Lygeros. A noncooperative game approach to autonomous racing. IEEETransactions on Control Systems Technology , 28(3):884–897, 2019.[21] W. Schwarting, A. Pierson, S. Karaman, and D. Rus. Stochastic dynamic games in belief space.IEEE Transactions on Robotics , 37(6):2157–2172, 2021.[22] Z. Wang, T. Taubner, and M. Schwager. Multi-agent sensitivity enhanced iterative best re-sponse: A real-time game theoretic planner for drone racing in 3d environments. Robotics andAutonomous Systems , 125:103410, 2020.[23] P. R. Wurman, S. Barrett, K. Kawamoto, J. MacGlashan, K. Subramanian, T. J. Walsh,R. Capobianco, A. Devlic, F. Eckert, F. Fuchs, et al. Outracing champion gran turismo driverswith deep reinforcement learning. Nature , 602(7896):223–228, 2022.[24] L. Busoniu, R. Babuska, and B. De Schutter. A comprehensive survey of multiagent reinforce-ment learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applicationsand Reviews) , 38(2):156–172, 2008.[25] K. Zhang, Z. Yang, and T. Bas ̧ar. Multi-Agent reinforcement learning: A selective overviewof theories and algorithms. Nov. 2019.[26] S. Gronauer and K. Diepold. 
Multi-agent deep reinforcement learning: a survey. ArtificialIntelligence Review , pages 1–49, 2022.[27] T. Haarnoja, B. Moran, G. Lever, S. H. Huang, D. Tirumala, M. Wulfmeier, J. Humplik, S. Tun-yasuvunakool, N. Y . Siegel, R. Hafner, et al. Learning agile soccer skills for a bipedal robotwith deep reinforcement learning. arXiv preprint arXiv:2304.13653 , 2023.[28] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V . Zambaldi, M. Jaderberg, M. Lanctot,N. Sonnerat, J. Z. Leibo, K. Tuyls, et al. Value-decomposition networks for cooperative multi-agent learning. arXiv preprint arXiv:1706.05296 , 2017.[29] T. Rashid, M. Samvelyan, C. S. De Witt, G. Farquhar, J. Foerster, and S. Whiteson. Mono-tonic value function factorisation for deep multi-agent reinforcement learning. The Journal ofMachine Learning Research , 21(1):7234–7284, 2020.10[30] T. Seyde, P. Werner, W. Schwarting, I. Gilitschenski, M. Riedmiller, D. Rus, and M. Wulfmeier.Solving continuous control via q-learning. In The Eleventh International Conference on Learn-ing Representations , 2022.[31] C. Yu, A. Velu, E. Vinitsky, J. Gao, Y . Wang, A. Bayen, and Y . Wu. The surprising effectivenessof PPO in cooperative Multi-Agent games. Oct. 2022.[32] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massivelyparallel deep reinforcement learning. In Conference on Robot Learning , pages 91–100. PMLR,2022.[33] E. Chisari, A. Liniger, A. Rupenyan, L. Van Gool, and J. Lygeros. Learning from simulation,racing in reality. In 2021 IEEE International Conference on Robotics and Automation (ICRA) ,pages 8046–8052. IEEE, 2021.[34] H. B. Pacejka and E. Bakker. The magic formula tyre model. Vehicle system dynamics , 21(S1):1–18, 1992.[35] D. Zahr ́adka. Optimization-based control of the f1/10 autonomous racing car. Czech Techni-cal University in Prague, Master’s Thesis , 2020. URL https://people.ciirc.cvut.cz/~klapajar/studenti/F3-DP-2020-Zahradka-David-thesis.pdf .[36] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving withmodel predictive path integral control. In 2016 IEEE International Conference on Robotics andAutomation (ICRA) , pages 1433–1440. IEEE, 2016.[37] W. Zhao, J. P. Queralta, and T. Westerlund. Sim-to-Real transfer in deep reinforcement learningfor robotics: a survey. Sept. 2020.[38] X. Hu, S. Li, T. Huang, B. Tang, and L. Chen. Sim2real and digital twins in autonomousdriving: A survey. May 2023.[39] J. Kong, M. Pfeiffer, G. Schildbach, and F. Borrelli. Kinematic and dynamic vehicle models forautonomous driving control design. In 2015 IEEE intelligent vehicles symposium (IV) , pages1094–1099. IEEE, 2015.[40] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen,Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. De-Vito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, andS. Chintala. Pytorch: An imperative style, high-performance deep learning li-brary. In Advances in Neural Information Processing Systems 32 , pages 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.[41] F. A. Oliehoek and C. Amato. A Concise Introduction to Decentralized POMDPs . SpringerInternational Publishing.[42] A. Beynier, F. Charpillet, D. Szer, and A.-I. Mouaddib. DEC-MDP/POMDP. In Markov De-cision Processes in Artificial Intelligence , pages 277–318. John Wiley & Sons, Inc., Hoboken,NJ USA, Mar. 2013.[43] H. v. Stackelberg et al. 
Theory of the market economy. 1952.[44] J. Bracken and J. T. McGill. Mathematical programs with optimization problems in the con-straints. Operations research , 21(1):37–44, 1973.[45] A. Sinha, P. Malo, and K. Deb. A review on bilevel optimization: From classical to evolu-tionary approaches and applications. IEEE Transactions on Evolutionary Computation , 22(2):276–295, 2017.11[46] T. Seyde, J. Carius, R. Grandia, F. Farshidian, and M. Hutter. Locomotion planning througha hybrid bayesian trajectory optimization. In 2019 International Conference on Robotics andAutomation (ICRA) , pages 5544–5550. IEEE, 2019.[47] M. Wulfmeier, A. Abdolmaleki, R. Hafner, J. T. Springenberg, M. Neunert, T. Hertweck,T. Lampe, N. Siegel, N. Heess, and M. Riedmiller. Compositional transfer in hierarchicalreinforcement learning. arXiv preprint arXiv:1906.11228 , 2019.[48] T. Seyde, W. Schwarting, I. Gilitschenski, M. Wulfmeier, and D. Rus. Strength through diver-sity: Robust behavior learning via mixture policies. In Conference on Robot Learning , pages1144–1155. PMLR, 2022.[49] T. Seyde, I. Gilitschenski, W. Schwarting, B. Stellato, M. Riedmiller, M. Wulfmeier, andD. Rus. Is bang-bang control all you need? solving continuous control with bernoulli poli-cies. Advances in Neural Information Processing Systems , 34:27209–27221, 2021.[50] R. Lowe, Y . I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in neural information pro-cessing systems , 30, 2017.[51] N. Vieillard, O. Pietquin, and M. Geist. Munchausen reinforcement learning. Advances inNeural Information Processing Systems , 33:4235–4246, 2020.[52] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[53] E. Nikishin, M. Schwarzer, P. D’Oro, P.-L. Bacon, and A. Courville. The primacy bias indeep reinforcement learning. In International Conference on Machine Learning , pages 16828–16847. PMLR, 2022.[54] T. Bansal, J. Pachocki, S. Sidor, I. Sutskever, and I. Mordatch. Emergent complexity via multi-agent competition. arXiv preprint arXiv:1710.03748 , 2017.[55] S. Iqbal and F. Sha. Actor-attention-critic for multi-agent reinforcement learning. In Interna-tional conference on machine learning , pages 2961–2970. PMLR, 2019.[56] G. Williams, A. Aldrich, and E. Theodorou. Model predictive path integral control usingcovariance variable importance sampling. arXiv preprint arXiv:1509.01149 , 2015.[57] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou.Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE Interna-tional Conference on Robotics and Automation (ICRA) , pages 1714–1721. IEEE, 2017.[58] C. M. Bishop and N. M. Nasrabadi. Pattern recognition and machine learning , volume 4.Springer, 2006.[59] A. Tavakoli, M. Fatemi, and P. Kormushev. Learning to represent action values as a hypergraphon the action vertices. In International Conference on Learning Representations , 2021.[60] H. Van Seijen, M. Fatemi, J. Romoff, R. Laroche, T. Barnes, and J. Tsang. Hybrid rewardarchitecture for reinforcement learning. Advances in Neural Information Processing Systems ,30, 2017.[61] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine. Reinforcement learning with deep energy-based policies. In International conference on machine learning , pages 1352–1361. 
PMLR,2017.12A System Identification DetailsIn order to identify the slip model we use Bayesian Linear regression. Analogously to [58], Chapter3, we compute the posterior and inference distributions at ˆ xasp(wslip|X,Y) =N(wslip;μ,Σ) (2)μ=XTX+σ2nσ2pI−1XTY (3)Σ=1σ2nXTX+1σ2pI−1(4)p(ˆy|ˆx,X,Y) =N(ˆy;μTˆx,ˆxTΣˆx|{z}epistemic+σ2n|{z}aleatoric) (5)where the resulting uncertainty splits into an epistemic and an aleatoric component. In order to fitthe model we use the following features and targets,X=| | |usteer vusteer u3steer| | |Y="|αf|#(6)along with hand-tuned noise parameters in Table 1.σ2p[rad] σ2n[rad]1.0 0.01Table 1: Employed BLR noise parameters.B Observation Detailsotarget,totarget,lotarget,stotarget,sloado,loado,totrack,Lotrack,RSymbol Meaningotrack,R Evenly spaced samples on left track boundaryotrack,L Evenly spaced samples on right track boundaryotarget,l Longitudinal coordinate of target positionotrack,t Transversal coordinate of target positionotrack,sl Longitudinal shape of targetotrack,st Transversal shape of targetoado, l Longitudinal coord. of relative position to adooado, t Transversal coord. of relative position to adoFigure 8: Track and position-based ado observation components employed to train the hierarchicalpolicies.The high-level observations consist of the ego observations and ado observationsoHL=oegooado(7)13where oadois the concatenation of all individual, view-limited, ado observations oado, i. The individ-ual components readoego=v ̇θotrack,Rotrack,LtepisodeowotoprogressλEλT, oado, i=oado,loado,tsin(θrel)cos(θrel)vrel ̇θrel, (8)where tepisode is the fraction of the high-level episode remaining, owotis the fraction of wheels offtrack, oprogress is the amount of progress along the center line of the track over the last time step, λEandλTare the ego and team rank respectively.The low-level observations are the concatenation of the high-level observations with the target po-sition provided by the high-level policy, where distances are expressed in track coordinates, suchthatoLL=oHLotarget, lotarget, totarget, slotarget, st. (9)C Learning Parameters and Domain RandomizationWe elaborate on aspects of the training setup below, discussing the reward functions and domainrandomization used. Throughout, we consider high-level steps that consist of TLL=20 low-levelsteps. At each high-level step, a goal state is sampled and then held fixed for the duration of thelow-level substeps.C.1 PPO hyperparametersIn order to train the low-level controller we leverage Proximal Policy Optimization (PPO) withGeneralized Advantage Estimation (GAE) similar to the parallelized agent employed in [32]. Theassociated hyperparameters are provided in Table 2. 
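Returning to the BLR slip model of Appendix A, Equations 2–5 with the features and targets of Equation 6 and the noise parameters of Table 1 reduce to a few lines of linear algebra. The sketch below is an illustrative re-implementation on synthetic data, not the authors' code.

```python
import numpy as np

SIGMA_P2, SIGMA_N2 = 1.0, 0.01  # prior and noise variances from Table 1

def blr_features(u_steer, v):
    # Features odd in the steering angle, Eq. (6): [u_steer, v*u_steer, u_steer^3]
    return np.stack([u_steer, v * u_steer, u_steer**3], axis=-1)

def blr_fit(X, y):
    """Posterior over slip-model weights, Eqs. (2)-(4)."""
    d = X.shape[1]
    mu = np.linalg.solve(X.T @ X + (SIGMA_N2 / SIGMA_P2) * np.eye(d), X.T @ y)
    cov = np.linalg.inv(X.T @ X / SIGMA_N2 + np.eye(d) / SIGMA_P2)
    return mu, cov

def blr_predict(x_hat, mu, cov):
    """Predictive mean and variance, Eq. (5); variance splits into epistemic + aleatoric."""
    mean = x_hat @ mu
    epistemic = np.einsum("...i,ij,...j->...", x_hat, cov, x_hat)
    return mean, epistemic + SIGMA_N2

# Fit on synthetic data (made-up ground-truth weights) and query the predictive distribution.
rng = np.random.default_rng(0)
u, v = rng.uniform(-0.4, 0.4, 1000), rng.uniform(0.5, 4.0, 1000)
X = blr_features(u, v)
y = X @ np.array([0.05, 0.02, 0.3]) + rng.normal(0.0, SIGMA_N2**0.5, 1000)
mu, cov = blr_fit(X, y)
print(blr_predict(blr_features(np.array([0.3]), np.array([3.0])), mu, cov))
```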
The low-level controller generates single agentdriving behavior based on local observations, while receiving guidance from the high-level policythrough goal-conditioning.Parameter ValueLearning rate 1×10−4Minibatch size 512Minibatches 4Learning epochs 2Clip value 0.2Discount ( γ) 0.99GAE parameter ( λ) 0.95Entropy coefficienct 0.01Desired KL 0.01Max grad norm 10.0MVI α 0.9MVI τ 0.03Table 2: Algorithm hyperparameters14C.2 Decoupled Expected SARSAIn order to train the high-level controller we build on recent results at the intersection of paralleloptimization and multi-agent control and employ Decoupled Expected SARSA, an on-policy vari-ation of Decoupled Q-learning [30] within the family of Hypergraph Q-Network algorithms [59].This approach leverages value decomposition [28] across action dimensions and team members torepresent the state-action value function asQθ(ssst,aaat) =M∑j=1Qjθ(ssst,ajt)M, (10)where coordination is facilitated by individual utility functions being conditioned on the global robotstate and a high-degree of parameter sharing within a unified critic [60]. The linear combination ofunivariate utility functions allows for efficient decomposition of the argmax operator and global op-timization over aaatsimplifies into parallel local optimizations over ajt. Training of the action valuefunction remains centralized, while online action selection is decoupled. Inserting this decompo-sition into the Bellman equation yields a decoupled target representation with expectation over theunderlying policy (e.g. ε-greedy)yt=r(ssst,aaat)+γM∑j=1π(ssst+1,aj)Qjθ(ssst+1,ajt+1)M. (11)We can then insert the factorized value function of Eq. 10 and the decoupled target of Eq. 11 intothe loss function L(θ) =∑Bb=1Lδ(yt−Qθ(st,at)). We furthermore explore the effects of soft Q-learning [61] within the context of Munchausen Value Iteration [51] to yield the adapted targetyt,MV I=r(ssst,aaat)+ατlnπθ(ajt|ssst)+γM∑j=1π(ssst+1,aj)Qjθ(ssst+1,aj)−τlnπθ(ajt+1|ssst+1)M, (12)where πθ=sm(Qθτ)is now taken as the softmax policy with temperature τand scaling factor α. Weobserved improved training using Munchausen Value Iteration and trained our final policies basedon this formulation. The associated hyperparameters are provided in Table 2.C.3 Reward functionsWe provide details on the reward function used to train the high-level (HL) and low-level (LL)policies. For ease of notation, we introduce the following variables: λTandλErepresent team andego rank, respectively, pand ̇pdenote the vehicle position and velocity in track frame, respectively,μgandΣgcorrespond to the mean and standard deviation of the multivariate goal state distributions,respectively, wiis an indicator for whether tire iis off-track, arepresents the actions, and ξis acollision indicator. The wheel-on-track term features a slight asymmetry that up-weights penaltyterms when either the front or rear axle are off-track, while the collision term up-weights rear-ending collisions - we omitted these features from the reward equations for visual clarity. We furtherintroduce index tdenoting time and tLLas the fraction of time left until the next high-level step.rHLt=c1(λTt==0)+c2(λT0−λTt)+c3λTt−1−λTt1+min(λTt−1−λTt)+c4e−λTt+c5e−λEt (13)rLLt=c612π|Σgt|1/2e−12(pt−μgt)TΣg−1t(pt−μgt)+14 ̇pt+c73∑i=0wit+c8∆pt+c9e−| ̇p|+c10(at−at−1)2+c11ξt(14)To provide intuition for the effect of individual reward terms, we discuss the high-level rewardfunction in more detail. 
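Before that discussion, the decoupled targets of Equations 10–12 can be written compactly as below. The sketch assumes the softmax policy π = sm(Q/τ) of the Munchausen formulation and averages the log-policy bonus over team members; the batching conventions and the exact behavior-policy expectation are simplifications rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dec_sarsa_munchausen_target(q_next, q_cur, a_cur, r, gamma=0.99, tau=0.03, alpha=0.9):
    """Decoupled Expected SARSA target with Munchausen regularization (Eqs. 10-12).

    q_next, q_cur: (B, M, A) per-member utilities Q_j over the A discrete choices.
    a_cur: (B, M) long tensor of actions taken by each of the M team members at s_t.
    Returns a (B,) target for the factored value Q(s_t, a_t) = mean_j Q_j(s_t, a_t^j).
    """
    log_pi_next = F.log_softmax(q_next / tau, dim=-1)          # softmax policy at s_{t+1}
    pi_next = log_pi_next.exp()
    # Expectation over the policy with soft correction, averaged over team members (Eq. 12).
    v_next = (pi_next * (q_next - tau * log_pi_next)).sum(-1).mean(-1)
    # Munchausen bonus: scaled log-probability of the taken actions at s_t.
    log_pi_cur = F.log_softmax(q_cur / tau, dim=-1)
    log_pi_a = log_pi_cur.gather(-1, a_cur.unsqueeze(-1)).squeeze(-1).mean(-1)
    return (r + alpha * tau * log_pi_a + gamma * v_next).detach()

def dec_sarsa_loss(q_cur, a_cur, target):
    """Huber loss between the factored prediction (Eq. 10) and the decoupled target."""
    q_taken = q_cur.gather(-1, a_cur.unsqueeze(-1)).squeeze(-1).mean(-1)
    return F.smooth_l1_loss(q_taken, target)
```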
The high-level episode duration is set to 16 seconds and low-level episodesterminate after two high-level targets have been received. The first place reward ( c1) provides asparse signal about team performance to each team member at each timestep and serves as theprimary indicator for winning a race. The starting rank reward ( c2) acts as a small regularizer that15Symbol Value Term explanationc1 4.0e-1 Team is in first placec2 5.0e-2 Team improved over starting rankHL c3 1.0e-1 Zero-sum overtaking rewardc4 1.0e-3 Regularize with team rankc5 1.0e-2 Regularize with ego rankc6 I(tLL<0.1)* (1.0e+0) Clear goal at end of low-level episodec7 -1.0e-3 Penalize wheels off trackLL c8 I(∆pt>2.5)* (-2.5e-1) Penalize large progress jumps (cutting)c9 -2e-3 Encourage high velocityc10 -6e-4 Regularize action smoothnessc11 -5e-3 Penalize collisionsTable 3: Reward function componentsup/down-weighs trajectories based on their team rank change, as winning from behind requiresmore skill than winning from the front. The overtaking reward ( c3) explicitly highlights importantsparse interactions that resulted in team rank change to pinpoint the influence of particular short-termmaneuvers on long-term race strategy. The team rank reward ( c4) complements the first place reward(c1) by encouraging lower team rank even if the team is not in first place, providing dense guidanceduring the learning process. The ego rank reward ( c5) further encourages agents to improve theirindividual rank even if not impacting overall team rank. The latter term primarily act as regularizersthat counteracts potential lazy agents to e.g. encourage a team member in last place to remaincompetitive even when their team is in first place and they do not foresee short-term individualimpact. Note that given c1in Table 4, the sparse high-level reward outweighs the highest possibleaccumulated dense reward components, which are weighted by c4andc5.C.4 Domain RandomizationTo improve the sim-to-real transfer of the learned policies we randomize the dynamics simulationduring training time. The simulated slip is randomized by sampling a new weight vector from theposterior distribution in Equation 2 at the beginning of every training episode. 
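As a concrete sketch of this randomization step (reusing the blr_fit posterior from the earlier BLR sketch), one might draw episode-level slip weights as below; the additive parameter and command noise of Table 4 (next) would be layered on top. The numerical values are illustrative, not the identified ones.

```python
import numpy as np

def sample_slip_model(mu, cov, rng):
    """Draw slip-model weights from the BLR posterior N(mu, cov) at episode reset
    (Appendix C.4); the sampled weights stay fixed for the whole episode."""
    return rng.multivariate_normal(mu, cov)

def randomized_front_slip(u_steer, v, w_slip):
    # Same odd steering features as in system identification (Appendix A, Eq. 6).
    feats = np.stack([u_steer, v * u_steer, u_steer**3], axis=-1)
    return feats @ w_slip

rng = np.random.default_rng(0)
# mu, cov would come from blr_fit on hardware data; small illustrative values here.
mu, cov = np.array([0.05, 0.02, 0.3]), 1e-4 * np.eye(3)
w_episode = sample_slip_model(mu, cov, rng)
print(randomized_front_slip(np.array([0.2]), np.array([2.5]), w_episode))
```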
We further randomize select model parameters with additive noise δ summarized in Table 4.

Symbol      Distribution              Explanation
v_max       U(3.5, 4.5) [m/s]         Additive noise to velocity cap
δ_lf        U(−0.02, 0.02) [m]        Additive noise to front bicycle length
δ_lr        U(−0.02, 0.02) [m]        Additive noise to rear bicycle length
δ_usteer    U(−0.03, 0.03) [rad]      Additive noise to all steering commands
δ_uacc      U(−0.15, 0.05) [−]        Additive noise to simulated throttle commands
Table 4: Randomized model parameters

In order to improve robustness of the policies we add uniform random noise, drawn independently from U(−0.005, 0.005), to all observations except the rank and team rank observations.

D Decentralized Markov Decision Process Formulation

We formulate multi-team racing as a finite-horizon decentralized Markov Decision Process (DecMDP), that is characterised by the tuple (γ, S, T, A, Ω, O, R), where γ is the discounting factor, S is the set of joint states, T is the probability density function p(s′|s, a) of transitioning from s to s′ when agents take the joint action a, A is the set of action sets available to all agents, Ω is the set of all observation spaces, O is the set of all observation functions, and R is the set of reward functions across agents, where each reward function returns the reward for taking joint action a at state s.

In practice, we define multi-team racing in a symmetric fashion across agents such that their action spaces are identical. The observation functions are defined as outlined in Section 4, such that each agent only observes their velocity, nearby track boundaries, number of wheels that are off track, rank, remaining time in the episode, last actions, and respective ado agents.

E Hardware Experiments

The radar plots in Figure 6 are intended to give the reader an intuitive understanding of the behavior of the various agents. For clarity, the axes are normalized with the maximum value of the respective statistic, aggregated over both the races of RL Team (RL T.) against MPPI agents and against RL Independent (RL Ind.). In this appendix the raw data is listed in Table 5, along with brief explanations. All statistics are averaged over the 20 respective races. As a guideline, the average race duration is around 40 s.

Statistic         RL T. vs MPPI    RL T. vs RL Ind.   Explanation
Laptime [s]       (8.44, 8.62)     (8.36, 8.26)       Avg. laptime of fastest team member
Wins as leader    (0.92, 0.38)     (1.00, 0.2)        Win rate w/ team member starting in 1st
Wins as pursuer   (0.63, 0.08)     (0.80, 0.00)       Win rate w/ opponent starting in 1st
Overtakes         (2.85, 0.95)     (5.00, 4.70)       Number of overtakes per team
Off-track [s]     (1.57, 1.54)     (3.89, 3.104)      Time off track per team
Collisions        (1.0, 0.25)      (1.6, 1.8)         Number of collisions per team member
Table 5: Un-normalized data from the radar charts in Figure 6.

F Emergent Behavior in Simulation

We observe similar emergent behavior in simulation as on hardware. This is shown in Figure 9. The simulator itself features a viewer with visualizations of the command distributions of the high- and low-level policies, markers for the observations, and miscellaneous additional information, see Figure 10.

Figure 9: Three observed emergent behaviors in simulation. Overtaking: The marked agent performs an overtake. Pit Maneuver: The marked agent pulls up behind the red agent and performs a pitting maneuver. Altruistic Blocking: The marked agent overtakes an opponent and stays back to block both opponents, allowing the blue teammate to pull away.

Figure 10: The viewer of the simulator displays individual environments along with episode information and information about the distributions of both the high- and the low-level policy. Annotations: 1. episode statistics, 2. low-level action distribution, 3. high-level target, 4. track observations, 5. normalized Q values, 6. wheel-off-track indicator.

G Tracks used for Training

In Figure 11 the tracks employed for training and evaluation are shown. During training and evaluation, races take place both in the clockwise and counterclockwise direction. Track b) is a re-scaled version of the track used in [14].

Figure 11: Tracks used for training hierarchical racing policies. Track a) is used for the hardware evaluation.
keAPCON4jHC | Robust Reinforcement Learning in ContinuousControl Tasks with Uncertainty Set RegularizationYuan ZhangNeurorobotics LabUniversity of Freiburgyzhang@cs.uni-freiburg.deJianhong Wang∗Center for AI FundamentalsUniversity of Manchesterjianhong.wang@manchester.ac.ukJoschka BoedeckerNeurorobotics LabUniversity of Freiburgjboedeck@cs.uni-freiburg.deAbstract: Reinforcement learning (RL) is recognized as lacking generalizationand robustness under environmental perturbations, which excessively restricts itsapplication for real-world robotics. Prior work claimed that adding regularizationto the value function is equivalent to learning a robust policy under uncertain tran-sitions. Although the regularization-robustness transformation is appealing for itssimplicity and efficiency, it is still lacking in continuous control tasks. In thispaper, we propose a new regularizer named Uncertainty SetRegularizer (USR),to formulate the uncertainty set on the parametric space of a transition function.To deal with unknown uncertainty sets, we further propose a novel adversarialapproach to generate them based on the value function. We evaluate USR onthe Real-world Reinforcement Learning (RWRL) benchmark and the Unitree A1Robot, demonstrating improvements in the robust performance of perturbed test-ing environments and sim-to-real scenarios.Keywords: Reinforcement Learning, Robustness, Continuous Control, Robotics1 IntroductionReinforcement Learning (RL) is a powerful algorithmic paradigm used to solve sequential decision-making problems and has resulted in great success in various types of environments, e.g., masteringthe game of Go [1], playing computer games [2] and operating smart grids [3]. The majority of thesesuccesses rely on an implicit assumption that the testing environment is identical to the trainingenvironment . However, this assumption is too strong for most realistic problems, such as controllinga robot. In more detail, there are several situations where mismatches might appear between trainingand testing environments in robotics: (1) Parameter Perturbations indicates that a large number ofenvironmental parameters, e.g. temperature, friction factor could fluctuate after deployment and thusdeviate from the training environment [4]; (2) System Identification estimates a transition functionfrom limited experience. This estimation is biased compared with the real-world model [5]; (3) Sim-to-Real learns a policy in a simulated environment and performs on real robots for reasons of safetyand efficiency [6]. The difference between simulated and realistic environments renders sim-to-reala challenging task.In this paper, we aim to model the mismatch between training environments and testing environ-ments as a robust RL problem, which regards training environments and testing environments ascandidates in an uncertainty set including all possible environments. Robust Markov Decision Pro-∗Corresponding author.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.cesses (Robust MDPs) [7, 8] is a common theoretical framework to analyze the robustness of RLalgorithms. In contrast to the vanilla MDPs with a single transition model P(s′|s, a), Robust MDPsconsider an uncertainty set of transition models P={P}to describe the uncertain environments.This formulation is general enough to cover the various scenarios for the robot learning problemsaforementioned.The aim of Robust RL is to learn a policy under the worst-case scenarios among all transition mod-elsP∈P, named robust policy . 
If a transition model Pis viewed as an adversarial agent withthe uncertainty set Pas its action space, one can reformulate Robust RL as a zero-sum game [9].In general, solving such a problem is an NP-hard problem [8, 10], however, the employment ofLegendre-Fenchel transform can avoid excessive mini-max computations through converting mini-mization over the transition model to regularization on the value function. Furthermore, it enablesmore feasibility and tractability to design novel regularizers to cover different types of transitionuncertainties. The complexity of these value-function-based regularizers increases with the size ofthe state space, which leads to a nontrivial extension to continuous control tasks with infinitely largestate space. This directly motivates the work of this paper. Due to the page limit, we conclude otherrelated work in Appendix A.We now summarize the contributions of this paper: (1) the robustness-regularization duality methodis extended to continuous control tasks in parametric space; (2) the Uncertainty SetRegularizer(USR) on existing RL frameworks is proposed for learning robust policies; (3) the value functionis learnt through an adversarial uncertainty set when the actual uncertainty set is unknown in somescenarios; (4) the USR is evaluated on the Real-world Reinforcement Learning (RWRL) benchmark,showing improvements for robust performances in perturbed testing environments with unknownuncertainty sets; (5) the sim-to-real performance of USR is verified through realistic experiments onthe Unitree A1 robot.2 PreliminariesRobust MDPs. The mathematical framework of Robust MDPs [7, 8] extends regular MDPs inorder to deal with uncertainty about the transition function. A Robust MDP can be formulatedas a 6-tuple ⟨S,A,P, r, μ 0, γ⟩, where S,Astand for the state and action space respectively, andr(s, a) :S × A → Rstands for the reward function. Let ∆Sand∆Abe the probability measureonSandArespectively. The initial state is sampled from an initial distribution μ0∈∆S, andthe future rewards are discounted by the discount factor γ∈[0,1]. The most important conceptin robust MDPs is the uncertainty set P={P(s′|s, a)}that controls the variation of transitionfunction P:S × A → ∆S, compared with the stationary transition function Pin regular MDPs.LetΠ ={π(a|s) :S → ∆A}be the policy space; the objective of Robust RL can then be formulatedas a minimax problem such thatJ∗= maxπ∈ΠminP∈PEπ,P"+∞Xt=0γtr(st, at)#. (1)Robust Bellman Equation. While Wiesemann et al. [8] has proved NP-hardness of this mini-max problem with an arbitrary uncertainty set, most recent studies [7, 9, 10, 11, 12, 13, 14,15, 16] approximate it by assuming a rectangular structure on the uncertainty set, i.e., P=×(s,a)∈S×APsa,where Psa={P(s′|s, a)Psa(s′)in short }denotes the local uncertainty of thetransition at (s, a). In other words, the variation of transition is independent at every (s, a)pair.Under the assumption of a rectangular uncertainty set, the robust action value Qπ(s, a)under policyπmust satisfy the following robust version of the Bellman equation [17] such thatQπ(s, a) =r(s, a) + minPsa∈PsaγXs′Psa(s′)Vπ(s′),(2)where Vπ(s′) =Pa′π(a′|s′)Qπ(s′, a′). Nilim and Ghaoui [11] have shown that a robust Bellmanoperator admits a unique fixed point of Equation 2, the robust action value Qπ(s, a).2Robustness-Regularization Duality. Solving the minimization problem in the RHS of Equation2 can be further simplified by the Legendre-Fenchel transform [18]. 
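Before turning to that transform, the rectangular robust backup of Equation 2 can be made concrete on a small tabular MDP. The sketch below represents each local uncertainty set P_sa by a finite list of candidate next-state distributions, which is an illustrative simplification of the general set.

```python
import numpy as np

def robust_bellman_backup(Q, pi, r, P_candidates, gamma=0.99):
    """One application of the robust Bellman operator (Eq. 2) on a tabular MDP.

    Q: (S, A) action values; pi: (S, A) policy; r: (S, A) rewards.
    P_candidates: (S, A, K, S) array holding K candidate next-state distributions
    per (s, a) pair -- a finite stand-in for the rectangular uncertainty set P_sa.
    """
    V = (pi * Q).sum(axis=1)                 # V(s') = sum_a' pi(a'|s') Q(s', a')
    expected_V = P_candidates @ V            # (S, A, K): E_{s'~P}[V(s')] per candidate
    worst_case = expected_V.min(axis=-1)     # minimize over the uncertainty set
    return r + gamma * worst_case

# Tiny example: 2 states, 2 actions, 3 candidate transition models per (s, a).
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(2), size=(2, 2, 3))   # each candidate sums to 1 over next states
r = np.array([[1.0, 0.0], [0.0, 1.0]])
Q = np.zeros((2, 2)); pi = np.full((2, 2), 0.5)
for _ in range(100):
    Q = robust_bellman_backup(Q, pi, r, P)      # converges to the robust action value
print(Q)
```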
For an arbitrary function f:X→R, its convex conjugate is f∗(x∗) := sup {⟨x∗, x⟩ −f(x) :x∈X}. Define δPsa(Psa) = 0ifPsa∈Psaand+∞otherwise, Equation 2 can be transformed to its convex conjugate (refer toDerman et al. [10] for detailed derivation) such thatQπ(s, a) =r(s, a) + minPsaγXs′Psa(s′)Vπ(s′) +δPsa(Psa) =r(s, a)−δ∗Psa(−Vπ(·)).(3)The transformation implies that the robustness condition on transitions can be equivalently expressedas a regularization term on the value function, which is referred to as the robustness-regularizationduality. The duality can extensively reduce the cost of solving the minimization problem over in-finite transition choices and thus is widely studied in the robust reinforcement learning researchcommunity [19, 20, 21].As a special case, Derman et al. [10] considered a L2norm uncertainty set on transitions, i.e.,Psa={ ̄Psa+α ̃Psa:∥ ̃Psa∥2≤1}, where ̄Psais usually called the nominal transition model. Itcan represent the prior knowledge of a transition model or a numerical value of the training environ-ment. The uncertainty set implies that the transition model is allowed to fluctuate around the nominalmodel with some degree α. Therefore, the corresponding Bellman equation in Equation 3 becomesQπ(s, a) =r(s, a)+γPs′ ̄Psa(s′)Vπ(s′)−α∥Vπ(·)∥2. Similarly, the L1norm has also been usedas uncertainty set on transitions [14], i.e., Psa={ ̄Psa+α ̃Psa:∥ ̃Psa∥1≤1}, and the Bellman equa-tion becomes the form such that Qπ(s, a) =r(s, a) +γPs′ ̄Psa(s′)Vπ(s′)−αmax s′|Vπ(s′)|.This robustness-regularization duality works well in the finite state space but the extension to theinfinite state space is still a question. We claim that such an extension is non-trivial, since bothregularizers ∥Vπ(·)∥2andmax s′|Vπ(s′)|need to be calculated on the infinite-dimensional vec-torVπ(·). In this work, we extend this concept to the continuous state space which is a criticalcharacteristic in robotics.3 Uncertainty Set Regularized Robust Reinforcement LearningHaving introduced the robustness-regularization duality and the difficulties regarding its extension tothe continuous state space in Section 2, here, we will first present a novel extension to the continuousstate space with the uncertainty set defined on the parametric space of a transition function. We willthen utilize this extension to derive a robust policy evaluation method that can be directly pluggedinto existing RL algorithms. Furthermore, to deal with the unknown uncertainty set, we propose theadversarial uncertainty set and visualize it in a simple moving-to-target task.3.1 Uncertainty Set Regularized Robust Bellman Equation (USR-RBE)For environments with continuous state space, the transition model P(s′|s, a)is usually representedas a parametric function P(s′|s, a;w), where wdenotes the parameters of the transition function.Instead of defining the uncertainty set on the distribution space, we directly impose a perturba-tion on wwithin a set Ωw. Consequently, the robust objective function (Equation 1) becomesJ∗= max π∈Πminw∈ΩwEπ,P(s′|s,a;w)hP+∞t=0γtr(st, at)i. We further assume that the parameterwfluctuates around a nominal parameter ̄w, such that w= ̄w+ ̃w, with ̄wbeing a fixed parameterand ̃w∈Ω ̃w={w− ̄w|w∈Ωw}being the perturbation part. 
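As a quick numerical sanity check of the finite-state L2 duality above (ignoring the simplex constraint, exactly as the stated L2 set does), the brute-force minimum over the ball matches the closed-form regularized value:

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=5)                  # value vector over a small finite state space
P_bar = rng.dirichlet(np.ones(5))       # nominal transition P_bar(.|s, a)
alpha = 0.1

# Brute-force minimization of <P_bar + alpha*P_tilde, V> over the ball ||P_tilde||_2 <= 1.
directions = rng.normal(size=(200_000, 5))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
brute = (P_bar @ V) + alpha * (directions @ V).min()

# Closed form from the robustness-regularization duality: <P_bar, V> - alpha * ||V||_2.
closed = P_bar @ V - alpha * np.linalg.norm(V)
print(brute, closed)   # agree up to sampling error
```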
Inspired by Equation 3, we canderive a robust Bellman equation on the parametric space for continuous control problems as shownin Proposition 3.1.Proposition 3.1 (Uncertainty Set Regularized Robust Bellman Equation) Suppose the uncer-tainty set of wisΩw(i.e., the uncertainty set of ̃w=w− ̄wisΩ ̃w), the robust Bellman equation onthe parametric space can be represented as follows:Qπ(s, a) =r(s, a) +γZs′P(s′|s, a; ̄w)Vπ(s′)ds′−γZs′δ∗Ω ̃w[−∇wP(s′|s, a; ̄w)Vπ(s′)]ds′,(4)3where δΩw(w)is the indicator function that equals 0ifw∈Ωwand+∞otherwise, and δ∗Ωw(w′)isthe convex dual function of δΩw(w).The proof is presented in Appendix B.1. Intuitively, Proposition 3.1 shows that ensuring robust-ness on parameter wcan be transformed into regularization on action value Qπ(s, a)that re-lates to the product of the state value function Vπ(s′)and the derivative of the transition model∇wP(s′|s, a; ̄w). Taking the L2uncertainty set (also used in Derman et al. [10]) as a special case,i.e.,Ωw={ ̄w+α ̃w:∥ ̃w∥2≤1}, where ̄wstands for the parameter of the nominal transition modelP(s′|s, a′; ̄w), the robust Bellman equation in Proposition 3.1 becomesQπ(s, a) =r(s, a) +γZs′P(s′|s, a; ̄w)Vπ(s′)ds′−αZs′∥∇wP(s′|s, a; ̄w)Vπ(s′)∥2ds′.(5)3.2 Uncertainty Set Regularized Robust Reinforcement Learning (USR-RRL)To derive a practical robust RL algorithm using the proposed USR-RBE, we follow the policy iter-ation framework [22] commonly used in RL research. Regarding the policy evaluation procedure,Theorem 3.2 proposes an operator on action value Qπand ensures its convergence to a unique fixedpoint by recursively running this operator. The proof can be found in Appendix B.2. Intuitively,this theorem indicates that one can acquire a robust action value given a certain policy and uncer-tainty set if the discount factor γand the coefficient in the uncertainty set αare properly set, whichis satisfied in the practical implementations. As for the policy improvement procedure, all exist-ing techniques (e.g. policy gradient methods) can be adopted to optimize the policy. By iteratingthe policy evaluation and improvement cycle, the policy will eventually converge to an equilibriumtrading off optimality and robustness.Theorem 3.2 (Convergence of Robust Policy Evaluation) For any policy π∈∆A, the follow-ing operator Tcan reach a unique fixed point as the robust action value Qπ(s, a)if0≤γ+αmax s,aRs′∥∇wP(s′|s, a; ̄w)∥2ds′≤1.TQπ(s, a) =r(s, a) +γZs′P(s′|s, a; ̄w)Vπ(s′)ds′−αZs′∥∇wP(s′|s, a; ̄w)Vπ(s′)∥2ds′,where Vπ(s′) =Rs′π(a′|s′)Q(s′, a′)da′.A practical concern on this algorithm is that calculating USR-RBE requires knowledge of the tran-sition model P(s′|s, a; ̄w). This naturally applies to model-based RL, as model-based RL learns apoint estimate of the transition model P(s′|s, a; ̄w)by maximum likelihood approaches [23]. Formodel-free RL, we choose to construct a local Gaussian model with mean as parameters inspired byprevious work [24]. Specifically, suppose that one can access a triple (state s, action aand next statex) from the experience replay buffer, then a local transition model P(s′|s, a; ̄w)can be modelled as aGaussian distribution with mean xand covariance Σ(with Σbeing a hyperparameter), i.e., the nom-inal parameter ̄wconsists of (x,Σ). 
With this local transition model, we now have the full knowledge of P(s′|s, a; w̄) and ∇_w P(s′|s, a; w̄), which allows us to calculate the RHS of Equation 5. To further approximate the integral calculation in Equation 5, we sample M points {s′_1, s′_2, ..., s′_M} from the local transition model P(s′|s, a; w̄) and use them to approximate the target action value by

Qπ(s, a) ≈ r(s, a) + γ Σ_{i=1}^{M} [ Vπ(s′_i) − α ∥∇_w P(s′_i|s, a; w̄) Vπ(s′_i)∥₂ / P(s′_i|s, a; w̄) ],

where w̄ = (x, Σ). With this approximation, the Bellman operator is guaranteed to converge to the robust value, and policy improvement is applied to robustly optimize the policy. We explain how to incorporate USR-RBE into a classic RL algorithm called Soft Actor Critic (SAC) [25] in Appendix B.3.

3.3 Adversarial Uncertainty Set

The proposed method in Section 3.2 relies on prior knowledge of the uncertainty set in the parametric space. The Lp norm uncertainty set is most widely used in the robust RL and robust optimal control literature. However, such a fixed uncertainty set may not sufficiently adapt to various perturbation types: an Lp norm uncertainty set with a larger region can result in an over-conservative policy, while one with a smaller region may lead to a risky policy. In this section, we learn an adversarial uncertainty set through the agent's policy and value function to avoid these issues.

Figure 1: Illustration of the procedure for generating an adversarial uncertainty set (sample → forward → backward → normalize → generate).

Generating the Adversarial Uncertainty Set. The basic idea of the adversarial uncertainty set is to provide a larger uncertainty range for parameters that are more sensitive to the value function, where sensitivity is naturally measured by the derivative. An agent trained against such an adversarial uncertainty set adapts more easily to various types of parameter perturbations. We generate the adversarial uncertainty set in a 5-step procedure as illustrated in Figure 1: (1) sample the next state s′ according to the distribution P(·|s, a; w̄), given the current state s and action a; (2) forward pass: calculate the state value V(s′) at the next state s′; (3) backward pass: use the reparameterization trick [26] to compute the derivative g(w̄) = ∇_w V(s′; w̄); (4) normalize the derivative by d(w̄) = g(w̄) / [Σ_i g(w̄)_i²]^{1/2}; (5) generate the adversarial uncertainty set Ω_w = {w̄ + α w̃ : ∥w̃ / d(w̄)∥₂ ≤ 1}. The pseudo-code to generate the adversarial uncertainty set is given in Appendix B.4, Algorithm 2.

[Figure 2 panels: (a) task: moving to the target; (b) no uncertainty set; (c) L2 uncertainty set; (d) L1 uncertainty set; (e) adversarial uncertainty set; the legend distinguishes training parameters, testing parameters, and the uncertainty set.]
Figure 2: Illustration of different types of uncertainty sets to investigate their characteristics.
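A possible PyTorch rendering of steps (1)–(4) and of the resulting sensitivity-weighted penalty is sketched below. The value-network interface, the isotropic Σ, and the reduction of the regularizer to α‖d ⊙ g‖₂ are simplifying assumptions in the spirit of Proposition 3.1; Algorithm 2 in Appendix B.4 is the authoritative procedure.

```python
import torch

def adversarial_direction(value_net, next_mean, sigma=0.1):
    """Steps (1)-(4) of Section 3.3 for a local Gaussian transition N(next_mean, sigma^2 I):
    sample s', run a forward pass V(s'), backpropagate to the mean parameter, normalize."""
    mean = next_mean.detach().clone().requires_grad_(True)   # nominal parameter (mean part of w_bar)
    s_next = mean + sigma * torch.randn_like(mean)           # (1) reparameterized sample
    v = value_net(s_next).sum()                              # (2) forward pass V(s')
    g, = torch.autograd.grad(v, mean)                        # (3) backward pass g = dV/dmean
    d = g / g.norm(dim=-1, keepdim=True).clamp_min(1e-8)     # (4) normalized sensitivity
    return d.detach(), g.detach()

def adversarial_usr_penalty(value_net, next_mean, alpha=0.01, sigma=0.1):
    """Step (5) turned into a value-target penalty: the ellipsoidal set
    {w_bar + alpha*w_tilde : ||w_tilde / d||_2 <= 1} dualizes to alpha * ||d * g||_2."""
    d, g = adversarial_direction(value_net, next_mean, sigma)
    return alpha * (d * g).norm(dim=-1)

if __name__ == "__main__":
    import torch.nn as nn
    v_net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
    print(adversarial_usr_penalty(v_net, torch.randn(8, 4)))   # one penalty per sample
```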
The transition function is $s' \sim \mathcal{N}(s + (a_1 w_1, a_2 w_2), \Sigma)$, and the reward is defined as the difference between the distances to the target point at two successive steps, minus a stage cost: $r(s, a, s') = d(s, e) - d(s', e) - 2$. The nominal value $\bar{w} = (1, 1)$ (Figure 2b) indicates equal friction factors in the two directions for the training environment. It is easy to see that the optimal action points towards the target, and the optimal value function is $V^*(s) = -d(s, e)$. We visualize the $L_2$, $L_1$ and adversarial uncertainty sets of the contact friction $w$ in Figure 2(c, d, e), respectively, at the state $s = (4, 3)$ and the optimal action $a^* = (-0.8, -0.6)$. The $L_2$ and $L_1$ uncertainty sets satisfy $(w_1^2 + w_2^2)^{0.5} \le 1$ and $|w_1| + |w_2| \le 1$, respectively. The adversarial uncertainty set is obtained by following the generation procedure of Section 3.3: the normalized derivative $d(\bar{w})$ is $[0.8, 0.6]^T$, and the adversarial uncertainty set is $(w_1^2 / 0.64 + w_2^2 / 0.36)^{0.5} \le 1$, the ellipse shown in Figure 2e. Compared with the $L_2$ uncertainty set, the adversarial uncertainty set extends the perturbation range of the horizontal-dimension parameter, since the return is more sensitive to it. As a result, an agent that learns to resist such an uncertainty set is expected to perform well on unseen perturbation types, which we verify in more realistic experiments in the next section.

4 Experiments

In this section, we provide experimental results on the Real-World Reinforcement Learning (RWRL) benchmark [27] to validate the effectiveness of the proposed uncertainty set regularizer (USR) for resisting perturbations in the environment. We also apply USR to a sim-to-real task to show its potential on real-world robots.

4.1 Experimental Setups

Task Description. RWRL, whose back end is the MuJoCo environment [28], is a continuous control benchmark consisting of real-world challenges for RL algorithms. Using this benchmark, we evaluate the proposed algorithm on the robustness of the learned policy in physical environments with perturbations of the parameters of the state equations (dynamics). In more detail, we first train a policy through interaction with the nominal environments (i.e., environments without any perturbations), and then test the policy in environments where relevant physical parameters are perturbed over a range of values. The setup is visualized as a process map in Appendix C.1. In this paper, we conduct experiments on six tasks: cartpole balance, cartpole swingup, walker stand, walker walk, quadruped walk, and quadruped run, with growing complexity in state and action space. More details about the task specifications are given in Appendix C.2. The perturbed variables and their value ranges can be found in Table 2.

Evaluation metric. A severe problem for robust RL research is the lack of a standard metric for evaluating policy robustness. To resolve this obstacle, we define a new robustness evaluation metric, which we call Robust-AUC, as the area under the curve of the return with respect to the perturbed physical variable, in analogy to the definition of the standard AUC [29]. More specifically, a trained policy $\pi$ is evaluated in an environment with a perturbed variable $P$ whose value $v$ changes in the range $[v_{\min}, v_{\max}]$, achieving different returns $r$. These two sets of data are used to draw a parameter-return curve $C(v, r)$ that describes the relationship between returns $r$ and perturbed values $v$. We define the relative area under this curve as Robust-AUC, such that
$$
\text{Robust-AUC} = \frac{\text{Area}(C(v, r))}{v_{\max} - v_{\min}}, \quad v \in [v_{\min}, v_{\max}].
$$
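As a minimal sketch of how this metric could be computed from an evaluation sweep (illustrative code, not the paper's implementation; the exact protocol with 20 perturbed values and 10%-quantile returns is described in Appendix C.3):

```python
import numpy as np

def robust_auc(perturbed_values, returns):
    """Relative area under the parameter-return curve C(v, r).

    perturbed_values: 1-D array of perturbed parameter values v (any order).
    returns: 1-D array with the return achieved at each value of v.
    Uses trapezoidal integration as a simple approximation of Area(C(v, r)).
    """
    v = np.asarray(perturbed_values, dtype=float)
    r = np.asarray(returns, dtype=float)
    order = np.argsort(v)
    v, r = v[order], r[order]
    area = np.trapz(r, v)                 # area under C(v, r)
    return area / (v.max() - v.min())     # normalize by the perturbation range

# Example: returns measured while sweeping pole length from 0.3 to 3.0.
v = np.linspace(0.3, 3.0, 20)
r = 1000.0 * np.exp(-((v - 1.0) ** 2))    # synthetic returns for illustration only
print(robust_auc(v, r))
```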
Compared to the vanilla AUC, Robust-AUCdescribes the correlation between returns and the perturbed physical variables, which can sensi-tively reflect the response of a learning procedure (to yield a policy) to unseen perturbations, i.e.,the robustness. We further explain the practical implementations to calculate Robust-AUC of theexperiments in Appendix C.3.Baselines and Implementation of Proposed Methods. We first compare USR with a standardversion of Soft Actor Critic (SAC) [25], which stands for the category of algorithms without reg-ularizers ( None-Reg ). Another category of baselines is to directly impose Lpregularization onthe parameters of the value function ( L1-Reg ,L2-Reg ) [30], which is a common way to improvethe generalization of function approximation but without consideration of the environmental per-turbations; For a fixed uncertainty set as introduced in Section 3.2, we implement two types ofuncertainty sets on transitions, L2-USR andL1-USR , which can be viewed as an extension of Der-man et al. [10] and Wang and Zou [14] for continuous control tasks respectively; finally, we alsoevaluate the adversarial uncertainty set (Section 3.3), denoted as Adv-USR . We conclude all modelstructures and hyperparameters in Appendix C.4 - C.5. The code of experiments is available ongithub.com/mikezhang95/rrl_usr .4.2 Main ResultsWe show the Robust-AUC and its significance value of cartpole swingup ,walker stand ,quadruped walk in Table 1. Due to page limits, the results of other tasks are presented in Ap-pendix D.1. In addition to calculating Robust-AUC under different perturbations, we also rank allalgorithms and report the average rank as an overall robustness performance of each task. Notably,L1-Reg andL2-Reg do not improve on the robustness, and even impair the performance in compar-ison with the None-Regularized agent on simple domains ( cartpole andwalker ). 
Table 1: Robust-AUC of all algorithms and their uncertainties on the RWRL benchmark.

| Task Name | Variable | None-Reg | L1-Reg | L2-Reg | L1-USR | L2-USR | Adv-USR |
|---|---|---|---|---|---|---|---|
| cartpole swingup | pole length | 393.41 (21.95) | 319.73 (23.84) | 368.16 (6.33) | 444.93 (12.77) | 424.48 (2.27) | 430.91 (2.35) |
| | pole mass | 155.25 (4.79) | 96.85 (4.32) | 131.35 (12.42) | 175.28 (3.33) | 159.61 (2.41) | 193.13 (2.27) |
| | joint damping | 137.20 (4.55) | 140.16 (0.58) | 165.01 (0.16) | 164.21 (0.48) | 169.88 (2.68) | 170.39 (0.76) |
| | slider damping | 783.76 (9.14) | 775.73 (16.50) | 797.59 (5.52) | 793.55 (5.02) | 781.02 (5.80) | 819.32 (3.00) |
| | average rank | 4.5 | 5.75 | 3.75 | 2.5 | 3.25 | 1.25 |
| walker stand | thigh length | 487.02 (40.50) | 461.95 (50.03) | 497.38 (43.98) | 488.71 (42.42) | 511.16 (49.84) | 505.88 (50.22) |
| | torso length | 614.06 (41.10) | 586.16 (68.84) | 586.20 (44.31) | 598.02 (36.17) | 610.93 (38.87) | 623.56 (46.47) |
| | joint damping | 607.24 (115.28) | 387.89 (109.53) | 443.82 (63.52) | 389.77 (76.96) | 527.87 (116.75) | 514.77 (126.00) |
| | contact friction | 946.74 (22.20) | 947.24 (29.16) | 941.92 (22.21) | 943.11 (21.97) | 940.73 (20.89) | 945.69 (16.02) |
| | average rank | 2.50 | 4.75 | 4.25 | 4.25 | 3.00 | 2.25 |
| quadruped walk | shin length | 492.55 (124.44) | 406.77 (90.41) | 503.13 (106.50) | 540.39 (126.22) | 564.60 (135.49) | 571.85 (64.99) |
| | torso density | 471.45 (99.14) | 600.86 (45.21) | 526.22 (79.11) | 442.05 (70.73) | 472.80 (83.22) | 602.09 (55.36) |
| | joint damping | 675.95 (67.23) | 711.54 (91.84) | 794.56 (65.64) | 762.50 (79.76) | 658.17 (112.67) | 785.11 (40.79) |
| | contact friction | 683.80 (135.80) | 906.92 (100.19) | 770.44 (158.42) | 777.40 (106.04) | 767.80 (109.00) | 969.73 (21.24) |
| | average rank | 5.25 | 3.50 | 3.00 | 3.75 | 4.25 | 1.25 |

In contrast, we observe that both L2-USR and L1-USR can outperform the default version under certain perturbations (e.g., L1-USR in cartpole swingup for pole length, L2-USR in walker stand for thigh length); they are, however, not effective for all scenarios. We argue that the underlying reason could be that the fixed shape of the uncertainty set cannot adapt to all perturbed cases. This is supported by the fact that Adv-USR achieves the best average rank among all perturbed scenarios, showing the best zero-shot generalization performance in continuous control tasks. For complex tasks like quadruped run, it is surprising that L1-Reg can achieve competitive results compared with Adv-USR, but with slightly larger uncertainty on the Robust-AUC, probably because the sparsity from L1 regularization can reduce redundant features. We also compare the computational cost of all algorithms both empirically and theoretically in Appendix D.2. It is concluded that Adv-USR can improve the robustness in most cases without increasing the computational burden too much. More limitations on the computational time are discussed in Section 6. We also carry out two additional testing scenarios to imitate perturbations in the real world: all parameters deviate from the nominal values simultaneously (Appendix D.3), and the perturbed value follows a random walk during the testing episode (Appendix D.4). Adv-USR consistently performs best and is well-adapted to different perturbations. Additional analysis of the training process of Adv-USR can be found in Appendix D.5.

4.3 Study on a Sim-to-real Robotic Task

Figure 3: Episode returns of all algorithms on the sim-to-real tasks: (a) standing; (b) locomotion.

In this section, we further investigate the robustness of Adv-USR on a sim-to-real robotic task.
Sim-to-real is a commonly adopted setup for applying RL algorithms to real robots, where the agent is first trained in simulation and then transferred to the real robot. Unlike the environmental setup in Section 4.1, which adds perturbations during the testing phase, sim-to-real inherently involves a mismatch between training and testing environments, potentially because (1) the simulator uses a simplified dynamics model and suffers from accumulated error [31], and (2) there are significant differences between simulators and real hardware in the robot's parameters, as the quadruped example in Table 5 shows. As a result, this setup is an ideal testbed and a practical application for the proposed robust RL algorithm.

Specifically, we use the Unitree A1 robot [32] and the Bullet simulator [33] as the platform for sim-to-real transfer. The agents learn standing and locomotion in simulation and are deployed directly on the real robot without adaptation [34]. Since the other baselines do not generalize well even in the purely simulated RWRL environments, we only compare SAC agents with and without the Adv-USR method. Most previous works [35, 36] use domain randomization (DR) techniques [37] to deal with sim-to-real mismatches. DR requires training on multiple randomly initialized simulated instances with diverse environmental parameters, expecting the policy to generalize to the testing environment (the real robot). In contrast, Adv-USR only requires training on a single set of nominal parameters, which greatly improves efficiency and feasibility. The detailed setup of the sim-to-real task is described in Appendix C.6. We additionally compare against DR, theoretically and empirically, in Appendix E.

We run 50 testing trials per baseline and report the episodic returns in Figure 3. Both agents succeed in learning a nearly optimal policy in simulation for both tasks. For the standing task, the agent with Adv-USR maintains its performance on the real robot, while the other agent fails to reach the standing position. For the more complex locomotion task, we notice that the observations differ greatly between simulation and the real robot, since the position and velocity estimators are noisy and delayed and the parameters in Table 5 vary. This is why None-Reg hardly moves any legs on the real robot (Figure 4b). Adv-USR, in contrast, shows a certain level of robustness by iteratively bending and moving all legs (Figure 4a) and reaching a higher velocity (Figure 4c). However, the performance still deteriorates compared with simulation. To alleviate the extreme sim-to-real difference, combining Adv-USR with other sim-to-real techniques could be a more powerful strategy, which we leave to future work.

Figure 4: Agent behaviours of all algorithms on the locomotion task: (a) Adv-USR; (b) None-Reg; (c) velocity on the real robot.

5 Conclusion

In this paper, we adopt the robustness-regularization duality to design new regularizers for continuous control problems that improve the robustness and generalization of RL algorithms. Furthermore, to deal with unknown uncertainty sets, we design an adversarial uncertainty set that depends on the learned state-action value function and implement it as a new regularizer. The proposed method shows great promise regarding generalization and robustness under environmental perturbations in both simulated and realistic robotic tasks.
Noticeably, it does not require training in multiplediverse environments or fine-tuning in testing environments, which makes it an efficient and valuableadd-on to RL for robot learning.6 LimitationsThe limitations of this work are discussed as follows. First, although the computational cost ofAdv-USR is acceptable [24] for the local Gaussian model owing to low-dimensional proprioceptiveobservations (e.g. position, velocity) in the experiments, it is a critical factor when Adv-USR isapplied to more complex dynamics with millions of parameters (i.e. common in recent offline andmodel-based RL research [38]). Therefore, methods to automatically detect critical variables maybe required in future work. On the other hand, it is probable to extend Adv-USR with domainrandomization to tackle more sophistical robotic tasks in the future.8AcknowledgmentsThis research receives funding from the European Union’s Horizon 2020 research and innovationprogram under the Marie Skłodowska-Curie grant agreement No. 953348 (ELO-X). Jianhong Wangis fully supported by UKRI Turing AI World-Leading Researcher Fellowship, EP/W002973/1. Theauthors thank Jasper Hoffman, Baohe Zhang for the inspiring discussions.References[1] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert,L. Baker, M. Lai, A. Bolton, Y . Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche,T. Graepel, and D. Hassabis. Mastering the game of go without human knowledge. Nature ,550(7676):354–359, Oct. 2017. ISSN 1476-4687. doi:10.1038/nature24270.[2] V . Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Ried-miller. Playing atari with deep reinforcement learning. CoRR , abs/1312.5602, 2013.[3] J. Wang, W. Xu, Y . Gu, W. Song, and T. Green. Multi-agent reinforcement learning for activevoltage control on power distribution networks. Advances in Neural Information ProcessingSystems , 34, 2021.[4] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. TheInternational Journal of Robotics Research , 32(11):1238–1274, 2013.[5] S. Schaal. Learning from demonstration. In Advances in Neural Information Processing Sys-tems, volume 9. MIT Press, 1996.[6] OpenAI, M. Andrychowicz, B. Baker, M. Chociej, R. J ́ozefowicz, B. McGrew, J. Pachocki,A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder,L. Weng, and W. Zaremba. Learning dexterous in-hand manipulation. CoRR , abs/1808.00177,2018.[7] G. N. Iyengar. Robust dynamic programming. Mathematics of Operations Research , 30(2):257–280, May 2005. ISSN 0364-765X, 1526-5471. doi:10.1287/moor.1040.0129.[8] W. Wiesemann, D. Kuhn, and B. Rustem. Robust markov decision processes. Mathematics ofOperations Research , 38(1):153–183, Feb. 2013. ISSN 0364-765X. doi:10.1287/moor.1120.0566.[9] C. P. Ho, M. Petrik, and W. Wiesemann. Fast bellman updates for robust mdps. In Proceedingsof the 35th International Conference on Machine Learning , pages 1979–1988. PMLR, July2018.[10] E. Derman, M. Geist, and S. Mannor. Twice regularized mdps and the equivalence betweenrobustness and regularization. In Advances in Neural Information Processing Systems , May2021.[11] A. Nilim and L. Ghaoui. Robustness in markov decision problems with uncertain transitionmatrices. In Advances in Neural Information Processing Systems , volume 16. MIT Press, 2004.[12] A. Roy, H. Xu, and S. Pokutta. Reinforcement learning under model mismatch. In Advancesin Neural Information Processing Systems , volume 30. 
Curran Associates, Inc., 2017.[13] D. J. Mankowitz, N. Levine, R. Jeong, A. Abdolmaleki, J. T. Springenberg, Y . Shi, J. Kay,T. Hester, T. Mann, and M. Riedmiller. Robust reinforcement learning for continuous controlwith model misspecification. In International Conference on Learning Representations , Sept.2019.[14] Y . Wang and S. Zou. Online robust reinforcement learning with model uncertainty. In Advancesin Neural Information Processing Systems , May 2021.9[15] J. Grand-Cl ́ement and C. Kroer. Scalable first-order methods for robust mdps. In Thirty-FifthAAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on InnovativeApplications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Ad-vances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021 , pages 12086–12094. AAAI Press, 2021. doi:10.1609/aaai.v35i13.17435.[16] E. Derman, D. J. Mankowitz, T. A. Mann, and S. Mannor. Soft-robust actor-critic policy-gradient. In A. Globerson and R. Silva, editors, Proceedings of the Thirty-Fourth Conferenceon Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, August 6-10,2018 , pages 208–218. AUAI Press, 2018.[17] R. Bellman. Dynamic programming. Science , 153(3731):34–37, July 1966. doi:10.1126/science.153.3731.34.[18] R. T. Rockafellar. Convex Analysis . Princeton Landmarks in Mathematics and Physics. Prince-ton University Press, 1970. ISBN 978-1-4008-7317-3.[19] H. Husain, K. Ciosek, and R. Tomioka. Regularized policies are reward robust. In Proceedingsof The 24th International Conference on Artificial Intelligence and Statistics , pages 64–72.PMLR, Mar. 2021.[20] B. Eysenbach and S. Levine. Maximum entropy rl (provably) solves some robust rl problems.InInternational Conference on Learning Representations , Sept. 2021.[21] R. Brekelmans, T. Genewein, J. Grau-Moya, G. Detetang, M. Kunesch, S. Legg, and P. A.Ortega. Your policy regularizer is secretly an adversary. Trans. Mach. Learn. Res. , 2022,2022.[22] R. S. Sutton and A. G. Barto. Reinforcement Learning - an Introduction . Adaptive Computa-tion and Machine Learning. MIT Press, 1998. ISBN 978-0-262-19398-6.[23] B. K ́egl, G. Hurtado, and A. Thomas. Model-based micro-data reinforcement learning: Whatare the crucial model properties and which model to choose? In 9th International Conferenceon Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 . OpenRe-view.net, 2021.[24] D. Nguyen-Tuong, M. Seeger, and J. Peters. Model learning with local gaussian process re-gression. Advanced Robotics , 23(15):2015–2034, Jan. 2009. ISSN 0169-1864, 1568-5535.doi:10.1163/016918609X12529286896877.[25] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum en-tropy deep reinforcement learning with a stochastic actor. In J. G. Dy and A. Krause, editors,Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stock-holmsm ̈assan, Stockholm, Sweden, July 10-15, 2018 , volume 80 of Proceedings of MachineLearning Research , pages 1856–1865. PMLR, 2018.[26] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In Y . Bengio and Y . LeCun,editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB,Canada, April 14-16, 2014, Conference Track Proceedings , 2014.[27] G. Dulac-Arnold, D. J. Mankowitz, and T. Hester. Challenges of real-world reinforcementlearning. CoRR , abs/1904.12901, 2019.[28] E. Todorov, T. Erez, and Y . Tassa. 
Mujoco: A physics engine for model-based control. In2012 IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026–5033, Vilamoura-Algarve, Portugal, Oct. 2012. IEEE. ISBN 978-1-4673-1736-8 978-1-4673-1737-5 978-1-4673-1735-1. doi:10.1109/IROS.2012.6386109.10[29] J. Huang and C. Ling. Using auc and accuracy in evaluating learning algorithms. IEEE Trans-actions on Knowledge and Data Engineering , 17(3):299–310, Mar. 2005. ISSN 1558-2191.doi:10.1109/TKDE.2005.50.[30] Z. Liu, X. Li, B. Kang, and T. Darrell. Regularization matters in policy optimization - anempirical study on continuous control. In 9th International Conference on Learning Represen-tations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 . OpenReview.net, 2021.[31] T. Erez, Y . Tassa, and E. Todorov. Simulation tools for model-based robotics: Comparison ofbullet, havok, mujoco, ode and physx. In 2015 IEEE International Conference on Roboticsand Automation (ICRA) , pages 4397–4404, May 2015. doi:10.1109/ICRA.2015.7139807.[32] A. Unitree. Unitree. a1: More dexterity, more posibility, 2018. https://www.unitree.com/a1/,Jan. 2018.[33] E. Coumans and Y . Bai. Pybullet, a python module for physics simulation for games, roboticsand machine learning, 2016.[34] J. Lee, J. Hwangbo, and M. Hutter. Robust recovery controller for a quadrupedal robot usingdeep reinforcement learning. CoRR , abs/1901.07517, 2019.[35] A. Kumar, Z. Fu, D. Pathak, and J. Malik. Rma: Rapid motor adaptation for legged robots.In D. A. Shell, M. Toussaint, and M. A. Hsieh, editors, Robotics: Science and Systems XVII,Virtual Event, July 12-16, 2021 , 2021. doi:10.15607/RSS.2021.XVII.011.[36] R. Yang, M. Zhang, N. Hansen, H. Xu, and X. Wang. Learning vision-guided quadrupedallocomotion end-to-end with cross-modal transformers. CoRR , abs/2107.03996, 2021.[37] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of roboticcontrol with dynamics randomization. In 2018 IEEE International Conference on Roboticsand Automation (ICRA) , pages 3803–3810, May 2018. doi:10.1109/ICRA.2018.8460528.[38] M. Janner, J. Fu, M. Zhang, and S. Levine. When to trust your model: Model-based policy op-timization. In H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alch ́e-Buc, E. B. Fox, andR. Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Confer-ence on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019,Vancouver, BC, Canada , pages 12498–12509, 2019.[39] R. Kirk, A. Zhang, E. Grefenstette, and T. Rockt ̈aschel. A survey of zero-shot generalisation indeep reinforcement learning. J. Artif. Intell. Res. , 76:201–264, 2023. doi:10.1613/jair.1.14174.[40] J. Moos, K. Hansel, H. Abdulsamad, S. Stark, D. Clever, and J. Peters. Robust reinforcementlearning: A review of foundations and recent advances. Machine Learning and KnowledgeExtraction , 4(1):276–315, Mar. 2022. ISSN 2504-4990. doi:10.3390/make4010013.[41] L. Pinto, J. Davidson, R. Sukthankar, and A. Gupta. Robust adversarial reinforcement learning.In D. Precup and Y . W. Teh, editors, Proceedings of the 34th International Conference onMachine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017 , volume 70 ofProceedings of Machine Learning Research , pages 2817–2826. PMLR, 2017.[42] C. Tessler, Y . Efroni, and S. Mannor. Action robust reinforcement learning and applicationsin continuous control. In K. Chaudhuri and R. 
Salakhutdinov, editors, Proceedings of the 36thInternational Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach,California, USA , volume 97 of Proceedings of Machine Learning Research , pages 6215–6224.PMLR, 2019.[43] P. Kamalaruban, Y .-T. Huang, Y .-P. Hsieh, P. Rolland, C. Shi, and V . Cevher. Robust reinforce-ment learning via adversarial training with langevin dynamics. In H. Larochelle, M. Ranzato,R. Hadsell, M.-F. Balcan, and H.-T. Lin, editors, Advances in Neural Information Processing11Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS2020, December 6-12, 2020, Virtual , 2020.[44] E. Vinitsky, Y . Du, K. Parvate, K. Jang, P. Abbeel, and A. M. Bayen. Robust reinforcementlearning using adversarial populations. CoRR , abs/2008.01825, 2020.[45] A. Pattanaik, Z. Tang, S. Liu, G. Bommannan, and G. Chowdhary. Robust deep reinforcementlearning with adversarial attacks. In E. Andr ́e, S. Koenig, M. Dastani, and G. Sukthankar, edi-tors, Proceedings of the 17th International Conference on Autonomous Agents and MultiAgentSystems, AAMAS 2018, Stockholm, Sweden, July 10-15, 2018 , pages 2040–2042. InternationalFoundation for Autonomous Agents and Multiagent Systems Richland, SC, USA / ACM, 2018.[46] H. Zhang, H. Chen, C. Xiao, B. Li, M. Liu, D. S. Boning, and C.-J. Hsieh. Robust deep re-inforcement learning against adversarial perturbations on state observations. In H. Larochelle,M. Ranzato, R. Hadsell, M.-F. Balcan, and H.-T. Lin, editors, Advances in Neural InformationProcessing Systems 33: Annual Conference on Neural Information Processing Systems 2020,NeurIPS 2020, December 6-12, 2020, Virtual , 2020.[47] T. P. Oikarinen, W. Zhang, A. Megretski, L. Daniel, and T.-W. Weng. Robust deep rein-forcement learning through adversarial loss. In M. Ranzato, A. Beygelzimer, Y . N. Dauphin,P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems 34:Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, Decem-ber 6-14, 2021, Virtual , pages 26156–26167, 2021.[48] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus.Intriguing properties of neural networks. In Y . Bengio and Y . LeCun, editors, 2nd InternationalConference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014,Conference Track Proceedings , 2014.[49] B. Mehta, M. Diaz, F. Golemo, C. J. Pal, and L. Paull. Active domain randomization. InProceedings of the Conference on Robot Learning , pages 1162–1176. PMLR, May 2020.[50] J. Wang, Z. Kurth-Nelson, H. Soyer, J. Z. Leibo, D. Tirumala, R. Munos, C. Blundell, D. Ku-maran, and M. M. Botvinick. Learning to reinforcement learn. In G. Gunzelmann, A. Howes,T. Tenbrink, and E. J. Davelaar, editors, Proceedings of the 39th Annual Meeting of the Cogni-tive Science Society, CogSci 2017, London, UK, 16-29 July 2017 . cognitivesciencesociety.org,2017.[51] Y . Tassa, Y . Doron, A. Muldal, T. Erez, Y . Li, D. d. L. Casas, D. Budden, A. Abdolmaleki,J. Merel, A. Lefrancq, T. P. Lillicrap, and M. A. Riedmiller. Deepmind control suite. CoRR ,abs/1801.00690, 2018.[52] D. Yarats and I. Kostrikov. Soft actor-critic (sac) implementation in pytorch, 2020.[53] H. van Hasselt. Reinforcement learning in continuous state and action spaces. In M. Wieringand M. van Otterlo, editors, Reinforcement Learning: State-of-the-Art , Adaptation, Learn-ing, and Optimization, pages 207–251. Springer, Berlin, Heidelberg, 2012. ISBN 978-3-642-27645-3.[54] I. 
Kostrikov, L. M. Smith, and S. Levine. Demonstrating a walk in the park: Learning towalk in 20 minutes with model-free reinforcement learning. In K. E. Bekris, K. Hauser, S. L.Herbert, and J. Yu, editors, Robotics: Science and Systems XIX, Daegu, Republic of Korea,July 10-14, 2023 , 2023. doi:10.15607/RSS.2023.XIX.056.12A Related WorkRobust Reinforcement Learning (Robust RL) has recently become a popular topic [7, 27, 39, 40],due to its effectiveness in tackling perturbations. Besides the transition perturbation in this paper,there are other branches relating to action, state and reward. We will briefly discuss them in thefollowing paragraphs. Additionally, we will discuss the relation of Robust RL, sim-to-real, BayesianRL and Adaptive RL approaches, which are also important topics in robot learningAction Perturbation. Early works in Robust RL concentrated on action space perturbations. Pintoet al. [41] first proposed an adversarial agent perturbing the action of the principle agent, trainingboth alternately in a mini-max style. Tessler et al. [42] later performed action perturbations withprobability αto simulate abrupt interruptions in the real world. Afterwards, Kamalaruban et al. [43]analyzed this mini-max problem from a game-theoretic perspective and claimed that an adversarywith mixed strategy converges to a mixed Nash Equilibrium. Similarly, Vinitsky et al. [44] involvedmultiple adversarial agents to augment the robustness, which can also be explained in the view of amixed strategy.State Perturbation. State perturbation can lead to the change of state from stosp, and thus mightworsen an agent’s policy π(a|s)[45]. Zhang et al. [46], Oikarinen et al. [47] both assume an Lp-norm uncertainty set on the state space (inspired by the idea of adversarial attacks widely used incomputer vision [48]) and propose an auxiliary loss to encourage learning to resist such attacks. It isworth noting that state perturbation is a special case of transition perturbation, which can be coveredby the framework proposed in this paper, as further explained in Appendix F.Reward Perturbation. The robustness-regularization duality has been widely studied, especiallywhen considering reward perturbations [19, 20, 21]. One reason is that the policy regularizer isclosely related to a perturbation on the reward function without the need for a rectangular uncertaintyassumption. However, it restricts the scope of these works as reward perturbation, since it canbe shown to be a particular case of transition perturbation by augmenting the reward value in thestate [20]. Besides, the majority of works focus on the analysis of regularization to robustness,which can only analyze the effect of existing regularizers instead of deriving novel regularizers forrobustness as in the work we present here.Sim-to-real. Sim-to-real is a key research topic in robot learning. Compared to the Robust RL prob-lem, it aims to learn a robust policy from simulations for generalization in real-world environments.Domain randomization is a common approach to ease this mismatch in sim-to-real problems [6, 37].However, Mankowitz et al. [13] has demonstrated that it actually optimizes the average case of theenvironment rather than the worst-case scenario (as seen in our research), which fails to performrobustly during testing. More recent active domain randomization methods [49] resolve this flaw byautomatically selecting difficult environments during the training process. 
The idea of learning an adversarial uncertainty set, considered in this paper, can be seen as a strategy to actively search for more valuable environments for training.

Bayesian RL. One commonality between Bayesian RL and robust RL is that both maintain uncertainty over the environmental parameters (the posterior distribution $q(w)$ in Bayesian RL and the uncertainty set $\Omega_w$ in robust RL). Uncertainty learned in Bayesian RL can benefit robust RL in two ways: (1) robust RL can define an uncertainty set $\Omega_w = \{w : q(w) > \alpha\}$ to learn a robust policy that tolerates model errors, which is attractive for offline RL and model-based RL; (2) a soft-robust objective with respect to the distribution $q(w)$ can ease the conservative behaviours caused by the worst-case scenario [16].

Adaptive RL. Adaptive RL (often referred to as meta RL [50]) is another popular technique for dealing with perturbations in environments, parallel to the robust RL considered in this paper. The main difference between robust RL and adaptive RL is whether the policy parameters are allowed to change when environmental parameters vary. Robust RL is a zero-shot learning technique that aims to learn a single robust policy applicable to various perturbed environments. Adaptive RL is a few-shot learning technique that aims to quickly adapt the policy to changing environments. These two techniques can be combined to increase robustness on real-world robots: one can first use robust RL to learn a base policy as a warm start, and then fine-tune the policy in particular perturbed environments with adaptive RL.

B Extra Algorithm Details

B.1 Proof of the Uncertainty Set Regularized Robust Bellman Equation

The proof is as follows:
$$
\begin{aligned}
Q^\pi(s, a) &= r(s, a) + \gamma \min_{w \in \Omega_w} \int_{s'} P(s' \mid s, a; w)\, V^\pi(s')\, ds' \\
&= r(s, a) + \gamma \int_{s'} P(s' \mid s, a; \bar{w})\, V^\pi(s')\, ds' + \gamma \min_{w \in \Omega_w} \int_{s'} (w - \bar{w})^T \nabla_w P(s' \mid s, a; \bar{w})\, V^\pi(s')\, ds' \\
&= r(s, a) + \gamma \int_{s'} P(s' \mid s, a; \bar{w})\, V^\pi(s')\, ds' - \gamma \max_{\tilde{w}} \left[ \int_{s'} -\tilde{w}^T \nabla_w P(s' \mid s, a; \bar{w})\, V^\pi(s')\, ds' - \delta_{\Omega_{\tilde{w}}}(\tilde{w}) \right] \\
&= r(s, a) + \gamma \int_{s'} P(s' \mid s, a; \bar{w})\, V^\pi(s')\, ds' - \gamma \int_{s'} \delta^*_{\Omega_{\tilde{w}}}\!\left[ -\nabla_w P(s' \mid s, a; \bar{w})\, V^\pi(s') \right] ds'.
\end{aligned} \tag{6}
$$
The second line uses a first-order Taylor expansion at $\bar{w}$. The third line reformulates the minimization over $w$ as a maximization over $\tilde{w}$ and adds an indicator function as a hard constraint on $\tilde{w}$.
The last line directly follows from the definition of the convex conjugate function.

B.2 Convergence of Robust Policy Evaluation

We now prove that the Bellman operator with the extra regularizer term, used in the policy evaluation stage, converges to the robust action value function under suitable conditions.

Since $V^\pi(s) = \int_a \pi(a \mid s)\, Q^\pi(s, a)\, da$, we define the operator $T$ on $Q^\pi$, equivalent to the one proposed in this paper, as
$$
\begin{aligned}
T Q^\pi(s, a) &= r(s, a) + \gamma \int_{s'} P(s' \mid s, a; \bar{w}) \int_{a'} \pi(a' \mid s')\, Q^\pi(s', a')\, da'\, ds' - \alpha \int_{s'} \Big\| \nabla_w P(s' \mid s, a; \bar{w}) \int_{a'} \pi(a' \mid s')\, Q^\pi(s', a')\, da' \Big\|_2\, ds' \\
&= \underbrace{r(s, a) + \gamma \int_{s'} P(s' \mid s, a; \bar{w})\, V^\pi(s')\, ds' - \alpha \int_{s'} \big\| \nabla_w P(s' \mid s, a; \bar{w})\, V^\pi(s') \big\|_2\, ds'}_{\text{the operator proposed in this paper}}.
\end{aligned} \tag{7}
$$
To ease the derivation, we do not expand $V^\pi$ in terms of $Q^\pi$ at the beginning of the following steps. For two action value functions $Q^\pi_1$ and $Q^\pi_2$ with associated state values $V^\pi_1$ and $V^\pi_2$, expanding $T$ and cancelling the reward terms gives
$$
\begin{aligned}
\|T Q^\pi_1 - T Q^\pi_2\|_\infty &= \max_{s,a} \Big| \gamma \int_{s'} P(s' \mid s, a; \bar{w}) \big[ V^\pi_1(s') - V^\pi_2(s') \big]\, ds' + \alpha \int_{s'} \big[ \|\nabla_w P(s' \mid s, a; \bar{w})\, V^\pi_2(s')\|_2 - \|\nabla_w P(s' \mid s, a; \bar{w})\, V^\pi_1(s')\|_2 \big]\, ds' \Big| \\
&\le \underbrace{\gamma \max_{s,a} \int_{s'} P(s' \mid s, a; \bar{w}) \big[ V^\pi_1(s') - V^\pi_2(s') \big]\, ds'}_{\text{Partition 1}} + \underbrace{\alpha \max_{s,a} \int_{s'} \big[ \|\nabla_w P(s' \mid s, a; \bar{w})\, V^\pi_2(s')\|_2 - \|\nabla_w P(s' \mid s, a; \bar{w})\, V^\pi_1(s')\|_2 \big]\, ds'}_{\text{Partition 2}}.
\end{aligned}
$$
Since Partition 1 involves the conventional Bellman operator, the contraction mapping property directly gives
$$
\gamma \max_{s,a} \int_{s'} P(s' \mid s, a; \bar{w}) \big[ V^\pi_1(s') - V^\pi_2(s') \big]\, ds' \le \gamma\, \| V^\pi_1 - V^\pi_2 \|_\infty. \tag{8}
$$
Next, we bound Partition 2:
$$
\begin{aligned}
&\alpha \max_{s,a} \int_{s'} \big[ \|\nabla_w P(s' \mid s, a; \bar{w})\, V^\pi_2(s')\|_2 - \|\nabla_w P(s' \mid s, a; \bar{w})\, V^\pi_1(s')\|_2 \big]\, ds' \\
&\quad\le \alpha \max_{s,a} \int_{s'} \big\| \nabla_w P(s' \mid s, a; \bar{w}) \big[ V^\pi_2(s') - V^\pi_1(s') \big] \big\|_2\, ds' \\
&\quad\le \alpha \max_{s,a} \int_{s'} \|\nabla_w P(s' \mid s, a; \bar{w})\|_2\, \big| V^\pi_2(s') - V^\pi_1(s') \big|\, ds' \\
&\quad\le \alpha \max_{s,a} \max_{s'} \big| V^\pi_2(s') - V^\pi_1(s') \big| \int_{s'} \|\nabla_w P(s' \mid s, a; \bar{w})\|_2\, ds' \\
&\quad\le \underbrace{\alpha \max_{s,a} \int_{s'} \|\nabla_w P(s' \mid s, a; \bar{w})\|_2\, ds'}_{:= \delta}\; \max_{s'} \big| V^\pi_1(s') - V^\pi_2(s') \big| = \delta\, \| V^\pi_1 - V^\pi_2 \|_\infty. \tag{9}
\end{aligned}
$$
Combining the results of Eq. (8) and Eq. (9) and expanding $V^\pi$, we directly get
$$
\|T Q^\pi_1 - T Q^\pi_2\|_\infty \le (\gamma + \delta)\, \|V^\pi_1 - V^\pi_2\|_\infty \le (\gamma + \delta) \max_{s'} |V^\pi_1(s') - V^\pi_2(s')| \le (\gamma + \delta) \max_{s',a'} |Q^\pi_1(s', a') - Q^\pi_2(s', a')| \le (\gamma + \delta)\, \|Q^\pi_1 - Q^\pi_2\|_\infty.
$$
For $T$ to be a contraction mapping, and hence to converge to the exact robust Q-values, we require $0 \le \gamma + \delta \le 1$. In particular, the norm of the gradient of the transition function with respect to the uncertainty set (i.e., $\|\nabla_w P(s' \mid s, a; \bar{w})\|_2$ inside $\delta$) is critical to the convergence of the robust Q-values under a given robust policy. Under the above conditions, the robust value function converges.

B.3 Incorporating the Uncertainty Set Regularizer into Soft Actor-Critic

The proposed Uncertainty Set Regularizer is flexible and can be plugged into any existing RL framework, as introduced in Section 3.2.
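As an illustration of this flexibility, the following minimal sketch (our own illustrative code, not the paper's implementation; usr_td_target and its arguments are assumed names) wraps a standard TD backup with the L2-form penalty of Equation 5.

```python
import torch

def usr_td_target(reward, values, probs, model_params, gamma=0.99, alpha=1e-4):
    """Sketch of an L2 uncertainty-set-regularized TD target (cf. Equation 5).

    reward:        scalar tensor r(s, a)
    values:        V(s'_i) for M next states sampled from the local model (detached)
    probs:         P(s'_i | s, a; w_bar), each differentiable w.r.t. model_params
    model_params:  tensors forming the nominal parameter w_bar
    """
    target = reward + gamma * values.mean()            # nominal Bellman backup

    penalty = 0.0
    for p_i, v_i in zip(probs, values):
        # gradient of the density P(s'_i | s, a; w_bar) w.r.t. the nominal parameters
        grads = torch.autograd.grad(p_i, model_params, retain_graph=True)
        grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
        # ||grad_w P * V||_2 = |V| * ||grad_w P||_2, importance-weighted by 1/P
        # because the samples s'_i were drawn from the nominal model itself
        penalty = penalty + v_i.abs() * grad_norm / p_i.detach()
    return target - alpha * penalty / len(values)
```

In practice, such a target would replace the standard TD target in the critic regression loss, which is what Algorithm 1 below does for SAC.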
Here, we include a specific implementation for the Soft Actor-Critic algorithm; see Algorithm 1.

Algorithm 1 Uncertainty Set Regularized Robust Soft Actor-Critic
1: Input: initial state $s$, action value $Q(s, a; \theta)$'s parameters $\theta_1, \theta_2$, policy $\pi(a \mid s; \phi)$'s parameters $\phi$, replay buffer $D = \emptyset$, transition nominal parameters $\bar{w}$, value target update rate $\rho$
2: Set target value parameters $\theta_{\text{tar},1} \leftarrow \theta_1$, $\theta_{\text{tar},2} \leftarrow \theta_2$
3: repeat
4:   Execute $a \sim \pi(a \mid s; \phi)$ in the environment
5:   Observe reward $r$ and next state $s'$
6:   $D \leftarrow D \cup \{s, a, r, s'\}$
7:   $s \leftarrow s'$
8:   for each gradient step do
9:     Randomly sample a batch of transitions $B = \{s, a, r, s'\}$ from $D$
10:    Construct the adversarial uncertainty set $\Omega_{\tilde{w}}$ as introduced in Section 3.3 (for the adversarial uncertainty set only)
11:    Compute the robust value target $y(s, a, r, s', \bar{w})$ by evaluating the RHS of Equation 4
12:    Update the action value parameters $\theta_i$ for $i \in \{1, 2\}$ by minimizing the mean squared loss to the target:
13:      $\nabla_{\theta_i} \frac{1}{|B|} \sum_{(s,a,r,s') \in B} \big(Q(s, a; \theta_i) - y(s, a, r, s', \bar{w})\big)^2$ for $i = 1, 2$
14:    Update the policy parameters $\phi$ by the policy gradient:
15:      $\nabla_\phi \frac{1}{|B|} \sum_{s \in B} \big(\min_{i=1,2} Q(s, \tilde{a}; \theta_i) - \alpha \log \pi(\tilde{a} \mid s; \phi)\big)$,
16:      where $\tilde{a}$ is sampled from $\pi(a \mid s; \phi)$ and is differentiable w.r.t. $\phi$
17:    Update the target value parameters:
18:      $\theta_{\text{tar},i} \leftarrow (1 - \rho)\, \theta_{\text{tar},i} + \rho\, \theta_i$ for $i = 1, 2$
19:  end for
20: until convergence

B.4 Algorithm for Generating the Adversarial Uncertainty Set

Algorithm 2 gives the pseudo-code for generating the adversarial uncertainty set introduced in Section 3.3 (see also the sketch after the process-map discussion below).

Algorithm 2 Generation of the Adversarial Uncertainty Set
1: Input: current state $s$, current action $a$, value function $V(s; \theta)$, next-state distribution $P(\cdot \mid s, a; \bar{w}) = \mathcal{N}(\mu(s, a; \bar{w}), \Sigma(s, a; \bar{w}))$
2: do
3:   Sample $\sigma \sim \mathcal{N}(0, I)$ and calculate the next state $s' = \mu(s, a; \bar{w}) + \Sigma(s, a; \bar{w})\, \sigma$ (the reparameterization trick to sample $s' \sim P(\cdot \mid s, a; \bar{w})$)
4:   Forward pass to calculate the next-state value $V(s'; \theta)$
5:   Backward pass to compute the derivative $g(\bar{w}) = \nabla_w V(s'; \theta) = \frac{\partial V(s'; \theta)}{\partial s'} \cdot \frac{\partial [\mu(s, a; \bar{w}) + \Sigma(s, a; \bar{w})\, \sigma]}{\partial w}$
6:   Normalize the derivative, $d(\bar{w}) = g(\bar{w}) / [\sum_i^W g(\bar{w})_i^2]^{0.5}$
7:   Generate the adversarial uncertainty set $\Omega_w = \{\bar{w} + \alpha \tilde{w} : \|\tilde{w} / d(\bar{w})\|_2 \le 1\}$
8: done

C Extra Experimental Setups

C.1 Process Map of Experiments

We clarify the experimental setup of this paper once more; the process map in Figure 5 facilitates understanding.

Figure 5: Process map of the experiments and specific examples (training on the nominal environment, testing on perturbed environments; examples: RWRL cartpole and the sim-to-real A1 robot).

During both training and testing, the agents only acquire observations (e.g., position, velocity) from the environment, without knowing the environmental parameters (e.g., pole length, robot mass). During training, the agents interact with a single set of environment parameters (the nominal environment), so it is impossible to predict the pattern of the perturbation. During testing, all trained agents are fixed and tested on various perturbed environment parameters. In other words, the agents have to learn a single policy on an unperturbed environment that can adapt to various perturbed environments, which can be far more challenging than it might appear.
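Returning to Algorithm 2 above, the following minimal PyTorch-style sketch (our illustrative code; mu_fn, sigma_fn and value_fn are assumed placeholders, not the paper's API) shows how the normalized sensitivity $d(\bar{w})$ that defines the adversarial uncertainty set might be computed.

```python
import torch

def adversarial_direction(s, a, w_bar, mu_fn, sigma_fn, value_fn):
    """Sketch of Algorithm 2: normalized sensitivity d(w_bar) of V(s') w.r.t. w_bar.

    mu_fn, sigma_fn and value_fn are illustrative placeholders for the nominal
    transition mean/scale and the learned state-value function; value_fn is
    assumed to return a scalar.
    """
    w = w_bar.clone().requires_grad_(True)

    # (1) reparameterized sample: s' = mu(s, a; w) + Sigma(s, a; w) * sigma
    sigma = torch.randn_like(s)
    s_next = mu_fn(s, a, w) + sigma_fn(s, a, w) * sigma

    # (2) forward pass: next-state value
    v_next = value_fn(s_next)

    # (3) backward pass through the reparameterization path: g(w_bar) = dV(s')/dw
    (g,) = torch.autograd.grad(v_next, w)

    # (4) normalize the derivative
    d = g / torch.sqrt((g ** 2).sum() + 1e-8)

    # (5) the adversarial set is Omega_w = { w_bar + alpha*w_tilde : ||w_tilde/d||_2 <= 1 };
    # its support function turns the L2 norm in the regularizer into a d-weighted norm.
    return d
```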
Under this zero-shot protocol, Adv-USR shows a stable improvement over all other baselines.

C.2 RWRL Benchmarks

In this paper, we conduct experiments on six tasks: cartpole balance, cartpole swingup, walker stand, walker walk, quadruped walk, and quadruped run. Each task combines a domain and a movement; for example, the cartpole balance task is the balance movement in the cartpole domain. We consider 3 domains with 2 movements each.

The 3 domains are cartpole, walker and quadruped:

• Cartpole has an unactuated pole mounted on a cart. A one-directional force can be applied to balance the pole. In the cartpole balance task the pole starts near upright, while in the cartpole swingup task the pole starts pointing down.
• Walker is a planar walker controlled in 6 dimensions. The walker stand task requires an upright torso and a minimal torso height; the walker walk task encourages forward velocity.
• Quadruped is a generic quadruped with a more complex state and action space than cartpole and walker. The quadruped walk and quadruped run tasks encourage different levels of forward speed.

For a detailed description of these tasks, please refer to DM CONTROL [51]. For each task, we follow the RWRL setup by selecting 4 environmental variables and perturbing them during the testing phase. The perturbed variables and their value ranges can be found in Table 2. We also report the nominal values of these perturbed variables to indicate the differences between training and testing environments. Notably, all tasks run for a maximum of 1000 steps and the maximum return is 1000.

Table 2: Tasks in the RWRL benchmark and the perturbed variables.

| Task Name | Obs. Dim. | Action Dim. | Perturbed Variable | Perturbed Range | Nominal Value |
|---|---|---|---|---|---|
| cartpole balance, cartpole swingup | 5 | 1 | pole length | [0.3, 3.0] | 1.0 |
| | | | pole mass | [0.1, 10.0] | 0.1 |
| | | | joint damping | [2e-6, 2e-1] | 2e-6 |
| | | | slider damping | [5e-4, 3.0] | 5e-4 |
| walker stand, walker walk | 24 | 6 | thigh length | [0.1, 0.7] | 0.225 |
| | | | torso length | [0.1, 0.7] | 0.3 |
| | | | joint damping | [0.1, 10.0] | 0.1 |
| | | | contact friction | [0.01, 2.0] | 0.7 |
| quadruped walk, quadruped run | 78 | 12 | shin length | [0.25, 2.0] | 0.25 |
| | | | torso density | [500, 10000] | 1000 |
| | | | joint damping | [10, 150] | 30 |
| | | | contact friction | [0.1, 4.5] | 1.5 |

C.3 Practical Implementation of Robust-AUC

To calculate Robust-AUC in the RWRL experiments, each agent is trained with 5 random seeds. During the testing phase, for each environmental variable P, we uniformly sample 20 perturbed values v in the range [v_min, v_max]. For each value v, the environment variable P is first set to v and the agent is tested for 100 episodes (20 episodes per seed). We then take the 10%-quantile² as the return r at value v. This yields an approximated curve C(v, r), from which the Robust-AUC defined previously is calculated. We also report the area between the 5%-quantile and the 15%-quantile as the statistical uncertainty of the reported Robust-AUC.

²The 10%-quantile, as a worst-case performance measure, evaluates the robustness of RL algorithms more reasonably than common metrics.

C.4 Model Structure

The model structure for all experimental baselines is based on Yarats and Kostrikov's [52] implementation of the Soft Actor-Critic (SAC) [25] algorithm. The actor network is a 3-layer feed-forward network with 1024 hidden units that outputs a Gaussian distribution over actions.
The critic network adopts the double-Q structure [53] and also has 3 hidden layers with 1024 hidden units on each layer, but only outputs a real number as the action-state value.

C.5 Hyperparameters

To compare all algorithms fairly, we set all hyperparameters equally except the robust method and its coefficient. All algorithms are trained with the Adam optimizer [26]. The full hyperparameters are shown in Table 3. For regularizer coefficients of all robust update methods, please see Table 4. Notably, the principle to choose the coefficient is to increase the value until the performance on the nominal environment drops. In the future, it can be automatically tuned by learning an unregularized value function and comparing the difference between the robust value and the unregularized value. All experiments are carried out on an NVIDIA GeForce RTX 2080 Ti with PyTorch 1.10.1.

Table 3: Hyperparameters of the robust RL algorithms.

| Hyperparameter | Value | Description |
|---|---|---|
| Batch size | 1024 | The number of transitions for each update |
| Discount factor γ | 0.99 | The importance of future rewards |
| Replay buffer size | 1e6 | The maximum number of transitions stored in memory |
| Episode length | 1e3 | The maximum time steps per episode |
| Max training steps | 1e6 | The number of training steps |
| Random steps | 5000 | The number of randomly acting steps at the beginning |
| Actor learning rate | 1e-4 | The learning rate for the actor network |
| Actor update frequency | 1 | The frequency for updating the actor network |
| Actor log std bounds | [-5, 2] | The output bound of the log standard deviation |
| Critic learning rate | 1e-4 | The learning rate for the critic network |
| Critic target update frequency | 2 | The frequency for updating the critic target network |
| Critic target update coefficient | 0.005 | The update coefficient of the critic target network for soft updates |
| Init temperature | 0.1 | Initial temperature of the actor's output for exploration |
| Temperature learning rate | 1e-4 | The learning rate for updating the policy entropy |
| Sample size | 1 | The sample size to approximate the robust regularizer |

Table 4: Regularization coefficients of the robust RL algorithms.

| Task Name | None-Reg | L1-Reg | L2-Reg | L1-USR | L2-USR | Adv-USR |
|---|---|---|---|---|---|---|
| cartpole balance | - | 1e-5 | 1e-4 | 5e-5 | 1e-4 | 1e-5 |
| cartpole swingup | - | 1e-5 | 1e-4 | 1e-4 | 1e-4 | 1e-4 |
| walker stand | - | 1e-4 | 1e-4 | 5e-5 | 1e-4 | 1e-4 |
| walker walk | - | 1e-4 | 1e-4 | 1e-4 | 1e-4 | 5e-4 |
| quadruped walk | - | 1e-5 | 1e-4 | 1e-4 | 1e-4 | 5e-4 |
| quadruped run | - | 1e-4 | 1e-4 | 5e-5 | 1e-4 | 7e-5 |

C.6 Extra Setups of the Sim-to-real Task

We use the Unitree A1 robot [32] and the Bullet simulator [33] as the platform for sim-to-real transfer. The Unitree A1 is a quadruped robot with 12 motors (3 motors per leg). The Bullet simulator is a popular simulation tool specially designed for robotics.

It is well known that there is a non-negligible difference between simulators and real robots because (1) the simulator possesses a simplified dynamics model and suffers from accumulated error [31], and (2) there are significant differences between simulators and real hardware in the robot's parameters, as the quadruped example in Table 5 shows. Therefore, training policies in simulation and applying them to real robots (sim-to-real) is a challenging task for robotics.

Specifically, we perform 2 sim-to-real tasks, standing and locomotion, following previous work [34]. The detailed description of the experiment is as follows:

Observation.
The observation contains the following features over 3 time steps: motor angles (12 dim), root orientation (4 dim: roll, pitch, roll velocity, pitch velocity), and previous actions (12 dim). The observation space therefore has 84 dimensions.

Action. All 12 motors are controlled in position-control mode, which an internal PD controller converts to torque. The action space for each leg is defined as [p − o, p + o]. The specific values for the different parts (hip, upper leg, knee) in the standing task are p = [0.00, 1.6, −1.8], o = [0.8, 2.6, 0.8]. The values in the locomotion task are p = [0.05, 0.7, −1.4], o = [0.2, 0.4, 0.4].

Reward. For the standing task, the reward consists of 3 parts: r(s, a) = 0.2 · r_HEIGHT + 0.6 · r_POSE + 0.2 · r_VEL. Here r_HEIGHT = 1 − |z − 0.2587| / 0.2587 rewards approaching the standing height on the z-axis, r_POSE = exp{−0.6 · Σ |m_target − m|} rewards correct motor positions, and r_VEL penalizes positive velocity (the robot should eventually stand still). For the locomotion task, the reward function is inspired by Kostrikov et al. [54]: r(s, a) = r_v(s, a) − 0.1 · v_YAW², where v_YAW is the angular yaw velocity and r_v(s, a) = 1 for v_x ∈ [0.5, 1.0], 0 for v_x ≥ 2.0, and 1 − |v_x − 0.5| otherwise, rewarding forward velocity along the x-axis.

Simulation Training. We first train SAC agents with and without Adv-USR in simulation. Each step simulates 0.033 seconds, so the control frequency is 33 Hz. Agents are trained for 1e6 steps in the standing task and 2e6 steps in the locomotion task. The model structures and hyperparameters are the same as in the RWRL experiments; see Appendix C.4 and C.5. The regularizer coefficients for Adv-USR are 1e-3 and 1e-4 for the two tasks, respectively.

Real Robot Evaluation. After training, we deploy the learned policies directly on the real robot. Since all sensors are onboard the robot, no external sensors are required. The control frequency is set to 33 Hz. We run each policy for 50 episodes of 1000 steps and report the 10%-quantile of the final return, with the 5%–15% quantile range as the error bar, in Figure 3.

Table 5: Unitree A1's parameters in simulation and on the real robot.

| Parameter | Simulation | Real Robot |
|---|---|---|
| Mass (kg) | 12 | [10, 14] |
| Center of mass (cm) | 0 | [-0.2, 0.2] |
| Motor strength (× default value) | 1.0 | [0.8, 1.2] |
| Motor friction (Nms/rad) | 1.0 | [0.8, 1.2] |
| Sensor latency (ms) | 0 | [0, 40] |
| Initial position (m) | (0, 0, 0.25) | ([-1, 1], [-1, 1], [0.2, 0.3]) |

D Extra Experimental Results

D.1 Constant Perturbation on System Parameters

Extra experimental results for the tasks cartpole balance, walker walk and quadruped run can be found in Table 6. We observe results similar to those in the main paper: both L2-USR and L1-USR can outperform the default version under certain perturbations (e.g., L1-USR in cartpole balance for pole mass, L2-USR in walker walk for thigh length), while Adv-USR achieves the best average rank across all perturbed scenarios, showing the best zero-shot generalization performance in continuous control tasks. Notably, L2-Reg in walker walk and L1-Reg in quadruped run also achieve competitive robust performance compared with Adv-USR. A possible reason is that, for environments with high-dimensional state and action spaces, some dimensions are redundant, and direct regularization of the value function's parameters effectively performs dimensionality reduction and thus learns a more generalized policy.

D.2 Computational Cost of All Algorithms

We report the average computation time (in milliseconds) for a single value update of all algorithms in Table 7.
We notice that the computation of all algorithms increases as the environment’s com-20Table 6: Robust-AUC of all algorithms and their uncertainties on RWRL benchmark.Task Name VariablesAlgorithmsNone-Reg L1-Reg L2-Reg L1-USR L2-USR Adv-USRcartpole balancepole length 981.45 (5.92) 989.85 (3.74) 989.33 (8.32) 798.07 (22.93) 944.89 (23.33) 959.66 (26.58)pole mass 623.88 (28.64) 605.35 (55.60) 607.79 (23.18) 632.54 (14.74) 588.13 (38.33) 627.00 (22.90)joint damping 970.83 (21.89) 978.97 (9.95) 982.71 (15.24) 985.57 (10.01) 978.62 (17.52) 982.43 (130.03)slider damping 999.44 (0.26) 999.30 (0.43) 999.34 (0.57) 999.45 (0.31) 999.49 (0.48) 999.55 (0.32)average rank 4.00 4.00 3.25 2.75 4.50 2.50walker walkthigh length 315.64 (37.24) 237.90 (25.04) 345.12 (40.30) 316.61 (37.86) 350.01 (34.37) 318.88 (53.73)torso length 498.01 (54.04) 300.39 (114.06) 533.96 (47.73) 550.44 (50.83) 543.39 (42.52) 543.91 (54.36)joint damping 364.70 (50.33) 283.19 (30.18) 420.23 (51.84) 357.39 (56.04) 356.22 (49.74) 368.35 (64.76)contact friction 885.01 (27.47) 714.94 (27.15) 907.13 (18.94) 897.65 (23.49) 900.58 (21.46) 902.03 (24.68)average 4.50 6.00 2.00 3.25 3.00 2.25quadruped runshin length 204.14 (91.36) 280.11 (61.49) 168.95 (38.19) 246.43 (117.07) 214.18 (56.06) 250.07 (79.37)torso density 321.24 (76.70) 417.68 (88.55) 252.37 (88.41) 319.43 (90.79) 225.32 (80.49) 383.14 (67.34)joint damping 367.05 (139.61) 641.08 (19.12) 687.42 (12.85) 324.38 (14.73) 692.02 (6.98) 664.25 (19.35)contact friction 654.43 (57.94) 614.21 (76.60) 473.58 (61.72) 632.64 (95.18) 624.32 (124.39) 537.19 (76.22)average rank 3.75 3.00 5.00 3.00 3.25 3.00Table 7: The computational cost (in milliseconds) for each value update of Robust RL algorithms.Task NameAlgorithmsNone-Reg L1-Reg L2-Reg L1-USR L2-USR Adv-USRcartpole balancecartpole swingup 14.72 ± 1.57 16.48 ± 1.68 17.05 ± 1.63 17.62 ± 1.49 22.48 ± 3.21 22.48 ± 3.21walker standwalker walk 15.71 ± 1.23 18.89 ± 1.58 17.52 ± 1.92 18.06 ± 1.80 18.39 ± 1.90 23.16 ± 1.72quadruped walkquadruped run 15.93 ± 1.68 19.47 ± 1.47 19.56 ± 1.67 20.79 ± 4.20 19.13 ± 1.98 25.23 ± 2.14plexity grows, and L1-Reg ,L1-Reg ,Adv-USR ’s complexities are acceptable compared with otherbaselines ( ×1∼1.25time cost). The computation only becomes a problem when applying USRmethods to dynamics with millions of parameters (common in model-based RL [38]). To tackle thisissue, we can identify important parameters to reduce computation costs, as stated in Section 6.Theoretically, the additional computational cost largely depends on the norm term∥∇wP(s′|s, a; ̄w)Vπ(s′)∥2in Equation 5, time complexity is O(W)(Wis the number ofparameters).D.3 Constant Perturbation on Multiple System ParametersIn real-world scenarios, there would be uncertainties in all system parameters. We provide the fol-lowing additional experimental results to show the robustness when all parameters are perturbedsimultaneously. The specific environmental setup is that all 4 parameters are perturbed simultane-ously during testing. The perturbation intensity grows from 0 to 1. 0 resembles training environ-ments without perturbations and 1 represents the allowed maximum perturbed values in Table 2. Weadopt the same metric Robust-AUC and report it in the following table. 
All methods become less robust due to the increasing difficulty of the perturbations, but Adv-USR still outperforms the others.

Table 8: Robust-AUC of all algorithms and their uncertainties on the RWRL benchmark under simultaneous perturbations.

| Task Name | None-Reg | L1-Reg | L2-Reg | L1-USR | L2-USR | Adv-USR |
|---|---|---|---|---|---|---|
| cartpole swingup | 867.42 (0.27) | 856.87 (0.52) | 866.99 (0.23) | 867.61 (0.44) | 867.45 (0.26) | 881.36 (0.21) |
| walker stand | 254.04 (32.91) | 235.36 (25.29) | 254.64 (35.01) | 262.57 (25.02) | 263.35 (24.85) | 266.97 (7.75) |
| quadruped walk | 522.98 (34.93) | 524.24 (85.03) | 525.51 (24.58) | 525.14 (85.68) | 506.17 (54.92) | 534.61 (18.25) |

D.4 Noisy Perturbation on System Parameters

One may also be interested in a noisy perturbation setup where the system parameters keep changing at every time step. This setup extends the robust RL framework, in which the perturbation is fixed throughout the whole episode. The specific experimental setup for noisy perturbation is as follows: the environmental parameter starts from its nominal value and follows a zero-mean Gaussian random walk at each time step. The nominal values and the standard deviations of the Gaussian random walk are recorded in Table 9. The experimental results on quadruped walk are shown in Figure 6. In this experiment, L1-Reg achieves the best robustness, while our method Adv-USR achieves top-2 performance in 3 out of 4 perturbations. Since L1-Reg performs less effectively in the case of fixed perturbations, this implies that different regularizers do have different impacts on these two types of perturbations. Under noisy perturbations, the environmental parameters walk randomly around the nominal value and reach extreme values less often, which calls for a less conservative robust RL algorithm. Our algorithm Adv-USR, originally designed for the fixed perturbation problem, achieves good but not the best performance, which points to an interesting future research direction on the trade-off between robustness and conservativeness.

Table 9: The perturbed variables for the noisy perturbation experiment.

| Task Name | Perturbed Variable | Start Value | Step Standard Deviation | Value Range |
|---|---|---|---|---|
| quadruped walk | shin length | 0.25 | 0.1 | [0.25, 2.0] |
| | torso density | 1000 | 500 | [500, 10000] |
| | joint damping | 30 | 10 | [10, 150] |
| | contact friction | 1.5 | 0.5 | [0.1, 4.5] |

Figure 6: Parameter–return bar graphs of all algorithms on quadruped walk (shin length, torso density, joint damping, contact friction). All bars represent the 10%-quantile of episodic return under noisy environmental parameters.

D.5 Training Performance

We further analyze the effects of Adv-USR during the training process. Since all algorithms are trained on the same nominal environment, they all learn to perform well on it, with similar episode reward curves (Figure 7a). However, their target values, which account for the robustness of the learned policies, are quite different (Figure 7b). Adv-USR has the lowest target value, indicating that the adversarial uncertainty set indeed encodes a pessimistic objective that encourages learning a more robust policy to resist this uncertainty.
This verifies the correctness of ourtheoretical claims.E Comparison with Domain RandomizationThe rethink on domain randomization (DR) directly motivates this paper: domain randomizationutilizes a variety of environments and trains an average model across them. Could we develop amore efficient way? Here we discuss the drawbacks of DR and why our method could be a possiblesolution.•Requirements on expert knowledge: DR needs to randomize multiple environments withvarious environmental parameters. However, which parameters and their range to random-22ize all require heavy expert knowledge. We have experience when applying DR on thesim-to-real locomotion task. If we set the PD controller’s gain in a large range, RL failsto learn even in simulation. If this range is small, the learned policy can’t transfer to realrobots. In comparison, our method reduces this effort by only training a robust policy on asingle nominal environment with a virtual adversarial uncertainty set.•Feasibility on certain setups: In some cases, we don’t have access to create multiplesimulated environments. This could happen in commercial autonomous driving softwarewithout exposing the low-level dynamics system, or the current powerful ChatGPT-styledcloud-based language model (if viewing the chatbot as a simulator), or even real-to-realtransfer (from one real robot to another slightly different real robot). It’s not possible tocreate different environments for DR but still possible to learn an adversarial uncertaintyset and train a robust policy based on that.•Training convergence: In general, DR has a slower and more unstable training processsince randomized multiple environments can be quite different and learning a single policyon them can be hard.We compare the popular domain randomization (DR) techniques on sim-to-real tasks (A1 quadrupedstanding and locomotion). The experimental setup is as follows. During training in simulation ,forNone-Reg andAdv-USR , agents are trained on the nominal environment with standard robots’parameters (”Other” column in Table 10). For DR, agents are trained on randomized initializedparameters within a certain range (”DR” column in Table 10). During testing on real robots , allpolicies are fixed and evaluated on the same robots with 50 episodes. We present the training curvesin Figure 8 and testing performance in Figure 9.DR has a slower convergence rate during training since randomized environments can be quite dif-ferent and learning a single policy on them can be hard. Adv-USR can already replace DR on simplesim-to-real tasks (standing). For more complex tasks (locomotion), DR still performs better sincethe performance is more affected by the model mismatch. But we believe our method is still valuableto provide an alternative concise choice to DR. Furthermore, as we have mentioned in the limita-tion section, our method is not perpendicular to the DR. 
Combining adversarial uncertainty sets with DR could potentially reduce the range of randomization DR needs while reaching the same robust performance.

Figure 7: Training performances of all algorithms on the quadruped walk task: (a) episode reward and (b) target value versus training timesteps (millions) for None-Reg, L1-Reg, L2-Reg, L1-USR, L2-USR, and Adv-USR.

Table 10: Unitree A1's parameters in simulation for the different algorithms.
Parameters | Other | DR
Mass (kg) | 12 | [10, 14]
Center of Mass (cm) | 0 | [-0.2, 0.2]
Motor Strength (× default value) | 1.0 | [0.8, 1.2]
Motor Friction (Nms/rad) | 1.0 | [0.8, 1.2]
Sensor Latency (ms) | 0 | [0, 40]
Initial position (m) | (0, 0, 0.25) | ([-1, 1], [-1, 1], [0.2, 0.3])

Figure 8: Episode returns during training (simulation) for (a) standing and (b) locomotion, comparing None-Reg, Adv-USR, and domain randomization.

F Comparison with State Perturbation
State perturbation describes uncertainty in the output space of the dynamics model, which is a special case of transition perturbation. We illustrate how to transform a state perturbation into a transition perturbation in the following case. Consider an L2 uncertainty set on the output of the dynamics model, Ω_sp = {s_p | s′ ∼ P(·|s, a; w̄), ‖s′ − s_p‖ ≤ 1}. We can rewrite the next-state distribution s′ ∼ P(·|s, a; w̄) as s′ = f(s′|s, a; w̄) + η, where η is random noise with η ∼ N(0, 1). The perturbed state can then be written as s_p = f(s′|s, a; w̄) + η + β with ‖β‖ ≤ 1. Viewing β as one additional parameter of the dynamics model, the uncertainty set on β is simply Ω_β = {‖β‖ ≤ 1}. Based on this uncertainty set on β, one can further design corresponding regularizers on the value function to increase robustness, as discussed in the paper. If the uncertainty set on the state space is unknown, denoted Ω_β, it is still feasible to include β as an additional parameter in the dynamics model. As a result, Adv-USR can still be used to handle this unknown uncertainty set on β.

Figure 9: Episode returns during testing (simulation and real robot) for (a) standing and (b) locomotion.
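As an illustration of the transformation in Sec. F, the sketch below folds a bounded state perturbation β into the transition function as one extra "parameter" of the dynamics; the nominal model f is a user-supplied callable and the projection radius is a placeholder, so this is a sketch under those assumptions rather than code from the paper.

```python
import numpy as np

def project_to_ball(beta, radius=1.0):
    """Project a perturbation vector onto the L2 ball {‖β‖ ≤ radius}."""
    beta = np.asarray(beta, dtype=float)
    norm = np.linalg.norm(beta)
    return beta if norm <= radius else beta * (radius / norm)

def perturbed_transition(f_nominal, s, a, beta, noise_std=1.0, rng=None):
    """State perturbation viewed as a transition perturbation (Sec. F).

    s' = f(s, a) + η is the nominal stochastic transition, and the perturbed
    next state is s_p = f(s, a) + η + β with ‖β‖ ≤ 1, i.e. β acts as an
    additional parameter of the dynamics model.
    """
    rng = np.random.default_rng() if rng is None else rng
    eta = rng.normal(0.0, noise_std, size=np.shape(s))  # nominal noise η
    beta = project_to_ball(beta)                        # enforce Ω_β
    return f_nominal(s, a) + eta + beta
```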
WGSR7HDuHu | Learning Robot Manipulation fromCross-Morphology DemonstrationGautam Salhotra∗I-Chun Arthur Liu∗Gaurav S. Sukhatme†Robotic Embedded Systems LaboratoryUniversity of Southern California[salhotra,ichunliu,gaurav]@usc.eduAbstract: Some Learning from Demonstrations (LfD) methods handle small mis-matches in the action spaces of the teacher and student. Here we address the casewhere the teacher’s morphology is substantially different from that of the stu-dent. Our framework, Morphological Adaptation in Imitation Learning ( MAIL ),bridges this gap allowing us to train an agent from demonstrations by other agentswith significantly different morphologies. MAIL learns from suboptimal demon-strations, so long as they provide some guidance towards a desired solution. Wedemonstrate MAIL on manipulation tasks with rigid and deformable objects in-cluding 3D cloth manipulation interacting with rigid obstacles. We train a visualcontrol policy for a robot with one end-effector using demonstrations from a sim-ulated agent with two end-effectors. MAIL shows up to 24% improvement in anormalized performance metric over LfD and non-LfD baselines. It is deployedto a real Franka Panda robot, handles multiple variations in properties for objects(size, rotation, translation), and cloth-specific properties (color, thickness, size,material). An overview is on this website.Keywords: Imitation from Observation, Learning from Demonstration1 IntroductionLearning from Demonstration (LfD) [1, 2] is a set of supervised learning methods where a teacher(often, but not always, a human) demonstrates a task, and a student (usually a robot) uses thisinformation to learn to perform the same task. Some LfD methods cope with small morphologicalmismatches between the teacher and student [3, 4] ( e.g., five-fingered hand to two-fingered gripper).However, they typically fail for a large mismatch ( e.g., bimanual human demonstration to a robotarm with one gripper). The key difference is that to reproduce the transition from a demonstrationstate to the next, no single student action suffices - a sequence of actions may be needed.Supervised methods are appealing where demonstration-free methods [5] do not converge or under-perform [6] and purely analytical approaches are computationally infeasible [7, 8]. In such settings,human demonstrations of complex tasks are often readily available e.g., it is straightforward for ahuman to show a robot how to fold a cloth. An LfD-based imitation learning approach is appealingin such settings provided we allow the human demonstrator to use their body in the way they findmost convenient ( e.g., using two hands to hang a cloth on a clothesline to dry). This requirementinduces a potentially large morphology mismatch - we want to learn and execute complex tasks withdeformable objects on a single manipulator robot using natural human demonstrations.We propose a framework, Morphological Adaptation in Imitation Learning ( MAIL ), to bridge thismismatch. We focus on cases where the number of end-effectors is different from teacher to student,∗Equal contribution†G.S. Sukhatme holds concurrent appointments as a Professor at USC and as an Amazon Scholar. Thispaper describes work performed at USC and is not associated with Amazon.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.although the method may be extended to other forms of morphological differences. MAIL enablespolicy learning for a robot with mend-effectors from teachers with nend-effectors. 
It does notrequire demonstrator actions, only the states of the objects in the environment making it potentiallyuseful for a variety of end-effectors (pickers, suction gripper, two-fingered grippers, or even hands).It uses trajectory optimization to convert state-based demonstrations into (suboptimal) trajectoriesin the student’s morphology. The optimization uses a learned (forward) dynamics model to tradeaccuracy for speed, especially useful for tasks with high-dimensional state and observation spaces.The trajectories are then used by an LfD method, optionally with exploration components like re-inforcement learning, which is adapted to work with sub-optimal demonstrations and improve uponthem by interacting with the environment. end-effectors (n=2) demonstrationsn end-effectors (m=1) rollouts in simulation and real worldmLearned Spatio-temporal Dynamics ModelIndirect Trajectory Optimization end-effectors demos mLfD MethodFigure 1: MAIL generalizes LfD to large morphologicalmismatches between teacher and student in difficult ma-nipulation tasks. We show an example task: hang a clothto dry on a plank (D RYCLOTH ). The demonstrations arebimanual, yet the robot learns to execute the task with asingle arm and gripper. The learned policy transfers tothe real world and is robust to object variations.Though the original demonstrationscontain states, we generalize the solu-tion to work with image observationsin the final policy. We showcase ourmethod on challenging cloth manipula-tion tasks (Sec. 4.1) for a robot with oneend-effector, using image observations,shown in Fig. 1. This setting is chal-lenging for multiple reasons. First, clothmanipulation is easy for bimanual hu-man demonstrators but challenging fora one-handed agent (even humans findcloth manipulation non-trivial with onehand). Second, deformable objects existin a continuous state space; image ob-servations in this setting are also high-dimensional. Third, the cloth being ma-nipulated makes a large number of con-tacts (hundreds) that are made/broken per time step. These can significantly slow down simulation,and consequently learning and optimization. We make the following contributions:1. We propose a novel framework, MAIL , that bridges the large morphological mismatch in LfD.MAIL trains a robot with mend-effectors to learn manipulation from demonstrations with adifferent ( n) number of end-effectors.2. We demonstrate MAIL on challenging cloth manipulation tasks on a robot with one end-effector.Our tasks have a high-dimensional ( >15000 ) state space, with several 100 contacts beingmade/broken per step, and are non-trivial to solve with one end-effector. Our learned agent out-performs baselines by up to 24% on a normalized performance metric and transfers zero-shot toa real robot. We introduce a new variant of 3D cloth manipulation with obstacles - D RYCLOTH .3. We illustrate MAIL providing different instances of end-effector transfer, such as a 3-to-2, 3-to-1, and 2-to-1 end-effector transfer, using a simple rearrangement task with three rigid bodiesin simulation and the real world. We further explain how MAIL can potentially handle moreinstances of n-to-mend-effector transfer.2 Related WorkImitation Learning and Reinforcement Learning with Demonstrations (RLfD): Imitation learn-ing methods [9, 10, 11, 12, 13] and methods that combine reinforcement learning and demonstra-tions [14, 15, 1, 2] have shown excellent results in learning a mapping between observations and ac-tions from demonstrations. 
However, their objective function requires access to the demonstrator’sground truth actions for optimization. This is infeasible for cross-morphology transfer due to actionspace mismatch. To work around this, prior works have proposed systems for teachers to providedemonstrations in the students’ morphology [16] which limits the ability of teachers to efficientlyprovide data. Similar to imitation learning, offline RL [17, 18, 19] learns from demonstrations stored2in a dataset without online environment interactions. While offline RL can work with large datasetsof diverse rollouts to produce generalizable policies [20, 21], it requires the availability of rolloutsthat have the same action space as the learning agent. MAIL learns across morphologies and is notaffected by this limitation.Imitation from Observation: Imitation from observation (IfO) methods [3, 9, 22, 23, 24, 25, 26]learn from the states of the demonstration; they do not use state-action pairs. In [27], an approachis proposed to learn repetitive actions using Dynamic Movement Primitives [28] and Bayesian op-timization to maximize the similarity between human demonstrations and robot actions. Many IfOmethods [3, 23, 24, 29] assume that the student can take a single action to transition from the demon-stration’s current state to the next state. Some methods [3, 23] use this to train an inverse dynamicsmodel to infer actions. Others extract keypoints from the observations and compute actions by sub-tracting consecutive keypoint vectors. XIRL [30] uses temporal cycle consistency between demon-strations to learn task progress as a reward function, which is then fed to RL methods. However,when the student has a different action space than the teacher, it may require more than one actionfor the student to reach consecutive demonstration states. For example, in an object rearrangementtask, a two-picker teacher agent can move two objects with one pick-place action. But a one-pickerstudent will need two or more actions to achieve the same result. Zero-shot visual imitation [9]assumes that the statistics of visual observations and agents observations will be similar. However,when solving a task with different numbers of arms, some intermediate states will not be seen inteacher demonstrations. State-of-the-art learning from observation methods [25, 31] have madesignificant advancements in exploiting information between states. However, their tasks have muchlonger horizons, hence more states and learning signals than ours. Whether these methods work wellon short-horizon, difficult manipulation tasks is uncertain. To address this and provide a meaningfulcomparison, we conducted experiments to compare MAIL with these methods (Sec. 4).Trajectory Optimization: Trajectory optimization algorithms [32, 8, 33] optimize a trajectory byminimizing a cost function, subject to a set of constraints. It has been used for manipulation ofrigid and deformable objects [7], even through contact [34] using complementarity constraints [35].Indirect trajectory optimization only optimizes the actions of a trajectory and uses a simulator forthe dynamics instead of adding dynamics constraints at every step.Learned Dynamics: Learning dynamics models is useful when there is no simulator, or if thesimulator is too slow or too inaccurate. Learned models have been used with Model-PredictiveControl (MPC) to speed up prediction times [36, 37, 38]. 
A common use case is model-based RL [39], where learning the dynamics is part of the algorithm; such approaches have been shown to learn dynamics from states and from pixels [40] and have been applied to real-world tasks [41].

3 Formulation and Approach

3.1 Preliminaries

We formulate the problem as a partially observable Markov Decision Process (POMDP) with state s ∈ S, action a ∈ A, observation o ∈ O, transition function T: S × A → S, horizon H, discount factor γ, and reward function r: S × A → R. The discounted return at time t is R_t = Σ_{i=t}^{H} γ^i r(s_i, a_i), with s_i ∼ T(s_{i−1}, a_{i−1}). A task is instantiated with a variant sampled from the task distribution, v ∼ V, and the initial environment state depends on the task variant, s_0(v). We train a policy πθ to maximize the expected reward J(πθ) of an episode over task variants v, J(πθ) = E_{v∼V}[R_0], subject to the initial state s_0(v) and the dynamics T.

For an agent with morphology M, we differentiate between datasets available as demonstrations (DMDemo) and those that are optimized (DMOptim). For our cloth environments, the teacher morphology is two pickers (M = 2p) and the student morphology is one picker (M = 1p). We assume the demonstrations come from teachers whose morphology can differ from the student's (and from each other's). We refer to these as teacher demonstrations, DTeacher, to emphasize that they do not necessarily come from an expert or an oracle; they can be suboptimal. The demonstrations are state trajectories τT = (s0, ..., sH−1). The teacher dataset is made up of KT such trajectories, DTeacher = {τT,i} for i = 1, ..., KT, using a few task variations vd ∼ V from the task distribution.

Figure 2: An example cloth folding task with demonstrations from a teacher with n = 2 end-effectors, deployed on a Franka Panda with m = 1 end-effector (parallel-jaw gripper). We train a network to predict the forward dynamics of the object being manipulated in simulation, using a random-action dataset DRandom: for every state transition, we match the predicted particle displacements from our model, ∆Ppred, to those of the simulator, ∆Psim. Given this learned dynamics and the teacher demonstrations, we use indirect trajectory optimization to find student actions that solve the task; the optimization objective is to match the object states in the demonstration. Finally, we pass the optimized dataset DStudent to a downstream LfD method to get a final policy π(o) that generalizes to task variations and extends task learning to image space, enabling real-world deployment.

We now discuss the components of MAIL, shown in Fig. 2. The user provides teacher demonstrations DTeacher. First, we create a dataset of random actions, DRandom, and use it to train a dynamics model, Tψ; Tψ reduces computational cost when dealing with contact-rich simulations like cloth manipulation (Sec. 4.1). Next, we convert each teacher demonstration into a trajectory suitable for the student's morphology. For our tasks, we find that gradient-free indirect trajectory optimization [33] performs the best (Appendix Sec. A.1), and we use Tψ for this optimization as it provides the appropriate speed-accuracy trade-off; a high-level sketch of the full pipeline is given below.
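The following is a minimal, pseudocode-style sketch of this pipeline under the assumptions stated in its docstring; the four callables are hypothetical stand-ins for the components described in Secs. 3.2-3.4, not the released implementation.

```python
def mail_pipeline(env, teacher_demos, collect_random_transitions,
                  train_dynamics_model, optimize_student_actions,
                  train_lfd_policy):
    """High-level sketch of the MAIL pipeline in Fig. 2.

    teacher_demos are state-only trajectories [s_0, ..., s_{H-1}] recorded in
    the teacher's morphology (e.g., two pickers); the returned policy acts in
    the student's morphology (e.g., one picker).
    """
    # 1. Random interaction data for the fast, approximate dynamics model T_psi.
    d_random = collect_random_transitions(env)
    dynamics = train_dynamics_model(d_random)

    # 2. Indirect trajectory optimization: for every teacher demonstration,
    #    search for student actions whose predicted rollout reaches the
    #    demonstration's goal state, then replay them in the true simulator.
    d_student = [env.rollout(optimize_student_actions(dynamics,
                                                      start_state=demo[0],
                                                      goal_state=demo[-1]))
                 for demo in teacher_demos]

    # 3. Train the downstream LfD method (DMfD) on the optimized,
    #    possibly suboptimal student-morphology demonstrations.
    return train_lfd_policy(env, d_student)
```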
The optimization objective is to match with object states in the demonstra-tion (we cannot match demonstration actions across morphologies). We combine these optimizedtrajectories to create a dataset DStudent for the student. Finally, we pass DStudent to a downstreamLfD method to learn a policy πthat generalizes from the task variations in DTeacher to the taskdistribution V. It also extends πto use image observations and deploys zero-shot on a real robot(rollouts in Fig. 5).3.2 Learned Spatio-temporal Dynamics ModelMAIL uses trajectory optimization to convert demonstrations into (suboptimal) trajectories in thestudent’s morphology. This can be prohibitively slow for large state spaces and complex tasks suchas cloth manipulation. Robotic simulators have come a long way in advancing fidelity and speed, butsimulating complex deformable objects and contact-rich manipulation still requires significant com-putation making optimization intractable for challenging simulations. We use the NVIDIA FLeXsimulator that is based on extended position-based dynamics [42]. We learn a CNN-LSTM basedspatio-temporal forward dynamics model with parameters ψ,Tψ, to approximate cloth dynamics, T.This offers a speed-accuracy trade-off with a tractable computation time in environments with largestate spaces and complex dynamics. The states of objects are represented as Nparticle positions:s=P={pi}i=1...N. Each particle state consists of its x, y, and z coordinates. For each task, wegenerate a corpus of random pick-and-place actions and store them in the dataset DRandom ={di},where i= 1, . . . , K Randdi= (Pi, ai, P′i). For each datum i, we feed Pito the CNN networkto extract features of particle connectivity. These features are concatenated with aiand input to theLSTM model to extract features based on the previous particle positions. A fully connected layerfollowed by layer normalization and tanh activation is used to learn the non-linear combinationsof features. The outputs are the predicted particle displacements. The objective function is the4distance between predicted and ground-truth particle displacements, ∥∆Psim−∆Ppred∥2. Here∆Psim={∆pi}i=1,...,N is obtained from the simulator and ∆pi=pi+1−pifor every particle i.Due to its simplicity, the CNN-LSTM dynamics model provides fast inference, compared to a sim-ulator which may have to perform many collision checks at any time step. This speedup is crucialwhen optimizing over a large state space, as long as the errors in particle positions are tolerable. Inour experiments, we were able to get 162 fps with Tψ, compared to 3.4 fps with the FleX simulator,a 50x speed up (Fig. 8). However, this stage is optional if the environment is low-dimensional, orif the simulation speed-up from inference is not significant. Simulation accuracy is important whentraining a final policy, to provide accurate pick-place locations for execution on a real robot. Hence,the learned dynamics model is not used for training in the downstream LfD method.3.3 Indirect Trajectory Optimization with Learned DynamicsWe use indirect trajectory optimization [33] to find the open-loop action trajectory to match theteacher state trajectory, τT. This optimizes for the student’s actions while propagating the state witha simulator. We use the learned dynamics Tψto give us fast, approximate optimized trajectories.This is in contrast to direct trajectory optimization (or collocation) that optimizes both states andactions at every time step. 
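As a concrete reference for the forward model Tψ used by this optimization (Sec. 3.2), the following PyTorch-style sketch shows one possible 1D CNN-LSTM instantiation; the layer widths, kernel sizes, and pooling choice are assumptions for illustration, not the exact architecture used.

```python
import torch
import torch.nn as nn

class ParticleDynamicsModel(nn.Module):
    """Sketch of the spatio-temporal forward model T_psi (Sec. 3.2).

    Input:  particle positions P of shape (B, N, 3) and a pick-and-place
            action a of shape (B, 6).
    Output: predicted per-particle displacements ΔP_pred of shape (B, N, 3).
    """
    def __init__(self, num_particles, action_dim=6, feat_dim=128):
        super().__init__()
        # 1D CNN over the particle dimension (channels = xyz coordinates).
        self.cnn = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))                      # (B, feat_dim, 1)
        self.lstm = nn.LSTM(feat_dim + action_dim, feat_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(feat_dim, num_particles * 3),
            nn.LayerNorm(num_particles * 3),
            nn.Tanh())                                    # bounded displacements

    def forward(self, particles, action, hidden=None):
        b, n, _ = particles.shape
        feats = self.cnn(particles.transpose(1, 2)).squeeze(-1)  # (B, feat_dim)
        x = torch.cat([feats, action], dim=-1).unsqueeze(1)      # (B, 1, feat+act)
        out, hidden = self.lstm(x, hidden)
        delta = self.head(out.squeeze(1)).view(b, n, 3)
        return delta, hidden

def displacement_loss(delta_pred, delta_sim):
    """Match the simulator's displacements, i.e. penalize ‖ΔP_sim − ΔP_pred‖²."""
    return ((delta_pred - delta_sim) ** 2).sum(dim=-1).mean()
```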
Direct trajectory optimization requires dynamics constraints to ensure consistency among the states being optimized, which can be challenging for discontinuous dynamics. We use the Cross-Entropy Method (CEM) for optimization and compare it against other methods, such as SAC (Appendix A.1); optimization hyperparameters are described in Table 5. The optimization objective is to match the object's goal state sgoal in the demonstration with the same task variant vd. Formally, the problem is defined as:

min over actions a_t of ‖sgoal − sH‖²,
subject to s0 = s0(vd) and st+1 = T(st, at) for all t = 0, ..., H−1,   (1)

where sH is the predicted final state. Note that if τT had a longer time horizon, it would help to match intermediate states and use multiple-shooting methods. After optimizing the action trajectory for each demonstration τT,i ∈ DTeacher, we use it with the simulator to obtain the optimized trajectories in the student's morphology. These are combined to create the student dataset DStudent = {τ1, τ2, τ3, ...}, where τi = (st, ot, at, st+1, ot+1, rt, d) for all t = 1, ..., H−1. For generalizability and real-world capability, we train an LfD method using DStudent. Note that we use the learned dynamics model at this stage, trading lower accuracy for faster simulation; this is also partially responsible for why DStudent contains suboptimal demonstrations. To reduce the effect of learned-model errors, once we obtain the optimized actions we perform a rollout with the true simulator to get the demonstration data.

3.4 Learning from the Optimized Dataset

Our chosen LfD method is DMfD [43], an off-policy RLfD actor-critic method that utilizes expert demonstrations as well as rollouts from its own exploration. It learns using an advantage-weighted formulation [44] balanced with an exploration component [5]. As mentioned above, at this stage we use the simulator instead of the learned dynamics model Tψ, because accuracy is important in the final reactive policy; hence, we cannot take the speed-accuracy trade-off that Tψ provides. However, one may choose other LfD methods that do not need to interact with the environment [45], in which case neither a simulator nor learned dynamics are needed.

As part of tuning, we employ 100 demonstrations, about two orders of magnitude fewer than the 8000 recommended by the original work. To prevent the policy from overfitting to suboptimal demonstrations in DStudent, we disable demonstration-state matching, i.e., resetting the agent to demonstration states and applying an imitation reward (see Appendix A.5); these were originally proposed as reference state initialization (RSI) [46]. These modifications are essential for our LfD implementation, where the demonstrations do not come from an expert.

In DMfD, the policy is parameterized by parameters θ and learns from data collected in a replay buffer B. The policy loss contains an advantage-weighted loss LA, in which actions are weighted by the advantage function Aπ(s, a) = Qπ(s, a) − Vπ(s) with temperature parameter λ. It also contains an entropy component LE to promote exploration during data collection. The final policy loss Lπ is a combination of these terms (Eq. 2):

LA = E_{s,a,o∼B} [ log πθ(a|o) exp( (1/λ) Aπ(s, a) ) ]
LE = E_{s,a,o∼B} [ α log πθ(a|o) − Q(s, a) ]
Lπ = (1 − wE) LA + wE LE,  0 ≤ wE ≤ 1,   (2)

where wE is a tunable hyper-parameter; a sketch of this combined loss is given below. The resulting policy is denoted πθ. We pre-populate the buffer B with DStudent.
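A minimal PyTorch-style sketch of this combined objective is shown below, written as a quantity to minimize (so the advantage-weighted log-likelihood term appears with a negative sign); clipping the exponentiated advantage weights is an added assumption for numerical stability, not something specified above.

```python
import torch

def dmfd_policy_loss(log_prob, q_value, value, alpha, lam, w_e,
                     max_weight=20.0):
    """Sketch of the combined DMfD-style policy objective (Eq. 2).

    log_prob: log πθ(a|o) for actions sampled from the replay buffer, shape (B,)
    q_value:  Qπ(s, a), shape (B,)
    value:    Vπ(s), shape (B,)
    """
    advantage = q_value - value                       # Aπ(s,a) = Q − V
    weight = torch.exp(advantage / lam).clamp(max=max_weight).detach()
    loss_a = -(weight * log_prob).mean()              # advantage-weighted term LA
    loss_e = (alpha * log_prob - q_value).mean()      # SAC-style exploration term LE
    return (1.0 - w_e) * loss_a + w_e * loss_e        # Lπ, with 0 ≤ w_e ≤ 1

# Example with dummy tensors:
b = 32
loss = dmfd_policy_loss(log_prob=torch.randn(b), q_value=torch.randn(b),
                        value=torch.randn(b), alpha=0.2, lam=1.0, w_e=0.1)
```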
Using LfD, we extend from state inputs to image observations and generalize from vd to any variation sampled from V.

4 Experiments

Our experiments are designed to answer the following: (1) How does MAIL compare to state-of-the-art (SOTA) methods? (Sec. 4.2) (2) How well can MAIL solve tasks in the real world? (Sec. 4.2.1) (3) Does MAIL generalize to different n-to-m end-effector transfers? (Sec. 4.3) Additional experiments demonstrating how different MAIL components affect performance are in Appendix A.

4.1 Tasks

We experiment with cloth manipulation tasks that are easy for humans to demonstrate but difficult to perform on a robot. We also discuss a simpler rearrangement task with rigid bodies to illustrate generalizability. The tasks are shown in Appendix Fig. 6. We choose a 6-dimensional pick-and-place action space, with xyz positions for the pick and the place. The end-effectors are pickers in simulation and a two-finger parallel-jaw gripper on the real robot.

CLOTH FOLD: Fold a square cloth in half, along a specified line. DRYCLOTH: Pick up a square cloth from the ground and hang it on a plank to dry, a variant of [47]. THREE BOXES: A simple environment with three boxes along a line that need to be rearranged to designated goal locations. For details on metrics and task variants, see Appendix B.

Figure 3: SOTA performance comparisons on CLOTH FOLD and DRYCLOTH (normalized performance of GNS, SAC-CURL, SAC-DrQ, GPIL, GAIfO, SAC-DrQ-IR, and MAIL (ours)). For each method, we used the best model from each seed's training run and evaluated it using 100 rollouts across 5 seeds different from the training seed. Bar height denotes the mean; error bars indicate the standard deviation. MAIL outperforms all baselines, in some cases by as much as 24%.

We use particle positions as the state for training the dynamics model and for trajectory optimization, and a 32x32 RGB image as the visual observation, where applicable. We record pre-programmed demonstrations for the teacher dataset of each task. Details of the datasets used to train the LfD method and the dynamics model are in Appendix E and Appendix F. The instantaneous reward used in learning the policy is the task performance metric of the current state. Further details on architecture and training are in the supplementary material. In all experiments, we compare each method's normalized performance, measured at the end of the task and given by p̂(t) = (p(st) − p(s0)) / (popt − p(s0)), where p is the performance metric of state st at time t and popt is the best performance achievable on the task. We use p̂(H), measured at the end of the episode (t = H).

4.2 SOTA comparisons

Many LfD baselines (Sec. 2) are not directly applicable, as they do not handle large differences in action space due to different morphologies. We compare MAIL with those LfD baselines that produce a policy with image observations, given demonstrations without actions.

1. SAC-CURL [48]: An image-based RL algorithm that uses contrastive learning and SAC [5] as the underlying RL algorithm. It does not require demonstrations.
2. SAC-DrQ [49]: An image-based RL algorithm that uses a regularized Q-function, data augmentations, and SAC as the underlying RL algorithm. It does not require demonstrations.
3. GNS [50]: A SOTA method that represents cloth as a graph and predicts dynamics using a graph neural network (GNN). It does not require demonstrations but learns dynamics from the random-actions dataset. We run this learned model with a planner [51], given full state information.
4.
SAC-DrQ-IR: A custom variant of SAC [5] that uses DrQ-based [49] image encoding and a state-only imitation reward (IR) to reach the desired state of the object to be manipulated. It does notimitate actions, as they are unavailable.5. GAIfO [25]: An adversarial imitation learning algorithm that trains a discriminator on state-statepairs (s, s′)from both the demonstrator and agent. This is a popular extension of GAIL [13] thatlearns the same from state-action pairs (s, a).6. GPIL [31] A goal-directed LfD method that uses demonstrations and agent interactions to learna goal proximity function. This function provides a dense reward to train a policy.Fig. 3 has performance comparisons against all baselines. In each environment, the first threecolumns are demonstration-free baselines, and the last four are LfD methods. MAIL outperformsall baselines, in some cases by as much as 24%. For the easier C LOTH FOLD task, the SAC-DrQbaseline came within 11% ofMAIL .(a) Teacher demonstration with three pickers.(b) Final policy: Two pickers(c) Final policy: One picker(d) Final policy: One Franka Panda robotFigure 4: Sample trajectories of the THREEBOXES task. A three-picker teacher trajectory toreach the goal state (Fig. 4a). Final policies of thetwo-picker and one-picker agent, and real-worldexecution of the one-picker agent.However, all baselines do not perform well inthe more difficult D RYCLOTH task. RL meth-ods fail because they have not explored the pa-rameter space enough without guidance fromdemonstrations. Our custom LfD baseline,SAC-DrQ-IR, has reasonable performance, butthe results show that naive imitation aloneis not a good form of guidance to solve it.The other LfD baselines, GAIfO and GPIL,have poor performance in both environments.The primary reason is the effect of cross-morphological demonstrations. They performsignificantly better with student morphologydemonstrations, even if they are suboptimal.Moreover, environment difficulty also plays animportant part in the final performance. Theseand other ablations are in Appendix A.Surprisingly, the GNS baseline with structureddynamics does not perform well, even though ithas been used for cloth modeling [52]. This isbecause it is designed to learn particle dynamics via small displacements, but our pick-and-placeaction space enables large displacements. Similar to [51], we break down each pick-and-placeaction into 100 delta actions to work with the small displacements that GNS is trained on. Thus,planning will accumulate errors from the 100GNS steps for every action of the planner, whichcan grow superlinearly due to compounding errors. This makes it difficult to solve the task. It isespecially seen in D RYCLOTH , where the displacements required to move the entire cloth over theplank are much higher than the displacements needed for C LOTH FOLD. The rollouts of MAIL onDRYCLOTH show the agent following the demonstrated guidance - it learned to hang the cloth overthe plank. It also displayed an emergent behavior to straighten out the cloth on the plank to spreadit out and achieve higher performance. This was not seen in the two-picker teacher demonstrations.4.2.1 Real-world resultsFor D RYCLOTH and C LOTH FOLD tasks, we deploy the learned policies on a Franka Panda robotwith a single parallel-jaw gripper (Fig. 5, statistics over 10 rollouts). We test the policies withmany different variations of square cloth (size, rotation, translation, thickness, color, and material).See Appendix D for performance metrics. 
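For reference, the normalized performance p̂ reported throughout Sec. 4 and for these real-world rollouts reduces to a one-line computation; the sketch below assumes the scalar metric values p(s0), p(sH), and popt are already available from the task's performance function, and the example numbers are hypothetical.

```python
def normalized_performance(p_start, p_end, p_opt):
    """Normalized performance p̂ = (p(s_H) − p(s_0)) / (p_opt − p(s_0)).

    p_start: task performance metric of the initial state s_0
    p_end:   task performance metric of the final state s_H
    p_opt:   best performance achievable on the task
    Returns 1.0 for an optimal rollout and 0.0 for no improvement.
    """
    return (p_end - p_start) / (p_opt - p_start)

# Example: a rollout that improves the raw metric from 0.1 to 0.75,
# with an achievable optimum of 0.9 (hypothetical numbers).
print(normalized_performance(0.1, 0.75, 0.9))  # ≈ 0.81
```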
The policies achieve ∼80% performance, close to theaverage simulation performance, for both tasks.74.3 GeneralizabilityWe show examples of how MAIL learns from a demonstrator with a different number of end-effectors, in a simple T HREE BOXES task (Fig. 4). Consider a three-picker agent that solves thetask in one pick-place action. Given teacher demonstrations DTeacher , we transfer them into one-picker or two-picker demonstrations using indirect trajectory optimization and the learned dynamicsmodel. These are the optimized datasets that are fed to a downstream LfD method. In both cases,the LfD method learns a model, specific to that morphology, to solve the task. It generalizes fromstate inputs in the demonstrations to the image inputs received from the environment. Fig. 4 showsthe three picker demonstration, a 3-to-2 and 3-to-1 end-effector transfer. We have also done this forthe 2-to-1 case (omitted here for brevity). These examples illustrate n-to-mend-effector transferwithn > m ; it is trivial to perform the transfer for n-to-mwithn≤mby simply appending theteacher’s action space with m−narms that do no operations.4.4 LimitationsMAIL requires object states in demonstrations and during simulation training, however, full stateinformation is not needed at deployment time. It has been tested on the pick-place action space. Ithas been tested only on cases where the number of end-effectors is different from teacher to stu-dent.While it works for high-frequency actions (Appendix A.7), it will likely be difficult to optimizeactions to create the student dataset for high-dimensional actions. This is because the curse of di-mensionality will apply for larger action spaces when optimizing for Dstudent . The state-visitationdistribution of demonstration trajectories must overlap with that of the student agent; this overlapmust contain the equilibrium states of the demonstration. For example, a one-gripper agent cannotreach a demonstration state where two objects are moving simultaneously, but it canreach a statewhere both objects are stable at their goal locations (equilibrium). MAIL cannot work when thestudent robot is unable to reach the goal or intermediate states in the demonstration. For example,in trying to open a flimsy bag with two handles, both end-effectors may simultaneously be neededto keep the bag open. When we discuss generalizability for the case n≤m, our chosen methodto tackle morphological mismatch is to use fewer arms on the student robot, in lieu of trajectoryoptimization. This is an inefficient approach since we ignore some arms of the student robot. MAILbuilds a separate policy for each student robot morphology and each task. While it is possible totrain a multi-task policy conditioned on a given task (provided as an embedding or a natural lan-guage instruction), extending MAIL to output policies for a variable number of end-effectors wouldrequire more careful consideration. Subsequent work could learn a single policy conditioned on thedesired morphology - another way to think about a base model for generalized LfD.5 ConclusionCloth foldPerformance0.818Dry clothSpread metric8/10Figure 5: Real-world results for CLOTHFOLD and DRYCLOTH .We presented MAIL , a framework that enables LfDacross morphologies. Our framework enables learn-ing from demonstrations where the number of end-effectors is different from teacher to student. Thisenables teachers to record demonstrations in the set-ting of their own morphology, and vastly expandsthe set of demonstrations to learn from. 
We showan improvement of up to 24% over SOTA baselinesand discuss other baselines that are unable to handlea large mismatch between teacher and student. Ourexperiments are on challenging household cloth ma-nipulation tasks performed by a robot with one end-effector based on bimanual demonstrations. Weshowed that our policy can be deployed zero-shot on a real Franka Panda robot, and generalizesacross cloths of varying size, color, material, thickness, and robustness to cloth rotation and trans-lation. We further showed examples of LfD generalizability with instances of transfer from n-to-mend-effectors, with multiple rigid objects. We believe that this is an important step towards allowingLfD to train a robot to learn from anyrobot demonstrations, regardless of robot morphology, expertknowledge, or the medium of demonstration.8References[1] K. Pertsch, Y . Lee, Y . Wu, and J. J. Lim. Demonstration-guided reinforcement learning withlearned skills. 5th Conference on Robot Learning , 2021.[2] I.-C. A. Liu, S. Uppal, G. S. Sukhatme, J. J. Lim, P. Englert, and Y . Lee. Distilling motionplanner augmented policies into visual control policies for robot manipulation. In A. Faust,D. Hsu, and G. Neumann, editors, Proceedings of the 5th Conference on Robot Learning ,volume 164 of Proceedings of Machine Learning Research , pages 641–650. PMLR, 08–11Nov 2022. URL https://proceedings.mlr.press/v164/liu22b.html .[3] I. Radosavovic, X. Wang, L. Pinto, and J. Malik. State-only imitation learning for dexterousmanipulation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 7865–7871. IEEE, 2021.[4] Y . Yang, Y . Li, C. Fermuller, and Y . Aloimonos. Robot learning manipulation action plansby “watching” unconstrained videos from the world wide web. Proceedings of the AAAIConference on Artificial Intelligence , 29(1), Mar. 2015. doi:10.1609/aaai.v29i1.9671. URLhttps://ojs.aaai.org/index.php/AAAI/article/view/9671 .[5] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropydeep reinforcement learning with a stochastic actor. In International conference on machinelearning , pages 1861–1870. PMLR, 2018.[6] J. Hietala, D. Blanco–Mulero, G. Alcan, and V . Kyrki. Learning visual feedback control fordynamic cloth folding. In 2022 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 1455–1462, 2022. doi:10.1109/IROS47612.2022.9981376.[7] S. Jin, D. Romeres, A. Ragunathan, D. K. Jha, and M. Tomizuka. Trajectory optimization formanipulation of deformable objects: Assembly of belt drive units. In 2021 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 10002–10008. IEEE, 2021.[8] J. M. Bern, P. Banzet, R. Poranne, and S. Coros. Trajectory optimization for cable-driven softrobot locomotion. In Robotics: Science and Systems , volume 1, 2019.[9] D. Pathak, P. Mahmoudieh, G. Luo, P. Agrawal, D. Chen, Y . Shentu, E. Shelhamer, J. Malik,A. A. Efros, and T. Darrell. Zero-shot visual imitation. In ICLR , 2018.[10] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine. One-shot visual imitation learning via meta-learning. In S. Levine, V . Vanhoucke, and K. Goldberg, editors, Proceedings of the 1st AnnualConference on Robot Learning , volume 78 of Proceedings of Machine Learning Research ,pages 357–368. PMLR, 13–15 Nov 2017. URL https://proceedings.mlr.press/v78/finn17a.html .[11] Y . Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, andW. Zaremba. One-shot imitation learning. 
In Proceedings of the 31st International Conferenceon Neural Information Processing Systems , NIPS’17, page 1087–1098, Red Hook, NY , USA,2017. Curran Associates Inc. ISBN 9781510860964.[12] M. Laskey, J. Lee, R. Fox, A. D. Dragan, and K. Goldberg. DART: noise injection for robustimitation learning. In 1st Annual Conference on Robot Learning, CoRL 2017, Mountain View,California, USA, November 13-15, 2017, Proceedings , volume 78 of Proceedings of MachineLearning Research , pages 143–156. PMLR, 2017. URL http://proceedings.mlr.press/v78/laskey17a.html .[13] J. Ho and S. Ermon. Generative adversarial imitation learning. In D. Lee,M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances inNeural Information Processing Systems , volume 29. Curran Associates, Inc.,2016. URL https://proceedings.neurips.cc/paper/2016/file/cc7e2b878868cbae992d1fb743995d8f-Paper.pdf .9[14] A. Rajeswaran, V . Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine.Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demon-strations. In Proceedings of Robotics: Science and Systems (RSS) , 2018.[15] M. Vecer ́ık, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. M. O. Heess, T. Roth ̈orl,T. Lampe, and M. A. Riedmiller. Leveraging demonstrations for deep reinforcement learningon robotics problems with sparse rewards. ArXiv , abs/1707.08817, 2017.[16] A. Mandlekar, J. Booher, M. Spero, A. Tung, A. Gupta, Y . Zhu, A. Garg, S. Savarese, andL. Fei-Fei. Scaling robot supervision to hundreds of hours with roboturk: Robotic manipulationdataset through human reasoning and dexterity. In 2019 IEEE/RSJ International Conferenceon Intelligent Robots and Systems (IROS) , pages 1048–1055. IEEE, 2019.[17] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review,and perspectives on open problems. arXiv e-prints , pages arXiv–2005, 2020.[18] S. Lange, T. Gabel, and M. Riedmiller. Batch reinforcement learning. In Reinforcement learn-ing, pages 45–73. Springer, 2012.[19] F. Fuchs, Y . Song, E. Kaufmann, D. Scaramuzza, and P. D ̈urr. Super-human performance ingran turismo sport using deep reinforcement learning. IEEE Robotics and Automation Letters ,6(3):4257–4264, 2021.[20] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative q-learning for offline reinforce-ment learning. Advances in Neural Information Processing Systems , 33:1179–1191, 2020.[21] P. Rashidinejad, B. Zhu, C. Ma, J. Jiao, and S. Russell. Bridging offline reinforcement learningand imitation learning: A tale of pessimism. Advances in Neural Information ProcessingSystems , 2021.[22] F. Torabi, G. Warnell, and P. Stone. Recent advances in imitation learning from observation.InProceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence,IJCAI-19 , pages 6325–6331. International Joint Conferences on Artificial Intelligence Orga-nization, 7 2019. doi:10.24963/ijcai.2019/882. URL https://doi.org/10.24963/ijcai.2019/882 .[23] F. Torabi, G. Warnell, and P. Stone. Behavioral cloning from observation. In Proceed-ings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 4950–4957. International Joint Conferences on Artificial Intelligence Organization,7 2018. doi:10.24963/ijcai.2018/687. URL https://doi.org/10.24963/ijcai.2018/687 .[24] Y .-T. A. Sun, H.-C. Lin, P.-Y . Wu, and J.-T. Huang. Learning by watching via key-point extraction and imitation learning. Machines , 10(11), 2022. ISSN 2075-1702. 
doi:10.3390/sun22KeypointExtraction. URL https://www.mdpi.com/2075-1702/10/11/1049 .[25] F. Torabi, G. Warnell, and P. Stone. Generative adversarial imitation from observation. arXivpreprint arXiv:1807.06158 , 2018.[26] L. Smith, N. Dhawan, M. Zhang, P. Abbeel, and S. Levine. A VID: Learning Multi-StageTasks via Pixel-Level Translation of Human Videos. In Proceedings of Robotics: Science andSystems , Corvalis, Oregon, USA, July 2020. doi:10.15607/RSS.2020.XVI.024.[27] J. Yang, J. Zhang, C. Settle, A. Rai, R. Antonova, and J. Bohg. Learning periodic tasks fromhuman demonstrations. IEEE International Conference on Robotics and Automation (ICRA) ,2022.10[28] A. J. Ijspeert, J. Nakanishi, and S. Schaal. Learning rhythmic movements by demonstrationusing nonlinear oscillators. In IEEE/RSJ International Conference on Intelligent Robots andSystems , volume 1, pages 958–963, 2002. doi:10.1109/IRDS.2002.1041514.[29] F. Al-Hafez, D. Tateo, O. Arenz, G. Zhao, and J. Peters. Ls-iq: Implicit reward regularizationfor inverse reinforcement learning. In Eleventh International Conference on Learning Repre-sentations (ICLR) , 2023. URL https://openreview.net/pdf?id=o3Q4m8jg4BR .[30] K. Zakka, A. Zeng, P. Florence, J. Tompson, J. Bohg, and D. Dwibedi. Xirl: Cross-embodimentinverse reinforcement learning. Conference on Robot Learning (CoRL) , 2021.[31] Y . Lee, A. Szot, S.-H. Sun, and J. J. Lim. Generalizable imitation learning from observationvia inferring goal proximity. In Advances in Neural Information Processing Systems , 2021.[32] P. T. de Boer, D. P. Kroese, S. Mannor, and R. Y . Rubinstein. A tutorial on the cross-entropymethod. Annals of Operations Research , 134:19–67, 2005.[33] M. Kelly. An introduction to trajectory optimization: How to do your own direct collocation.SIAM Review , 59(4):849–904, 2017.[34] M. Posa, C. Cantu, and R. Tedrake. A direct method for trajectory optimization of rigid bodiesthrough contact. The International Journal of Robotics Research , 33(1):69–81, 2014.[35] Z.-Q. Luo, J.-S. Pang, and D. Ralph. Mathematical programs with equilibrium constraints .Cambridge University Press, 1996.[36] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou.Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE Interna-tional Conference on Robotics and Automation (ICRA) , pages 1714–1721. IEEE, 2017.[37] A. Venkatraman, R. Capobianco, L. Pinto, M. Hebert, D. Nardi, and J. A. Bagnell. Improvedlearning of dynamics models for control. In 2016 International Symposium on ExperimentalRobotics , pages 703–713. Springer, 2017.[38] T. G. Thuruthel, E. Falotico, F. Renda, and C. Laschi. Learning dynamic models for openloop predictive control of soft robotic manipulators. Bioinspiration & Biomimetics , 12(6):066003, oct 2017. doi:10.1088/1748-3190/aa839f. URL https://dx.doi.org/10.1088/1748-3190/aa839f .[39] A. S. Polydoros and L. Nalpantidis. Survey of model-based reinforcement learning: Applica-tions on robotics. Journal of Intelligent & Robotic Systems , 86(2):153–173, 2017.[40] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latentdynamics for planning from pixels. In International conference on machine learning . PMLR,2019.[41] P. Wu, A. Escontrela, D. Hafner, K. Goldberg, and P. Abbeel. Daydreamer: World models forphysical robot learning. Conference on Robot Learning , 2022.[42] M. Macklin, M. M ̈uller, and N. Chentanez. Xpbd: position-based simulation of compliant con-strained dynamics. 
In Proceedings of the 9th International Conference on Motion in Games ,2016.[43] G. Salhotra, I.-C. A. Liu, M. Dominguez-Kuhne, and G. S. Sukhatme. Learning deformableobject manipulation from expert demonstrations. IEEE Robotics and Automation Letters , 7(4):8775–8782, 2022. doi:10.1109/LRA.2022.3187843.[44] A. Nair, A. Gupta, M. Dalal, and S. Levine. Awac: Accelerating online reinforcement learningwith offline datasets. arXiv preprint arXiv:2006.09359 , 2020.11[45] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXivpreprint arXiv:2212.06817 , 2022.[46] X. B. Peng, P. Abbeel, S. Levine, and M. van de Panne. Deepmimic: Example-guided deep re-inforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG) ,2018.[47] J. Matas, S. James, and A. J. Davison. Sim-to-real reinforcement learning for deformableobject manipulation. In Conference on Robot Learning , pages 734–743. PMLR, 2018.[48] M. Laskin, A. Srinivas, and P. Abbeel. CURL: Contrastive unsupervised representations for re-inforcement learning. In H. D. III and A. Singh, editors, Proceedings of the 37th InternationalConference on Machine Learning , volume 119 of Proceedings of Machine Learning Research ,pages 5639–5650. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/laskin20a.html .[49] D. Yarats, I. Kostrikov, and R. Fergus. Image augmentation is all you need: Regularizingdeep reinforcement learning from pixels. In 9th International Conference on Learning Repre-sentations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 . OpenReview.net, 2021. URLhttps://openreview.net/forum?id=GY6-6sTvGaf .[50] A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. Battaglia. Learningto simulate complex physics with graph networks. In International conference on machinelearning , pages 8459–8468. PMLR, 2020.[51] X. Lin, Y . Wang, Z. Huang, and D. Held. Learning visible connectivity dynamics for clothsmoothing. In Conference on Robot Learning , pages 256–266. PMLR, 2022.[52] Z. Huang, X. Lin, and D. Held. Mesh-based Dynamics with Occlusion Reasoning for ClothManipulation. In Proceedings of Robotics: Science and Systems , New York City, NY , USA,June 2022. doi:10.15607/RSS.2022.XVIII.011.[53] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving withmodel predictive path integral control. In 2016 IEEE International Conference on Robotics andAutomation (ICRA) , pages 1433–1440. IEEE, 2016.[54] X. Lin, Y . Wang, J. Olkin, and D. Held. Softgym: Benchmarking deep reinforcement learningfor deformable object manipulation. Conference on Robot Learning (CoRL) , 2020.[55] A. Jaegle, S. Borgeaud, J.-B. Alayrac, C. Doersch, C. Ionescu, D. Ding, S. Koppula, D. Zoran,A. Brock, E. Shelhamer, et al. Perceiver io: A general architecture for structured inputs &outputs. In International Conference on Learning Representations , 2021.12Figure 6: Environments used in our experiments, with one end-effector. The end-effectors arepickers (white spheres). In C LOTH FOLD (left) the robot has to fold the cloth (orange and pink)along an edge (inspired by the SoftGym [54] two-picker cloth fold task). In D RYCLOTH (middle)the robot has to hang the cloth (orange and pink) on the drying rack (brown plank). 
In THREE BOXES (right), the robot has to rearrange three rigid boxes along a line.

A Ablations

We use the DRYCLOTH task for all ablations unless specified otherwise; it is the most challenging of our tasks. We provide detailed answers to the following questions in Appendix A. Appendix Fig. 7 illustrates the ablations corresponding to each part of the overall method. (1) How do different methods perform in creating the optimized dataset DStudent? (2) What is the best architecture for learning the task dynamics? (3) How good is DStudent compared to the recorded demonstrations? (4) How well does the downstream LfD method handle different kinds of demonstrations? (5) How does the use of expert state matching affect the downstream LfD? (6) How do the baselines perform across morphologies and environments?

We discovered that the Cross-Entropy Method (CEM) is the most effective optimizer for generating a DStudent from demonstrations. When combined with CEM, the 1D CNN-LSTM architecture produces the best results for trajectory optimization. Our optimized DStudent performs similarly to the pre-programmed D1pDemo, which has access to the full state information of the environment. By utilizing our chosen downstream LfD method, we can successfully complete tasks with a variety of demonstrations and achieve superior performance compared to both DStudent and DTeacher. Expert state matching negatively impacts the performance of DMfD. Lastly, we found that GAIfO trained on our DStudent outperforms GAIfO trained on DTeacher, and the difficulty of the environment significantly influences the performance of GAIfO and GPIL.

A.1 Ablate the method for creating optimized dataset DStudent

We answer the question: how do different methods perform in creating the optimized dataset DStudent? We ablate the optimizer used to create DStudent from the demonstrations, labeled ABL1 in Fig. 7, and compare the following methods, given state inputs from DTeacher.
• Random: A trivial random guesser that serves as a lower benchmark.
• SAC: An RL algorithm that tries to reach the goal states of the demonstrations.
• Covariance Matrix Adaptation Evolution Strategy (CMA-ES): An evolutionary strategy that samples optimization parameters from a multivariate Gaussian and updates the mean and covariance at each iteration.
• Model Predictive Path Integral (MPPI): An information-theoretic MPC algorithm that can support learned dynamics and complex cost criteria [36, 53].
• Cross-Entropy Method (CEM, ours): A well-known gradient-free optimizer, where we assume a Gaussian distribution over the optimization parameters.

Figure 7: Ablations of MAIL components. The pipeline blocks (teacher demonstrations, random-action data, learned spatio-temporal dynamics model, indirect trajectory optimization, optimized demonstrations, and LfD method) are annotated with the ablation labels ABL1, ABL2, ABL3/4, and ABL5.

We did not use gradient-based trajectory optimizers, since the contact-rich simulation gives rise to discontinuous dynamics and noisy gradients. As shown in Table 1a, SAC is unable to improve upon the random baseline, likely because of the very large state space of our environment (>15,000 dimensions for >5,000 cloth particles) and error accumulation from the imprecision of the learned dynamics model. Trajectory optimizers achieve the highest performance, and we chose CEM as the best optimizer based on the performance of the optimized trajectories.

A.2 Ablate the dynamics model

We answer the question: what is the best architecture to learn the task dynamics? We ablate the learned dynamics model Tψ, labeled ABL2 in Fig. 7.
The environment state is the state from DTeacher, i.e., the positions of the cloth particles. This is a structured but large state space, since the cloth is discretized into >5,000 particles.

Table 1b shows the performance of the trajectories achieved with each dynamics model. CNN-LSTM models work better than models that contain only CNNs, graph networks (GNS [50]), transformers (Perceiver IO [55]), or LSTMs. We hypothesize that this is because we need to capture both the spatial structure of the cloth and a temporal element across the whole trajectory, since particle velocity is not part of the state. Further, a 1D CNN works better because the cloth state can be represented simply as a 2D array (N×3, the xyz coordinates of the N particles), which is easier to learn from than the 3D state tensor fed into 2D CNNs.

GNS also performs poorly due to error accumulation from large displacements, discussed in Sec. 4.2. Although Perceiver IO did not perform as well as the CNN-LSTM, it did not affect the downstream performance of the LfD method. We conducted an experiment comparing DMfD performance when trained on the DStudent obtained from Perceiver IO versus the CNN-LSTM and found comparable results, shown in Fig. 9. This indicates that MAIL is adaptable to different DStudent and capable of learning from suboptimal demonstrations.

Our learned dynamics model Tψ was significantly faster than the simulator. We tested it on a simple training run of SAC [5], without parallelization: the learned dynamics gave 162 fps, about 50x faster than the 3.4 fps of the simulator. However, the dynamics error was not insignificant. We compute state changes of the cloth by treating the cloth particles as a point cloud and measuring distances between point clouds with the chamfer distance. We then executed actions on the cloth for the DRYCLOTH task, comparing the cloth state before an action with the model's predicted state and the simulator's true state after the action. Over 100 state transitions, we observed a cloth movement of 0.67 m in the true simulator and an error of 0.17 m between the true and predicted state of the cloth. This accuracy was tolerable for trajectory optimization, qualitatively shown in Fig. 8, since we did not need optimal demonstrations.

Figure 8: Predictions of the learned spatio-temporal dynamics model Tψ and of the FleX simulator, made for the same state and action and shown for both cloth tasks. The learned model supports optimization approximately 50x faster than the simulator, albeit at the cost of accuracy.

A.3 Compare performance of optimized dataset D1pOptim

Figure 9: Performance comparison between DMfD trained on DStudent obtained using different learned dynamics models (1D CNN-LSTM and Perceiver IO) on CLOTH FOLD and DRYCLOTH. For each training run, we used the best model from each seed's training run and evaluated it using 100 rollouts across 5 seeds different from the training seed. Bar height denotes the mean; error bars indicate the standard deviation.

We answer the question: how good is DStudent compared to the recorded demonstrations? This ablation gauges the performance of the optimized dataset that we use as the student dataset for LfD, DStudent = D1pOptim. We compare it to other relevant datasets for solving the task, as shown in Table 1c. It is labeled ABL3 in Fig. 7. The two-picker demonstrations D2pDemo are recorded for an agent with two pickers as end-effectors.
This is used as the teacher demonstrations in our experiment, DTeacher = D2pDemo. The one-picker demonstrations D1pDemo are recorded for an agent with one picker as an end-effector; this is to contrast against the optimized demonstrations in the same morphology, D1pOptim. The random action trajectories are with a one-picker agent, added as a lower performance benchmark; they are the same random trajectories used to train the spatio-temporal dynamics model Tψ. Naturally, the teacher dataset is the best, as it is trivial to do this task with two pickers. The one-picker dataset has about the same performance as the optimized dataset D1pOptim, both of which are suboptimal. It can be inferred that it is not trivial to manipulate cloth with one hand. This is the kind of task we wish to unlock with this work: tasks that are easy to do for teachers in one morphology but difficult to program or record demonstrations for in the student's morphology. Note that D1pOptim has been optimized on the fast but inaccurate learned dynamics model, which is one reason for the reduced performance. This is why the downstream LfD method uses the simulator, as accuracy is very important in the final policy.

Table 1: Ablation results for MAIL.

(a) Ablation on the method chosen for creating demonstrations.
Method | 25th% | μ ± σ | median | 75th%
Random | 0.000 | 0.003 ± 0.088 | 0.000 | 0.000
SAC | 0.000 | 0.000 ± 0.006 | 0.000 | 0.000
CMA-ES | 0.104 | 0.270 ± 0.258 | 0.286 | 0.489
MPPI | 0.070 | 0.289 ± 0.264 | 0.275 | 0.474
CEM | 0.351 | 0.502 ± 0.242 | 0.501 | 0.702

(b) Ablation on the dynamics network architecture.
Method | 25th% | μ ± σ | median | 75th%
Perceiver IO | 0.305 | 0.450 ± 0.258 | 0.486 | 0.628
GNS | -0.182 | 0.002 ± 0.223 | -0.042 | 0.149
2D CNN, LSTM | 0.157 | 0.376 ± 0.305 | 0.382 | 0.602
No CNN, LSTM | 0.327 | 0.465 ± 0.213 | 0.463 | 0.595
1D CNN, No LSTM | 0.202 | 0.407 ± 0.237 | 0.387 | 0.587
1D CNN, LSTM (ours) | 0.351 | 0.502 ± 0.242 | 0.501 | 0.702

(c) Compare the performance of the optimized dataset.
Dataset | 25th% | μ ± σ | median | 75th%
DRandom | 0.000 | 0.003 ± 0.088 | 0.000 | 0.000
D1pDemo | 0.344 | 0.484 ± 0.169 | 0.446 | 0.641
D2pDemo | 0.696 | 0.744 ± 0.068 | 0.724 | 0.785
D1pOptim | 0.351 | 0.502 ± 0.242 | 0.501 | 0.702

A.4 Ablate modality of demonstrations

We answer the question: how well does the downstream LfD method handle different kinds of demonstrations? This ablates the composition of the student dataset fed into LfD, and is labeled ABL4 in Fig. 7. We compare the following datasets for DStudent, using the notation for datasets explained in Sec. 3.1:
• Demonstrations in one-picker morphology, D1pDemo: These are non-trivial to create and are thus not as performant, as discussed above. Creating these becomes increasingly difficult as the task becomes more challenging.
• Optimized demos, D1pOptim: This is optimized from the two-picker teacher demonstrations (DTeacher = D2pDemo), which are easy to collect as the task is trivial with two pickers.
• 50% D1pDemo and 50% D1pOptim: A mix of trajectories from the two cases above. This is an example of handling multiple demonstrators with different morphologies.
Fig. 10 illustrates that all three variants achieve similar final performance. This demonstrates that the downstream LfD method is capable of solving the task with a variety of suboptimal demonstrations. These could come from one dataset of demonstrations, or even a combination of datasets obtained from a heterogeneous set of teachers.

Figure 10: Ablation on the modality of demonstrations (training curves for 100% one-picker demos, a 50/50 mix, and 100% DStudent) on LfD performance.
Similar performance shows that MAIL can learn from a wide variety of demonstrations, or even a mixture of them, without loss in performance. See Sec. A.4.

Figure 11: Ablation on the effect of reference state initialization (RSI) and imitation reward (IR) on LfD performance. RSI is not helpful here because our tasks are not as dynamic or long-horizon as DeepMimic [46]. See Sec. A.5.

An interesting observation here is that, by comparing Fig. 10 and Table 1c, we see that the final policy is better than the suboptimal demonstrations by a considerable margin, and also slightly improves upon the performance of the teacher demonstrations. This improvement comes from the LfD method's ability to effectively utilize demonstrations and generalize across task variations. This result, combined with the ablation in Sec. 4.2 showing that we need demonstrations, shows that our downstream LfD method is well adapted to work with suboptimal demonstrations to solve a task.

A.5 Ablate Reference State Initialization in DMfD

We answer the question: how does the use of demonstration state matching affect the downstream LfD? An improvement we made over the original DMfD algorithm is to disable matching with expert states, known as RSI-IR, first proposed in [46]. We justify this improvement in this ablation, labeled ABL5 in Fig. 7.

As shown in Fig. 11, removing RSI and IR has a net positive effect throughout training, and improves the final policy performance by around 10%. This means that matching expert states exactly via an imitation reward does not help, even during the initial stages of training when the policy is randomly initialized. We believe this is because RSI helps when there are hard-to-reach intermediate states that the policy cannot reach during the initial stages of training. This is true for dynamic or long-horizon tasks, such as karate chops and roundhouse kicks. However, our tasks are quasi-static, and also have a short horizon of 3 for the cloth tasks. In other words, removing this technique allows the policy to freely explore the state space while the demonstrations can still guide the RL policy learning via the advantage-weighted loss from DMfD.

A.6 Ablate the effect of cross-morphology on LfD baselines

We answer the question: how do established LfD baselines perform across morphologies? This ablation studies the effect of cross-morphology in the demonstrations, where we compare the performance of GAIfO when provided demonstrations from the teacher dataset DTeacher and the (suboptimal) student dataset DStudent, for the DRYCLOTH task.

As we can see in Table 2, there is a 36% performance improvement when using DStudent instead of DTeacher. The primary difference that the agent sees during training is the richness of demonstration states, as the demonstration actions are not available to learn from. Since the student morphology has only one picker, any demonstration for the task (DRYCLOTH) includes multiple intermediate states of the cloth in various conditions of being partially hung for drying. By contrast, the teacher requires fewer pick-place steps to complete the task, and thus there are fewer intermediate states in the demonstrations.

A.7 Ablate the effect of environment difficulty on LfD baselines

We answer the question: how do established LfD baselines perform across environments? Given the subpar performance of the LfD baselines GAIfO and GPIL on our SOTA environments, we ablated the effect of environment difficulty.
We took the easy cloth environment (CLOTHFOLD) and used an easier variant of it, CLOTHFOLD DIAGONAL PINNED [43]. In this variant, the agent has to fold the cloth along a diagonal, which can be done by manipulating only one corner of the cloth. Moreover, one corner of the cloth is pinned to prevent sliding, making the task easier to perform. We used state-based observations and a small-displacement action space, where the agent outputs incremental picker displacements instead of pick-and-place locations. We can see in Table 3 that the same baselines are able to perform significantly better in this environment. Hence, we believe that manipulating with long-horizon pick-place actions, from an image observation, makes it challenging for the baselines to perform the cloth manipulation tasks described in Sec. 4.1 and Appendix B.

Method     25th%    μ ± σ            median   75th%
DTeacher   -0.198   -0.055 ± 0.183   -0.043   0.078
DStudent    0.199    0.363 ± 0.245    0.409   0.528
Table 2: Ablation of GAIfO on the effect of cross-morphology. We compare the normalized performance, measured at the end of the task.

Method   25th%   μ ± σ           median   75th%
GPIL     0.356   0.427 ± 0.162   0.487    0.553
GAIfO    0.115   0.374 ± 0.267   0.471    0.592
Table 3: Measuring performance on the easy cloth task, CLOTHFOLD DIAGONAL PINNED. We compare the normalized performance, measured at the end of the task.

B Tasks

Here we give more details about the tasks, including the performance functions, teacher dataset, and sample images. Fig. 6 shows images of all simulation environments used for SOTA comparisons and generalizability, with one end-effector. In each environment, the end-effectors are pickers (white spheres). In cloth-based environments, the cloth is discretized into an 80x80 grid of particles, giving a total of 6400 particles.
1. CLOTHFOLD: Fold a square cloth in half, along a specified line. The performance metric is the distance of the cloth particles left of the folding line to those on the right of the folding line. A fully folded cloth should have these two halves virtually overlap. Teacher demonstrations are from an agent with two pickers (i.e., DTeacher = D2pDemo); we solve the task on a student agent with one picker. Task variations are in cloth rotation.
2. DRYCLOTH: Pick up a square cloth from the ground and hang it on a plank to dry, a variant of [47]. The performance metric is the number of cloth particles (in simulation) on either side of the plank and above the ground. Teacher demonstrations are from an agent with two pickers (i.e., DTeacher = D2pDemo); we solve the task on a student agent with one picker. Task variations are in cloth rotations and translations with respect to the plank.
3. THREEBOXES: A simple environment with three boxes along a line that need to be rearranged to designated goal locations. Teacher demonstrations are from an agent with three pickers (i.e., DTeacher = D3pDemo); we solve the task on student agents with one picker and two pickers. Performance is measured by the distance of each object from its goal location. This task is used to illustrate the generalizability of MAIL with various n-to-m end-effector transfers, and is not used in the SOTA comparisons.

C Hyperparameter choices for MAIL

In this section, Table 4 shows the hyperparameters chosen for training the forward dynamics model Tψ, and Table 5 shows the details of the CEM hyperparameter choices.
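To make the forward-dynamics architecture concrete before the tables, here is a minimal PyTorch sketch of a 1D CNN + LSTM model consistent with the hyperparameters in Table 4 (4 convolutional layers with 32 channels, kernel size 3, stride 2 then 1, leaky ReLU, and a single LSTM layer with hidden size 32). The input/output shapes and the residual "predict the particle displacement" head are our assumptions; the paper does not spell them out here.

```python
import torch
import torch.nn as nn

class ClothDynamicsModel(nn.Module):
    """Sketch of a 1D CNN + LSTM forward dynamics model T_psi (see Table 4).

    Assumed interface: the cloth state is an N x 3 array of particle positions
    and the pick-and-place action is a 6-D vector.
    """

    def __init__(self, num_particles: int, action_dim: int = 6):
        super().__init__()
        channels = 32
        layers, in_ch = [], 3  # xyz treated as channels over the particle axis
        for i in range(4):
            layers += [
                nn.Conv1d(in_ch, channels, kernel_size=3,
                          stride=2 if i == 0 else 1, padding=1),
                nn.LeakyReLU(),
            ]
            in_ch = channels
        self.encoder = nn.Sequential(*layers)
        self.lstm = nn.LSTM(input_size=channels + action_dim,
                            hidden_size=32, num_layers=1, batch_first=True)
        # Predict a displacement for every particle at the next step.
        self.head = nn.Linear(32, num_particles * 3)

    def forward(self, states, actions):
        # states: (B, T, N, 3), actions: (B, T, action_dim)
        B, T, N, _ = states.shape
        x = states.reshape(B * T, N, 3).transpose(1, 2)      # (B*T, 3, N)
        feat = self.encoder(x).mean(dim=-1).reshape(B, T, -1)  # (B, T, 32)
        z, _ = self.lstm(torch.cat([feat, actions], dim=-1))   # (B, T, 32)
        delta = self.head(z).reshape(B, T, N, 3)
        return states + delta  # predicted next particle positions
```

Per Table 4, such a model would be trained with a learning rate of 1e-5 and a batch size of 128.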
Table 6 shows the hyperparameters used in our chosen LfD method (DMfD).

Parameter          Description
CNN                4 layers, 32 channels, 3x3 kernel, leaky ReLU activation;
                   stride = 2 for the first layer, stride = 1 for subsequent layers
LSTM               One layer, hidden size = 32
Other parameters   Learning rate α = 1e-5, batch size = 128
Table 4: Hyper-parameters for training the forward dynamics model.

#    Planning horizon   Optimization iterations   Env interactions
1    1                  2                         21,000
2    2                  2                         15,000
3    2                  2                         21,000
4    2                  2                         31,000
5    2                  2                         34,000
6    2                  10                        21,000
7    2                  1                         21,000
8    2                  1                         15,000
9    2                  1                         32,000
10   3                  2                         21,000
11   3                  10                        21,000
12   4                  2                         21,000
13   4                  10                        21,000
Table 5: CEM hyper-parameters tested for tuning the trajectory optimization. We conducted ten rollouts for each parameter set and used the set with the highest average normalized performance on the teacher demonstrations. Population size is determined by the number of environment interactions. The number of elites for each CEM iteration is 10% of the population size.

Parameter          Description
State encoding     Fully connected network (FCN), 2 hidden layers of 1024, ReLU activation
Image encoding     32x32 RGB input, with random crops.
                   CNN: 4 layers, 32 channels, stride 1, 3x3 kernel, leaky ReLU activation.
                   FCN: 1 layer of 1024 neurons, tanh activation
Actor              Fully connected network, 2 hidden layers of 1024, leaky ReLU activation
Critic             Fully connected network, 2 hidden layers of 1024, leaky ReLU activation
Other parameters   Discount factor γ = 0.9; entropy loss weight wE = 0.1;
                   entropy regularizer coefficient α = 0.5; batch size = 256;
                   replay buffer size = 600,000; RSI-IR probability = 0 (disabled)
Table 6: Hyper-parameters used in the LfD method (DMfD).

D Performance metrics for real-world cloth experiments

In this section, we explain the metrics for measuring the performance of the cloth, to support the sim2real results discussed in Sec. 4.2.1.

For the CLOTHFOLD task, we measure performance at time t by the number of pixels of the top color, pixtop,t, and of the bottom color, pixbot,t, of the flattened cloth, compared to the maximum number of pixels, pixmax (Fig. 12).

Figure 12: Performance function for CLOTHFOLD on the real robot. At time t, we measure the fraction of pixels visible relative to the maximum number of pixels visible, ftop = pixtop,t / pixmax and fbot = pixbot,t / pixmax. Performance for the top of the cloth should be 1 when it is not visible, p(top) = 1 − ftop. Performance for the bottom of the cloth should be 1 when it is exactly half-folded on top of the top side, p(bot) = min[2(1 − fbot), 2 fbot]. The final performance is the average of both metrics, p(st) = (p(top) + p(bot))/2. Note that the cloth is flattened at the start, thus pixmax = pixtop,0.

For the DRYCLOTH task, it is challenging to measure pixels on the sides and top of the plank. Moreover, we could double count pixels if they are visible in both side and top views. Hence, we measure the cloth to determine whether the length of the cloth on top of the plank is equal to or greater than the side of the square cloth. We call this the spread metric.

The policies achieve ∼80% performance, which is about the average performance of our method in simulation, for both tasks. However, since these performance metrics are different in simulation and the real world, we cannot quantify the sim2real gap through these numbers.
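As a small worked example, the CLOTHFOLD metric from Figure 12 reduces to a few lines once the per-frame pixel counts are available; the color-based pixel counting itself is assumed to happen upstream.

```python
def clothfold_performance(pix_top_t: int, pix_bot_t: int, pix_max: int) -> float:
    """Real-world CLOTHFOLD performance at time t (Figure 12).

    pix_max is the number of visible top-color pixels of the flattened cloth
    at t = 0. The top side should disappear (p_top = 1 - f_top), and the
    bottom side should cover exactly half of it when folded (p_bot peaks
    at f_bot = 0.5).
    """
    f_top = pix_top_t / pix_max
    f_bot = pix_bot_t / pix_max
    p_top = 1.0 - f_top
    p_bot = min(2.0 * (1.0 - f_bot), 2.0 * f_bot)
    return 0.5 * (p_top + p_bot)
```

At the start (f_top = 1, f_bot = 0) this returns 0, and for a perfect half-fold (f_top = 0, f_bot = 0.5) it returns 1.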
E Collected dataset of teacher demonstrations

We have 100 demonstrations provided by the teacher, as mentioned in Sec. 3.4. The diversity of the task comes from the initial conditions for these demonstrations, which are sampled from the task distribution vd ∼ V. This variability in the initial state adds diversity to the dataset. The quality and performance of these teacher demonstrations were briefly discussed in the ablations (Sec. A.4).

All demonstrations come from a scripted policy. For ClothFold, the teacher has two end-effectors and picks two corners of the cloth to move them towards the other two corners. For DryCloth, the teacher has two end-effectors and picks two corners of the cloth to move them to the other side of the rack. The end-effectors maintain the same distance between each other during the move to ensure the cloth is spread out when it hangs on the rack. For ThreeBoxes, the teacher has three end-effectors. It picks up all the boxes simultaneously and places them in their respective goals.

F Random actions dataset used for training the dynamics model

We trained the dynamics model on random actions from various states, to cover the state-action distributions our tasks would operate under.

For CLOTHFOLD, our random action policy is to pick a random cloth particle and move it to a random goal location within the action space. For DRYCLOTH, the random action policy is to pick a random cloth particle and move it to a random goal location around the drying rack, to learn cloth interactions around the rack. For completeness, we also trained a forward dynamics model for the THREEBOXES task. Here, the random action policy is to pick the boxes in order and sample a random place location within the action space.

Each task's episode horizon is 3. Our actions are pick-and-place actions, and the action space spans the full range of visibility of the camera. For DRYCLOTH, this limit is [−0.5, 0, 0.5] to [0.5, 0.7, 0.5]. For CLOTHFOLD it is [−0.9, 0, 0.9] to [0.9, 0.7, 0.9]. For THREEBOXES it is −0.1 to 1.35.
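To illustrate Appendix F, a minimal sketch of the random pick-and-place sampling could look as follows; the particle-indexing call, the Gym-style environment loop in the comments, and the literal bound arrays (copied as quoted above) are illustrative assumptions rather than the authors' code.

```python
import numpy as np

# Per-task pick-and-place bounds as quoted in Appendix F (lower corner, upper corner).
ACTION_BOUNDS = {
    "DryCloth":  (np.array([-0.5, 0.0, 0.5]), np.array([0.5, 0.7, 0.5])),
    "ClothFold": (np.array([-0.9, 0.0, 0.9]), np.array([0.9, 0.7, 0.9])),
}

def random_cloth_action(particle_positions: np.ndarray, task: str,
                        rng: np.random.Generator) -> np.ndarray:
    """Pick a random cloth particle and move it to a random goal within the action space."""
    low, high = ACTION_BOUNDS[task]
    pick = particle_positions[rng.integers(len(particle_positions))]
    place = rng.uniform(low, high)
    return np.concatenate([pick, place])  # 6-D pick-and-place action

# Hypothetical collection loop (assuming a Gym-like env API and episode horizon 3):
# for _ in range(num_episodes):
#     state = env.reset()
#     for _ in range(3):
#         action = random_cloth_action(state, task, rng)
#         next_state, _, _, _ = env.step(action)
#         dataset.append((state, action, next_state))
#         state = next_state
```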
JdpleC92J4 | AR2-D2:Training a Robot Without a RobotJiafei Duan1Yi Ru Wang1Mohit Shridhar1Dieter Fox1,2Ranjay Krishna1,31University of Washington2NVIDIA3Allen Institute for Artificial Intelligence{duanj1, yiruwang, mshr, fox, ranjay }@cs.washington.eduwww.ar2d2.siteAbstract: Diligently gathered human demonstrations serve as the unsung heroesempowering the progression of robot learning. Today, demonstrations are col-lected by training people to use specialized controllers, which (tele-)operate robotsto manipulate a small number of objects. By contrast, we introduce AR2-D2: asystem for collecting demonstrations which (1) does not require people with spe-cialized training, (2) does not require any real robots during data collection, andtherefore, (3) enables manipulation of diverse objects with a real robot. AR2-D2is a framework in the form of an iOS app that people can use to record a video ofthemselves manipulating any object while simultaneously capturing essential datamodalities for training a real robot. We show that data collected via our systemenables the training of behavior cloning agents in manipulating real objects. Ourexperiments further show that training with our AR data is as effective as train-ing with real-world robot demonstrations. Moreover, our user study indicates thatusers find AR2-D2 intuitive to use and require no training in contrast to four otherfrequently employed methods for collecting robot demonstrations.Keywords: Demonstration Collection, Imitation Learning, Augmented Reality1 IntroductionManually curated datasets are often the inglorious heroes of many large-scale machine learningprojects [1, 2, 3]; this is especially true in robotics, where human-generated datasets of robot demon-strations are indispensable [4, 5] especially with recent success in robot learning via imitation learn-ing [6, 7, 8, 9] of these demonstration data. For example, one recent effort collected ∼130krobotdemonstrations, with a fleet of 13robots over the course of 17months [10]. As a result, researchershave spent considerable effort developing various mechanisms for demonstration collection. Onepopular option for collecting robot demonstrations is through kinesthetic-teaching, where a personguides a robot through a desired trajectory [11]. Although intuitive, this mechanism can be tediousand slow [12]. Alternatively, teleoperation with various controllers has become popular: using akeyboard and mouse [13, 14], a video game controller [15], a 3D-mouse [16, 6], special purposemaster-slave interfaces [17, 18], and even virtual reality (VR) controllers [19, 20, 21].Despite all these demonstration collection efforts, there are three key challenges limiting robot datacollection. First, people need to be trained to produce useful demonstrations: kinesthetic methodsare labor-intensive while teleoperation methods require learning specialized controllers. Second, theability to parallelize data collection is limited by how many—often expensive—robots are available.Third, robots are usually bulky and locked within a laboratory, reducing their exposure to a handfulof nearby objects. Lastly, (tele)-operation in simulation has the potential to scale efficiently with-out real robot hardware, but addressing the sim2real gap and limited variety of trainable tasks insimulation environments are challenges to overcome.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1. AR2-D2 collects robot demonstrations without needing a real robot. 
(Top left) Using AR2-D2, the user captures a video manipu-lating an object with their arm. AR2-D2 projects an operational URDF (unified robotics description format) of an AR Franka Panda robot arminto a physical environment. It uses a hand-pose tracking algorithm to move the AR robot’s end effector to align with and mirror the 6D poseof the human hand. (Right) With this video demonstration, we train a perceiver-actor agent and (Bottom left) deploy the agent on a real-worldrobot to demonstrate its ability to learn from AR demonstrations.We introduce AR2-D2: a system for collecting robot demonstrations that (1) does not require peopleto have specialized training, (2) does not require any real robots during data collection, and therefore,(3) enables manipulation of diverse objects with a real robot. AR2-D2 is a framework in the formof an iOS app that enables users to record a video of themselves manipulating any object. Oncethe video is captured, AR2-D2 uses the iOS depth sensor to place an AR robot in the scene anduses a motion planner to produce a trajectory where it appears as if the AR robot manipulates theobject (Figure 1). Manipulating objects and recording a video is so intuitive that users do not needany training to use AR2-D2. Our system completely removes the need for a real robot duringdemonstration collection, allowing data collection to potentially parallelize without being limitedby expensive real robots. Finally, since videos can be captured anywhere, AR2-D2 re-situatesdemonstration collection out of the laboratory; users can take videos anywhere, making it easyto collect demonstrations involving manipulation of diverse objects. Furthermore, unlike collectingvisual observations of human activities, our approach uses AR projection during robot demonstrationcollection to provide constant feedback on the robot’s pose and physical constraints in the givenenvironment while performing the task.Our experiments show that AR2-D2’s AR demonstrations can effectively train a real robot to manip-ulate real-world objects. We use AR2-D2 to collect robot demonstrations cross three manipulationtasks ( press ,pick-up andpush ) on 9 personalized objects. These personalized objects are uniquelyshaped, sized, or textured items designed to meet the specific needs or functionalities of individualusers within their personalized environments. We collect and use as few as five effortlessly collecteddemonstrations to train a behavior cloning agent [6]. This trained agent needs to be finetuned for3,000 iterations (which is equivalent to less than 10 minutes of training) on a dummy real-world taskto overcome the sim-to-real gap; Once finetuned, a real robot is capable of manipulating real objectseven though that object was only encountered by the AR robot. This AR-trained agent performscomparable to agents trained with real-world tele-operated demonstrations (specifically the PerActdemonstration collection [6]).We assess AR2-D2’s usability through a within-subjects user study (N=10). For the user study,participants are asked to provide demonstrations for two standard manipulation tasks: pick-up andstacking . Besides AR2-D2, users collect demonstrations using four alternative methods, includingkeyboard and mouse [22], VR controller [23], 3D-mouse [22], and kinesthetic-teaching [24]. Resultssuggest AR2-D2 is intuitive, requires no training, and enables quick demonstration production,comparable to kinesthetic teaching and faster than other methods. 
AR2-D2 paves the way fordemocratizing robot training: an estimated 1.36 billion1iPhone users could create personalizedmanipulation data to train real robots for their household objects.1Source: https://www.bankmycell.com/blog/number-of-iphone-users2Figure 2. AR2-D2 collection process .(Left) Once the user records themselves manipulating an object, AR2-D2 extracts the followinginformation: 6D hand pose, hand state, RGB frames and depth estimations. We replace the hand with an AR robot, aligning its motionsto align its end effector with the hand’s. (Right) We create a 3D voxelized observation over time from the extracted keyframes. This 3Drepresentation is used to train a P ERACT[6] agent. We also use the generated video to train an image-conditioned BC agent [6].2 Related workDemonstration collection methods. There are several conventional methods available for gatheringrobot demonstrations. One popular approach involves kinesthetically controlling the robot to followa desired trajectory; the generated robot trajectories showcase the behavior required to accomplish aspecific task [11]. Teleoperation techniques [25, 26, 27] have also been a popular collection process,using various user interfaces including keyboard and mouse [13, 14], a video game controller [15],3D-mouse [16, 6], mobile phones [28], special purpose master-slave interfaces [17, 18],virtual real-ity controllers [19, 20, 21, 29] and using human videos and extracts visual priors to project them intoa simple set of robot primitives for collect robot demonstrations [30, 31, 32, 33, 34]. However, all ofthese methods require a real physical robot to be controlled, bottle-necking demonstration collectionby how many robots are available and limited to the laboratories that house these robots.Demonstrations for behavior cloning. Recently, robot demonstrations are primarily used as train-ing data for imitation learning, which has pioneered a paradigm shift in robot training. Offline be-havior cloning from robot demonstrations is currently the de-facto imitation learning paradigm [35].These demonstrations are collected either in simulation or through human control using a real robotin the real world [36, 37]. For example, Task and Motion Planning (TAMP) uses expert taskplanners to create large-scale simulation demonstrations [36]. Meanwhile in the real-world, usersemploy techniques such as teleportation or vision-based guidance are used to create demonstra-tions [20, 38, 39, 7]. Recent methods have also begun developing specialized hardware to stream-line demonstration collection. For example, a low-cost handheld device featuring a plastic grabbertool outfitted with an RGB-D camera and a servo can control the binary opening and closing of agrabber’s fingers [40]. By contrast, our real-world data collection approach requires no teleopera-tion hardware [28], no simulators [41], and most importantly, no real robots [42]. All we need is aniPhone camera to record users manipulating objects with their hands.3 The AR2-D2 systemWe introduce AR2-D2, a system for collecting robot demonstrations without requiring a physicalrobot. In this section, we describe AR2-D2’s features, its supported data collection procedure, itsimplementation details.3.1 AR2-D2 system featuresAR2-D2 contributes the following features:No need for a physical robot. In traditional robotics research, obtaining demonstrations ofteninvolves operating a physical robot [38, 39, 20, 40]. 
AR2-D2 presents a new paradigm for collecting3demonstrations; it forgoes access to a real robot, enabling users to collect high-quality demonstrationdata from anywhere with only their mobile devices.Real-time control of AR robots in the real-world. AR2-D2 leverages LiDAR sensors, whichtoday are ubiquitous in iPhones and other smartphones to estimate the 3D layout in front of the cam-era to project an AR robot. LiDAR helps the AR robot obey physical and visual realism. Users cancontrol the AR robot in one of three supported interactions: by pointing at 3D points that the robot’send-effector should move to, by using the iPhone’s GUI control, or through AR kinesthetic control(see appendix). The projected robot’s motions are tightly coupled with the real-world environment,and receives feedback upon collisions with real-world objects.Real-time visualization of task feasibility. AR2-D2 simplifies the demonstration collection byasking users to specify key-points that the robot end-effector should move to in order to completea task. Once each key-point is specified, AR2-D2 visualizes the AR robot’s motion, moving itsend-effector from its current position to the new key-pose. This real-time feedback enables users toassess the feasibility and accuracy of the specified key-point and revise their selections if necessary.3.2 Design and implementationAR2-D2’s design and implementation consists of two primary components (Figure 2). The firstcomponent is a phone application that projects AR robots into the real-world, allowing users tointeract with physical objects and the AR robot. The second component convert collected videosinto a format that can be used to train different behavior cloning agents, which can later be deployedon real robots.The phone application. We designed AR2-D2 as an iOS application, which can be run on aniPhone or iPad. Since modern mobile devices and tablets come equipped with in-built LiDAR, wecan effectively place AR robots into the real world. The phone application is developed atop theUnity Engine and the AR Foundation kit. The application receives camera sensor data, includingcamera intrinsic and extrinsic values, and depth directly from the mobile device’s built-in function-ality. The AR Foundation kit enables the projection of the Franka Panda robot arm’s URDF intothe physical space. To determine user’s 2D hand pose, we utilize Apple’s human pose detectionalgorithm. This, together with the depth map is used to reconstruct the 3D pose of the human hand.By continuously tracking the hand pose at a rate of 30frames per second, we can mimic the posewith the AR robot’s end-effector.Training data creation. Given language instructions for a task (e.g., ”Pick up the plastic bowl”),we hire users to generate demonstrations using AR2-D2. From the user-generated demonstrationvideo, we create training data to train and deploy on a real robot. To make this training data, weconvert the video to show an AR robot manipulating the object. We remove the human hand withSegment-Anything [43] and fill the gap left behind by the missing hand with a video in-paintingtechnique, E2FGVI [44]. Finally, we produce a video with the AR robot arm moving to the user’shand’s key-points. This processed video makes it look like an AR robot arm manipulated the objectand can be used as training data for visual-based imitation learning [45]. 
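The hand-to-robot retargeting described under "The phone application" hinges on lifting each detected 2D hand keypoint into 3D using the depth map and camera intrinsics. A minimal pinhole back-projection sketch is below; the (fx, fy, cx, cy) intrinsics convention and the example numbers are assumptions for illustration, not values from the paper.

```python
import numpy as np

def backproject_keypoint(u: float, v: float, depth_m: float,
                         fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Lift a 2D image keypoint (u, v) with metric depth into a 3D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: a wrist keypoint at pixel (642, 380) with 0.48 m depth and hypothetical
# intrinsics; the resulting point could then be transformed into the AR robot's base
# frame and used as the end-effector target that mirrors the user's hand.
wrist_cam = backproject_keypoint(642, 380, 0.48, fx=1450.0, fy=1450.0, cx=640.0, cy=360.0)
```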
Additionally, with access to the scene's depth estimation, we can create a 3D voxelized representation of the scene to train agents like Perceiver-Actor (PERACT) [6].

4 Evaluating AR2-D2 with real users

To evaluate AR2-D2's efficacy, we conduct an extensive within-subjects user study (N = 10) across 5 demonstration collection techniques for 2 tasks. Participants demonstrated each task 3 times with each technique, resulting in a total of 300 collected demonstrations. Participants were locally hired; they were aged between 23 and 30.

Baseline collection techniques. In order to compare how effectively real participants create demonstrations with AR2-D2, we also ask them to use 4 other baseline collection techniques. Two collection techniques utilize real robots in the real world and two control simulated robots.

Figure 3. Evaluating AR2-D2 with real users. We conduct an extensive within-subjects user study, comparing AR2-D2 against 4 alternative collection techniques: keyboard & mouse, 3D mouse (6-DoF), kinesthetic teaching, and HTC Vive controller. (a, b) We find that participants spend significantly less time (with an average of 22.1 and 29.5 seconds across the two tasks) using our system than others versus the next best (kinesthetic teaching with an average of 41.6 and 61.4 seconds). (c, d) We show that participants are able to successfully collect a demonstration with the same rate of success using our system as kinesthetic teaching, both of which have significantly higher success rates versus the others.

In simulation, participants control a simulated Franka Panda with either keyboard and mouse or with a 3D Space Mouse. Using the keyboard and mouse, users can manipulate the 6D end-effector of the simulated robot within the Isaac Sim environment, utilizing ORBIT [22]. The 3D Space Mouse is a joystick capable of simultaneous translation and rotation along the (x, y, z) axes; it operates within the same environment as the keyboard. In the real world, participants use kinesthetic teaching or an HTC Vive VR controller. Kinesthetic teaching allows participants to manipulate a real Franka Panda, using its default zero-gravity feature. The demonstration collection interface using the HTC Vive VR controller was developed in a recent paper and enables teleoperation of the robot [29].

Study protocol. Each participant was tasked with collecting demonstrations for two specific tasks: picking and stacking (Figure 3). Participants were asked to provide demonstrations for each task across 3 trials, with 3 attempts allowed per trial. We imposed a time constraint for each trial: a 3-minute limit for picking and a 5-minute limit for stacking. After all the data was collected, participants filled out a system usability scale (SUS) survey.

Measured variables. We evaluate the different data collection techniques using two metrics. First, we measure average data collection time (in seconds). Lower values are better because they imply that participants are able to collect demonstrations quicker. Second, we measure task success rate, which calculates the percentage of trials that lead to a successful demonstration.

Results. We show that participants using AR2-D2 are both significantly faster (Figure 3 (a, b)) and as likely (Figure 3 (c, d)) to collect a successful demonstration as kinesthetic teaching. In comparison with kinesthetic teaching, which has an average task completion time of 41.6 and 61.4 seconds for task 1 and 2 respectively, our method exhibits a substantial reduction in time with only 22.1 and 29.5 seconds for the two tasks respectively.
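The significance test reported next is a standard two-sample t-test over per-participant completion times. A minimal sketch is shown below; the timing arrays are hypothetical stand-ins, since the raw per-participant numbers are not listed in the paper.

```python
from scipy import stats

# Hypothetical per-participant completion times in seconds (N = 10 each).
kinesthetic_task1 = [38.2, 45.1, 52.0, 33.9, 41.7, 47.3, 36.8, 44.5, 39.6, 40.9]
ar2d2_task1       = [20.4, 25.3, 18.9, 23.7, 21.6, 26.1, 19.8, 22.4, 24.0, 18.3]

t_stat, p_value = stats.ttest_ind(kinesthetic_task1, ar2d2_task1)
print(f"t = {t_stat:.3f}, p = {p_value:.3e}")
```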
Furthermore, the t-tests for task 1 and task 2 yielded t-statistics of t1 = 6.194 and t2 = 6.199, with p-values of p1 = 7.587 × 10^-6 and p2 = 7.514 × 10^-6, respectively. Hence, we could confidently say that there is a statistically significant difference between kinesthetic teaching and our approach, with kinesthetic teaching having, on average, significantly longer timings compared to ours. This concludes that our method is capable of collecting robot demonstrations faster than the traditionally favored kinesthetic teaching.

Figure 4. Evaluating AR2-D2 data by training a real robot to manipulate real objects. We employ AR2-D2 as a tool for gathering a diverse array of manipulations encompassing three fundamental actions, involving a wide variety of customized objects. These manipulations range from performing precise actions such as pressing a computer mouse or a Minecraft torch button at specific locations, to pushing small LEGO train toys towards table-sized drawers, and even encompassing the ability to pick up objects varying from chess pieces to takeaway bags. By leveraging a limited number of real-world action demonstrations conducted with random dummy objects and fine-tuning for 3,000 iterations, which is equivalent to 10 minutes of training, we have achieved the capacity to apply the PerAct framework in manipulating all these personalized objects with broad generalization.

Task                                       Press (Succ. %)                          Push (Succ. %)                Pick up (Succ. %)
Personalized object                        Computer mouse / Minecraft torch / Buzzer   LEGO train / 8 ball / Drawer   Queen piece / Plastic bowl / Takeaway bag
Simulation                                 13.3 / 6.7 / 30.0                           13.3 / 20.0 / 3.3              3.3 / 20.0 / 16.7
VR interface (w/o personalized objects)    3.3 / 6.7 / 16.7                            13.3 / 10.0 / 3.3              0.0 / 16.7 / 13.3
VR interface (with personalized objects)   60.0 / 63.3 / 83.3                          30.0 / 70.0 / 40.0             46.7 / 56.7 / 60.0
AR2-D2 (Ours)                              56.7 / 53.3 / 73.3                          50.0 / 55.7 / 23.3             46.7 / 53.3 / 63.3
Table 1. Task test results. We utilized AR2-D2 to collect demonstrations and train BC agents for real robot deployment. Our observations revealed comparable results between our data collection approach and alternative methods. Success rates (mean %) of the foundational skills tested on personalized objects collected via AR2-D2. For each skill, we evaluated it across ten different sets of distractors with the target object and repeated thrice for consistency. The result has shown that our data collection approach with minimal fine-tuning achieves comparable results to real-world data collected on these personalized objects via PerAct's VR interface with a physical robot.

We find that participants using AR2-D2 are fast from the get-go (Figure 3 (a, b)). Participants are consistently faster when collecting demonstrations from the very first trial. This consistency is reflected in the relatively low standard deviation values of 5.75 and 8.89 seconds for the two tasks across participants. In contrast, the next quickest contender, kinesthetic teaching, exhibits a standard deviation of 9.62 and 14.02. Additionally, users have indicated a higher preference for our system in the SUS survey [46] (Figure 3 (d)). Our method garners a similar level of user preference as kinesthetic teaching, which necessitates a physical robot, with a mere ±6% difference in SUS scores between the two techniques.

5 Evaluating AR2-D2 with a real robot deployment

With AR2-D2, we collect demonstrations and train behavior cloning agents for deployment on a real robot. Here, we present our experimental setup and three key takeaways.
First, we validate that AR2-D2 demonstrations can train a real robot to manipulate personalized objects without access to a physical robot. Second, the agent trained using AR2-D2's demonstrations performs on par with training on real robot demonstrations. Third, AR2-D2's demonstrations can enable learning policies from both image as well as voxelized inputs.

AR2-D2 demonstration collection. We collect AR2-D2 demonstrations on a set of personalized objects, and demonstrate that a policy trained on this data executes on a real Franka Panda robot. We gather demonstrations centered around three common robotics tasks: {press, push, pick up}. The three tasks are delineated as follows: pressing down on the targeted object, pushing the targeted object across a surface, and picking up the targeted object. For each task, we collect five demonstrations using three different objects, which vary in color, size, geometry, texture, and even functionality (see Figure 4).

Behavior cloning. We use Perceiver-Actor (PERACT) [6] to train a transformer-based behavior cloning policy. PERACT takes a 3D voxel observation and a language goal (v, l) as input and produces outputs for the translation, rotation, and gripper state of the end-effector. These outputs, with a motion planner, enable the execution of the task specified by the language goal.

Training procedure. Following existing work [6], we train an individual agent for each task. We train an agent for 30k iterations per set of demonstrations. We then freeze the backbone of the PERACT architecture and finetune the rest using the set of VR (HTC Vive) demonstrations on dummy objects. This fine-tuning process spans an additional 3k iterations, equivalent to approximately 10 minutes of wall clock training. Fine-tuning allows us to close the domain gap resulting from differences in depth cameras between the Kinect V2 used by PERACT and the iPhone/iPad depth camera used by AR2-D2.

Task                             2D data (Image-BC)   3D data (PerAct)
Press the buzzer from the side   0.00%                40.00%
Pick up the queen piece          6.67%                33.34%
Press the computer mouse         6.67%                40.00%
Table 2. Training with Different Data Modalities. AR2-D2 is capable of offering diverse data modalities to facilitate training BC models, such as Image-BC for 2D data and PerAct for 3D data. We assess these disparate data modalities, gathered via AR2-D2, across three distinct tasks using Image-BC for 2D data and PerAct for 3D data, conducted without any form of fine-tuning.

Finetuning demonstration collection. Fine-tuning demonstrations are collected using the VR interface from PerAct [6]. It involves using a VR handset to guide the real robot to desired end-effector poses. We collect 5 demonstrations for each task using three dummy objects: {red button, yellow block, tennis ball}, corresponding to the three tasks, respectively. These specific objects are only used for fine-tuning and are not used during testing. We also ablate the agent's performance without finetuning.

Figure 5. Analysis on Fine-tuning. We conducted a diagnostic analysis to determine the optimal number of iterations and demonstrations required. By varying the number of demonstrations and iterations for fine-tuning, we found that using 5 demonstrations and 3,000 iterations yielded the best results.

Testing procedure. We evaluate the trained policies' ability to manipulate personalized objects in the real world. The personalized objects are comprised of distinctly different objects from the AR2-D2-enabled real-world demonstrations.
Each test environment is infectedwith ten different distractor objects. We repeat run infer-ence three times for each environment setup and averagetheir performance.Baseline collection techniques. We compare AR2-D2’sdemonstrations against two alternative techniques: real-world and simulation data collection. We finetune allmethods using the same set of finetuning demonstrationson dummy objects. Real world data collection uses aVR controller interface to capture the training demonstra-tions [6]. Real-world demonstrations are collected with and without the personalized objects (seeTable 1). Simulation demonstrations use RLBench and its key-frame point extraction technique, ac-companied by motion planning to generate each demonstration [41]. We implemented domain ran-domization to introduce texture variations, aiding in the transfer from simulation to the real world.5.1 ResultsTable 1 reports success rates of all the nine personalized objects across three tasks with demonstra-tions from real-world, from simulation, and from AR2-D2.7AR2-D2 demonstrations yields useful representation for training a real-robot. In general,AR2-D2’s data outperforms policies trained using simulation of real-world (without personalizedobjects). In fact, in one case, we outperform P ERACT’s real world data collection (without personal-ized objects) by a large margin of 53.4%. These findings highlight the significance of our approach,which facilitates access to collecting demonstrations with such personalized objects which mightnot be available in the laboratory that houses the robot. This capability to produce training data withpersonalized objects is particularly important since behavior cloning agents perform better whentheir training exposures them to the objects they are expected to manipulate.AR2-D2 demonstrations train policies as accurately as demonstrations collected from realrobots. Referencing Table 1, it is evident that even when real-world data collection is trained withpersonalized objects during the demonstration data collection phase, our method delivers compa-rable results. Remarkably, our system’s data even surpasses the real-robot collection data in taskssuch as pushing the LEGO train and picking up the paper bag. While for the remaining personalizedobjects, our approach maintains a ≤14.3%gap across the three foundational skills. The t-test re-sults, with a calculated t-value of 0.547 and a p-value of 0.592, indicating that there is no statisticalsignificance in the observed difference between the two methods.5.2 AblationsAnalysis on Fine-tuning. We investigate how many finetuning demonstrations ( {1,3,5,10,15}) ondummy objects and how many training iterations ( {0,1000,2000,3000,4000,5000}) are requiredto maximize the agent’s performance. These ablations pretrain the policy using 5AR2-D2 demon-strations of the “mouse” pressing task trained for 30k training iterations. Each ablation is testedon1real-world scene with the computer mouse but we evaluated it across 5trials with varying tar-get object poses and placement. We find that 5fine-tuning demonstrations trained for 3k iterations(equivalent to 10 minutes of training) yields the most effective outcome (see Figure 5).Training with voxelized inputs is better than using 2D inputs. AR2-D2 demonstrations store2D image and 3D depth data, facilitating training of image-based behavior cloning (Image-BC) and3D voxelized methods (P ERACT[6]). 
With fixed camera calibration offset and no finetuning duringtraining, 3D-input agents outperform 2D counterparts (refer to Table 2 and supplementary).6 Limitations and ConclusionLimitations. Our research does present certain limitations. Firstly, due to the inherent character-istics of our method, it proves challenging to validate experimental outcomes via simulation. Con-sequently, the verification relies on real-world assessments, which, despite our extensive multi-trialevaluations using varied layouts, cannot completely encompass all conceivable scenarios. Secondly,while our user-study participant count mirrors the standards set by RoboTurk [28], we acknowledgethat a larger participant pool might have enhanced the statistical significance of the performanceresults across various methods. Lastly, due to the disparity between the camera sensors and the do-main gap, there is still a need for fine-tuning to match the performance of real data. Nevertheless,future work can explore better approaches to further bridge this domain gap either through betterdata augmentation techniques or hardware such as Apple’s AR head-mounted display.Conclusion. We present AR2-D2, an intuitive robot demonstration collection system that enablesthe collection of quality robot demonstrations for diverse objects without the need for any real robotsor the need to train people before use. Our results highlight the effectiveness of this approach,showing that as few as five AR demonstrations suffice to train a real-world robot to manipulatepersonalized objects. Our extensive real-world experiments further confirmed that AR2-D2’s ARdata is on par with training using real-world demonstrations. Moreover, through our comprehensiveuser-study, it revealed that users found our method intuitive and easy to use, requiring no priortraining, setting it apart from traditional collection methods. Finally, AR2-D2 paves the way towardsdemocratizing robot training by enabling any individual to gather significant robot training data formanipulating their personalized objects at any place and time.8AcknowledgmentsWe thank the members of the Robotics State Estimation lab and Krishna’s group for the helpfuldiscussions and feedback on the paper. Jiafei Duan is supported by the National Science Scholarshipfrom The Agency for Science, Technology and Research (A*STAR), Singapore.References[1] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In International conference on machine learning , pages 8748–8763. PMLR, 2021.[2] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen. Hierarchical text-conditional imagegeneration with clip latents. arXiv preprint arXiv:2204.06125 , 2022.[3] J. Duan, S. Yu, H. L. Tan, H. Zhu, and C. Tan. A survey of embodied ai: From simulators toresearch tasks. IEEE Transactions on Emerging Topics in Computational Intelligence , 6(2):230–244, 2022.[4] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, andC. Finn. Robonet: Large-scale multi-robot learning. arXiv preprint arXiv:1910.11215 , 2019.[5] F. Ebert, Y . Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, andS. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets.arXiv preprint arXiv:2109.13396 , 2021.[6] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. 
arXiv preprint arXiv:2209.05451 , 2022.[7] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y . Zhu, and A. Anandkumar. Mimicplay:Long-horizon imitation learning by watching human play. arXiv preprint arXiv:2302.12422 ,2023.[8] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in roboticaffordances. arXiv preprint arXiv:2204.01691 , 2022.[9] Y . Jiang, A. Gupta, Z. Zhang, G. Wang, Y . Dou, Y . Chen, L. Fei-Fei, A. Anandkumar, Y . Zhu,and L. Fan. Vima: General robot manipulation with multimodal prompts. arXiv preprintarXiv:2210.03094 , 2022.[10] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXivpreprint arXiv:2212.06817 , 2022.[11] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning fromdemonstration. Robotics and autonomous systems , 57(5):469–483, 2009.[12] S. Osentoski, C. Crick, G. Jay, and O. C. Jenkins. Crowdsourcing for closed loop control.InProc. of the NIPS Workshop on Computational Social Science and the Wisdom of Crowds,NIPS , pages 4–7, 2010.[13] D. Kent, C. Saldanha, and S. Chernova. A comparison of remote robot teleoperation inter-faces for general object manipulation. In Proceedings of the 2017 ACM/IEEE InternationalConference on Human-Robot Interaction , pages 371–379, 2017.[14] A. E. Leeper, K. Hsiao, M. Ciocarlie, L. Takayama, and D. Gossow. Strategies for human-in-the-loop robotic grasping. In Proceedings of the seventh annual ACM/IEEE internationalconference on Human-Robot Interaction , pages 1–8, 2012.9[15] M. Laskey, C. Chuck, J. Lee, J. Mahler, S. Krishnan, K. Jamieson, A. Dragan, and K. Goldberg.Comparing human-centric and robot-centric sampling for robot deep learning from demon-strations. In 2017 IEEE International Conference on Robotics and Automation (ICRA) , pages358–365. IEEE, 2017.[16] A. D. Dragan and S. S. Srinivasa. Online customization of teleoperation interfaces. In 2012IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human InteractiveCommunication , pages 919–924. IEEE, 2012.[17] B. Akg ̈un, K. Subramanian, and A. L. Thomaz. Novel interaction strategies for learning fromteleoperation. In AAAI Fall Symposium: Robots Learning Interactively from Human Teachers ,volume 12, page 07, 2012.[18] J. Liang, J. Mahler, M. Laskey, P. Li, and K. Goldberg. Using dvrk teleoperation to facilitatedeep learning of automation tasks for an industrial robot. In 2017 13th IEEE Conference onAutomation Science and Engineering (CASE) , pages 1–8. IEEE, 2017.[19] D. Whitney, E. Rosen, E. Phillips, G. Konidaris, and S. Tellex. Comparing robot graspingteleoperation across desktop and virtual reality with ros reality. In Robotics Research: The18th International Symposium ISRR , pages 335–350. Springer, 2019.[20] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel. Deep imita-tion learning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 5628–5635. IEEE, 2018.[21] J. I. Lipton, A. J. Fay, and D. Rus. Baxter’s homunculus: Virtual reality spaces for teleoperationin manufacturing. IEEE Robotics and Automation Letters , 3(1):179–186, 2017.[22] M. Mittal, C. Yu, Q. Yu, J. Liu, N. Rudin, D. Hoeller, J. L. Yuan, R. Singh, Y . Guo, H. Mazhar,A. Mandlekar, B. Babich, G. 
State, M. Hutter, and A. Garg. Orbit: A unified simulationframework for interactive robot learning environments. IEEE Robotics and Automation Letters ,pages 1–8, 2023. doi:10.1109/LRA.2023.3270034.[23] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. ArXiv , abs/2209.05451, 2022.[24] G. Ajaykumar, M. Stiber, and C.-M. Huang. Designing user-centric programming aids forkinesthetic teaching of collaborative robots. Robotics and Autonomous Systems , 145:103845,2021.[25] K. Goldberg, M. Mascha, S. Gentner, J. Tossman, N. Rothenberg, C. Sutter, and J. Wiegley.Beyond the web: Excavating the real world via mosaic. In Second International WWW Con-ference , pages 1–12, 1994.[26] V . J. Lumelsky and E. Cheung. Real-time collision avoidance in teleoperated whole-sensitiverobot arm manipulators. IEEE Transactions on Systems, Man, and Cybernetics , 23(1):194–203, 1993.[27] P. F. Hokayem and M. W. Spong. Bilateral teleoperation: An historical survey. Automatica , 42(12):2035–2057, 2006.[28] A. Mandlekar, Y . Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta,E. Orbay, et al. Roboturk: A crowdsourcing platform for robotic skill learning through imita-tion. In Conference on Robot Learning , pages 879–893. PMLR, 2018.[29] A. Jaegle, F. Gimeno, A. Brock, A. Zisserman, O. Vinyals, and J. Carreira. Perceiver: Generalperception with iterative attention, 2021. URL https://arxiv.org/abs/2103.03206 .[30] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. arXiv preprintarXiv:2207.09450 , 2022.10[31] H. Xiong, Q. Li, Y .-C. Chen, H. Bharadhwaj, S. Sinha, and A. Garg. Learning by watching:Physical imitation of manipulation skills from human videos. In 2021 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 7827–7834. IEEE, 2021.[32] K. Zakka, A. Zeng, P. Florence, J. Tompson, J. Bohg, and D. Dwibedi. Xirl: Cross-embodimentinverse reinforcement learning. In Conference on Robot Learning , pages 537–546. PMLR,2022.[33] K. Shaw, S. Bahl, and D. Pathak. Videodex: Learning dexterity from internet videos. InConference on Robot Learning , pages 654–665. PMLR, 2023.[34] W. Yuan, C. Paxton, K. Desingh, and D. Fox. Sornet: Spatial object-centric representations forsequential manipulation. In Conference on Robot Learning , pages 148–157. PMLR, 2022.[35] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances inneural information processing systems , 1, 1988.[36] M. Dalal, A. Mandlekar, C. Garrett, A. Handa, R. Salakhutdinov, and D. Fox. Imitating taskand motion planning with visuomotor transformers, 2023.[37] S. James, K. Wada, T. Laidlow, and A. J. Davison. Coarse-to-fine q-attention: Efficient learningfor visual robotic manipulation via discretisation. In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition , pages 13739–13748, 2022.[38] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine. One-shot visual imitation learning viameta-learning. In Conference on robot learning , pages 357–368. PMLR, 2017.[39] A. Billard, S. Calinon, R. Dillmann, and S. Schaal. Robot programming by demonstration. InSpringer handbook of robotics , pages 1371–1394. Springer, 2008.[40] S. Song, A. Zeng, J. Lee, and T. Funkhouser. Grasping in the wild: Learning 6dof closed-loop grasping from low-cost demonstrations. IEEE Robotics and Automation Letters , 5(3):4978–4985, 2020.[41] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. 
Rlbench: The robot learning benchmark &learning environment. IEEE Robotics and Automation Letters , 5(2):3019–3026, 2020.[42] Q. Wu, D. Teney, P. Wang, C. Shen, A. Dick, and A. van den Hengel. Visual question an-swering: A survey of methods and datasets. Computer Vision and Image Understanding , 163:21–40, 2017.[43] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead,A. C. Berg, W.-Y . Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643 , 2023.[44] Z. Li, C.-Z. Lu, J. Qin, C.-L. Guo, and M.-M. Cheng. Towards an end-to-end frameworkfor flow-guided video inpainting. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 17562–17571, 2022.[45] S. Young, D. Gandhi, S. Tulsiani, A. Gupta, P. Abbeel, and L. Pinto. Visual imitation madeeasy, 2020.[46] A. Bangor, P. T. Kortum, and J. T. Miller. An empirical evaluation of the system usability scale.Intl. Journal of Human–Computer Interaction , 24(6):574–594, 2008.11 |
VtJqMs9ig20 | CAT: Closed-loop Adversarial Training forSafe End-to-End DrivingLinrui Zhang†, Zhenghao Peng‡, Quanyi Li§, Bolei Zhou‡†Tsinghua University,‡UCLA,§The University of EdinburghAbstract: Driving safety is a top priority for autonomous vehicles. Orthogonalto prior work handling accident-prone traffic events by algorithm designs at thepolicy level, we investigate a Closed-loop Adversarial Training (CAT) frameworkfor safe end-to-end driving in this paper through the lens of environment augmen-tation. CAT aims to continuously improve the safety of driving agents by trainingthe agent on safety-critical scenarios that are dynamically generated over time.A novel resampling technique is developed to turn log-replay real-world drivingscenarios into safety-critical ones via probabilistic factorization, where the ad-versarial traffic generation is modeled as the multiplication of standard motionprediction sub-problems. Consequently, CAT can launch more efficient physi-cal attacks compared to existing safety-critical scenario generation methods andyields a significantly less computational cost in the iterative learning pipeline.We incorporate CAT into the MetaDrive simulator and validate our approach onhundreds of driving scenarios imported from real-world driving datasets. Experi-mental results demonstrate that CAT can effectively generate adversarial scenarioscountering the agent being trained. After training, the agent can achieve superiordriving safety in both log-replay and safety-critical traffic scenarios on the held-out test set. Code and data are available at https://metadriverse.github.io/cat.1 IntroductionWhile end-to-end driving has achieved promising performance in urban piloting [1] and track rac-ing [2], safely handling accident-prone traffic events is one of the crucial capabilities to achievefor autonomous driving (AD). Benchmarking the safety and performance of an AI driving agent insimulation is a stepping stone for the real-world deployment [3]. However, it is insufficient to trainor evaluate an end-to-end driving agents on traffic scenarios only retrieved from real-world trafficdatasets [4, 5] since accident-prone events are extremely rare and difficult to collect in practice [6, 7].Prior work improves the driving agent against safety-critical scenarios through various methods suchas rule-based reasoning [8], motion verification [9], and constrained reinforcement learning [10]. Or-thogonal to the elaborate algorithm designs at the policy level, recent studies obtain robust drivingpolicies at the environment level by creating a set of accident-prone scenarios before hand as aug-mented training samples [11, 12]. Nevertheless, the learned policy may still easily overfit the fixedset of training samples thus fail to handle unknown hazards [13].An alternate approach is to dynamically generate challenging scenarios that match the current ca-pability of the driving agent being trained in a closed-loop manner. 
However, the state-of-the-art safety-critical scenario generation methods [11, 12, 14] are not yet applicable for that purpose due to the following issues: (i) Scene generalizability: probabilistic graph methods like CausalAF [11] require human prior knowledge of each scene graph and thus cannot scale to large and complex driving datasets; (ii) Model dependency: kinematics gradient methods like KING [12] rely on the forward simulation of the running policy and the backward propagation based on the environmental transition, which might not be accessible in model-free end-to-end driving; (iii) Time efficiency: autoregression-based generation methods like STRIVE [14] take minutes to optimize the adversarial traffic per scenario, which is time prohibitive for large-scale training with millions of episodes.

Figure 1: CAT iterates over safety-critical scenario generation and driving policy optimization in a closed-loop manner. In this example, the safety-critical resampling technique alters the behavior of the opponent vehicle (blue car) such that it suddenly cuts into the lane of the ego vehicle (red car), enforcing the agent to learn risk-aware driving skills such as deceleration and yielding.

In this paper, we present the Closed-loop Adversarial Training (CAT) framework for safe end-to-end driving. As shown in Fig. 1, CAT imports driving scenarios from real-world driving logs and then generates safety-critical counterparts as adversarial training environments tailored to the current driving policy. The agent continuously learns to address emerging challenges and improves risk awareness in a closed-loop pipeline. Since CAT directly launches physical attacks against the estimated ego trajectory, the proposed framework is agnostic to the driving policy used by the agent and is compatible with a wide range of end-to-end learning approaches, such as reinforcement learning (RL) [15], imitation learning (IL) [16], and human-in-the-loop feedback (HF) [17].

One crucial component of the proposed framework is a novel factorized safety-critical resampling technique that efficiently turns logged driving scenarios into safety-critical ones during training. Specifically, we cast the safety-critical traffic generation as risk-conditioned Bayesian probability maximization and then decompose it into the multiplication of standard motion forecasting sub-problems. Thus, we can utilize off-the-shelf motion forecasting models [18, 19] as the learned prior to generate adversarial scenarios with high fidelity, diversity, and efficiency. Compared to previous safety-critical scenario generation methods, the proposed technique obtains a competitive attack success rate while significantly reducing the computational cost, making the CAT framework effective and efficient for closed-loop end-to-end driving policy training.

To demonstrate the efficacy of our approach, we incorporate the proposed CAT framework into the MetaDrive simulator [20] and compose adversarial traffic environments from five hundred complex driving scenarios in a closed-loop manner to train RL-based driving agents without any ad-hoc safety designs.
Experimental results show that CAT generates realistic and challenging physicalattacks, and the resulting agent obtains superior driving safety in both log-replay and adversarialtraffic scenarios on the held-out test set. The contributions of this paper are summarized as follows:i) We propose an efficient safety-critical scenario generation technique by resampling the learnedtraffic prior, which improves attack success rate and lowers computation cost compared to priorwork, making continuous adversarial scenario generation viable in closed-loop AD training.ii) We present a closed-loop adversarial training framework for end-to-end safe driving basedon the above technique and demonstrate the proposed framework substantially improves AIdriving safety in complex testing scenarios imported from the real world.2 Related WorkAdversarial Training for Autonomous Driving. Deep neural networks (DNNs), pervasively usedin learning-based AD systems, are found vulnerable to adversarial attacks [21, 22]. Recent stud-2ies tend to manipulate the physical environment to generate realistic yet adversarial observationsequences from LiDAR inputs [23], camera inputs [24], and other physical-world-resilient ob-jectives [25]. Compared to the above work focusing on perception, adversarial training for ADdecision-making is much less explored. Ma et al. [26] first investigate the adversarial RL on a singleautonomous driving scenario. Wachi [27] employs the multi-agent DDPG algorithm [28] to enforcethe competition between player and non-player vehicles. In addition to algorithmic level designs, amore natural but less explored approach is to iteratively propose challenging scenarios during train-ing [29]. There is a line of works on evolving training environments in RL [30, 31]. However,existing approaches are evaluated only in simplified environments like bipedal walker and heuris-tically modify the terrain or static barriers, which is not sufficient for complex AD tasks. In thiswork, we focus on generating realistic and safety-critical traffic scenarios to facilitate closed-loopadversarial training for end-to-end driving.Safety-critical Traffic Scenario Generation. Safety-critical traffic scenario generation is of greatvalue in adaptive stress testing [32] and corner case analysis [33] for the research and development ofautonomous vehicles. L2C [34] learns to place and trigger a cyclist to collide with the target vehiclevia RL algorithms, but it goes far to model complex vehicle interactions in real-world scenes. Forrobust imitation learning, kinematics gradients [12] and black-box optimization [23] can be usedto magnify traffic risks. However, it relies on the forward simulation of the running policy and thebackward propagation based on the vehicle kinematics, which might not be accessible in model-freeend-to-end driving. CausalAF [11] builds scenario causal graphs to uncover behavior of interest andgenerates additional training samples to improve the robustness of driving policies. Nevertheless, theevaluations are limited to three scenarios since it requires human prior knowledge of each scene andthus hardly scale to a larger dataset. STRIVE [14] constructs a latent space to constrain the trafficprior and searches for the best responsive mapping via gradient-based optimization on that denserepresentation. 
Despite its impressive results on realistic traffic flows, the autoregression on raster maps takes several minutes to optimize the adversarial traffic for each scene, which imposes a costly computational burden on periodic policy optimization. We refer to the survey [35] for more detailed safety-critical scenario generation methodologies. Different from the above literature, we propose a novel adversarial traffic generation algorithm for real-world scenarios with an admissible time consumption, making it viable for large-scale policy iterations involving millions of episodes.

3 Method
In this section, we first formulate closed-loop adversarial training (CAT) for safe end-to-end driving as a min-max problem in the context of RL, and then introduce the factorization of the learned traffic prior so as to generate adversarial driving scenarios efficiently in practice.

3.1 Problem Formulation
End-to-end driving directly uses raw sensor data as the input and outputs low-level control commands. Safe end-to-end driving incorporates risk awareness into this end-to-end pipeline and aims to minimize traffic accidents while maintaining route completion performance. We focus on a reinforcement learning (RL)-based driving policy in this work, though the proposed CAT can be extended to accommodate a range of end-to-end driving policies. In our scope, the driving task can be formulated as a Markov Decision Process (MDP) [36] in the form of (S, A, R, f). S and A denote the state and action spaces, respectively. S includes maps, sensor readings such as camera images or LiDAR point clouds, high-level navigation commands, and vehicle states. A consists of low-level control commands such as steering, throttle, and brake. The reward function can be defined as R = d - \eta c, where d is the displacement toward the destination, c is a boolean value indicating collision with other objects, and \eta is a hyper-parameter for reward shaping. f is the transition function that describes the dynamics of the traffic scenario. The goal is to maximize the expected return
J(\pi, f) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{T} R(s_t, a_t)\right]
that the driving policy \pi receives within the time horizon T, where \tau \sim \pi is shorthand for a_t \sim \pi(\cdot \mid s_t), s_{t+1} \sim f(\cdot \mid s_t, a_t). CAT aims to enhance the robustness of the learning agent via the following adversarial optimization:
\max_{\pi} \min_{f_{Adv} \in \mathcal{F}} J(\pi, f_{Adv}).   (1)
Here, the adversarial transition function f_{Adv} must be within the feasible set \mathcal{F} that is aligned with the realistic traffic distribution; otherwise, the learned driving policy \pi is not applicable in practice.

The fundamental problem is to construct f_{Adv} by generating compliant future traffic trajectories that are prone to collisions with the agent's rollouts. To formalize traffic collisions, we denote the vehicle controlled by the learning agent as the ego vehicle (Ego) and other vehicles as opponent vehicles (Op), and represent a traffic scenario as a tuple (M, S^{Ego}_{1:T}, S^{Op}_{1:T}) with a duration of T time steps. Here, the High-Definition (HD) road map M consists of road shapes, traffic signs, traffic lights, etc. S^{Ego}_{1:t} denotes the past states of the ego vehicle. S^{Op}_{1:t} is an N-element array [S^{Op_1}_{1:t}, ..., S^{Op_N}_{1:t}], wherein each element stands for the past states of the corresponding opponent. For simplicity, we denote X = (M, S^{Ego}_{1:t}, S^{Op}_{1:t}) as the information cut off at step t, and Y^{Ego} = S^{Ego}_{t:T}, Y^{Op} = S^{Op}_{t:T} as the future trajectories of the ego and the opponents starting from t, respectively. Y^{Ego} is conditioned on the RL agent \pi. The cutoff step t is fixed. We define a binary random variable Coll = {True, False} to denote whether Y^{Ego} collides with Y^{Op}.
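To make this notation concrete, the following is a minimal sketch of how a logged scenario and its collision check could be represented in code; the field names, array shapes, and the center-distance collision test are illustrative assumptions, not the data structures used by the authors.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Scenario:
    """Information cut off at step t: X = (M, S_ego_{1:t}, S_op_{1:t})."""
    hd_map: dict                 # road shapes, traffic signs, traffic lights, ...
    ego_history: np.ndarray      # (t, state_dim) past states of the ego vehicle
    op_history: np.ndarray       # (N, t, state_dim) past states of N opponent vehicles

def collided(ego_future, op_future, min_gap=2.0):
    """Coll: whether Y_ego gets within `min_gap` meters of any opponent.
    A center-distance stand-in for the bounding-box check used later in the paper."""
    for op in op_future:                                   # op_future: (N, T - t, state_dim)
        gaps = np.linalg.norm(ego_future[:, :2] - op[:, :2], axis=1)
        if np.any(gaps <= min_gap):
            return True
    return False
```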
Considering that the opponent vehicle must launch effective attacks based on the potential ego behavior, which is itself responsive to Y^{Op}, the opponents' trajectories Y^{Op} and the ego vehicle's trajectory Y^{Ego} are not independent. Therefore, we model Y^{Op} and Y^{Ego} jointly, and the safety-critical scenario distribution is expressed as:
P(Y^{Ego}, Y^{Op} \mid Coll = True, X)   (2)
Proposition 1 further shows that the construction of f_{Adv} can be cast as marginal probability maximization of the opponent trajectories Y^{Op} under the above joint posterior distribution, where we assume that Y^{Ego} generated by the current driving policy \pi is sampled from \mathcal{Y}(\pi).

Proposition 1. Suppose that \pi forces the agent to approach the destination and the episode terminates when any traffic collision happens; then we have
\min_{f_{Adv} \in \mathcal{F}} J(\pi, f_{Adv}) \;\Leftrightarrow\; \max_{Y^{Op}} \sum_{Y^{Ego} \sim \mathcal{Y}(\pi)} P(Y^{Ego}, Y^{Op} \mid Coll = True, X).   (3)

3.2 Factorized Safety-Critical Resampling
The joint distribution in Eq. (3) is still intractable. However, under the assumption that the ego vehicle's reactions are unidirectionally based on the future traffic, we can factorize it with the Bayesian formula as shown in Proposition 2.

Proposition 2. Suppose that Y^{Ego} depends on Y^{Op} unidirectionally; then we have
P(Y^{Ego}, Y^{Op} \mid Coll = True, X) \;\propto\; P(Y^{Op} \mid X)\, P(Y^{Ego} \mid Y^{Op}, X)\, P(Coll = True \mid Y^{Ego}, Y^{Op}).   (4)

After the factorization, we can search for the responsive Y^{Op} that maximizes the probability of traffic collisions with the ego agent through the marginal probability maximization given as:
\max_{Y^{Op}} \sum_{Y^{Ego} \sim \mathcal{Y}(\pi)} P(Y^{Ego}, Y^{Op} \mid Coll = True, X)
= \max_{Y^{Op}} \underbrace{P(Y^{Op} \mid X)}_{\text{1st term}} \sum_{Y^{Ego} \sim \mathcal{Y}(\pi)} \underbrace{P(Y^{Ego} \mid Y^{Op}, X)}_{\text{2nd term}} \underbrace{P(Coll = True \mid Y^{Ego}, Y^{Op})}_{\text{3rd term}}.   (5)

It is beneficial to perform the above safety-critical traffic probability factorization since each term in Eq. (5) has a specific meaning and is tractable to handle. They are interpreted as follows:
i) Traffic prior. The 1st term is the standard motion prediction problem, in which we can leverage arbitrary probabilistic traffic models [18, 37, 38, 39] to portray the multi-modal trajectory distribution. Taking a pre-trained model as the traffic prior enables attack plausibility in complex scenarios without human specifications.
ii) Ego estimation. The 2nd term denotes the interactive ego trajectory yielding to the current state and the upcoming traffic flow. The transition can be deterministic if the world model is learned or accessible under model-based settings [12]. For the inference of real-world-compliant traffic flows, we can employ an interactive motion predictor [19] conditioned on the known surrounding vehicles' trajectories to better reflect the ego compliance under risky interactions.
iii) Collision likelihood. The 3rd term reflects the likelihood of a collision in the compositional future, which can be simulated directly or treated as a binary classifier to fit [40].

Figure 2: Illustration of Factorized Safety-Critical Resampling. (A) We initialize 1s of traffic history with the dense map representation. (B) We then predict the traffic prior as well as the agent's reaction. (C) The most accident-prone trajectory of the opponent vehicle is selected. (D) The generated scene is thus expected to be safety-critical.

As shown in Fig. 2, it is possible to approach the near-optimal adversarial trajectory via numerical optimization once each term is calculated.
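To make the factorization concrete, the following is a minimal sketch of how the three terms of Eq. (5) could be combined to score candidate opponent trajectories; the candidate sets, probabilities, and collision estimates are placeholders for whatever motion predictor and simulator are actually used.

```python
import numpy as np

def score_opponent_candidates(op_probs, ego_probs, collision_prob):
    """Score opponent trajectory candidates with the factorization of Eq. (5).

    op_probs:        (M,)   traffic prior P(Y_op | X) for each opponent candidate
    ego_probs:       (N,)   likelihood of each recorded ego rollout under the current policy
    collision_prob:  (M, N) estimated collision likelihood for each (opponent, ego) pair
    Returns the index of the best opponent candidate and the full posterior scores.
    """
    # 2nd and 3rd terms: sum over ego rollouts of P(Y_ego | Y_op, X) * P(Coll | Y_ego, Y_op)
    ego_collision_term = collision_prob @ ego_probs   # shape (M,)
    # 1st term: weight by the traffic prior P(Y_op | X)
    posterior = op_probs * ego_collision_term         # shape (M,)
    return int(np.argmax(posterior)), posterior
```

In CAT, the prior weights would come from the goal probabilities of the motion forecaster and the collision estimates from the bounding-box check described in the next subsection.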
3.3 Practical Implementation
We summarize the overall implementation of the CAT framework for safe end-to-end driving in Algorithm 1. Recalling the training objective of CAT in Eq. (1), we need to perform iterative optimization of policy learning and adversarial environment generation synchronously in a closed loop. The policy optimization can be achieved by arbitrary end-to-end driving policy learning approaches, e.g., a vanilla RL algorithm. Below, we focus on adversarial environment generation, where we utilize the proposed factorized safety-critical resampling in Eq. (5). Note that we make a simplification in CAT by enforcing a single rival to launch the attack in each generated scene while simply maneuvering the other vehicles to avoid self-collisions. This is reasonable since most traffic accidents are caused by two traffic participants rather than involving multiple vehicles.

We first predict the traffic prior P(Y^{Op} \mid X) using a pre-trained probabilistic traffic forecasting model G. Considering its strong performance and the ease of sampling, we adopt DenseTNT [18], an anchor-free goal-based motion predictor, in this work. Specifically, we propose M possible candidates {(Y^{Op}_i, P^{Op}_i)}_{i=1}^{M} in parallel. The component Y^{Op}_{i,k} at the k-th time step consists of the predicted position and yaw of the opponent vehicle. The probability P^{Op}_i of the trajectory coincides with the probability of the corresponding destination goal.

We then tackle the ego estimation term P(Y^{Ego} \mid Y^{Op}, X). Because the policy is non-stationary during training, the ego behavior does not necessarily match the logged behavior in the dataset. Consequently, directly using a pre-trained traffic estimator derived from natural traffic flows [19] to provide the ego trajectory probability would introduce a severe bias. Instead, we record the latest N rollouts of the ego vehicle in each scenario, formed as {(Y^{Ego}_j, P^{Ego}_j)}_{j=1}^{N}, wherein we derive the likelihood of visited state sequences deduced by the current policy \pi: P^{Ego}_{j,k+1} = P^{Ego}_{j,k} \cdot \pi(a_k \mid s_k).

Finally, we empirically estimate the collision likelihood P(Coll \mid Y^{Ego}, Y^{Op}). Given the specific compositional future of Y^{Ego}_j and Y^{Op}_i, we compute the minimal distance between their bounding boxes at the subsequent steps and set the collision likelihood to P^{Coll}_{i,j} = \alpha^k if the closest gap is \le 0 at timestep k. If collisions happen at multiple steps, the earliest k is used. Here, \alpha \in (0, 1] is a heuristic decay factor that reflects the increasing uncertainty of the traffic model over time.
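The two quantities above can be sketched as follows; the axis-aligned box test is a simplified stand-in for an oriented bounding-box check, and the shapes are illustrative assumptions.

```python
import numpy as np

def ego_rollout_likelihood(step_action_probs):
    """P_{k+1} = P_k * pi(a_k | s_k): product of per-step action probabilities."""
    return float(np.prod(step_action_probs))

def aabb_overlap(box_a, box_b):
    """Axis-aligned overlap test on (x_min, y_min, x_max, y_max) boxes; a real
    implementation would test oriented rectangles using position and yaw."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def collision_likelihood(ego_boxes, op_boxes, alpha=0.99):
    """alpha**k at the earliest step k where the two vehicles' boxes touch, else 0."""
    for k, (ego_box, op_box) in enumerate(zip(ego_boxes, op_boxes)):
        if aabb_overlap(ego_box, op_box):
            return alpha ** k
    return 0.0
```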
Algorithm 1: Closed-loop Adversarial Training (CAT) for Safe End-to-End Driving.
Input: initial driving policy \pi, learning algorithm \mathcal{T}, trajectory predictor G, the simulator.
Output: robust driving policy \pi^*.
1:  Initialize the scenario pool D = {X_1, X_2, ..., X_{|D|}} from real-world datasets.
2:  Initialize the ego trajectory buffer for each scenario.
3:  while \pi is not converged do
4:      Randomly sample a logged traffic X from the scenario pool D.
5:      Retrieve the ego trajectory buffer for this scenario {(Y^{Ego}_j, P^{Ego}_j)}_{j=1}^{N}.
6:      {(Y^{Op}_i, P^{Op}_i)}_{i=1}^{M} ~ G(X)                ▷ Generate the traffic prior: M opponent trajectories.
7:      for i in 1, 2, ..., M do                               ▷ For each opponent candidate.
8:          for j in 1, 2, ..., N do                           ▷ For each ego candidate.
9:              P^{Coll}_{ij} = \alpha^k if BBox(Y^{Ego}_{j,k}) collides with BBox(Y^{Op}_{i,k}) at step k, else 0.
10:         P(Y^{Op}_i | \pi, Coll, X) = P^{Op}_i \sum_{j=1}^{N} P^{Ego}_j P^{Coll}_{ij}   ▷ Compute the posterior probability.
11:     Y^{Op*} = argmax_{Y^{Op}_i} P(Y^{Op}_i | \pi, Coll, X) ▷ Select the best opponent trajectory.
12:     obs = simulator.reset(X, Y^{Op*})                      ▷ Reset the simulator to replay the adversarial scenario.
13:     Initialize Y^{Ego} = {}, P^{Ego} = 1.
14:     for t in 1, 2, ..., |T| do                             ▷ Roll out the policy against the adversarial scenario.
15:         act ~ \pi(\cdot | obs)
16:         obs = simulator.step(act)
17:         Y^{Ego} \leftarrow Y^{Ego} \cup {Y^{Ego}_t}         ▷ Update the ego trajectory.
18:         P^{Ego} \leftarrow P^{Ego} \cdot \pi(act | obs)     ▷ Update the ego probability.
19:     \pi \leftarrow \mathcal{T}(\pi)                         ▷ Policy optimization.
20:     Add (Y^{Ego}, P^{Ego}) to the ego trajectory buffer for this scenario.

4 Experiments
4.1 Experiment Setup
We import 500 real-world traffic scenarios involving complex vehicle interactions from the Waymo Open Motion Dataset (WOMD) [4] as the raw data. Each scene in WOMD contains a traffic participant labeled as Object of Interest with respect to the ego car, which is also designated as the opponent vehicle in our experiments. All experiments are conducted in MetaDrive [20], an open-source and lightweight AD simulator. The specific state, action, and reward function used for policy training and the detailed hyper-parameter settings for safety-critical scenario generation are given in Appendices C and D. Here, we point out some pivotal parameters. Each scene lasts 9s, of which we take the first 1s of traffic history as X and manipulate the following 8s to generate the adversarial trajectory Y^{Op}. We set M = 32 as the number of opponent trajectory candidates, N = 5 as the length of the ego rollout queue, and \alpha = 0.99 to penalize the uncertainty of motion forecasting.

4.2 Evaluation of Safety-critical Traffic Generation in CAT
The factorized safety-critical resampling is the crucial component of CAT for generating adversarial training samples. We provide qualitative and quantitative comparisons with the following baselines: (A) Raw Data: replaying the recorded real-world traffic. (B) M2I (adv) [19]: the interactive traffic motion prediction is similar to our factorized formulation and thus can be modified into an adversarial scenario generator. (C) STRIVE [14]: the state-of-the-art safety-critical scenario generation method performing gradient-based optimization on latent variables.

Qualitative analysis. In Fig. 3, we present 9 different types of safety-critical scenarios that CAT generates from raw scenes, following the pre-crash traffic categories of the National Highway Traffic Safety Administration (NHTSA). We conclude that CAT is able to generate adversarial traffic given arbitrary real-world raw scenes. Meanwhile, the generated trajectories are in line with human driver behavior, even though we do not specify prior knowledge of each scene.
Figure 3: Qualitative results on the diversity of safety-critical scenarios generated by CAT, covering (1) right turn, (2) left turn, (3) U-turn, (4) rear-end, (5) emergent brake, (6) lane change, (7) cross paths, (8) run-off-road, and (9) opposite direction. In each subfigure, the left and right show the raw scene and its adversarial counterpart; the ego and adversarial trajectories are highlighted with red and blue arrows, respectively.

Figure 4: Qualitative results on the plausibility of safety-critical scenarios generated by CAT, comparing (A) Raw, (B) M2I, (C) STRIVE, and (D) CAT. The attack is regarded as effective only if the resulting traffic accident is consistent with real-world events.

In Fig. 4, we compare the adversarial traffic generated by the four methods at the same intersection. In the raw scene, the leading vehicle turns first and does not cross the path of the ego vehicle. Through safety-critical generation, the opponent attempts to collide with the agent at the intersection. However, M2I (adv) has a bias in estimating the reaction of the ego vehicle and does not cause the expected accident. STRIVE finds a solution that enforces a crash, but it is still cumbersome to tweak its multi-term loss function to balance the goals of colliding as soon as possible and maintaining reasonable driving behavior, such as keeping the vehicle on the driveway. By contrast, our factorized safety-critical resampling leverages the learned motion prior to regularize the opponent's trajectory, magnifying the traffic risk while preserving its plausibility. More visualizations can be found in Appendix E.

Table 1: Comparing adversarial generation methods.
Methods       Attack Success Rate ↑ (Replay / IDM / Pretrained)    Per-Scene Generation Time ↓
Raw Data      0% / 34% / 14%                                       /
M2I (adv)     47% / 41% / 19%                                      0.41 ± 0.03 s
STRIVE        85% / 82% / 66%                                      153.10 ± 47.33 s
CAT (N = 1)   91% / 71% / 62%                                      0.66 ± 0.09 s
CAT (N = 5)   91% / 86% / 69%                                      3.34 ± 0.41 s

Quantitative analysis. In Tab. 1, we compare adversarial traffic generation methods on 100 test scenes, focusing on two metrics. The first metric of interest is the attack success rate, as the driving policies are responsive and even defensive toward the traffic flow. We adopt three kinds of agents with fixed policies for validation: (i) Replay Agent: replays the original trajectory of the ego vehicle logged in the real-world dataset. (ii) IDM Agent: a heuristic controller widely adopted in AD tasks [41]. (iii) Pre-trained Agent: a pre-trained RL policy on WOMD. We find that M2I (adv) is insufficient for ego prediction and attacks less effectively, especially against low-level policies, which is fatal for end-to-end driving. CAT collects ego rollouts to enhance the confidence of the ego estimation during training (N = 5) and testing (N = 1), which significantly improves the attack success rate and is competitive with the SOTA method STRIVE.
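As a concrete reading of these two metrics, the following is a minimal sketch of how they could be tallied over a set of test scenes; the scenario generator and the fixed-policy rollout are placeholders, not MetaDrive's actual API.

```python
import time

def evaluate_generator(scenes, generate_adversarial, rollout_fixed_policy):
    """Tally the two Table 1 metrics for one generation method.

    generate_adversarial(scene) -> adversarial opponent trajectory
    rollout_fixed_policy(scene, op_traj) -> dict with a boolean 'collision' flag
    Both callables stand in for the evaluation pipeline (Replay / IDM / pretrained agent)."""
    successes, gen_times = 0, []
    for scene in scenes:
        start = time.perf_counter()
        op_traj = generate_adversarial(scene)
        gen_times.append(time.perf_counter() - start)
        successes += int(rollout_fixed_policy(scene, op_traj)["collision"])
    return {
        "attack_success_rate": successes / max(len(scenes), 1),
        "mean_generation_time_s": sum(gen_times) / max(len(gen_times), 1),
    }
```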
The second metric of interest is the time consumption per scene, which is non-negligible considering the large number of scenario iterations during training. We find that STRIVE generally requires 2-3 minutes to process a single scene due to its autoregressive procedure on the raster map, which means it would take days to train an agent in a closed loop involving thousands of episodes. By contrast, our approach best balances attack success rate and computational time and therefore has a clear advantage for closed-loop adversarial training of end-to-end driving.

Figure 5: The learning curves of the policies trained with different pipelines (route completion and crash rate over total interactions, on log-replay and safety-critical scenarios, for No Adv/Replay, Rule-based Adv, Open-loop Adv, and Closed-loop Adv).

Table 2: Performance of driving policies with different training pipelines on the held-out test set.
Methods            Log-replay: Route Completion ↑   Log-replay: Crash Rate ↓   Safety-critical: Route Completion ↑   Safety-critical: Crash Rate ↓
No Adv/ Replay     72.91% ± 2.05%                   19.89% ± 1.95%             63.48% ± 1.46%                        43.33% ± 1.13%
Rule-based Adv     62.42% ± 3.99%                   15.61% ± 1.98%             56.68% ± 4.66%                        30.31% ± 3.33%
Open-loop Adv      68.89% ± 1.05%                   17.15% ± 1.80%             63.48% ± 1.46%                        36.96% ± 1.66%
Closed-loop Adv    72.47% ± 2.04%                   13.43% ± 0.88%             67.62% ± 1.89%                        28.15% ± 1.63%

4.3 Evaluation of Closed-loop Adversarial Training in CAT
We show how the driving agent improves its safety performance within the CAT framework. We split the 500 raw scenes into 400 training and 100 testing scenarios. We train a TD3 [42] driving policy from scratch with 4 types of training pipelines: (A) No Adv/ Replay: the raw driving scenarios are used as the training environments. (B) Rule-based Adv: we implement a rule-based system that overwrites the trajectories in the data to generate physical attacks (see Appendix F for details). (C) Open-loop Adv: we generate opponent trajectories that collide with the log-replayed ego rollouts before training. (D) Closed-loop Adv: we use CAT to generate adversarial scenarios on-the-fly against the ego trajectories generated by the learning agent.

We evaluate the driving policies trained with the different pipelines using two metrics. The first metric is the route completion rate, which measures the progress the agent makes; the second metric is the crash rate, the ratio of episodes in which the ego vehicle crashes into other vehicles. We first evaluate the policy on the held-out testing scenarios with logged traffic (Log-replay Scenarios). Then we run CAT against the policy to generate adversarial traffic. Finally, we run the policy in the testing scenarios with CAT-generated traffic (Safety-critical Scenarios). As shown in Table 2 and Fig. 5, we find that CAT substantially enhances safety performance compared with vanilla RL training, reducing the crash rate by 6.46% in log-replayed scenarios and by 15.18% in safety-critical ones while maintaining competitive route completion. More qualitative results can be found in Appendix G. Besides, we demonstrate that generating adversarial environments against the current policy on-the-fly makes the trained policy perform better. Finally, factorized safety-critical resampling preserves the realistic traffic distribution, so the learned policy achieves a competitive route completion rate.
On the contrary, therule-based attacks lead to over-conservative driving policy that has inferior route completion.5 ConclusionIn this paper, we investigate how to improve the safety of end-to-end driving through the lens ofsafety-critical traffic scenario augmentation. Empirical results demonstrate that the proposed closed-loop adversarial training (CAT) framework can provide realistic physical attacks efficiently duringtraining and enhance AI driving safety performance in the test time.8AcknowledgmentsThis work was supported by the National Science Foundation under Grant No. 2235012 and theCisco Faculty Award.References[1] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V . Koltun. Carla: An open urban drivingsimulator. In Conference on robot learning , pages 1–16. PMLR, 2017.[2] J. Herman, J. Francis, S. Ganju, B. Chen, A. Koul, A. Gupta, A. Skabelkin, I. Zhukov, M. Kum-skoy, and E. Nyberg. Learn-to-race: A multimodal control environment for autonomous rac-ing. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages9793–9802, 2021.[3] C. Xu, W. Ding, W. Lyu, Z. Liu, S. Wang, Y . He, H. Hu, D. Zhao, and B. Li. Safebench:A benchmarking platform for safety evaluation of autonomous vehicles. arXiv preprintarXiv:2206.09682 , 2022.[4] S. Ettinger, S. Cheng, B. Caine, C. Liu, H. Zhao, S. Pradhan, Y . Chai, B. Sapp, C. R. Qi,Y . Zhou, et al. Large scale interactive motion forecasting for autonomous driving: The waymoopen motion dataset. In Proceedings of the IEEE/CVF International Conference on ComputerVision , pages 9710–9719, 2021.[5] H. Caesar, J. Kabzan, K. S. Tan, W. K. Fong, E. Wolff, A. Lang, L. Fletcher, O. Beijbom,and S. Omari. nuplan: A closed-loop ml-based planning benchmark for autonomous vehicles.arXiv preprint arXiv:2106.11810 , 2021.[6] F. M. Favar `o, N. Nader, S. O. Eurich, M. Tripp, and N. Varadaraju. Examining accident reportsinvolving autonomous vehicles in california. PLoS one , 12(9):e0184952, 2017.[7] A. Sinha, S. Chand, V . Vu, H. Chen, and V . Dixit. Crash and disengagement data of au-tonomous vehicles on public roads in california. Scientific data , 8(1):298, 2021.[8] B. Mirchevska, C. Pek, M. Werling, M. Althoff, and J. Boedecker. High-level decision makingfor safe and reasonable autonomous lane changing using reinforcement learning. In 201821st International Conference on Intelligent Transportation Systems (ITSC) , pages 2156–2162.IEEE, 2018.[9] D. Isele, A. Nakhaei, and K. Fujimura. Safe reinforcement learning on autonomous vehicles.In2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages1–6. IEEE, 2018.[10] L. Wen, J. Duan, S. E. Li, S. Xu, and H. Peng. Safe reinforcement learning for autonomousvehicles through parallel constrained policy optimization. In 2020 IEEE 23rd InternationalConference on Intelligent Transportation Systems (ITSC) , pages 1–7. IEEE, 2020.[11] W. Ding, H. Lin, B. Li, and D. Zhao. Causalaf: Causal autoregressive flow for safety-criticaldriving scenario generation. In Conference on Robot Learning , pages 812–823. PMLR, 2023.[12] N. Hanselmann, K. Renz, K. Chitta, A. Bhattacharyya, and A. Geiger. King: Generatingsafety-critical driving scenarios for robust imitation via kinematics gradients. In ComputerVision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Pro-ceedings, Part XXXVIII , pages 335–352. Springer, 2022.[13] D. Katare, N. Kourtellis, S. Park, D. Perino, M. Janssen, and A. Y . Ding. 
Bias detectionand generalization in ai algorithms on edge for autonomous driving. In 2022 IEEE/ACM 7thSymposium on Edge Computing (SEC) , pages 342–348. IEEE, 2022.9[14] D. Rempe, J. Philion, L. J. Guibas, S. Fidler, and O. Litany. Generating useful accident-pronedriving scenarios via a learned traffic prior. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 17305–17315, 2022.[15] B. R. Kiran, I. Sobh, V . Talpaert, P. Mannion, A. A. Al Sallab, S. Yogamani, and P. P ́erez. Deepreinforcement learning for autonomous driving: A survey. IEEE Transactions on IntelligentTransportation Systems , 23(6):4909–4926, 2021.[16] Z. Zhu and H. Zhao. A survey of deep rl and il for autonomous driving policy learning. IEEETransactions on Intelligent Transportation Systems , 23(9):14043–14065, 2021.[17] Z. Peng, Q. Li, C. Liu, and B. Zhou. Safe driving via expert guided policy optimization. InConference on Robot Learning , pages 1554–1563. PMLR, 2022.[18] J. Gu, C. Sun, and H. Zhao. Densetnt: End-to-end trajectory prediction from dense goal sets.InProceedings of the IEEE/CVF International Conference on Computer Vision , pages 15303–15312, 2021.[19] Q. Sun, X. Huang, J. Gu, B. C. Williams, and H. Zhao. M2i: From factored marginal trajectoryprediction to interactive prediction. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 6543–6552, 2022.[20] Q. Li, Z. Peng, L. Feng, Q. Zhang, Z. Xue, and B. Zhou. Metadrive: Composing diverse drivingscenarios for generalizable reinforcement learning. IEEE transactions on pattern analysis andmachine intelligence , 2022.[21] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 ieeesymposium on security and privacy (sp) , pages 39–57. Ieee, 2017.[22] Q. Zhang, S. Hu, J. Sun, Q. A. Chen, and Z. M. Mao. On adversarial robustness of trajectoryprediction for autonomous vehicles. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 15159–15168, 2022.[23] J. Wang, A. Pun, J. Tu, S. Manivasagam, A. Sadat, S. Casas, M. Ren, and R. Urtasun. Advsim:Generating safety-critical scenarios for self-driving vehicles. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 9909–9918, 2021.[24] A. Boloor, X. He, C. Gill, Y . V orobeychik, and X. Zhang. Simple physical adversarial examplesagainst end-to-end autonomous driving models. In 2019 IEEE International Conference onEmbedded Software and Systems (ICESS) , pages 1–7. IEEE, 2019.[25] Z. Kong, J. Guo, A. Li, and C. Liu. Physgan: Generating physical-world-resilient adversarialexamples for autonomous driving. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 14254–14263, 2020.[26] X. Ma, K. Driggs-Campbell, and M. J. Kochenderfer. Improved robustness and safety forautonomous vehicle control with adversarial reinforcement learning. In 2018 IEEE IntelligentVehicles Symposium (IV) , pages 1665–1671. IEEE, 2018.[27] A. Wachi. Failure-scenario maker for rule-based agent using multi-agent adversarial reinforce-ment learning and its application to autonomous driving. arXiv preprint arXiv:1903.10654 ,2019.[28] R. Lowe, Y . I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in neural information pro-cessing systems , 30, 2017.[29] L. Anzalone, P. Barra, S. Barra, A. Castiglione, and M. Nappi. 
An end-to-end curriculumlearning approach for autonomous driving scenarios. IEEE Transactions on Intelligent Trans-portation Systems , 23(10):19817–19826, 2022.10[30] R. Wang, J. Lehman, J. Clune, and K. O. Stanley. Poet: open-ended coevolution of environ-ments and their optimized solutions. In Proceedings of the Genetic and Evolutionary Compu-tation Conference , pages 142–151, 2019.[31] R. Wang, J. Lehman, A. Rawal, J. Zhi, Y . Li, J. Clune, and K. Stanley. Enhanced poet: Open-ended reinforcement learning through unbounded invention of learning challenges and theirsolutions. In International Conference on Machine Learning , pages 9940–9951. PMLR, 2020.[32] Z. Zhong, Y . Tang, Y . Zhou, V . d. O. Neves, Y . Liu, and B. Ray. A survey on scenario-based testing for automated driving systems in high-fidelity simulation. arXiv preprintarXiv:2112.00964 , 2021.[33] S. Riedmaier, T. Ponn, D. Ludwig, B. Schick, and F. Diermeyer. Survey on scenario-basedsafety assessment of automated vehicles. IEEE access , 8:87456–87477, 2020.[34] W. Ding, B. Chen, M. Xu, and D. Zhao. Learning to collide: An adaptive safety-critical sce-narios generating method. In 2020 IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , pages 2243–2250. IEEE, 2020.[35] W. Ding, C. Xu, M. Arief, H. Lin, B. Li, and D. Zhao. A survey on safety-critical drivingscenario generation—a methodological perspective. IEEE Transactions on Intelligent Trans-portation Systems , 2023.[36] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction . MIT press, 1998.[37] T. Gilles, S. Sabatini, D. Tsishkou, B. Stanciulescu, and F. Moutarde. Home: Heatmap outputfor future motion estimation. In 2021 IEEE International Intelligent Transportation SystemsConference (ITSC) , pages 500–507. IEEE, 2021.[38] B. Varadarajan, A. Hefny, A. Srivastava, K. S. Refaat, N. Nayakanti, A. Cornman, K. Chen,B. Douillard, C. P. Lam, D. Anguelov, et al. Multipath++: Efficient information fusion andtrajectory aggregation for behavior prediction. In 2022 International Conference on Roboticsand Automation (ICRA) , pages 7814–7821. IEEE, 2022.[39] S. Shi, L. Jiang, D. Dai, and B. Schiele. Motion transformer with global intention localizationand local movement refinement. arXiv preprint arXiv:2209.13508 , 2022.[40] X. Wang, J. Liu, T. Qiu, C. Mu, C. Chen, and P. Zhou. A real-time collision prediction mecha-nism with deep learning for intelligent transportation system. IEEE transactions on vehiculartechnology , 69(9):9497–9508, 2020.[41] M. Treiber, A. Hennecke, and D. Helbing. Congested traffic states in empirical observationsand microscopic simulations. Physical review E , 62(2):1805, 2000.[42] S. Fujimoto, H. Hoof, and D. Meger. Addressing function approximation error in actor-criticmethods. In International conference on machine learning , pages 1587–1596. PMLR, 2018.[43] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predic-tion to no-regret online learning. In Proceedings of the fourteenth international conferenceon artificial intelligence and statistics , pages 627–635. JMLR Workshop and Conference Pro-ceedings, 2011.11A Proof of Proposition 1Proposition. Suppose that πforces the agent to approach the destination and the episode terminateswhen any traffic collision happens, then we haveminfAdv∈FJ(π, fAdv)⇔maxYOpXYEgo∼Y(π)P(YEgo,YOp|Coll =True, X ). (A.1)Proof. 
According to the definition of the return J(\pi) and the reward function R = d - \eta c, we have
\min_{f_{Adv} \in \mathcal{F}} J(\pi, f_{Adv}) \;\Leftrightarrow\; \min_{f_{Adv} \in \mathcal{F}} \sum (d - \eta c)   (A.2)
Since \pi forces the agent to approach the destination and the episode terminates when any traffic collision happens, J is minimized when a collision occurs; otherwise J = \sum d reaches its upper bound. Considering that constructing f_{Adv} amounts to maneuvering the surrounding vehicles when the map is given, this is equivalent to searching for the best constraint-satisfying Y^{Op} under the prior trajectory distribution. Thus, we have
\min_{f_{Adv} \in \mathcal{F}} J(\pi, f_{Adv}) \;\Leftrightarrow\; \max_{Y^{Op}} P(Y^{Op} \mid X) \quad \text{s.t. the Ego controlled by } \pi \text{ collides with the Op.}   (A.3)
We then rewrite Eq. (A.3) in the form of posterior probability maximization as
\min_{f_{Adv} \in \mathcal{F}} J(\pi, f_{Adv}) \;\Leftrightarrow\; \max_{Y^{Op}} P(Y^{Op} \mid \pi, Coll = True, X)   (A.4)
Supposing that Y^{Ego} generated by the current driving policy \pi can be sampled from \mathcal{Y}(\pi), Eq. (A.4) is equivalent to marginal maximization over the joint trajectory distribution, which follows as
\min_{f_{Adv} \in \mathcal{F}} J(\pi, f_{Adv}) \;\Leftrightarrow\; \max_{Y^{Op}} \sum_{Y^{Ego} \sim \mathcal{Y}(\pi)} P(Y^{Ego}, Y^{Op} \mid Coll = True, X).   (A.5)
The proof of Proposition 1 is completed.

B Proof of Proposition 2
Proposition. Suppose that Y^{Ego} depends on Y^{Op} unidirectionally; then we have
P(Y^{Ego}, Y^{Op} \mid Coll = True, X) \;\propto\; P(Y^{Op} \mid X)\, P(Y^{Ego} \mid Y^{Op}, X)\, P(Coll = True \mid Y^{Ego}, Y^{Op}).   (B.1)
Proof. According to Bayes' theorem, we have
P(Y^{Ego}, Y^{Op} \mid Coll = True, X) \;\propto\; P(Coll = True \mid Y^{Ego}, Y^{Op}, X)\, P(Y^{Ego}, Y^{Op}, X)   (B.2)
Since Coll depends only on Y^{Ego} and Y^{Op}, (B.2) is equivalent to
P(Y^{Ego}, Y^{Op} \mid Coll = True, X) \;\propto\; P(Coll = True \mid Y^{Ego}, Y^{Op})\, P(Y^{Ego}, Y^{Op}, X)   (B.3)
Since we assume that Y^{Ego} depends on Y^{Op} unidirectionally, continuing with Bayes' theorem we have
P(Y^{Ego}, Y^{Op} \mid Coll = True, X) \;\propto\; P(Coll = True \mid Y^{Ego}, Y^{Op})\, P(Y^{Ego} \mid Y^{Op}, X)\, P(Y^{Op}, X)
\;\propto\; P(Coll = True \mid Y^{Ego}, Y^{Op})\, P(Y^{Ego} \mid Y^{Op}, X)\, P(Y^{Op} \mid X)\, P(X)   (B.4)
Since the past state X is given, we can omit the last factor P(X) in (B.4). Therefore, it holds that
P(Y^{Ego}, Y^{Op} \mid Coll = True, X) \;\propto\; P(Y^{Op} \mid X)\, P(Y^{Ego} \mid Y^{Op}, X)\, P(Coll = True \mid Y^{Ego}, Y^{Op})   (B.5)
The proof of Proposition 2 is completed.

C RL Experimental Settings
We implement CAT in MetaDrive [20]. The MetaDrive simulator provides off-the-shelf RL environments for end-to-end driving, and we follow its basic setting.

In MetaDrive RL environments, the state includes maps, sensor readings (camera or LiDAR), high-level navigation commands, and the ego vehicle's own states. In our experiments, we use a 2D LiDAR as the sensor to detect surrounding vehicles, road boundaries, and road lines. The state vector consists of three parts:
• Ego State: the current vehicle states such as steering, heading, and velocity.
• Navigation: the navigation information that guides the vehicle toward the destination. Concretely, MetaDrive first computes the route from the spawn point to the destination of the ego vehicle. Then a set of checkpoints is scattered along the whole route at certain intervals. The relative distance and direction to the next checkpoint and the one after it are given as the navigation information.
• Surrounding: the surrounding information is encoded as a vector of LiDAR-like cloud points. We use 72 lasers to scan the neighboring area with a radius of 50 meters.

The action consists of low-level control commands such as steering, throttle, and brake. MetaDrive receives a normalized action as input to control each target vehicle: a = [a_1, a_2]^T \in [-1, 1]^2. At each environment time step, MetaDrive converts the normalized action into the steering u_s (degree), acceleration u_a (hp), and brake signal u_b (hp) as follows: (i) u_s = S_{max} a_1, (ii) u_a = F_{max} \max(0, a_2), (iii) u_b = -B_{max} \min(0, a_2), where S_{max} (degree) is the maximal steering angle, F_{max} (hp) is the maximal engine force, and B_{max} (hp) is the maximal brake force.
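The normalized-action mapping can be sketched as below; the maximal steering, engine, and brake constants are illustrative values, not MetaDrive's actual defaults.

```python
def convert_action(a1, a2, s_max=40.0, f_max=500.0, b_max=100.0):
    """Map a normalized action a = [a1, a2] in [-1, 1]^2 to low-level controls:
    u_s = S_max * a1, u_a = F_max * max(0, a2), u_b = -B_max * min(0, a2)."""
    u_s = s_max * a1                 # steering angle (degree)
    u_a = f_max * max(0.0, a2)       # engine force (hp) when a2 > 0
    u_b = -b_max * min(0.0, a2)      # brake force (hp) when a2 < 0
    return u_s, u_a, u_b
```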
MetaDrive uses a compositional reward function R = R_driving + R_crash_vehicle_penalty + R_out_of_road_penalty. Here, the driving reward R_driving = d_t - d_{t-1}, where d_t and d_{t-1} denote the longitudinal coordinates of the target vehicle in its current lane at two consecutive time steps, providing a dense reward that encourages the agent to move forward. By default, the penalty is -1 if the agent collides with surrounding vehicles, and -10 if the agent runs out of the road.

D Hyper-parameter Settings

Table 3: CAT
Hyper-parameter            Value
Scenario Horizon T         9 s
History Horizon t          1 s
# of OV candidates M       32
# of EV candidates N       5
Penalty Factor α           0.99
Policy Training Steps      10E6

Table 4: TD3
Hyper-parameter            Value
Discount Factor γ          0.99
Train Batch Size           256
Critic Learning Rate       3E-4
Actor Learning Rate        3E-4
Policy Delay               2
Target Network τ           0.005

Table 5: DenseTNT and M2I
Hyper-parameter            Value
Train Batch Size           256
Train Epochs               30
Sub Graph Depth            3
Global Graph Depth         1
NMS Threshold              7.2
Number of Modes            32

1 https://metadrive-simulator.readthedocs.io/en/latest/index.html

E Qualitative Results of Safety-critical Traffic Generation
Figure 6: More comparisons between the original scenarios in the raw dataset and the safety-critical scenarios generated by CAT. The red car is the ego vehicle and the blue car is the opponent vehicle.
Figure 7: Comparing the different scenario generation methods (Raw, M2I, STRIVE, CAT). The red car is the ego vehicle and the blue car is the opponent vehicle.

F Details of the Rule-based Adversarial Traffic Generation
Figure 8: An example of the rule-based adversarial traffic generation: the initial condition, the way-points used to fit a Bezier curve, and the resulting safety-critical traffic.
Considering that the HD maps in the Waymo dataset are highly unstructured, we design a rule-based system as follows (a sketch of the curve-fitting step in item 4 is given after this list):
1. We heuristically take the vehicle labeled as 'Object of Interest' as the adversary.
2. We take some waypoints on the navigation path of the ego vehicle, which will be occupied by the adversary later to minimize the ego vehicle's drivable area.
3. We mix the above waypoints with those on the original path of the adversarial vehicle.
4. We fit a Bezier curve based on all the waypoints to derive a smooth and feasible path for the rival vehicle.
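The Bezier fitting in step 4 can be done with a least-squares fit of control points against the mixed waypoints; below is a minimal numpy sketch assuming a fixed-degree curve and chord-length parameterization, details the paper does not specify.

```python
import numpy as np
from math import comb

def fit_bezier(waypoints, degree=3):
    """Least-squares fit of a Bezier curve of the given degree to 2D waypoints.
    Returns the (degree + 1, 2) control points of the fitted curve."""
    pts = np.asarray(waypoints, dtype=float)
    # Chord-length parameterization in [0, 1] (an assumption, not from the paper).
    dists = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(dists)])
    t /= t[-1]
    # Bernstein basis matrix B[k, i] = C(n, i) * t_k^i * (1 - t_k)^(n - i).
    n = degree
    basis = np.stack([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)], axis=1)
    control_points, *_ = np.linalg.lstsq(basis, pts, rcond=None)
    return control_points

def sample_bezier(control_points, num=50):
    """Sample points along the fitted curve to obtain a smooth adversary path."""
    n = len(control_points) - 1
    t = np.linspace(0.0, 1.0, num)
    basis = np.stack([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)], axis=1)
    return basis @ np.asarray(control_points)
```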
G Qualitative Results of Safety Improvement after CAT
Figure 9: Driving behaviour before and after CAT. The red car is the ego vehicle and the blue car is the opponent vehicle. In case 1, the opponent car makes an unprotected left turn at an intersection; the driving agent learns to stay away from potentially dangerous vehicles. In case 2, the leading car slows down; the driving agent learns to change lanes and overtake. In case 3, the opponent car cuts into the lane suddenly; the driving agent learns to yield and change lanes ahead of time. In case 4, two vehicles traveling in opposite directions meet, and the driving agent learns to pass by.

H Further Discussion
Limitations: The following limitations remain to be addressed in future work: (i) we only consider adversarial vehicles in this work, but the safety-critical behaviors of pedestrians and cyclists also matter for safe driving and are left for future work, as they require access to a different motion forecasting model; (ii) experiments on five hundred scenes cannot cover all accident-prone situations, so other failure modes may remain in the resulting agent; (iii) we only investigate RL-based driving policies, but the adversarial scenarios should also benefit human-in-the-loop imitation learning [17, 43].

Transferring to real-world driving: The proposed adversarial training method and the comparison with prior methods are evaluated in simulations of one hundred complex traffic scenarios imported from a real-world driving dataset [4]. The evaluation therefore contains realistic and complex vehicle interactions and shows promise for transfer to real-world settings.
8L6pHd9aS6w | XSkill: Cross Embodiment Skill Discovery
Mengda Xu1,2, Zhenjia Xu1, Cheng Chi1, Manuela Veloso2,3, Shuran Song1
1 Department of Computer Science, Columbia University
2 J.P. Morgan AI Research
3 School of Computer Science, Carnegie Mellon University (emeritus)

Abstract: Human demonstration videos are a widely available data source for robot learning and an intuitive user interface for expressing desired behavior. However, directly extracting reusable robot manipulation skills from unstructured human videos is challenging due to the big embodiment difference and unobserved action parameters. To bridge this embodiment gap, this paper introduces XSkill, an imitation learning framework that 1) discovers a cross-embodiment representation called skill prototypes purely from unlabeled human and robot manipulation videos, 2) transfers the skill representation to robot actions using a conditional diffusion policy, and finally, 3) composes the learned skills to accomplish unseen tasks specified by a human prompt video. Our experiments in simulation and real-world environments show that the discovered skill prototypes facilitate both skill transfer and composition for unseen tasks, resulting in a more general and scalable imitation learning framework. The benchmark, code, and qualitative results are on the project website.

Keywords: Manipulation, Representation Learning, Cross-Embodiments

Figure 1: Cross Embodiment Skill Discovery. (a) Training: skill discovery with cross-embodiment videos. (b) Inference on novel tasks. XSkill first learns a cross-embodiment skill representation space (XSkill Space, left). During inference, given a human demonstration of an unseen task, XSkill first identifies the human skills by projecting the video demonstration onto the learned cross-embodiment skill representation space. The identified skills are then executed by the skill-conditioned visuomotor policy p(a|s,z).

1 Introduction
A successful imitation learning algorithm from human demonstration is enabled by three critical capabilities: 1) Discover, decomposing the demonstrated task into a set of sub-tasks and identifying the common and reusable skills required to accomplish these sub-tasks. 2) Transfer, mapping each of the observed skills to its own embodiment, which is different from that of the demonstrator. 3) Compose, performing novel compositions of the learned skills to accomplish new tasks.

This paper addresses these critical capabilities by decomposing and identifying appropriate "skills" from human demonstration so that they are transferable to robots and composable to perform new tasks. We refer to the task as "Cross-Embodiment Skill Discovery" and introduce our method XSkill for this task. At its core, XSkill learns a shared embedding space for robot and human skills through self-supervised learning [1, 2, 3, 4, 5]. The algorithm extracts features from unaligned human and robot video sequences such that video clips sharing similar action effects (i.e., similar skills) result in closer feature distances. To encourage cross-embodiment alignment, we introduce a set of learnable skill prototypes through feature clustering. These prototypes act as representative anchors in the continuous embedding space.
By force to share the same set of prototypes, we couldeffectively align the skill representations between embodiments.With the identified cross-embodiment skill prototypes, the robot can then learn a skill-conditionedvisuomotor policy that transfers each identified skill to the robot’s action space. During inference,the algorithm can one-shot generalize to new tasks using the learned skills, where the new task isdefined by a single human demonstration (i.e., prompt video). With the proposed skill alignmenttransformer, the algorithm can robustly align skills in the human video to the robot visual observa-tion, despite the embodiment difference and unexpected execution failures.Our approach improves upon the direct imitation learning method [6] by decomposing the complexlong-horizon tasks into a set of reusable skills (i.e., low-level visuomotor policies), which is mucheasier to learn and generalizable to new tasks through composition. Meanwhile, our approach differsfrom existing work on single-embodiment skill discovery [7, 8, 9], which solely relies on on-robotdemonstration data. By learning cross-embodiment skill prototypes, our framework can use directhuman demonstration, which is more cost-effective and scalable, even for non-expert demonstrators.In summary, our contributions are as follows:• We formulate the task of cross-embodiment skill discovery , a useful and essential buildingblock for imitation learning. Together with the new cross-embodiment dataset in simulation andthe real world, we hope to inspire future exploration in this area.• Introducing the first attempt toward this task XSkill that consists of three novel components: 1)A self-supervised representation learning algorithm that discovers a set of share skill prototypesfrom the unlabeled robot and human videos. 2) A skill-conditioned diffusion policy that trans-lates the observed human demonstration into robot actions. 3) A skill composition framework(with skill alignment transformer) to robustly detect, align and compose the learned skills toaccomplish new tasks from a single human demonstration.Our experiments in simulation and real-world environments show that the discovered skill proto-types facilitate both skill transfer and composition for unseen tasks, resulting in a more general andscalable imitation learning framework. The dataset and code will be publicly available.2 Related WorkRobot Skill Discovery. A large body of works have been proposed for discovering robot skill viaoption frameworks [10, 7, 11, 12, 13, 14] or through the lens of mutual information [15, 8, 16, 17,18, 19, 20]. Most of these works require interacting with the environment and yield high samplecomplexity. To ease the sample complexity, the other line of works [9, 21, 22, 23, 24] have exploreddiscover skill directly from robot demonstration data. The majority of those prior works requirephysical state or robot action trajectories. BUDS [25] eases this requirement by discovering skillsthrough raw RGB data. Unlike these prior works which discover skills only for a single embodiment,XSkill explores skill discovery in a cross-embodiment setting.Imitation learning. Learning robot behavior from demonstration data is a longstanding chal-lenge [6]. Prior works on imitation learning have shown promising results on real-world robotmanipulation task through explicit policy [26, 27, 28, 29, 30, 31, 32, 33], implicit policy [34]or diffusion model [35, 36, 37]. 
Our work utilizes the hierarchical imitation learning framework [38, 39, 40, 41, 42] to transfer the discovered skills through a skill-conditioned diffusion policy [35]. XSkill can be categorized as one-shot imitation learning [43]. Most prior works imitate from a same-embodiment demonstration [44] or from a different embodiment [45, 46], but they require task labels during training. In contrast, XSkill does not require any task label or correspondence between embodiments during training to accomplish one-shot imitation from human demonstration.

Figure 2: XSkill Discover: At each training iteration, a batch of videos is sampled from a single-embodiment dataset. Each video clip is augmented into two versions and encoded using the temporal encoder f_temporal. The learnable skill prototypes f_prototype are implemented as a normalized linear layer without bias. Both f_temporal and f_prototype are trained jointly to minimize the cross-entropy loss between the predicted and target probabilities over skill prototypes. Sinkhorn regularization is applied to the target probability, ensuring all prototypes are used for each batch (same embodiment), thereby encouraging prototype sharing across embodiments.

Learning from human video. A number of works [47, 48, 49, 50, 51] have studied leveraging human videos to learn robotic policies. One approach is to construct reward functions from human video through domain translation [48, 52, 53, 54, 55, 56], video classifiers [50, 57], or state representation learning [58, 49, 59]. Despite showing promising results in learning from cross-embodiment demonstrations, most of these works require reinforcement learning (RL) to learn a policy based on the constructed reward functions, which is expensive to deploy in the real world. In contrast to the majority of these works, our method does not involve RL in the loop and focuses on one-shot imitation from human videos. Bahl et al. [60] proposed to initialize a policy through a human prior but still require interaction with the environment to improve the policy. Our work is related to Yu et al. [51, 61], who explored one-shot imitation through meta-learning. Unlike meta-learning approaches, our method does not require any task pairing information during training. More recently, MimicPlay [62] proposed leveraging human videos to learn a cross-embodiment plan latent space. While MimicPlay aims to minimize robot demo collection, our emphasis is on learning a cross-embodiment representation that allows us to reduce the reliance on robot demonstrations during inference.

3 Approach
The XSkill framework consists of three phases, Discover (§3.1), Transfer (§3.2), and Compose (§3.3), which use three different data sources. In the discover phase, the algorithm has access to a human demonstration dataset Dh and a robot teleoperation dataset Dr to discover a cross-embodiment skill representation space Z. This space, along with K common learnable skill prototypes, is learned through self-supervised learning.
Both datasets are unsegmented andunaligned and each videoin the dataset performs a subset of Nskills. In the transfer phase, the algorithm uses the robotteleoperation dataset Drto learn the skill-conditioned visuomotor policy P(a|s, z), where z∈ Zandsincludes both robot proprioception and visual observation o. In the Compose phase, thealgorithm takes as input a single human prompt video τhprompt for a new task that requires anunseen composition of skills to complete. From this video prompt, the algorithm first identifies theorder of skills used in the prompt and then composes the skills using the learned policy P(a|s, z).3.1 Discover: Learning Shared Skill PrototypesAs the first step, XSkill aims to discover skills in a self-supervised manner such that the learnedvisual representation of the same skills executed by different embodiments can be close in thecross-embodiment skill representation space Z. Off-the-shelf vision representations are often insuf-ficient since they are often sensitive to the visual appearance of the agent or environment. Instead, wewant the learned skill representation to focus on the underlying skills being performed. To achievethis goal, XSkill introduces two key ideas:• Learning a set of shared skill prototypes through soft-assignment clustering. These discreteprototypes act as representative anchors in the continuous embedding space. By forcing the useof shared prototypes, we can effectively align skill representations between embodiments.3• Regularizing the training process using Sinkhorn-Knopp clustering [63, 1] within single-embodiment batches. Together, they ensure all prototypes are used for each batch (all fromthe same embodiment). This design avoids the degeneration case where different embodimentmaps to different prototypes, thereby ensuring prototype sharing.Skill representations. XSkill extracts skill representations zusing human and robot videos fromDhandDr, mapping them into a shared representation space Z. To mitigate variations in exe-cution speed across different embodiments, we sample Mframes uniformly from each video Viand construct video clips {vij}Mj=0using a moving window of length L. Then, we extract the skillrepresentation zij=ftemporal (vij)from each video clip with a temporal skill encoder consistingof a vision backbone and a transformer encoder [64]. We append a learnable representation token[65, 66] into the sequence to better capture the motion across frames.Skills as Prototypes. Once the skill representations are obtained from demonstration videos,XSkill maps representations from all embodiments to a set of Kskill prototypes {ck}Kk=1, whereeach is a learnable vector. The skill prototypes are implemented as a normalized linear layer fprototypewithout bias. This mapping is accomplished through a self-supervised learning framework similar toSwA V [1]. The XSkill starts by augmenting the video clip vijusing a randomly selected transforma-tion before feeding into ftemporal (e.g., random crop). Subsequently, XSkill projects the normalizedrepresentationzij||zij||2onto the set of learnable skill prototypes {ck}Kk=1. The probability pijof skillsbeing executed in the given video clip vijis predicted by applying the Softmax function. The targetdistribution qijis obtained from the other augmented version of the same video clip. The targetprobability, instead of applying the Softmax function over projection, is obtained by running on-line clustering Sinkhorn-Knopp algorithm, which we describe later in the paper. 
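The prototype projection, softmax prediction, and Sinkhorn-Knopp target computation just described can be sketched as follows; this is a minimal PyTorch-style illustration in which the layer sizes, temperature, and iteration count are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

class SkillPrototypes(torch.nn.Module):
    """Prototype layer: a bias-free linear layer whose weight rows are the K prototypes."""
    def __init__(self, feature_dim=256, num_prototypes=32):
        super().__init__()
        self.prototypes = torch.nn.Linear(feature_dim, num_prototypes, bias=False)

    def forward(self, z):
        # Normalize the skill representation and the prototype vectors, then take
        # their dot products (cosine-similarity scores against each prototype).
        z = F.normalize(z, dim=-1)
        w = F.normalize(self.prototypes.weight, dim=-1)
        return z @ w.t()                            # (batch, K) projection scores

def predicted_probs(scores, temperature=0.1):
    """Predicted prototype distribution p for one augmented view."""
    return F.softmax(scores / temperature, dim=-1)

@torch.no_grad()
def sinkhorn_targets(scores, eps=0.05, n_iters=3):
    """Sinkhorn-Knopp targets q for the other view: alternately normalize prototypes
    and samples so that every prototype is used within the single-embodiment batch."""
    q = torch.exp(scores / eps).t()                 # (K, batch)
    q /= q.sum()
    num_protos, batch_size = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)             # normalize each prototype row
        q /= num_protos
        q /= q.sum(dim=0, keepdim=True)             # normalize each sample column
        q /= batch_size
    return (q * batch_size).t()                     # (batch, K), rows sum to 1

def prototype_loss(p, q):
    """Cross-entropy between Sinkhorn targets q and predictions p (L_prototype)."""
    return -(q * torch.log(p + 1e-8)).sum(dim=-1).mean()
```

The sketch shows one prediction/target direction for a pair of augmented views; swapping the roles of the two views gives the symmetric term.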
Both ftemporal andfprototype are trained jointly to minimize the CorssEntropy loss between the predicted pijand targetqijskill prototypes distributions: Lprototype =−BPi=1MPj=0KPk=1q(k)ijlogp(k)ij, where Bis batch size.Learning Aligned Skill Representation. To ensure that the skill representation focuses on underly-ing skills rather than embodiment and is aligned across embodiments, XSkill employs a combinationof data sampling and entropy regularization during the clustering process in training. In each train-ing iteration, XSkill samples video clips from the same embodiment and constructs a batch. Thisbatch is then fed into the framework shown in Fig. 2, where clustering is performed on the fea-tures from the same embodiment, disregarding the embodiment differences. A significant challengearises as the skill embedding space might be segmented by embodiment, with skill representationsfor each embodiment occupying distinct regions in the embedding space. To address this issue, ourgoal is to enable the skill representation for each embodiment to fully utilize the entire embeddingspace. By allowing different embodiments to share each region in the space, the clustering algorithmis compelled to group representations based on the effect of the skill, resulting in an aligned skillrepresentation space. We approach this as an entropy-regularized clustering problem, which can beefficiently solved using the Sinkhorn clustering algorithm. Further details and pseudocode for theSinkhorn-Knopp can be found in the appendix.Time Contrastive Learning. XSkill utilizes a time contrastive loss [67, 68] in order to encap-sulate the temporal effects of skills within video demonstrations. It posits that skill prototypeprobabilities should be similar for video clips closer in time and dissimilar for those farther apart.This is achieved by establishing a positive window wpand a negative window wn. For a givenclipvixat time xin video Vi, a positive sample viyis chosen within wpfrom time x, and anegative sample vizis selected outside wn. XSkill minimizes the following InfoNCE loss [69]:Ltcn=−BPi=1logexp(S(pix,piy)/τtcn)exp (S(pix,piy)/τtcn)+exp ( S(pix,piz)/τtcn), wherein Sis the measure of similarity,which is implemented as dot product in XSkill and τtcnis the temperature. Here, pix,piy,pizrepresent the skill prototype probabilities for clips vix,viy, andviz, respectively.3.2 Transfer: Skill conditioned imitation learningTo transfer skill representations into concrete robot actions, we train a skill-conditioned visuomotorpolicy using imitation learning. In theory, any imitation learning policy can be used with the XSkill4Sequence of Skills from human video Skill Alignment Transformer Observation Encoder Human Prompt Video for an Unseen Task Observation Feature oRobot Observation Temporal Encoder (Pretrained)...... ...Skill-Conditioned Diffusion Policy Robot Action aCurrent Skill z (b) Skill Alignment (a) Input (c)Imitation Policy p (a |s , z ) ttt tt tCurrent State sw/ Proprioception tFigure 3: Transfer & Composition: During inference, a human demonstration of a new task is given, XSkillfirst extracts a sequence of skills, which can be viewed as a high-level task plan. However, this plan is notimmediately aligned with robot execution speed due to the embodiment gap. Therefore we need to align theplan based on the robot’s current observation, which is achieved by the Skill Alignment Transformer. Theinferred skills are then passed into a skill-conditioned diffusion policy to get the robot’s actions.framework. 
In practice, we prefer to use a diffusion policy as it achieves state-of-the-art results on many existing benchmarks. More specifically, our approach builds upon Diffusion Policy [35], which uses Denoising Diffusion Probabilistic Models (DDPMs) [70] to represent the multimodal action distributions found in human teleoperation demonstrations. Diffusion policies have been shown to be stable to train and to work well with a small amount of data, both of which are essential for our tasks.

Our diffusion-based imitation learning policy p(a_t|s_t, z_t) is trained with the robot teleoperation dataset D_r, where a_t denotes an action sequence {a_t, ..., a_{t+L}} of length L starting from state s_t. The diffusion policy takes the skill representation z_t and the state s_t, which includes robot proprioception and the visual observation o_t, as input and produces an action sequence a_t. The skill representation z_t is computed with the trained f_temporal using v_t = {o_t, o_{t+1}, ..., o_{t+L}}, as described in §3.1.

3.3 Compose: Performing unseen tasks from a one-shot human prompt video

Once skills have been discovered and transferred into robot manipulation through imitation learning, our objective is to compose the skills to solve unseen tasks based on a human prompt video, which contains a demonstration by a human of how to complete an unseen task. To do so, XSkill first maps a human prompt video of length T_prompt into the cross-embodiment skill representation space Z using the learned f_temporal. This generates a sequence of skill representations for the demonstrated task, denoted as z̃ = {z_t}_{t=0}^{T_prompt}, which is essentially a task execution plan. The robot can complete the task by sequentially executing the skills in the plan, querying the skill-conditioned diffusion policy. However, directly following the skill sequence z̃ often results in a fragile system that is sensitive to unexpected failures or speed mismatches. For example, if the robot fails to turn on a light, it needs to retry the skill to succeed; if it simply executes z̃ sequentially, it will proceed to the next skill without correcting the error. To improve the system's robustness, we therefore introduce the Skill Alignment Transformer (SAT) described below.

Skill Alignment Transformer (SAT). The Skill Alignment Transformer, denoted as φ(z_t|o_t, z̃), aligns the robot with the intended skill execution based on its current state. The key idea is that by analyzing the current state and comparing it to the skill sequence z̃, the robot can determine which part of the skills has already been executed and infer the most likely skill to be executed next. The cluster structure created by the discrete prototypes facilitates skill identification by SAT at inference time. This alignment process allows the robot to synchronize its task execution with the demonstrated task in the human prompt, thereby minimizing discrepancies arising from variations in execution speed between robots and humans and offering robustness against execution failures. As illustrated in Fig. 3, SAT regards each skill in the skill sequence z̃ as a skill token. It employs a state encoder, f_state-encoder, to convert the robot's current visual observation into a state token. The skill and state tokens, with position encoding, are then passed into a transformer encoder to predict the skill that needs to be executed next. Within the transformer, the state token can attend to each skill token to determine whether that skill has already been executed given the current state information (a minimal sketch of this token construction is given below).
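The following sketch simplifies the architecture described in the appendix: positional encodings are omitted, a flatten-plus-linear stand-in replaces the ResNet18 state encoder, the prediction is read from the state token rather than an extra representation token, and all names and sizes are our own choices (assuming PyTorch):

import torch
import torch.nn as nn

class SkillAlignmentTransformer(nn.Module):
    """Sketch of SAT: attend a state token over the prompt's skill tokens and
    regress the skill that should be executed next."""

    def __init__(self, skill_dim: int = 256, n_layers: int = 4, n_heads: int = 4):
        super().__init__()
        self.state_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(skill_dim))  # stand-in for a ResNet
        layer = nn.TransformerEncoderLayer(d_model=skill_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(skill_dim, skill_dim)  # predicts the next skill embedding z_hat

    def forward(self, obs: torch.Tensor, skill_seq: torch.Tensor) -> torch.Tensor:
        # obs: (B, C, H, W) current observation; skill_seq: (B, N, skill_dim) uniformly sampled plan z~
        state_tok = self.state_encoder(obs).unsqueeze(1)   # (B, 1, skill_dim)
        tokens = torch.cat([state_tok, skill_seq], dim=1)  # prepend the state token
        out = self.encoder(tokens)                         # state token attends to every skill token
        return self.head(out[:, 0])                        # (B, skill_dim) = z_hat

# Training target (sketch): z_hat = sat(o_t, z_tilde); loss = mean-squared error against the true z_t.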
For example, if the light is already on in the current state, the skill for turning on the light should not be considered the next skill to be executed.

To train SAT φ(z_t|o_t, z̃), we sample a full trajectory from the robot teleoperation dataset and extract its skill sequence z̃. This is achieved by passing robot trajectory video clips {v_t}_{t=0}^T, where v_t = {o_t, o_{t+1}, ..., o_{t+L}}, through the temporal skill encoder, yielding {z_t}_{t=0}^T; the length of the sampled trajectory is denoted by T. Next, a time index t is chosen randomly within the range [0, T]. Our system predicts ẑ_t using the skill sequence z̃ and the visual observation o_t as inputs to SAT. We optimize the model by minimizing the mean squared error between ẑ_t and the actual z_t.

4 Evaluation

Environment. We test XSkill in both simulated and real-world environments:
• Franka Kitchen: a simulated kitchen environment [71] that includes 7 sub-tasks and is accompanied by 580 robot demonstration trajectories. To create cross-embodiment demonstrations, we construct a sphere agent that is visually very different from the original robot. To further increase the domain gap, we sub-sample the sphere-agent demonstrations to emulate execution speed differences. During inference, the robot must complete an unseen composition of sub-tasks after viewing a prompt video from a sphere-agent demonstration.
• Real-world Kitchen: a new benchmark we introduce to evaluate algorithm performance on physical robot hardware. The dataset contains four sub-tasks, namely opening the oven, grasping a cloth, closing a drawer, and turning on a light. We recorded 175 human demonstrations and 175 teleoperation demonstrations. Each demonstration completes three sub-tasks in a randomly determined order. During inference, the robot is required to complete an unseen composition of either three or four sub-tasks after observing a prompt video taken from a human demonstration.

Baselines. We compare XSkill with the following baselines:
• GCD Policy: Instead of a skill-conditioned policy, we compare to a goal-conditioned diffusion policy π(a_t|s_t, g_t), where the goal image g_t is the image in the prompt video H steps after the current time t, after alignment. The alignment is done using nearest-neighbor matching between the robot observation and the prompt video in an embedding space that is trained jointly with the policy.
• GCD Policy w. TCN: Same as the GCD Policy above, but replacing the video encoder with a pre-trained Time-Contrastive Network (TCN) [67].
• XSkill w. NN-composition: XSkill with the Skill Alignment Transformer removed; instead, the alignment is found using the nearest-neighbor image between the robot observation and the prompt video, where the image embedding is extracted using the same encoder f_temporal as XSkill.
• XSkill w.o proto. loss: XSkill with the prototype loss L_prototype removed in the Discover phase.

Implementation Details. We set the number of prototypes K to 128 in the simulated environment and 32 in the real-world environment. The video clip length L and the number of uniformly sampled frames M are set to 8 and 100 for both the simulated and real-world kitchens. The ablation study on K, the time-contrastive loss, and more implementation details can be found in the supplementary material.

Figure 4: Evaluation environments: the real kitchen environment and the simulated Franka Kitchen.

Evaluation protocol. During inference, the robot is required to accomplish the sub-tasks in the same order as demonstrated in the prompt video.
The performance of XSkill and all baseline methods is evaluated based on both sub-task completion and order of completion. If the robot executes an undemonstrated sub-task, the episode ends. The evaluation metric is the ratio of the number of sub-tasks completed to the total number of sub-tasks. In simulation, each method is trained using three distinct seeds and tested under 32 unique initial environment conditions during inference. In the real-world kitchen, each task is assessed 10 times under varying initial environment conditions during inference.

Figure 5: XSkill embedding. Panels: (a) t-SNE visualization of the skill space; (b) skill prototype distribution over time, for a human and a robot executing the same task (Turn on Light + Grasp Cloth + Open Oven). (a) We utilize t-SNE visualization to showcase the alignment of skill representations among various embodiments when in contact with the same object. (b) We present projected prototypes for both humans and robots executing identical tasks. XSkill achieves efficient alignment of representations, not just during physical contact, but also during the transitions between manipulating different objects.

4.1 Key Findings

XSkill can learn a cross-embodiment skill representation. [XSkill] learns a cross-embodiment skill representation by effectively extracting reusable skills from demonstrations, covering both in-contact manipulation of various objects and the transitions between them. In Fig. 5(a), we visualize these learned skill representations using t-SNE: the skills executed by the human and the robot to manipulate the same object are grouped together and clearly separated from the others. Additionally, we visualize the projected prototypes for human and robot completions of the same task in Fig. 5(b). Despite differences in execution speed (humans execute ×2 faster), [XSkill] can decompose the task into meaningful and aligned skill prototypes for both in-contact manipulations and the transitions between them. Consequently, the performance of [XSkill] with cross-embodiment prompts drops only around 5% compared to using a same-embodiment prompt (Tab. 2, seen tasks).

Table 1: Simulation Result (%)
                          Same    Cross Embodiment          Avg
Execution speed           ×1      ×1      ×1.3    ×1.5
GCD Policy                91.4    0.00    0.00    0.00      22.8
GCD Policy w. TCN         2.50    3.55    2.00    1.25      2.32
XSkill w. NN-compose      93.7    61.2    23.4    15.2      48.4
XSkill w.o proto. loss    80.1    56.3    12.5    3.75      38.2
XSkill                    95.8    89.4    83.7    70.2      84.8

XSkill can generalize the imitation policy to unseen tasks. [XSkill] achieves 70.2% and 60% success (Tab. 1 & 2) on unseen tasks with cross-embodiment prompts in the simulated and real-world environments, respectively, which outperforms all baselines. As shown in Fig. 6(a), [XSkill] is capable of decomposing unseen tasks into sequences of previously seen skill abstractions that can be executed by the learned skill-conditioned imitation policy. As a result, [XSkill] enables one-shot imitation learning through skill decomposition and re-composition. The performance of [XSkill] drops slightly in the real world due to novel transition dynamics present in the prompts and the scarcity of collected robot data. For instance, the robot struggles to complete tasks involving grasping the cloth followed by closing the drawer, since no such transition dynamics are present in the collected robot teleoperation dataset.
In summary, [XSkill] demonstrates promising task generalization, but its efficacy is still limited by the diversity of the robot teleoperation data.

Table 2: Real-world Result (%)
                      3 Subtasks                          4 Subtasks    Avg
                      Same                Cross           Cross
                      Seen      Unseen    Seen    Unseen  Unseen
GCD policy            68.3      53.3      0.00    0.00    0.00          24.3
GCD w. TCN            25.0      22.2      26.7    23.3    15.6          22.6
XSkill                86.7      80.0      81.7    76.7    60.0          77.0

Skill prototypes are essential for cross-embodiment learning. To assess the importance of shared skill prototypes, we compare our approach with [XSkill w.o proto. loss] in simulation. [XSkill] outperforms this baseline significantly with cross-embodiment prompts. Further, we observe that the performance of [XSkill w.o proto. loss] deteriorates rapidly when the cross-embodiment agent operates at a higher speed. These results suggest that skill prototypes not only facilitate the learning of morphology-invariant representations but also avoid learning speed-sensitive ones.

Figure 6: Execution of a novel task and robustness to perturbation. Panels: (a) Human Skill Identification, (b) Skill Alignment, (c) Robot Skill Prediction Timeline, (d) One-shot Imitation of a Novel Task, (e) Perturbation, (f) Replan after Perturbation. (a) XSkill analyzes a human video of a novel task, identifying the skill for each timestep (represented by distinct colors). (c) The robot leverages this analysis to predict the appropriate skill based on the current observation and subsequently executes the corresponding skill. Skill alignment (b) is critical to handle execution speed differences caused by the cross-embodiment gap. (d) With appropriate skill conditioning at each step, the robot achieves one-shot imitation of the novel task. (e) Deliberately introducing a perturbation, a human manually turns off the light. (f) The robot accurately predicts the necessary skills and adaptively replans its execution to successfully reach the goal state once again. Please check out the project website for videos.

SAT can align skills based on task progress. [XSkill] outperforms [XSkill w. NN-composition] by more than 50% with cross-embodiment prompts, and the performance of NN-composition declines significantly when the cross-embodiment agent executes at a much faster speed (Tab. 1). (We down-sample the cross-embodiment demonstrations and prompt videos at different ratios; for instance, a ratio of ×1.5 emulates a human executing ×1.5 faster than the robot in the real world.) This can be attributed to two main factors. First, relying solely on the visual representation is prone to distraction due to morphological differences. Second, when the demonstration speed in the prompt video is much faster than the robot's execution speed, certain states might not be captured in the prompt video; as a result, the retrieved skills can become inaccurate and unstable. In contrast, as illustrated in Fig. 6, SAT enhances the robustness of [XSkill] to the demonstration speed and enables adaptive adjustment of the skill condition based on the robot's current state and the task progress.

Sequential input benefits cross-embodiment learning. Unlike [GCD Policy w. TCN], which relies on TCN encoding of single images, [XSkill] utilizes sequential image input to generate representations. In the cross-embodiment scenario, [GCD Policy w. TCN] outperforms the vanilla [GCD Policy], indicating that the TCN embedding partially bridges the embodiment gap. However, [GCD Policy w.
TCN] cannot follow the sub-tasks completion order in the prompt video, resulting in a lowerevaluation score. This suggests that TCN embedding does not completely align the representations.4.2 Limitation and Future WorkOne limitation of XSkill is the requirement of specifying the number of prototypes as an input to thealgorithm. While our ablation study demonstrates that XSkill is not highly sensitive to this number,fine-tuning hyperparameters may be necessary for optimal performance based on the dataset and itsuse. In addition, XSkill doesn’t demand labeled correspondence between human and robot datasets.However, our current benchmark only comprises videos from the same laboratory camera setup.Future research could investigate the applicability of our approach in more diverse camera setups andenvironments, leveraging readily available YouTube videos and multi-environment datasets [72].5 ConclusionWe introduce XSkill for the task of cross-embodiment skill discovery. This framework extractscommon manipulation skills from unstructured human, robot videos in a way that is transferrableto robots and composable to perform new tasks. Extensive experiments in simulation and real-world demonstrate that XSkill improves upon the direct behavior cloning method especially complexlong-horizon tasks. Moreover, by leveraging cross-embodiment skill prototypes, XSkill can directlyleverage non-expert human demonstration for new tasks definition, making the framework muchmore cost-effective and scalable.8AcknowledgmentsWe would like to thank Zeyi Liu, Huy Ha, Mandi Zhao, Samir Yitzhak Gadre and Dominik Bauerfor their helpful feedback and fruitful discussions.Mengda Xu’s work is supported by JPMorgan Chase & Co. This paper was prepared for informa-tion purposes in part by the Artificial Intelligence Research group of JPMorgan Chase & Co and itsaffiliates (“JP Morgan”), and is not a product of the Research Department of JP Morgan. JP Morganmakes no representation and warranty whatsoever and disclaims all liability, for the completeness,accuracy or reliability of the information contained herein. This document is not intended as invest-ment research or investment advice, or a recommendation, offer or solicitation for the purchase orsale of any security, financial instrument, financial product or service, or to be used in any way forevaluating the merits of participating in any transaction, and shall not constitute a solicitation underany jurisdiction or to any person, if such solicitation under such jurisdiction or to such person wouldbe unlawful. This work was supported in part by NSF Award #2143601, #2037101, and #2132519.We would like to thank Google for the UR5 robot hardware. The views and conclusions containedherein are those of the authors and should not be interpreted as necessarily representing the officialpolicies, either expressed or implied, of the sponsors.References[1] M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, and A. Joulin. Unsupervised learning ofvisual features by contrasting cluster assignments. Advances in neural information processingsystems , 33:9912–9924, 2020.[2] Y . M. Asano, C. Rupprecht, and A. Vedaldi. Self-labelling via simultaneous clustering andrepresentation learning. arXiv preprint arXiv:1911.05371 , 2019.[3] M. A. Bautista, A. Sanakoyeu, E. Tikhoncheva, and B. Ommer. Cliquecnn: Deep unsupervisedexemplar learning. Advances in Neural Information Processing Systems , 29, 2016.[4] M. Caron, P. Bojanowski, A. Joulin, and M. Douze. 
Deep clustering for unsupervised learningof visual features. In Proceedings of the European conference on computer vision (ECCV) ,pages 132–149, 2018.[5] D. Yarats, R. Fergus, A. Lazaric, and L. Pinto. Reinforcement learning with prototypicalrepresentations. 2021.[6] B. Argall, S. Chernova, M. M. Veloso, and B. Browning. A survey of robot learning fromdemonstration. Robotics Auton. Syst. , 57:469–483, 2009.[7] A. Bagaria and G. Konidaris. Option discovery using deep skill chaining. In International Con-ference on Learning Representations , 2020. URL https://openreview.net/forum?id=B1gqipNYwH .[8] B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is all you need: Learning skillswithout a reward function. ArXiv , abs/1802.06070, 2018.[9] A. Sharma, S. S. Gu, S. Levine, V . Kumar, and K. Hausman. Dynamics-aware unsuperviseddiscovery of skills. ArXiv , abs/1907.01657, 2019.[10] R. S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework fortemporal abstraction in reinforcement learning. Artif. Intell. , 112:181–211, 1999.[11] R. Fox, S. Krishnan, I. Stoica, and K. Goldberg. Multi-level discovery of deep options. ArXiv ,abs/1703.08294, 2017.[12] G. D. Konidaris and A. G. Barto. Skill discovery in continuous reinforcement learning domainsusing skill chaining. In NIPS , 2009.9[13] V . C. V . Kumar, S. Ha, and C. K. Liu. Expanding motor skills using relay networks. InConference on Robot Learning , 2018.[14] J. Achiam, H. Edwards, D. Amodei, and P. Abbeel. Variational option discovery algorithms,2018.[15] K. Gregor, D. J. Rezende, and D. Wierstra. Variational intrinsic control. ArXiv ,abs/1611.07507, 2016.[16] K. Hausman, J. T. Springenberg, Z. Wang, N. M. O. Heess, and M. A. Riedmiller. Learningan embedding space for transferable robot skills. In International Conference on LearningRepresentations , 2018.[17] T. Shankar and A. Gupta. Learning robot skills with temporal variational inference. In Pro-ceedings of (ICML) International Conference on Machine Learning , pages 8624 – 8633, July2020.[18] L. Lee, B. Eysenbach, E. Parisotto, E. Xing, S. Levine, and R. Salakhutdinov. Efficient ex-ploration via state marginal matching, 2020. URL https://openreview.net/forum?id=Hkla1eHFvS .[19] H. Liu and P. Abbeel. Aps: Active pretraining with successor features. In International Con-ference on Machine Learning , 2021.[20] M. Laskin, H. Liu, X. B. Peng, D. Yarats, A. Rajeswaran, and P. Abbeel. Un-supervised reinforcement learning with contrastive intrinsic control. 35:34478–34491,2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/debf482a7dbdc401f9052dbe15702837-Paper-Conference.pdf .[21] D. Tanneberg, K. Ploeger, E. Rueckert, and J. Peters. SKID RAW: skill discovery from rawtrajectories. IEEE Robotics and Automation Letters , 6(3):4696–4703, 2021. doi:10.1109/LRA.2021.3068891.[22] A. Singh, H. Liu, G. Zhou, A. Yu, N. Rhinehart, and S. Levine. Parrot: Data-driven behavioralpriors for reinforcement learning. In International Conference on Learning Representations ,2021. URL https://openreview.net/forum?id=Ysuv-WOFeKR .[23] K. Pertsch, Y . Lee, and J. J. Lim. Accelerating reinforcement learning with learned skill priors.InConference on Robot Learning (CoRL) , 2020.[24] M. Xu, M. Veloso, and S. Song. ASPire: Adaptive skill priors for reinforcement learning. InAdvances in Neural Information Processing Systems , 2022. URL https://openreview.net/forum?id=sr0289wAUa .[25] Y . Zhu, P. Stone, and Y . Zhu. 
Bottom-up skill discovery from unsegmented demonstrations forlong-horizon robot manipulation. IEEE Robotics and Automation Letters , 7:4126–4133, 2021.[26] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart’in-Mart’in. What matters in learning from offline human demonstrationsfor robot manipulation. In Conference on Robot Learning , 2021.[27] P. R. Florence, L. Manuelli, and R. Tedrake. Self-supervised correspondence in visuomotorpolicy learning. IEEE Robotics and Automation Letters , 5:492–499, 2019.[28] R. Rahmatizadeh, P. Abolghasemi, L. B ̈ol ̈oni, and S. Levine. Vision-based multi-task ma-nipulation for inexpensive robots using end-to-end learning from demonstration. 2018 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 3758–3765, 2017.10[29] A. Zeng, P. R. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, and J. Lee. Transporter networks: Rearranging the visual world forrobotic manipulation. In Conference on Robot Learning , 2020.[30] T. Zhang, Z. McCarthy, O. Jow, D. Lee, K. Goldberg, and P. Abbeel. Deep imitation learningfor complex manipulation tasks from virtual reality teleoperation. 2018 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 1–8, 2017.[31] N. M. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodes with one stone. In Thirty-Sixth Conference on Neural Information Processing Systems ,2022. URL https://openreview.net/forum?id=agTr-vRQsa .[32] Y . Zhu, A. Joshi, P. Stone, and Y . Zhu. VIOLA: Object-centric imitation learning for vision-based robot manipulation. In 6th Annual Conference on Robot Learning , 2022. URL https://openreview.net/forum?id=L8hCfhPbFho .[33] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. Joshi, R. Ju-lian, D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manjunath,I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao,M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran,V . Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich. Rt-1: Robotics transformer for real-world control at scale. In arXiv preprint arXiv:2204.01691 ,2022.[34] P. Florence, C. Lynch, A. Zeng, O. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mor-datch, and J. Tompson. Implicit behavioral cloning. Conference on Robot Learning (CoRL) ,2021.[35] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy:Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137 , 2023.[36] T. Pearce, T. Rashid, A. Kanervisto, D. Bignell, M. Sun, R. Georgescu, S. V . Macua, S. Z.Tan, I. Momennejad, K. Hofmann, and S. Devlin. Imitating human behaviour with diffusionmodels. In The Eleventh International Conference on Learning Representations , 2023. URLhttps://openreview.net/forum?id=Pv1GPQzRrC8 .[37] M. Reuss, M. X. Li, X. Jia, and R. Lioutikov. Goal-conditioned imitation learning using score-based diffusion policies. ArXiv , abs/2304.02532, 2023.[38] C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and P. Sermanet. Learninglatent plans from play. In Conference on Robot Learning , 2019.[39] H. M. Le, N. Jiang, A. Agarwal, M. Dud ́ık, Y . Yue, and H. D. I. au2. 
Hierarchical imitationand reinforcement learning, 2018.[40] A. Mandlekar, F. Ramos, B. Boots, L. Fei-Fei, A. Garg, and D. Fox. Iris: Implicit reinforce-ment without interaction at scale for learning control from offline robot manipulation data.2020 IEEE International Conference on Robotics and Automation (ICRA) , pages 4414–4420,2019.[41] A. Mandlekar, D. Xu, R. Mart ́ın-Mart ́ın, S. Savarese, and L. Fei-Fei. Learning to generalizeacross long-horizon tasks from human demonstrations. ArXiv , abs/2003.06085, 2020.[42] K. Shiarlis, M. Wulfmeier, S. Salter, S. Whiteson, and I. Posner. Taco: Learning task decompo-sition via temporal alignment for control. In International Conference on Machine Learning ,2018.11[43] Y . Duan, M. Andrychowicz, B. C. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, andW. Zaremba. One-shot imitation learning. In NIPS , 2017.[44] Z. Mandi, F. Liu, K. Lee, and P. Abbeel. Towards more generalizable one-shot visual imitationlearning. 2022 International Conference on Robotics and Automation (ICRA) , pages 2434–2444, 2021.[45] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine. One-shot visual imitation learning viameta-learning. ArXiv , abs/1709.04905, 2017.[46] S. Dasari and A. K. Gupta. Transformers for one-shot visual imitation. ArXiv , abs/2011.05970,2020.[47] T. Xiao, I. Radosavovic, T. Darrell, and J. Malik. Masked visual pre-training for motor control.ArXiv , abs/2203.06173, 2022.[48] Y . Liu, A. Gupta, P. Abbeel, and S. Levine. Imitation from observation: Learning to imitatebehaviors from raw video via context translation, 2018.[49] K. Zakka, A. Zeng, P. R. Florence, J. Tompson, J. Bohg, and D. Dwibedi. Xirl: Cross-embodiment inverse reinforcement learning. In Conference on Robot Learning , 2021.[50] L. Shao, T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg. Concept2robot: Learning manip-ulation concepts from instructions and human demonstrations. The International Journal ofRobotics Research , 40(12-14):1419–1434, 2021.[51] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine. One-shot imitation fromobserving humans via domain-adaptive meta-learning, 2018.[52] L. Smith, N. Dhawan, M. Zhang, P. Abbeel, and S. Levine. Avid: Learning multi-stage tasksvia pixel-level translation of human videos, 2020.[53] P. Sharma, D. Pathak, and A. Gupta. Third-person visual imitation learning via decoupledhierarchical controller, 2019.[54] H. Xiong, Q. Li, Y .-C. Chen, H. Bharadhwaj, S. Sinha, and A. Garg. Learning by watching:Physical imitation of manipulation skills from human videos. 2021 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 7827–7834, 2021.[55] K. Schmeckpeper, O. Rybkin, K. Daniilidis, S. Levine, and C. Finn. Reinforcement learningwith videos: Combining offline observations with interaction. CoRR , abs/2011.06507, 2020.URL https://arxiv.org/abs/2011.06507 .[56] P. Sermanet, K. Xu, and S. Levine. Unsupervised perceptual rewards for imitation learning.CoRR , abs/1612.06699, 2016. URL http://arxiv.org/abs/1612.06699 .[57] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from ”in-the-wild” human videos. ArXiv , abs/2103.16817, 2021.[58] M. Sieb, X. Zhou, A. Huang, O. Kroemer, and K. Fragkiadaki. Graph-structured visual imita-tion. In Conference on Robot Learning , 2019.[59] S. Kumar, J. Zamora, N. Hansen, R. Jangir, and X. Wang. Graph inverse reinforcement learningfrom diverse videos. In Conference on Robot Learning , 2022.[60] S. Bahl, A. Gupta, and D. Pathak. 
Human-to-robot imitation in the wild. ArXiv ,abs/2207.09450, 2022.[61] T. Yu, P. Abbeel, S. Levine, and C. Finn. One-shot hierarchical imitation learning of compoundvisuomotor tasks, 2018.12[62] C. Wang, L. J. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y . Zhu, and A. Anandkumar. Mim-icplay: Long-horizon imitation learning by watching human play. ArXiv , abs/2302.12422,2023.[63] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances inneural information processing systems , 26, 2013.[64] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polo-sukhin. Attention is all you need. Advances in neural information processing systems , 30,2017.[65] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectionaltransformers for language understanding. arXiv preprint arXiv:1810.04805 , 2018.[66] I. Kostrikov, D. Yarats, and R. Fergus. Image augmentation is all you need: Regularizing deepreinforcement learning from pixels. arXiv preprint arXiv:2004.13649 , 2020.[67] P. Sermanet, C. Lynch, Y . Chebotar, J. Hsu, E. Jang, S. Schaal, and S. Levine. Time-contrastivenetworks: Self-supervised learning from video, 2018.[68] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation, 2022.[69] A. v. d. Oord, Y . Li, and O. Vinyals. Representation learning with contrastive predictive coding.arXiv preprint arXiv:1807.03748 , 2018.[70] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. ArXiv ,abs/2006.11239, 2020.[71] A. Gupta, V . Kumar, C. Lynch, S. Levine, and K. Hausman. Relay policy learning: Solvinglong-horizon tasks via imitation and reinforcement learning, 2019.[72] F. Ebert, Y . Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, andS. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets,2021.AppendixA.1 Generalization to unseen transitionTo evaluate the generalization capacity of XSkill, we conducted experiments within the simulationenvironment to assess its performance with unseen transitions. We designed two levels of taskdifficulty by excluding transitions used in the inference task from the training dataset. In Level 1,25% of the transitions were removed, and in Level 2, 50% were removed. Notably, the main paper’sexperiment in the simulation was already conducted in the scenario involving the removal of 25%of transitions. The complete experiment results are summarized in Table A1.Table A1: Generalization Study Result (%)Level 1 Level 2Same Cross Embodiment Overall Same Cross Embodiment OverallExecution speed ×1×1 ×1.5 / ×1×1 ×1.5 /XSkill K=128 95.8 89.4 70.2 85.1 62.5 55.0 47.5 55.0XSkill K=256 98.3 86.6 76.7 87.2 91.3 81.3 61.3 77.9XSkill K=512 97.5 90.7 71.8 86.7 87.5 84.4 67.5 79.8The results of these experiments demonstrate XSkill’s capacity to generalize to new tasks involv-ing previously unseen skill compositions and transitions. XSkill achieved success rates of 90.7%13and84.4%in Level 1 and Level 2 tasks, respectively, when using the same-speed cross-embodimentprompt, and 71.8%and67.5%for the cross-embodiment prompt at ×1.5speed. Further, we observethat increasing the number of prototypes Kenhanced the generalization potential. 
While variousKchoices demonstrated comparable performance in Level 1 tasks, larger values of K(256,512)significantly outperformed smaller ones in the more challenging Level 2 tasks, which lacked 50%of the inference task transitions from the training data. We hypothesize that augmenting Ken-hances the granularity of the representation space, thereby facilitating better generalization throughinterpolation.A.2 Ablation StudyA.2.1 Number of skill prototypesWe performed an ablation study to assess the impact of the number of skill prototypes ( K) in ourXSkill framework within the simulated Franka Kitchen environment. We tested Kvalues of 32,128, 256, and 512, with the results for K= 128 reported in our main paper. The outcome of thisablation study can be found in Tab. A2.We observed that increasing the number of skill prototypes ( K) to 256 or 512 did not degrade theperformance of XSkill. However, reducing Kdid impact performance adversely. We hypothesizethat a smaller Kvalue (i.e., 32) may limit the representation capacity of the skill space, as all skillrepresentations zare enforced to map around one of the skill prototypes. This could potentially forcedistinct skills to map around the same prototype, resulting in diminished manipulation performance.On the contrary, a larger Kvalue doesn’t hinder performance; in fact, increasing Kmight augmentsthe granularity of the representation space, allowing unique skills to have distinct representationswithin this space.These results suggest that the performance of our framework is not significantly affected by thechoice of K. We believe this is due to the fact that the projected skill prototypes are not directlyinputted into the imitation learning policy P(at|st, zt)and SAT φ(zt| ̃z, ot). Instead, we utilize thecontinuous skill representation zprior to projection. This choice allows for greater flexibility andgranularity in representing skills, making the specific choice of Kless crucial. Therefore, whilefine-tuning Kmay still be necessary for optimal results in certain environments (e.g., a simulatedFranka Kitchen with seven sub-tasks requiring large Kas opposed to a real-world kitchen withfour sub-tasks where a Kvalue of 32 may suffice), our framework demonstrates robustness againstvariations in the number of skill prototypes.Table A2: Ablation: Number of K(%)Same Cross EmbodimentExecution speed ×1×1×1.5XSkill K= 32 91.6 67.5 48.7XSkill K= 128 95.8 89.4 70.2XSkill K= 256 98.3 86.6 76.7XSkill K= 512 97.5 90.7 71.8A.2.2 Time contrastive lossIn order to demonstrate the significance of time contrastive loss, we conduct a comparative studywith [XSkill w.o TC loss] using simulations. The results, presented in Tab. A3, clearly show asignificant drop in performance for [XSkill w.o TC loss] compared to [XSkill], even under the same-embodiment setting. This empirical evidence underscores the vital role of time contrastive loss inenabling our representation to effectively capture the temporal effects of skills. Consequently, thelearned skill representation with time contrastive loss is beneficial for downstream manipulationtasks.14Table A3: Ablation: Time contrastive loss (%)Same Cross Embodiment AvgExecution speed ×1×1×1.3×1.5 /XSkill w.o TC loss 3.75 2.25 2.25 1.25 2.38XSkill 95.8 89.4 83.7 70.2 84.8A.3 Additional Experiment ResultsWe present the performance results of XSkill for each task using cross-embodiment prompts duringinference for the real-world environment in Table A4. 
When combined with the additional studyon generalization in Appendix A1, a primary limitation for achieving generalization in real-worldenvironments becomes apparent: the diversity present in robot teleoperation data.Our observations indicate that XSkill is capable of generalizing to previously unseen transitions insimulations due to the sufficient and diverse nature of the collected data. This allows for effectiveinterpolation and generalization. However, it’s important to note that the data collected for the sub-task Drawer in the real-world environment lacks multi-modal properties, as illustrated in Figure 5in the main paper. As a consequence, XSkill struggles to extend its capabilities to solve unseentransitions involving the Drawer sub-task.Table A4: XSkill Inference Task Per Task Results (%)Inference Task Cross EmbodimentOven, Draw, Cloth 80.0Draw, Cloth, Oven 73.3Oven, Light, Cloth, Draw 75.0Draw, Cloth, Light, Oven 90.0Draw, Oven, Cloth, Light 25.0Draw, Light, Cloth, Oven 50.0A.4 Implementation DetailsWe have presented a summary of the three phases, namely Discover, Transfer, and Compose, inpseudocode. The pseudocode for each phase is provided in Algorithm 1, 2, and 3, respectively.Algorithm 1 Cross-embodiment Skill Discovery1:Input: K: Number of skill prototypes. Dh: Human demonstration dataset. Dr: Robot teleop-eration dataset2:Require: T: Random augmentation operation. mm: Matrix multiplication3:Require: ftemporal : Temporal skill encoder. C= [c1, . . . , c K]:KSkill prototypes4:while not converge do5: Sample a batch of video clips vfromDhorDr6: vA=T(v)andvB=T(v) ▷Compute two augmentations of v:7: zA=ftemporal (vA)andzB=ftemporal (vB) ▷Compute skill representations8: sA=mm(zA, C)andsB=mm(zB, C) ▷Compute projection9: pA=Softmax (sA)andpB=Softmax (sB) ▷Predict skill prototypes probability10: qA=Sinkhorn (sB)andqB=Sinkhorn (sA) ▷Compute target probability11: Lproto =12(CrossEntropy (pA, qA) +CrossEntropy (pB, qB)) ▷Compute prototype loss12: Sample positive and negative video clips for vA:vposA, vnegA ▷Or for vb13: Compute associated skill prototypes probability pposA, pnegA ▷Follow line 7 to line 914: Ltcn=InfoNce (pA, pposA, pnegA) ▷Compute time contrastive loss15: Ldiscovery =Lproto +Ltcn ▷Compute skill discovery loss16: Update ftemporal andC ▷ Update models17:end while15Algorithm 2 Cross-embodiment Skill Transfer1:Input: Dr: Robot teleoperation dataset.2:Require: ftemporal : Learned temporal skill encoder (freeze).3:Require: φ: Skill Alignment Transformer (SAT). p: Imitation learning policy4:while not converge do5: o,a,sprop∼ Dr▷Sample a robot trajectory with length T6: ̃z=Skill Identification( o) ▷Identify skills in video and form skill execution plan7: t∼[0, T] ▷Sample a time index8: ˆzt=φ(ot, ̃z) ▷Predict the skill need to be executed at time t9: LSAT = MSE( ˆzt, ̃zt) ▷Compute SAT loss10: ˆat=p(ot,spropt, ̃zt) ▷Predict the actions based on identified skill ̃ztat time t11: Lbc=MSE(ˆat,at) ▷Compute behavior cloning loss12: Ltransfer =LSAT+Lbc13: Update φandp ▷ Update models based on transfer loss14:end whileAlgorithm 3 Cross-embodiment Skill Compose (Inference)1:Input: τhprompt : Human prompt video2:Require: φ: Learned Skill Alignment Transformer (SAT). 
p: Learned imitation learning policy
3: z̃ = SkillIdentification(τ^h_prompt) ▷ Identify skills in the video and form the skill execution plan
4: while not success or episode not ended do
5:   o_t, s^prop_t = env.get_obs() ▷ Get observation and robot proprioception from the environment
6:   ẑ_t = φ(o_t, z̃) ▷ Predict the skill that needs to be executed at time t
7:   â_t = p(o_t, s^prop_t, ẑ_t) ▷ Predict the actions based on the predicted skill
8:   Execute â_t in the environment. ▷ Execute actions
9: end while

A.4.1 Sinkhorn-Knopp Algorithm

XSkill employs the Sinkhorn algorithm to solve an entropy-regularized soft-assignment clustering procedure. As outlined in the main paper, our objective is to enhance cross-embodiment skill representation learning, and we strive to ensure that each embodiment fully leverages the entire embedding space. Given that the skill prototypes serve as the cluster centroids, they essentially act as anchors in the representation space. We can efficiently realize this aim by uniformly soft-assigning all samples to every prototype.

Our goal is to project a batch of skill representations Z = [z_1, . . . , z_B] onto the skill prototype matrix C = [c_1, . . . , c_K], whose columns are c_1, . . . , c_K. The intended code Q = [q_1, . . . , q_B], i.e., the target skill prototype probabilities, should retain similarity with the projection while maintaining a specific entropy level. This can be expressed as an optimal transport problem with entropy regularization [63, 1]:
$\max_{Q \in \mathcal{Q}} \ \operatorname{Tr}\!\left(Q^\top C^\top Z\right) + \varepsilon H(Q)$.  (1)
The solution [63, 1] is given by:
$Q^* = \operatorname{Diag}(u) \exp\!\left(\frac{C^\top Z}{\varepsilon}\right) \operatorname{Diag}(v)$,  (2)
where u and v are normalization vectors that can be computed iteratively using the Sinkhorn-Knopp algorithm. Sinkhorn-Knopp receives the projection C^⊤Z as input and iteratively modifies the matrix to satisfy the entropy regularization, producing a doubly stochastic matrix. Through our experiments, we have observed that three iterations are sufficient. PyTorch-like pseudocode for Sinkhorn-Knopp can be found in Listing 1. The target skill probability for z_i can be obtained from the i-th column of the output Q^*.

A.4.2 Temporal Skill Encoder & Prototype Layer

The temporal skill encoder f_temporal consists of a vision backbone and a transformer encoder. To efficiently process a large batch of images, we employ a straightforward 3-layer CNN followed by an MLP layer as our vision backbone; this network can be trained on a single NVIDIA 3090. Each image in the input video clip is first augmented by a randomly selected operation from a set of image transformations, including random resized crop, color jitter, grayscale, and Gaussian blur. The augmented video clip is then passed into the vision backbone, and the resulting features are flattened into 512-dimensional feature vectors. The transformer encoder comprises 8 stacked transformer encoder layers, each employing 4 heads, and the dimension of the feedforward network is set to 512.

The prototype layer f_prototype is implemented as a single linear layer without bias, and we normalize its weights at every training iteration. We freeze its weights for the first 3 training iterations to stabilize the training process. For the TCN loss, in practice, we replace the skill prototype probability with its unnormalized version z_t^⊤ C (before applying the Softmax function).
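A minimal sketch of the prototype layer just described, together with the unnormalized projection used for the TCN similarity, is given here (assuming PyTorch; the dimensions and temperature are our own choices, and the weight-freezing schedule is omitted):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    """Bias-free linear layer whose weight rows (the prototypes c_k) are
    re-normalized to unit length at every training step."""

    def __init__(self, skill_dim: int = 256, n_prototypes: int = 128):
        super().__init__()
        self.proj = nn.Linear(skill_dim, n_prototypes, bias=False)

    @torch.no_grad()
    def normalize_prototypes(self):
        self.proj.weight.data = F.normalize(self.proj.weight.data, dim=1)

    def forward(self, z: torch.Tensor, temperature: float = 0.1):
        z = F.normalize(z, dim=-1)                        # normalize the skill representation
        logits = self.proj(z)                             # unnormalized projection z^T C (used for the TCN similarity)
        probs = F.softmax(logits / temperature, dim=-1)   # prototype probabilities p (used for L_prototype)
        return logits, probs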
We noticed that theSoftmax function saturates the gradient, leading to unstable training.The additional hyperparameters are summarized in Table A5 and Table A6 for simulated and real-world kitchen environments, respectively.Table A5: Simulated Kitchen Skill Discovery HyperparameterHyperparameter ValueVideo Clip length l 8Sampling Frames T 100Sinkhorn iterations 3Sinkhorn epsilon 0.03Prototype loss coef 0.5Prototype loss temperature 0.1TCN loss coef 1TCN positive window wp 4TCN negative window wn 12TCN negative samples 16TCN temperature τtcn 0.1Batch Size 16Training iteration 100Learning rate 1e-4Optimizer ADAMA.4.3 Skill Alignment TransformerThe Skill Alignment Transformer (SAT) comprises a state encoder, denoted as fstate-encoder , and atransformer encoder. The state encoder is implemented as standard Resnet18. The transformerencoder consists of 16 stacked layers of transformer encoder layers, each employing 4 heads. andthe feedforward network has a dimension of 512. As depicted in Section 3.3 of the paper, a setof skill representations {zt}Tit=0is extracted from the sample trajectory τiand passed into SAT asskill tokens. For practical purposes, XSkill adopts a uniform sampling approach, selecting NSATprototypes from the skill list. This approach is motivated by two primary reasons. First, skillsare typically executed over extended periods, and we only require information about the start andend times, as well as the time allocated to each skill. Uniform sampling preserves this necessaryinformation while reducing redundant prototypes in the list. Second, human demonstrations mayoccur at a significantly faster pace than the robot’s execution, leading to variations in the length ofthe skill list. This discrepancy can hinder the learning algorithm’s performance during inference.By uniformly sampling a fixed number of frames from the set, the learning algorithm operates underconsistent conditions in both learning and inference stages. NSAT is set to approximately half of the17Table A6: Realworld Kitchen Skill Discovery HyperparameterHyperparameter ValueVideo Clip length l 8Sampling Frames T 100Sinkhorn iterations 3Sinkhorn epsilon 0.03Prototype loss coef 0.5Prototype Softmax temperature 0.1TCN loss coef 1TCN positive window wp 6TCN negative window wn 16TCN negative samples 16TCN temperature τtcn 0.1Batch Size 20Training iteration 500Learning rate 1e-4Optimizer ADAMaverage length of frames in robot demonstrations. During inference, if the length of the extractedskill list is less than NSAT, XSkill uniformly up-samples the skill list. We include a representationtoken after the skill token and the state token to summarize the prediction information. The latentrepresentation of the representation token is then passed into a multi-layer perceptron (MLP) topredict the desired skill z. We set NSAT= 100 in the simulated kitchen environment and NSAT= 200as the realworld robot trajectories is significantly longer than those in simulation.A.4.4 Diffusion PolicyWe use the original code base from Chi et al. [35] and adapt same the configuration for both thesimulated and realworld environment. We refer the reader to the paper for details.Listing 1: Pseudocode for Sinkhorn”””PyTorch − l i k e pseudocode f o r Sinkhorn −Knopp”””# Sinkhorn −Knoppdef s i n k h o r n ( s c o r e s , eps = 0 . 0 5 , n i t e r s = 3 ) :Q = exp ( s c o r e s / eps ) . TQ /= sum (Q)K, B = Q. shapeu , r , c = z e r o s (K) , ones (K) / K, ones (B) / Bf o r in range ( n i t e r s ) :u = sum (Q, dim =1)Q*= ( r / u ) . 
unsqueeze ( 1 )Q*= ( c / sum (Q, dim = 0 ) ) . unsqueeze ( 0 )return (Q / sum (Q, dim =0 , keepdim=True ) ) . TA.5 Environment&Data CollectionsWe begin with a formal description of the three distinct data sources. 1). Human demonstrationdataset : Herein, τhi={o0, .., o Ti}, where otdenotes the RGB visual observation at the time t.Within each trajectory, a subset of skills {zj}Jij=0is sampled from a skill distribution p(Z)contain-ingNunique skills and a human performs in a random sequence. 2). Robot teleoperation data :This dataset comprises teleoperated robot trajectories τri={(o0, sprop0, a0), ..,(oTi, spropTi, aTi)},where spropt,atcorrespond to robot proprioception data and end-effector action at time trespec-tively. We utilize stas the symbol for ot, spropt throughout the main paper. Analogous to the human18demonstration dataset, each trajectory incorporates a subset of skills zjJij=0, sampled from the skilldistribution p(Z), which the robot executes in a random sequence. 3). Human prompt video : Thissingle trajectory of human video τhprompt ={o0, .., o Tprompt}demonstrate unseen composition ofskills{zj}Jpromptj=0 taken from the skill distribution p(Z). We represent the RGB video trajectoriesthat include only the RGB visual observation {o0, .., o Ti}for both human and robot in the mainpaper as Vifor the sake of simplicity.A.6 SimulationIn order to produce a cross-embodiment dataset, we modified the initial Franka Kitchen setup, in-troducing a sphere agent distinctly visual from the original Franka robot. The sphere agent demon-stration dataset was generated by substituting the Franka robot arm with the sphere agent and re-rendering all 600 Franka demonstrations. The images for both Franka robot and sphere agent demon-stration are in a resolution of 384x384. Each trajectory in this dataset features the robot completingfour sub-tasks in a randomized sequence. The demonstration from both embodiments was dividedinto a training set and a prompt set, with the latter containing trajectories involving unseen com-binations of sub-tasks. This requires the robot to complete tasks namely, opening the microwave,moving the kettle, switching the light, and sliding the cabinet in order.For skill discovery, we downsampled the demonstration videos to a resolution of 112x112 and ran-domly applied color jitter, random cropping, Gaussian blur, and grayscale to the input video clips.For diffusion policy training, the environment observation incorporates a 112 x 112 RGB image anda 9-dimensional joint position (include gripper). We used a stack of two consecutive steps of theobservation as input for the policy.A.7 RealworldWe conducted data collection for our cross-embodiment dataset in a real-world kitchen environmentusing a UR5 robot station. The UR5 robot is equipped with a WSG50 gripper and a 3D printed softfinger. It operates by accepting end-effector space position commands at a rate of 125Hz. In therobot station, we have installed two Realsense D415 cameras that capture 720p RGB videos at 30frames per second. One camera is mounted on the wrist, while the other provides a side view.Our dataset consists of demonstrations involving both human and robot teleoperation for four spe-cific sub-tasks: opening the oven, grasping cloth, closing the drawer, and turning on the light. Tointroduce variability, the initial locations of the oven and the pose of the cloth are different for eachtrajectory. 
Each demonstration trajectory involves the completion of three sub-tasks in a randomorder.For training, we created seven distinct tasks for each embodiment and collected 25 trajectories foreach task. The robot teleoperation demonstrations were recorded using a 3Dconnexion SpaceMouseat a rate of 10Hz. For the inference task, we created two unseen tasks with three sub-tasks each, andfour unseen tasks with four sub-tasks each. For tasks with three sub-tasks, we recorded both humanand robot demonstrations as prompt videos, while for tasks with four sub-tasks, we recorded humandemonstrations only. The details of task collections are illustrated in Tab. A7During skill discovery, we exclusively utilized videos recorded from the side camera and down-sampled them to 160x120 at 10fps. Similar to before, we applied random transformations such asrandom crop, Gaussian blur, and grayscale to the input video clips. For diffusion policy training,we used visual inputs from both cameras, downscaled to 320x240. The input to the diffusion policyincluded a 6-dimensional end effector pose, a 1-dimensional gripper width, and two visual inputsfrom both cameras. We only considered one step of observation as the policy input. Position controlwas selected as the diffusion-policy action space, encompassing the 6-dimensional end effector poseand 1-dimensional gripper width. During training, we applied random crop with a shape of 260x288,while during inference, we utilized a center crop with the same shape.19Table A7: Training & Inference TaskTasks Human(seconds) Robot(seconds)Overlapping Training Task Draw, Light, Oven 12.92 + 1.19 29.12 + 2.03Light, Cloth, Oven 15.23 + 0.92 32.76 + 2.42Draw, Light, Cloth 15.73 + 1.22 26.83 + 2.28Draw, Cloth, Light 17.21+1.05 31.71 + 3.74Human exclusive Training TaskOven, Draw, Cloth 11.62+1.58 /Cloth, Oven, Light 12.69+0.86 /Cloth, Light, Oven 13.37+0.67 /Robot exclusive Training TaskLight, Oven, Draw / 32.41+3.04Oven, Light, Cloth / 26.75+2.56Light, Draw, Cloth / 27.10+1.95Inference TaskOven, Draw, Cloth 14.4 45.7Draw, Cloth, Oven 12.6 41.4Oven, light, Cloth, Draw 20.5 /Draw, Cloth, Light, Oven 20.9 /Draw, Oven, Cloth, Light 21.0 /Draw, Light, Cloth, Oven 19.2 /20 |
xJ7XL5Wt8iN | CLUE: Calibrated Latent Guidance for OfflineReinforcement LearningJinxin Liu∗Zhejiang UniversityWestlake Universityliujinxin@westlake.edu.cnLipeng Zu∗Westlake Universityzulp@mail.ustc.edu.cnLi HeWestlake Universityheli.copter@foxmail.comDonglin Wang†Westlake Universitywangdonglin@westlake.edu.cnAbstract: Offline reinforcement learning (RL) aims to learn an optimal policyfrom pre-collected and labeled datasets, which eliminates the time-consumingdata collection in online RL. However, offline RL still bears a large burden ofspecifying/handcrafting extrinsic rewards for each transition in the offline data. Asa remedy for the labor-intensive labeling, we propose to endow offline RL taskswith a few expert data and utilize the limited expert data to drive intrinsic rewards,thus eliminating the need for extrinsic rewards. To achieve that, we introduceCalibrated Latent g Uidanc E(CLUE), which utilizes a conditional variationalauto-encoder to learn a latent space such that intrinsic rewards can be directlyqualified over the latent space. CLUE’s key idea is to align the intrinsic rewardsconsistent with the expert intention via enforcing the embeddings of expert data toa calibrated contextual representation. We instantiate the expert-driven intrinsicrewards in sparse-reward offline RL tasks, offline imitation learning (IL) tasks, andunsupervised offline RL tasks. Empirically, we find that CLUE can effectivelyimprove the sparse-reward offline RL performance, outperform the state-of-the-artoffline IL baselines, and discover diverse skills from static reward-free offline data.Keywords: Offline Reinforcement Learning, Intrinsic Rewards, Learning Skills1 IntroductionRecent advances in reinforcement learning (RL) have shown great success in decision-makingdomains ranging from robot manipulation [ 1,2] to navigation [ 3,4] and large-language models [ 5].Generally, an RL agent receives two sources of supervisory signals associated with the learningprogress: 1) environment transition dynamics and 2) task-specifying rewards, where 1) the transitiondynamics coordinate the agent’s behaviors toward the environment affordances and 2) the task-specifying rewards capture the designer’s preferences over agent behaviors. However, the twosupervised signals themselves also limit the applicability of RL methods, since in many tasks,especially in real-world domains, either collecting online environmental transitions or labelingcomplex task-specifying rewards is time-consuming and laborious.To tackle the above challenges, two separate RL branches have been proposed: 1) offline RL [6],also known as batch RL, which promises to learn effective policies from previously-collected staticdatasets without further online interaction, and 2) intrinsic rewards [7], which aim to capture a richform of task knowledge (such as long-term exploration or exploitation) that provides additionalguidance on how an agent should behave. 
Aligning with the task-specifying rewards, such intrinsic rewards promise to accelerate online RL by augmenting or replacing the manual task-specifying rewards (hereafter extrinsic rewards). In fact, prior offline RL methods [6] typically introduce a policy/value regularization and operate in a form of reward augmentation, which can thus be seen as a special kind of intrinsic motivation. However, such intrinsic motivation is only designed to eliminate the potential out-of-distribution (OOD) issues in offline RL and does not account for representing task-specifying behaviors (extrinsic rewards). In this work, we aim to design an offline RL intrinsic reward that promotes offline RL performance while representing task-specifying behaviors.

∗Jinxin Liu and Lipeng Zu contribute equally to this work. †Corresponding author: Donglin Wang.

Figure 1: Three instantiations for the assumed "expert" data in offline RL settings: 1) sparse-reward offline RL, 2) the offline imitation learning (IL) setting, and 3) the unsupervised offline RL setting (aiming to learn diverse skills/policies from static reward-free offline data).

It is worth noting that adapting online intrinsic rewards to offline RL problems is non-trivial. In online RL, intrinsic rewards often capture the long-term temporal dependencies of interaction trajectories [7, 8]. For example, Badia et al. [9] capture the novelty of states across multiple episodes; Eysenbach et al. [10] quantify the discriminability between skills represented by latent variables. However, such temporal dependencies rely on online interaction transitions and thus cannot be directly captured in offline settings. In this work, we therefore propose to discard the temporal-dependence scheme above and instead use an "expert" to facilitate labeling intrinsic rewards and guiding the offline agent. To do so, we identify three scenarios for the expert instantiations in offline RL settings:

1) For sparse-reward offline RL tasks, we filter out the trajectories that do not accomplish the task and take the successful trajectories as the expert behaviors. By relabeling continuous intrinsic rewards for the failed trajectories, we expect such a reward-relabeling procedure to promote offline learning.
2) For reward-free offline RL tasks, we assume that the agent has access to additional (limited) expert data generated by an expert policy. We expect that such limited expert data can provide useful intrinsic rewards for unlabeled transitions and thereby bias the learned policy toward expert behaviors.
3) Also considering the reward-free offline RL setting, we do not assume any additional expert data. Instead, we choose to cluster the offline transitions into a number of classes and take each class as a separate "expert". Then, we encourage offline agents to produce diverse behaviors when conditioned on different classes, in a similar spirit to unsupervised skill learning in online RL.

We can see that in all three settings (Figure 1), we assume the existence of an expert (limited expert data), obtained either by trajectory filtering, from an external expert, or through clustering.
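To illustrate the first scenario concretely, the following is a minimal sketch of carving an "expert" set out of a sparse-reward offline dataset by trajectory filtering; the data layout and the success test are our assumptions rather than details taken from the paper:

import numpy as np

def split_expert_by_success(trajectories, success_fn):
    """Scenario 1 (sparse-reward offline RL): treat trajectories that solve the task
    as the 'expert' set D_e and keep the remaining, to-be-relabeled data as D.

    trajectories: list of dicts with keys 'observations', 'actions', 'rewards'.
    success_fn:   maps a trajectory to True/False.
    """
    expert, unlabeled = [], []
    for traj in trajectories:
        (expert if success_fn(traj) else unlabeled).append(traj)
    return expert, unlabeled

# Example success test for a 0/1 sparse-reward task (an assumption, not the paper's exact rule):
is_success = lambda traj: bool(np.asarray(traj["rewards"]).sum() > 0)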
To instantiatethe above intrinsic rewards, we propose Calibrated Latent g Uidanc E(CLUE), which aims to labelintrinsic rewards for unlabeled (or spare-reward) transitions in the offline RL setting. Specifically,CLUE uses a conditional variational auto-encoder to learn a latent space for both expert data andunlabeled data, then labels intrinsic rewards by computing the distance between the latent embeddingsof expert and unlabeled data. CLUE’s key idea is to explicitly bind together all the embeddings ofexpert data, thus learning a calibrated embedding for all expert behaviors. Intuitively, this bindingprocedure encourages the latent space to capture task-oriented behaviors such that latent space canproduce task-oriented intrinsic guidance when computing distance over the latent space.2In summary, we make the following contributions in this paper: 1) We propose CLUE, which canprovide pluggable intrinsic rewards for offline RL methods. 2) We demonstrate CLUE can effectivelyimprove the spare-reward offline RL performance. 3) Considering offline imitation learning (IL)settings, CLUE can achieve better or comparable results compared to both the reward-labeled offlineRL methods and the state-of-the-art offline IL methods. 4) We find that CLUE is able to discoverdiverse behaviors/skills in the unsupervised (reward-free) offline RL setting.2 Related WorksThe goal of our work is to learn task-oriented intrinsic rewards for sparse-reward or reward-free offlinedata. While there is a large body of research on learning rewards for RL tasks [ 11,12,13,14,15],most work assumes online RL settings, while we consider the offline RL setting. Additionally, littlework has yet to verify intrinsic rewards across sparse-reward, IL, and unsupervised RL tasks together.Typically, many intrinsic rewards have been proposed to encourage exploration in sparse-reward(online) RL tasks. In this case, intrinsic rewards are often formulated as state visitation counts [ 16,17,18], prediction error [ 19,20], prediction uncertainty [ 21,22], information gain [ 23], state entropy [ 24,25,26], and deviation from a default policy [ 27,28]. However, these intrinsic rewards are often notwell aligned with the task that the agent is solving. In contrast, the goal of our work is to learn a task-oriented intrinsic reward such that it promotes the policy learning progress for sparse-reward tasks.Beyond the standard offline RL setup, learning from (static) reward-labeled offline data [ 6,29,30,31,32], offline imitation learning (IL) considers learning from expert trajectories and (reward-free) sub-optimal offline data, which can be generally folded into two paradigms [33]: behavior cloning (BC)and offline inverse RL (IRL). BC directly learns a policy from expert trajectories using supervisedlearning [ 34]. Due to compounding errors induced by covariate shift [ 35], BC methods require alarge amount of expert data, thus hindering the application on data-scarce scenarios. To overcomesuch limitations, offline IRL methods consider matching the state-action distributions induced by theexpert [ 36,37,38,39,40,41]. Typically, they formulate the expert matching objective by introducinga discriminator and trying to find the saddle point of a min-max optimization, which tends to be brittleand sensitive to the training (offline) data. 
However, our CLUE does not introduce any adversarial objective and thus exhibits more robust performance on a wide variety of tasks.
The idea of unsupervised RL is to learn diverse behaviors/skills in an open-ended environment without access to extrinsic rewards [42, 43, 44]. Previous unsupervised RL methods are often formulated through the lens of empowerment [45]. Central to this formulation is the information-theoretic skill discovery approach, where diverse skills can be discovered by optimizing the long-term temporal dependencies of interaction trajectories, e.g., maximizing the mutual information between induced trajectories and some latent/context variables [10, 46, 47, 48, 49, 50, 51]. In this work, we propose to discard this online temporal-dependence scheme, use clustering methods to formulate such diversity, and use CLUE to label intrinsic rewards that guide offline agents.
3 Preliminary
Offline RL. We consider RL in a Markov Decision Process (MDP) $\mathcal{M} := (\mathcal{S}, \mathcal{A}, T, r, p_0, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $T$ is the environment transition dynamics, $r$ is the task-oriented extrinsic reward function, $p_0$ is the initial state distribution, and $\gamma$ is the discount factor. The goal of RL is to find an optimal policy $\pi_\theta(\mathbf{a}|\mathbf{s})$ that maximizes the expected return $\mathbb{E}_{\pi_\theta(\tau)}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$ when interacting with the environment $\mathcal{M}$, where $\tau := (\mathbf{s}_0, \mathbf{a}_0, r_0, \mathbf{s}_1, \cdots)$ denotes the generated trajectory, $\mathbf{s}_0 \sim p_0(\mathbf{s}_0)$, $\mathbf{a}_t \sim \pi_\theta(\mathbf{a}_t|\mathbf{s}_t)$, $\mathbf{s}_{t+1} \sim T(\mathbf{s}_{t+1}|\mathbf{s}_t, \mathbf{a}_t)$, and $r_t$ denotes the extrinsic reward $r(\mathbf{s}_t, \mathbf{a}_t)$ at time step $t$. In offline RL, the agent cannot interact with the environment and only receives a static dataset of trajectories $\mathcal{D} := \{\tau_i\}_{i=1}^{n}$, pre-collected by one or a mixture of (unknown) behavior policies. The goal of offline RL is then to find the best policy from the offline data.
Conditional variational auto-encoders (CVAE). Given offline data $\mathbf{x}$, the variational auto-encoder (VAE) [52] proposes to maximize the variational lower bound,
$\log p_\theta(\mathbf{x}) = \mathrm{KL}(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p_\theta(\mathbf{z}|\mathbf{x})) + \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}[-\log q_\phi(\mathbf{z}|\mathbf{x}) + \log p_\theta(\mathbf{x}, \mathbf{z})]$   (1)
$\geq -\mathrm{KL}(q_\phi(\mathbf{z}|\mathbf{x}) \,\|\, p_\theta(\mathbf{z})) + \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x})}[\log p_\theta(\mathbf{x}|\mathbf{z})]$,   (2)
where $p_\theta(\mathbf{z})$ is the prior distribution, $q_\phi(\mathbf{z}|\mathbf{x})$ denotes the encoder model, and $p_\theta(\mathbf{x}|\mathbf{z})$ denotes the decoder model. Considering structured output prediction settings, the conditional VAE (CVAE) maximizes the variational lower bound of the conditional log-likelihood:
$\log p_\theta(\mathbf{x}|\mathbf{y}) \geq -\mathrm{KL}(q_\phi(\mathbf{z}|\mathbf{x}, \mathbf{y}) \,\|\, p_\theta(\mathbf{z}|\mathbf{y})) + \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{x}, \mathbf{y})}[\log p_\theta(\mathbf{x}|\mathbf{z}, \mathbf{y})]$.   (3)
4 CLUE: Calibrated Latent Guidance
In this section, we introduce our method CLUE (Calibrated Latent gUidancE), which learns a calibrated latent space such that intrinsic rewards can be directly gauged over the latent space. We begin by assuming access to limited expert offline data and describe how we can use it to label intrinsic rewards for reward-free offline data in Section 4.1. In Section 4.2, we then describe three offline RL instantiations, including one sparse-reward and two reward-free (offline imitation learning and unsupervised offline RL) settings, each corresponding to a scenario discussed previously (Figure 1).
4.1 Calibrated Intrinsic Rewards
Assuming access to limited expert offline data $\mathcal{D}_e := \{(\mathbf{s}, \mathbf{a}, \mathbf{s}')\}$ and a large amount of reward-free offline data $\mathcal{D} := \{(\mathbf{s}, \mathbf{a}, \mathbf{s}')\}$, our goal is to use $\mathcal{D}_e$ to learn an intrinsic reward function $\hat{r}(\mathbf{s}, \mathbf{a})$ for the reward-free transitions in $\mathcal{D}$, such that we can recover expert behaviors using the relabeled offline data $\mathcal{D}_{\hat{r}} := \{(\mathbf{s}, \mathbf{a}, \hat{r}, \mathbf{s}')\}$. With a slight abuse of notation, we write $\hat{r}$ in transitions $\{(\mathbf{s}, \mathbf{a}, \hat{r}, \mathbf{s}')\}$ to denote the relabeled intrinsic reward $\hat{r}(\mathbf{s}, \mathbf{a})$.
We first use a CVAE to model the mixed offline behaviors in $\mathcal{D}_e \cup \mathcal{D}$.
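A minimal PyTorch sketch of such a conditional VAE is given below, with the state as the conditioning variable and the action as the prediction variable (as detailed in the next paragraph). The layer sizes, the unit-variance Gaussian decoder, and the `ConditionalVAE` class name are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """CVAE with encoder q_phi(z|s,a) and decoder p_theta(a|z,s).
    The prior p_theta(z|s) is fixed to a standard Gaussian, as in the paper."""

    def __init__(self, state_dim, action_dim, latent_dim=16, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * latent_dim),  # outputs [mu, log_std]
        )
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def encode(self, s, a):
        mu, log_std = self.encoder(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        return mu, log_std

    def elbo(self, s, a):
        """Per-sample variational lower bound with a single latent sample."""
        mu, log_std = self.encode(s, a)
        std = log_std.exp()
        z = mu + std * torch.randn_like(std)                    # reparameterization trick
        a_hat = self.decoder(torch.cat([s, z], dim=-1))
        recon = -F.mse_loss(a_hat, a, reduction="none").sum(-1)  # Gaussian log p(a|z,s) up to scale/constant
        kl = 0.5 * (mu.pow(2) + std.pow(2) - 2 * log_std - 1).sum(-1)  # KL(q || N(0, I)) in closed form
        return recon - kl, mu, std
```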
Specifically, we take the state $\mathbf{s}$ as the input/conditional variable and the action $\mathbf{a}$ as the prediction variable. For each behavior sample $(\mathbf{s}, \mathbf{a})$ in the mixed data $\mathcal{D}_e \cup \mathcal{D}$, we maximize the following variational lower bound:
$\log p_\theta(\mathbf{a}|\mathbf{s}) \geq -\mathrm{KL}(q_\phi(\mathbf{z}|\mathbf{s}, \mathbf{a}) \,\|\, p_\theta(\mathbf{z}|\mathbf{s})) + \mathbb{E}_{q_\phi(\mathbf{z}|\mathbf{a}, \mathbf{s})}[\log p_\theta(\mathbf{a}|\mathbf{z}, \mathbf{s})]$   (4)
$\approx -\mathrm{KL}(q_\phi(\mathbf{z}|\mathbf{s}, \mathbf{a}) \,\|\, p_\theta(\mathbf{z}|\mathbf{s})) + \frac{1}{L}\sum_{l=1}^{L} \log p_\theta(\mathbf{a}|\mathbf{z}^{(l)}, \mathbf{s}) \triangleq \mathcal{L}_{\mathrm{CVAE}}(\mathbf{s}, \mathbf{a}; \theta, \phi)$,   (5)
where $\mathbf{z}^{(l)} \sim \mathcal{N}(\mathbf{z} \,|\, \mu_\phi(\mathbf{s}, \mathbf{a}), \sigma^2_\phi(\mathbf{s}, \mathbf{a}))$ (see Footnote 2), $L$ is the number of samples, and $\mathcal{L}_{\mathrm{CVAE}}(\mathbf{s}, \mathbf{a}; \theta, \phi)$ is the corresponding empirical lower bound. For simplicity, we set the prior distribution to the standard Gaussian distribution, i.e., $p_\theta(\mathbf{z}|\mathbf{s}) = \mathcal{N}(\mathbf{0}, \mathbf{1})$.
Figure 2: Latent embeddings of expert and non-expert offline data on the D4RL antmaze-medium-diverse-v2 dataset, where the embeddings are learned by the naive CVAE (left) and our CLUE (right).
For a query sample $(\mathbf{s}, \mathbf{a})$, we label its intrinsic reward $\hat{r}(\mathbf{s}, \mathbf{a})$ by computing the negative distance between the latent embeddings of the expert data and the query sample,
$\hat{r}(\mathbf{s}, \mathbf{a}) = \exp(-c \cdot \|\mathbf{z}_e - \mathbf{z}(\mathbf{s}, \mathbf{a})\|_2)$,   (6)
where $\mathbf{z}_e = \mathbb{E}_{(\mathbf{s}, \mathbf{a}) \sim \mathcal{D}_e}[q_\phi(\mathbf{z}|\mathbf{s}, \mathbf{a})]$, $\mathbf{z}(\mathbf{s}, \mathbf{a}) \sim q_\phi(\mathbf{z}|\mathbf{s}, \mathbf{a})$, and $c > 0$ is a temperature factor.
However, naively maximizing $\mathcal{L}_{\mathrm{CVAE}}$ in Equation 5 may lead to undesirable embeddings with varying scales that do not capture task-relevant behaviors when computing the latent distance. For example, in Figure 2 (left), we visualize the embeddings of the expert data $\mathcal{D}_e$ and the unlabeled (non-expert) offline data $\mathcal{D}$. We can see that the embeddings of the expert and non-expert data are generally mixed together without clear separation.
There is a large variance in the expert data embeddings, and directly estimating the mean of the expert embeddings (i.e., $\mathbf{z}_e = \mathbb{E}_{(\mathbf{s}, \mathbf{a}) \sim \mathcal{D}_e}[q_\phi(\mathbf{z}|\mathbf{s}, \mathbf{a})]$) cannot effectively represent task-oriented behaviors, causing the labeled intrinsic reward $\hat{r}(\mathbf{s}, \mathbf{a})$ to be biased.
Footnote 2: We define $\mu_\phi(\mathbf{s}, \mathbf{a})$ and $\sigma_\phi(\mathbf{s}, \mathbf{a})$ to be feed-forward networks with parameters $\phi$, taking the concatenated $\mathbf{s}$ and $\mathbf{a}$ as input and outputting the parameters (mean and std) of a Gaussian distribution in the latent space, respectively.
To guarantee that the intrinsic reward formulation $\hat{r}(\mathbf{s}, \mathbf{a})$ in Equation 6 is task-oriented, we thus propose to learn calibrated embeddings. To do so, we explicitly bind together the expert embeddings, expecting them to "collapse" into a single embedding. Thus, we introduce the following calibration regularization over expert embeddings:
$\min \mathcal{L}_{\mathrm{calibr}} := \mathbb{E}_{(\mathbf{s}, \mathbf{a}) \sim \mathcal{D}_e}\left[\|\mu_\phi(\mathbf{s}, \mathbf{a})\|_2 + \|\sigma_\phi(\mathbf{s}, \mathbf{a})\|_2\right]$.   (7)
Due to the standard Gaussian prior for $p_\theta(\mathbf{z}|\mathbf{s})$ in Equation 5, we constrain not only the variance of the expert embeddings but also their mean in Equation 7. Intuitively, $\mathcal{L}_{\mathrm{calibr}}$ unifies the expert embeddings ("collapsing" them to a single point), therefore providing an effective $\mathbf{z}_e$ when computing the intrinsic rewards $\hat{r}(\mathbf{s}, \mathbf{a})$ in Equation 6. As shown in Figure 2 (right), the expert embeddings and their mean are almost bound to a single point, so we can directly measure intrinsic rewards in the latent space.
4.2 Intrinsic Reward Instantiations
Here we describe three offline instantiations, one sparse-reward and two reward-free offline settings, that allow us to meet the expert data assumption above (Section 4.1) and label intrinsic rewards.
Sparse-reward offline RL. Considering the challenging sparse-reward offline data $\{(\mathbf{s}, \mathbf{a}, r, \mathbf{s}')\}$, we can filter out the unsuccessful trajectories and take the finished trajectories as the expert data. Then, we can replace the original sparse rewards with the learned continuous intrinsic rewards.
Offline IL. Considering the reward-free offline RL data $\{(\mathbf{s}, \mathbf{a}, \mathbf{s}')\}$, we can assume the agent has access to additional expert data (as few as one trajectory). We can then use the learned intrinsic rewards to relabel the reward-free transitions, obtaining labeled offline data $\{(\mathbf{s}, \mathbf{a}, \hat{r}, \mathbf{s}')\}$.
Unsupervised offline RL. Given the reward-free offline data $\{(\mathbf{s}, \mathbf{a}, \mathbf{s}')\}$, we can use clustering algorithms to cluster the data into multiple classes and then treat each class as separate "expert" data. In this way, we expect to learn different skills/policies when conditioning on different classes.
5 Experiments
In this section, we first empirically demonstrate the advantages of our pluggable intrinsic rewards in sparse-reward offline RL tasks. Second, we evaluate CLUE in offline IL tasks, studying how effective our intrinsic reward is in contrast to a broad range of state-of-the-art offline imitation learning methods. Third, we study CLUE in unsupervised offline RL settings, expecting to discover diverse behaviors from static offline data. Finally, we conduct ablation studies on the calibration regularization and the amount of unlabeled offline data. All of our results are evaluated over 10 random seeds and 10 episodes for each seed.
Implementation. Note that our intrinsic rewards are pluggable and can be combined with any offline RL algorithm. In our implementation, we combine CLUE with the Implicit Q-Learning (IQL) algorithm [53], which is one of the state-of-the-art offline algorithms and can solve most (reward-labeled) offline tasks with competitive performance.
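Concretely, each CLUE update maximizes the empirical lower bound of Equation 5 on the mixed data while applying the calibration term of Equation 7 to the expert batch, and the resulting encoder is all that is needed to label rewards via Equation 6. The sketch below continues the hypothetical `ConditionalVAE` from the earlier snippet; the regularization weight mirrors the "Weight for L_calibr" hyperparameter reported in Section 8.1, while the optimizer choice and the default temperature are illustrative assumptions.

```python
import torch

def clue_update(cvae, optimizer, mix_s, mix_a, exp_s, exp_a, calibr_weight=0.1):
    """One gradient step: maximize L_CVAE on mixed data (Eq. 5) and
    pull the expert embeddings toward a single point (Eq. 7)."""
    elbo, _, _ = cvae.elbo(mix_s, mix_a)
    mu_e, log_std_e = cvae.encode(exp_s, exp_a)
    l_calibr = (mu_e.norm(dim=-1) + log_std_e.exp().norm(dim=-1)).mean()
    loss = -elbo.mean() + calibr_weight * l_calibr
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def expert_anchor(cvae, exp_s, exp_a):
    """z_e = E_{(s,a)~D_e}[q_phi(z|s,a)]; with calibrated embeddings this is nearly a single point."""
    return cvae.encode(exp_s, exp_a)[0].mean(dim=0)

@torch.no_grad()
def intrinsic_reward(cvae, z_e, s, a, c=5.0):
    """r_hat(s,a) = exp(-c * ||z_e - z(s,a)||_2), Equation 6; c is the temperature factor."""
    mu, log_std = cvae.encode(s, a)
    z = mu + log_std.exp() * torch.randn_like(mu)   # z(s,a) ~ q_phi(z|s,a)
    return torch.exp(-c * (z_e - z).norm(dim=-1))
```

Because only the encoder enters Equation 6, any offline RL learner that expects reward-annotated transitions (IQL in our experiments) can consume the relabeled data unchanged.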
Our base IQL implementation is adaptedfrom IQL3, and we set all hyperparameters to the ones recommended in the original IQL paper.5.1 Sparse-Reward Offline RL TasksHere we evaluate CLUE on spares-reward tasks (including AntMaze and Adroit domains) from theD4RL benchmark. In this setting, we use the inherent sparse rewards in the dataset to select expertdata, i.e., selecting the task-completed or goal-reached trajectories from the sparse-reward dataset andtreating them as the expert data. In Tables 1 and 2, we compare our method with baseline methods(IQL [ 53] and OTR [ 54]) when only one expert trajectory4is selected. For comparison, we train3https://github.com/ikostrikov/implicit_q_learning.4In the appendix, we also compare our method with baseline methods when at most 10 completed trajectoriesare selected, where "at most 10" refers to that there may be less than 10 successful trajectories in D4RL dataset.5Table 1: Normalized scores (mean and standarddeviation) of CLUE and baselines on sparse-reward AntMaze tasks, where both OTR andCLUE use IQL as the base offline RL algorithm.The highest scores between our CLUE and base-line OTR are highlighted.Dataset IQL OTR CLUEumaze 88.7 81.6 ± 7.3 92.1 ± 3.9umaze-diverse 67.5 70.4 ± 8.9 68.0 ± 11.2medium-play 72.9 73.9 ± 6.0 75.3 ± 6.3medium-diverse 72.1 72.5 ± 6.9 74.6 ± 7.5large-play 43.2 49.7 ± 6.9 55.8 ± 7.7large-diverse 46.9 48.1 ± 7.9 49.9 ± 6.9AntMaze-v2 total 391.3 396.2 415.7Table 2: Normalized scores (mean and standarddeviation) of CLUE and baselines on sparse-reward Adroit tasks, where the highest scores be-tween CLUE and OTR are highlighted.Dataset IQL OTR CLUEdoor-cloned 1.6 0.01 ± 0.01 0.02 ± 0.01door-human 4.3 5.9± 2.7 7.7± 3.9hammer-cloned 2.1 0.9± 0.3 1.4± 1.0hammer-human 1.4 1.8± 1.4 1.9± 1.2pen-cloned 37.3 46.9 ± 20.9 59.4 ± 21.1pen-human 71.5 66.8 ± 21.2 82.9 ± 20.2relocate-cloned -0.2 -0.24 ± 0.03 -0.23 ± 0.02relocate-human 0.1 0.1± 0.1 0.2± 0.3Adroit-v0 total 118.1 122.2 153.3Table 3: Normalized scores (mean and standard deviation) of CLUE and baselines on locomotiontasks using one (K=1), five (K=5), and ten (K=10) expert demonstrations. Both CLUE and OTR usesIQL as the base offline RL algorithm, and we highlight the highest score in each setting.Dataset IQL OTR (K=1) CLUE (K=1) OTR (K=5) CLUE (K=5) OTR (K=10) CLUE (K=10)halfcheetah-medium 47.4 ± 0.2 43.3 ± 0.2 45.6 ± 0.3 43.3 ± 0.2 45.2 ± 0.2 43.1 ± 0.3 45.7 ± 0.2halfcheetah-medium-replay 44.2 ± 1.2 41.3 ± 0.6 43.5 ± 0.5 41.9 ± 0.3 43.2 ± 0.4 41.6 ± 0.3 43.2 ± 0.5halfcheetah-medium-expert 86.7 ± 5.3 89.6 ± 3.0 90.0 ± 2.4 89.9 ± 1.9 91.9 ± 1.4 87.9 ± 3.4 91.0 ± 2.5hopper-medium 66.2 ± 5.7 78.7 ± 5.5 78.3 ± 5.4 79.5 ± 5.3 79.1 ± 3.5 80.0 ± 5.2 79.9 ± 6.0hopper-medium-replay 94.7 ± 8.6 84.8 ± 2.6 94.3 ± 6.0 85.4 ± 1.7 93.3 ± 4.5 84.4 ± 1.8 93.7 ± 4.1hopper-medium-expert 91.5 ± 14.3 93.2 ± 20.6 96.5 ± 14.7 90.4 ± 21.5 104.0 ± 5.4 96.6 ± 21.5 102.3 ± 7.7walker2d-medium 78.3 ± 8.7 79.4 ± 1.4 80.7 ± 1.5 79.8 ± 1.4 79.6 ± 0.7 79.2 ± 1.3 81.7 ± 1.2walker2d-medium-replay 73.8 ± 7.1 66.0 ± 6.7 76.3 ± 2.8 71.0 ± 5.0 75.1 ± 1.3 71.8 ± 3.8 75.3 ± 4.6walker2d-medium-expert 109.6 ± 1.0 109.3 ± 0.8 109.3 ± 2.1 109.4 ± 0.4 109.9 ± 0.3 109.6 ± 0.5 110.7 ± 0.2locomotion-v2 total 692.4 685.6 714.5 690.6 721.3 694.2 723.5IQL over the naive sparse-reward D4RL data and train OTR over the relabeled D4RL dataset (usingoptimal transport to compute intrinsic rewards and employing IQL to learn offline RL policy). 
Wecan find that in 13 out of 14 tasks across AntMaze and Adroit domains, our CLUE outperforms thebaseline OTR. Meanwhile, compared to naive IQL (with sparse rewards), our CLUE implementationobtains a total score of 106.2% on AntMaze tasks and 129.8% on Adroit tasks. This means thatwith only a single expert trajectory, we can completely replace the sparse rewards with our intrinsicreward in offline RL tasks, which can even achieve higher performance.5.2 Offline Imitation Learning TasksThen, we evaluate CLUE on offline IL tasks. We continue to use the D4RL data as offline data, buthere we explicitly discard the reward signal. Then, we use SAC to train an online expert policy tocollect expert demonstrations in each environment. We first compare CLUE to 1) naive IQL withthe (ground-truth) reward-labeled offline data and 2) OTR under our offline IL setting. In Table 3,we provide the comparison results with 1, 5, and 10 expert trajectories. We can see that in 22 out of27 offline IL settings, CLUE outperforms (or performs equally well) the most related baseline OTR,demonstrating that CLUE can produce effective intrinsic rewards. Meanwhile, with only a singleexpert trajectory, our CLUE implementation can achieve 103.2% of the total scores of naive IQL inlocomotion tasks. This means that with only one expert trajectory, our intrinsic rewards can evenreplace the continuous ground-truth rewards in offline RL tasks and enable better performance.Next, we compare CLUE to a representative set of offline IL baselines: SQIL[ 55] with TD3+BC [ 56]implementation, ORIL [ 57], IQ-Learn [ 58], ValueDICE [ 37], DemoDICE [ 59], and SMODICE [ 60].Note that the original SQIL is an online IL method, here we replace its (online) base RL algorithmwith TD3+BC, thus making it applicable to offline tasks. In Table 4, we provide the comparisonresults over D4RL locomotion tasks. We can see that, overall, our CLUE performs better than mostoffline IL baselines, showing that our intrinsic reward is well capable of capturing expert behaviors.Further, we point out that CLUE is also robust to offline data with different qualities: most previous6Table 4: Normalized scores (mean and standard deviation) of CLUE and offline IL baselines onMuJoCo locomotion tasks using one expert trajectory (K=1) and ten expert trajectories (K=10). 
Wehighlight the scores that are within two points of the highest score.Dataset SQIL IQ-Learn ORIL ValueDICE DemoDICE SMODICE CLUEK=1halfcheetah-medium 24.3 ± 2.7 21.7 ± 1.5 56.8 ± 1.2 36.4 ± 1.7 42.0 ± 0.8 42.4 ± 0.6 45.6 ± 0.3halfcheetah-medium-replay 43.9 ± 1.0 7.7± 1.6 46.2 ± 1.1 29.4 ± 3.0 38.3 ± 1.3 38.3 ± 2.0 43.5 ± 0.5halfcheetah-medium-expert 6.7± 1.2 2.0± 0.4 48.7 ± 2.4 1.0± 2.4 66.2 ± 4.3 80.9 ± 2.3 90.0 ± 2.4hopper-medium 66.9 ± 5.1 29.6 ± 5.2 96.3 ± 0.9 44.0 ± 12.3 56.4 ± 1.9 54.8 ± 1.2 78.3 ± 5.4hopper-medium-replay 98.6 ± 0.7 23.0 ± 9.4 56.7 ± 12.9 52.5 ± 14.4 70.7 ± 8.5 30.4 ± 7.8 94.3 ± 6.0hopper-medium-expert 13.6 ± 9.6 9.1± 2.2 25.1 ± 12.8 27.3 ± 10.0 103.7 ± 5.5 82.4 ± 7.7 96.5 ± 14.7walker2d-medium 51.9 ± 11.7 5.7± 4.0 20.4 ± 13.6 13.9 ± 9.1 74.5 ± 2.6 67.8 ± 6.0 80.7 ± 1.5walker2d-medium-replay 42.3 ± 5.8 17.0 ± 7.6 71.8 ± 9.6 52.7 ± 13.1 57.2 ± 8.7 49.7 ± 4.6 76.3 ± 2.8walker2d-medium-expert 18.8 ± 13.1 7.7± 2.4 11.6 ± 14.7 37.3 ± 13.7 87.3 ± 10.5 94.8 ± 11.1 109.3 ± 2.1K=10halfcheetah-medium 48.0 ± 0.3 29.2 ± 6.4 56.7 ± 0.9 40.0 ± 1.9 41.9 ± 0.5 41.6 ± 0.7 45.7 ± 0.2halfcheetah-medium-replay 45.1 ± 0.5 29.6 ± 3.1 46.2 ± 0.6 39.6 ± 1.0 38.5 ± 1.6 39.3 ± 0.9 43.2 ± 0.5halfcheetah-medium-expert 11.4 ± 4.7 2.9± 0.8 46.6 ± 6.0 25.2 ± 8.3 67.1 ± 5.5 89.4 ± 1.4 91.0 ± 2.5hopper-medium 65.8 ± 4.1 31.6 ± 6.2 101.5 ± 0.6 37.6 ± 9.5 57.4 ± 1.7 55.9 ± 1.7 79.9 ± 6.0hopper-medium-replay 96.6 ± 0.7 38.0 ± 7.0 29.0 ± 6.8 83.6 ± 8.9 56.9 ± 4.6 32.6 ± 8.7 93.7 ± 4.1hopper-medium-expert 19.6 ± 9.4 19.3 ± 3.9 18.9 ± 9.0 28.4 ± 8.6 96.9 ± 8.5 89.3 ± 5.9 102.3 ± 7.7walker2d-medium 72.4 ± 8.6 46.4 ± 8.5 82.3 ± 8.8 54.3 ± 8.6 71.3 ± 4.3 67.9 ± 7.9 81.7 ± 1.2walker2d-medium-replay 82.4 ± 4.7 16.6 ± 9.3 70.0 ± 10.0 54.6 ± 9.6 58.1 ± 7.9 52.4 ± 6.8 75.3 ± 4.6walker2d-medium-expert 12.5 ± 9.3 24.5 ± 4.8 6.5± 8.2 40.1 ± 9.4 103.5 ± 7.8 107.5 ± 1.0 110.7 ± 0.2Ant-v2 HalfCheetah -v2 Walker2d -v2medium (100 steps) random (20 steps)medium (100 steps)medium (100 steps)random (60 steps)random (100 steps)6.87 m21.72 cm8.81 m 0.67 m0.42 cm 33.75 cmFigure 3: Qualitative visualizations of the learned skills in Ant, HalfCheetah, and Walker domains.We can see that the ant learns to move in different directions, the half-cheetah learns to flip uprightand run at different speeds, and the walker learns to walk at different speeds.adversarial-based methods deliberately depict unlabeled offline data as sub-optimal and tag expertdata as optimal, which can easily lead to a biased policy/discriminator. For example, we can see thatORIL’s performance deteriorates severely on all medium-expert tasks. On the contrary, CLUE doesnot bring in any adversarial objectives and is therefore much more robust.5.3 Unsupervised Offline RL TasksConsidering the reward-free offline RL settings, here we expect to learn diverse skills/policies from thestatic offline data. To do so, we first use K-means clustering to cluster similar transitions and treat thetransitions in each of the clustered classes as separate expert data. For each class, we then use CLUEto learn the corresponding intrinsic reward function and label intrinsic rewards for the rest of theunlabelled data to learn corresponding skills. We visualize the learned diverse behaviors in Figure 3(see videos in the supplementary material). 
We can see that our CLUE+K-means implementation provides a promising solution to unsupervised offline RL: it successfully produces a diverse set of skills and thus illustrates the potential for skill discovery from static offline data.
5.4 Ablation Studies
Ablating the calibration regularization. The key idea of CLUE is to encourage learning calibrated expert embeddings, thus providing an effective embedding space when computing intrinsic rewards. Here we verify this intuition by ablating the calibration regularization $\mathcal{L}_{\mathrm{calibr}}$ in Equation 7 and directly using the CVAE to learn the embedding space. We show ablation results in Figure 4. We can observe that the naive CVAE implementation (ablating the calibration regularization $\mathcal{L}_{\mathrm{calibr}}$) suffers from a significant performance drop compared with our CLUE, indicating that our calibration regularization is effective in promoting embedding alignment and producing task-oriented intrinsic rewards.
Figure 4: Ablating the effect of the calibration regularization. We can see that ablating the regularization generally causes performance degradation, implying that our calibration regularization can indeed encourage task-oriented intrinsic rewards. u: umaze. m: medium. l: large. d: diverse. p: play.
Varying the amount of unlabeled offline data. To assess the effectiveness of our intrinsic rewards under data-scarce scenarios, here we vary the amount of unlabeled offline data available in offline IL settings (see the appendix for results in sparse-reward settings). In Figure 5, we show the normalized results with a small amount of D4RL data ranging from 5% to 25%. We can see that across a range of dataset sizes, CLUE performs well and achieves competitive performance compared to the state-of-the-art offline IL baselines.
Figure 5: Ablating the number of unlabeled offline data. To compare CLUE with the baseline methods, we shade the area below the scores of our CLUE. We can see that CLUE generally outperforms the offline IL baselines across a range of domains and dataset sizes in each domain.
6 Discussion and Limitations
In this paper, we propose the calibrated latent guidance (CLUE) algorithm, which labels intrinsic rewards for unlabeled offline data.
CLUE is an effective method and can provide pluggable intrinsic rewardscompatibility with any offline RL algorithms that require reward-annotated data for offline learning.We have demonstrated that CLUE can effectively improve the spare-reward offline RL performance,achieve competitive performance compared with the state-of-the-art baselines in offline IL tasks, andlearn diverse skills in unsupervised offline RL settings.Future work and limitations. Our CLUE formulation assumes the presence of a large batch ofoffline data and expert trajectories. This setting is common in many robotic domains. However,we also point out that in some tasks, expert trajectories may be state-only and not contain actions.Also, there may be transition dynamics shifts between the expert data and the unlabelled offlinedata in some cross-domain problem settings. Thus, future directions could investigate the state-onlyexpert data and the cross-domain intrinsic rewards. In view of the fact that our intrinsic rewards aremeasured in the latent space, it is feasible to apply our CLUE approach in both the state-only andcross-domain scenarios, as long as we impose the corresponding regularization on the learned latentembeddings. For example, we can directly add cross-domain constraints [ 61,62] over our calibrationregularization, making CLUE suitable for cross-domain tasks. In summary, we believe future workto this work will contribute to a general reward-relabeling framework capable of labeling effectiveintrinsic rewards and addressing more realistic robotic tasks e.g., discovering diverse robot behaviorsfrom static manipulation data and transferring offline cross-domain behaviors in sim2real tasks.8AcknowledgmentsThis work was supported by the National Science and Technology Innovation 2030 - Major Project(Grant No. 2022ZD0208800), and NSFC General Program (Grant No. 62176215).References[1]D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakr-ishnan, V . Vanhoucke, et al. Scalable deep reinforcement learning for vision-based roboticmanipulation. In Conference on Robot Learning , pages 651–673. PMLR, 2018.[2]A. Singh, L. Yang, K. Hartikainen, C. Finn, and S. Levine. End-to-end robotic reinforcementlearning without reward engineering. arXiv preprint arXiv:1904.07854 , 2019.[3]K. Zhu and T. Zhang. Deep reinforcement learning based mobile robot navigation: A review.Tsinghua Science and Technology , 26(5):674–691, 2021.[4]D. Shah, B. Eysenbach, N. Rhinehart, and S. Levine. Rapid exploration for open-worldnavigation with latent goal models. arXiv preprint arXiv:2104.05859 , 2021.[5]L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback.Advances in Neural Information Processing Systems , 35:27730–27744, 2022.[6]S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review,and perspectives on open problems. arXiv preprint arXiv:2005.01643 , 2020.[7]Z. Zheng, J. Oh, M. Hessel, Z. Xu, M. Kroiss, H. Van Hasselt, D. Silver, and S. Singh. Whatcan learned intrinsic rewards capture? In International Conference on Machine Learning , pages11436–11446. PMLR, 2020.[8]S. Hansen, G. Desjardins, K. Baumli, D. Warde-Farley, N. Heess, S. Osindero, and V . Mnih.Entropic desired dynamics for intrinsic control. Advances in Neural Information ProcessingSystems , 34:11436–11448, 2021.[9]A. P. Badia, P. Sprechmann, A. Vitvitskyi, D. Guo, B. Piot, S. 
Kapturowski, O. Tieleman,M. Arjovsky, A. Pritzel, A. Bolt, et al. Never give up: Learning directed exploration strategies.arXiv preprint arXiv:2002.06038 , 2020.[10] B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is all you need: Learning skillswithout a reward function. arXiv preprint arXiv:1802.06070 , 2018.[11] B. Ibarz, J. Leike, T. Pohlen, G. Irving, S. Legg, and D. Amodei. Reward learning from humanpreferences and demonstrations in atari. Advances in neural information processing systems ,31, 2018.[12] D. S. Brown and S. Niekum. Deep bayesian reward learning from preferences. arXiv preprintarXiv:1912.04472 , 2019.[13] D. Lindner, M. Turchetta, S. Tschiatschek, K. Ciosek, and A. Krause. Information directedreward learning for reinforcement learning. Advances in Neural Information Processing Systems ,34:3850–3862, 2021.[14] X. Yu, Y . Lyu, and I. Tsang. Intrinsic reward driven imitation learning via generative model. InInternational conference on machine learning , pages 10925–10935. PMLR, 2020.[15] E. Bıyık, D. P. Losey, M. Palan, N. C. Landolfi, G. Shevchuk, and D. Sadigh. Learning rewardfunctions from diverse sources of human feedback: Optimally integrating demonstrations andpreferences. The International Journal of Robotics Research , 41(1):45–67, 2022.9[16] M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifyingcount-based exploration and intrinsic motivation. Advances in neural information processingsystems , 29, 2016.[17] H. Tang, R. Houthooft, D. Foote, A. Stooke, O. Xi Chen, Y . Duan, J. Schulman, F. DeTurck, andP. Abbeel. # exploration: A study of count-based exploration for deep reinforcement learning.Advances in neural information processing systems , 30, 2017.[18] G. Ostrovski, M. G. Bellemare, A. Oord, and R. Munos. Count-based exploration with neuraldensity models. In International conference on machine learning , pages 2721–2730. PMLR,2017.[19] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In International conference on machine learning , pages 2778–2787.PMLR, 2017.[20] Y . Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation.arXiv preprint arXiv:1810.12894 , 2018.[21] D. Pathak, D. Gandhi, and A. Gupta. Self-supervised exploration via disagreement. InInternational conference on machine learning , pages 5062–5071. PMLR, 2019.[22] R. Sekar, O. Rybkin, K. Daniilidis, P. Abbeel, D. Hafner, and D. Pathak. Planning to explorevia self-supervised world models. In International Conference on Machine Learning , pages8583–8592. PMLR, 2020.[23] R. Houthooft, X. Chen, Y . Duan, J. Schulman, F. De Turck, and P. Abbeel. Vime: Variationalinformation maximizing exploration. Advances in neural information processing systems , 29,2016.[24] L. Lee, B. Eysenbach, E. Parisotto, E. Xing, S. Levine, and R. Salakhutdinov. Efficientexploration via state marginal matching. arXiv preprint arXiv:1906.05274 , 2019.[25] Y . Seo, L. Chen, J. Shin, H. Lee, P. Abbeel, and K. Lee. State entropy maximization withrandom encoders for efficient exploration. In International Conference on Machine Learning ,pages 9443–9454. PMLR, 2021.[26] H. Liu and P. Abbeel. Behavior from the void: Unsupervised active pre-training. Advances inNeural Information Processing Systems , 34:18459–18473, 2021.[27] D. Strouse, M. Kleiman-Weiner, J. Tenenbaum, M. Botvinick, and D. J. Schwab. Learning toshare and hide intentions using information regularization. 
Advances in neural informationprocessing systems , 31, 2018.[28] A. Goyal, R. Islam, D. Strouse, Z. Ahmed, M. Botvinick, H. Larochelle, Y . Bengio, andS. Levine. Infobot: Transfer and exploration via the information bottleneck. arXiv preprintarXiv:1901.10902 , 2019.[29] Z. Zhuang, K. Lei, J. Liu, D. Wang, and Y . Guo. Behavior proximal policy optimization. arXivpreprint arXiv:2302.11312 , 2023.[30] Y . Lai, J. Liu, Z. Tang, B. Wang, H. Jianye, and P. Luo. Chipformer: Transferable chip placementvia offline decision transformer. ICML , 2023. URL https://openreview.net/pdf?id=j0miEWtw87 .[31] J. Liu, H. Zhang, Z. Zhuang, Y . Kang, D. Wang, and B. Wang. Design from policies: Conser-vative test-time adaptation for offline policy optimization. arXiv preprint arXiv:2306.14479 ,2023.10[32] J. Liu, Z. Zhang, Z. Wei, Z. Zhuang, Y . Kang, S. Gai, and D. Wang. Beyond ood state actions:Supported cross-domain offline reinforcement learning. arXiv preprint arXiv:2306.12755 , 2023.[33] H. Xu, X. Zhan, H. Yin, and H. Qin. Discriminator-weighted offline imitation learning fromsuboptimal demonstrations. In International Conference on Machine Learning , pages 24725–24742. PMLR, 2022.[34] D. A. Pomerleau. Efficient training of artificial neural networks for autonomous navigation.Neural computation , 3(1):88–97, 1991.[35] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predictionto no-regret online learning. In Proceedings of the fourteenth international conference on artifi-cial intelligence and statistics , pages 627–635. JMLR Workshop and Conference Proceedings,2011.[36] J. Liu, L. He, Y . Kang, Z. Zhuang, D. Wang, and H. Xu. Ceil: Generalized contextual imitationlearning. arXiv preprint arXiv:2306.14534 , 2023.[37] I. Kostrikov, O. Nachum, and J. Tompson. Imitation learning via off-policy distribution matching.arXiv preprint arXiv:1912.05032 , 2019.[38] M. Sun, A. Mahajan, K. Hofmann, and S. Whiteson. Softdice for imitation learning: Rethinkingoff-policy distribution matching. arXiv preprint arXiv:2106.03155 , 2021.[39] F. Jarboui and V . Perchet. Offline inverse reinforcement learning. arXiv preprintarXiv:2106.05068 , 2021.[40] G. Swamy, S. Choudhury, J. A. Bagnell, and S. Wu. Of moments and matching: A game-theoretic framework for closing the imitation gap. In International Conference on MachineLearning , pages 10022–10032. PMLR, 2021.[41] B. He, Z. Sun, J. Liu, S. Zhang, X. Chen, and C. Ma. Offline imitation learning with variationalcounterfactual reasoning. arXiv preprint arXiv:2310.04706 , 2023.[42] M. Laskin, D. Yarats, H. Liu, K. Lee, A. Zhan, K. Lu, C. Cang, L. Pinto, and P. Abbeel. Urlb:Unsupervised reinforcement learning benchmark. arXiv preprint arXiv:2110.15191 , 2021.[43] V . Campos, A. Trott, C. Xiong, R. Socher, X. Giró-i Nieto, and J. Torres. Explore, discover andlearn: Unsupervised discovery of state-covering skills. In International Conference on MachineLearning , pages 1317–1327. PMLR, 2020.[44] Q. Tian, G. Wang, J. Liu, D. Wang, and Y . Kang. Independent skill transfer for deep reinforce-ment learning. In Proceedings of the Twenty-Ninth International Conference on InternationalJoint Conferences on Artificial Intelligence , pages 2901–2907, 2021.[45] C. Salge, C. Glackin, and D. Polani. Empowerment–an introduction. Guided Self-Organization:Inception , pages 67–114, 2014.[46] A. Sharma, S. Gu, S. Levine, V . Kumar, and K. Hausman. Dynamics-aware unsuperviseddiscovery of skills. arXiv preprint arXiv:1907.01657 , 2019.[47] J. Liu, H. Shen, D. Wang, Y . 
Kang, and Q. Tian. Unsupervised domain adaptation withdynamics-aware rewards in reinforcement learning. Advances in Neural Information ProcessingSystems , 34:28784–28797, 2021.[48] Y . Kang, D. Shi, J. Liu, L. He, and D. Wang. Beyond reward: Offline preference-guided policyoptimization. arXiv preprint arXiv:2305.16217 , 2023.[49] G. Berseth, D. Geng, C. Devin, C. Finn, D. Jayaraman, and S. Levine. Smirl: Surpriseminimizing rl in dynamic environments. arXiv preprint arXiv:1912.05510 , 2019.11[50] Q. Tian, J. Liu, G. Wang, and D. Wang. Unsupervised discovery of transitional skills for deepreinforcement learning. In International Joint Conference on Neural Networks (IJCNN) , pages1–8. IEEE, 2021.[51] J. Liu, D. Wang, Q. Tian, and Z. Chen. Learn goal-conditioned policy with intrinsic motiva-tion for deep reinforcement learning. In Proceedings of the AAAI Conference on ArtificialIntelligence , volume 36, pages 7558–7566, 2022.[52] D. P. Kingma and M. Welling. Auto-encoding variational bayes, 2022.[53] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit q-learning,2021.[54] Y . Luo, Z. Jiang, S. Cohen, E. Grefenstette, and M. P. Deisenroth. Optimal transport for offlineimitation learning. arXiv preprint arXiv:2303.13971 , 2023.[55] S. Reddy, A. D. Dragan, and S. Levine. Sqil: Imitation learning via reinforcement learning withsparse rewards. arXiv preprint arXiv:1905.11108 , 2019.[56] S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning. Advancesin neural information processing systems , 34:20132–20145, 2021.[57] K. Zolna, A. Novikov, K. Konyushkova, C. Gulcehre, Z. Wang, Y . Aytar, M. Denil, N. de Freitas,and S. Reed. Offline learning from demonstrations and unlabeled experience. arXiv preprintarXiv:2011.13885 , 2020.[58] D. Garg, S. Chakraborty, C. Cundy, J. Song, and S. Ermon. Iq-learn: Inverse soft-q learning forimitation. Advances in Neural Information Processing Systems , 34:4028–4039, 2021.[59] G.-H. Kim, S. Seo, J. Lee, W. Jeon, H. Hwang, H. Yang, and K.-E. Kim. Demodice: Offlineimitation learning with supplementary imperfect demonstrations. In International Conferenceon Learning Representations , 2022.[60] Y . J. Ma, A. Shen, D. Jayaraman, and O. Bastani. Smodice: Versatile offline imitation learningvia state occupancy matching. arXiv e-prints , pages arXiv–2202, 2022.[61] T. Franzmeyer, P. Torr, and J. F. Henriques. Learn what matters: cross-domain imitationlearning with task-relevant embeddings. Advances in Neural Information Processing Systems ,35:26283–26294, 2022.[62] J. Liu, H. Zhang, and D. Wang. Dara: Dynamics-aware reward augmentation in offlinereinforcement learning. arXiv preprint arXiv:2203.06662 , 2022.[63] J. Wu, H. Wu, Z. Qiu, J. Wang, and M. Long. Supported policy optimization for offlinereinforcement learning, 2022.127 Additional ResultsFigure 6: Ablating the number of unlabeled trajectories. We investigate the effect of unlabeledtrajectories on the performance. CLUE’s performance generally outperforms OTR. Further, we cansee that CLUE approximates the vanilla IQL method (with D4RL rewards) more closely and caneven outperform IQL given such a lack of offline data ( ≤25%).Varying the amount of unlabeled offline data. Here we vary the amount of unlabeled offline dataavailable for sparse-reward settings. Figure 6 shows that adding more unlabeled data improves theperformance of both CLUE and OTR. However, across a range of offline imitation tasks, CLUEshows better performance compared to OTR. 
We also plot the performance curve of naive IQL with(reward-labeled) offline data in Figure 6. We can see that with extremely limited offline data ( ≤25%),CLUE approaches IQL’s performance more closely on the halfcheetah-medium task, and can evenoutperform IQL on the remaining three tasks.Table 5: Using 10% of D4RL data, normalized scores (mean and standard deviation) of CLUE andbaselines on antmaze tasks using one (K=1) and ten (K=10) expert demonstrations. The experttrajectories are picked from the chosen 10% dataset. The highest score in each setting is highlighted.Dataset IQL OTR (K=1) CLUE (K=1) OTR (K=10) CLUE (K=10)umaze 73.7 ± 7.6 71.4 ± 8.5 75.4 ± 6.1 75.1 ± 8.3 82.5 ± 5.1umaze-diverse 21.6 ± 9.8 33.0 ± 8.5 45.4 ± 10.4 30.8 ± 13.5*58.6 ± 9.5*medium-play 23.0 ± 8.9 38.7 ± 11.1 30.5 ± 13.9 37.3 ± 10.0 36.6 ± 12.7medium-diverse 54.9 ± 7.8 60.9 ± 8.7 64.4 ± 8.9 59.2 ± 9.2 57.8 ± 8.6large-play 5.8 ± 3.8 15.0 ± 8.4 12.0 ± 6.5 13.9 ± 5.8 29.4 ± 8.4large-diverse 7.0 ± 3.6 3.3± 3.6 0.9± 1.5 9.0± 5.9 9.7± 4.5antmaze-v2 total 186.0 222.3 228.6 225.3 274.6*Only two successful trajectories are in the chosen sub-dataset and the results belong to K=2.Varying the number of expert trajectories. Using 10% of D4RL data, we vary the number ofexpert trajectories for sparse-reward offline RL settings in Table 5. We compare our method withbaseline methods (IQL and OTR) when only one expert trajectory is selected. For comparison, wetrain IQL over the naive sparse-reward D4RL data and train OTR over the relabeled D4RL dataset(using optimal transport to compute intrinsic rewards and employing IQL to learn offline RL policy).We can find that in 7 out of 12 AntMaze tasks across, our CLUE outperforms the baseline OTR.Meanwhile, compared to naive IQL (with sparse rewards), our CLUE implementation generallyoutperforms better than IQL. This means that with only a single expert trajectory, we can completelyreplace the sparse rewards with our intrinsic reward in offline RL tasks, which can even achievehigher performance in such a data-scarce scenario (10% of D4RL data).Varying the value of the temperature factor in intrinsic rewards. In Tables 6 and 7, we presentthe results on AntMaze tasks when we vary the value of the temperature factor cin intrinsic rewards.We can find that CLUE can generally achieve a robust performance across a range of temperaturefactors. In Figure 7, we further analyze our intrinsic reward distribution following OTR. 
We can findthat CLUE’s reward prediction shows a stronger correlation with the ground-truth rewards from thedataset, which can be served as a good reward proxy for downstream offline RL algorithms.13Table 6: Normalized scores (mean) when varying the temperature factor cwith a single experttrajectory (K=1).c= 1 c= 2 c= 3 c= 4 c= 5 c= 6 c= 7 c= 8 c= 9 c= 10umaze 89.4 89.96 91.84 90.88 91.96 92.12 91.68 90.72 90.92 91.2umaze-diverse 43.08 46.76 43.16 43.76 42.36 56.72 52.6 59.04 66.48 68medium-play 60.4 63.2 65.2 68.92 68.04 75.32 71.76 74.12 72.2 73.64medium-diverse 57.8 63.28 63.24 62.04 66.04 70.12 73 74.56 69.4 72.92large-play 34.16 44.84 46.88 50.68 52.72 53.08 53.64 55.2 53.52 55.8large-diverse 27.04 33.96 43.16 46.8 44.88 47.44 47.44 49.92 47.28 47.11Table 7: Normalized scores (mean) when varying the temperature factor cwith 10 expert trajectories(K=10).c= 1 c= 2 c= 3 c= 4 c= 5 c= 6 c= 7 c= 8 c= 9 c= 10umaze 87.88 90 91.08 90.96 91.16 91 89.92 89.44 90.72 91.92umaze-diverse 45.64 40.32 41.04 38.8 39.52 51.64 51.2 57.11 69.92 71.68medium-play 58.72 64.2 68.24 71.44 69.92 75.56 74.12 76.2 75.8 76.48medium-diverse 60.36 57.04 62.12 64.24 63.56 61.44 62.36 64.64 65.47 69.2large-play 48.24 45.8 51.56 48.2 48.4 52.36 49.91 50.58 52.28 51.87large-diverse 36.32 46.08 48.64 50.84 51.16 52.44 53.6 50.92 51.4 53.688 Experimental Details8.1 Hyperparameters for CV AE ImplementationWe list the hyperparameters used for training CV AE models in MuJoCO locomotion, AntMaze, andAdroit tasks. The other CV AE hyperparameters are kept the same as those used in Wu et al. [63].Table 8: Hyperparameters for training CV AE.MuJoCo Locomotion Antmaze Adroitfull-data partial-data full-data partial-data full-dataHidden dim 128 128 512 512 128Batch size 128 128 256 256 128Numbers of iterations 104104105105105Learning rate 10−410−410−310−310−4Weight for Lcalibr 0.1 0.1 0.8 0.8 0.1Spare-reward setting:Number of expert trajectories 3 3 5 5 38.2 Hyperparameters for our IQL ImplementationThe IQL hyperparameters employed in this paper are consistent with those utilized by Kostrikovet al. [53] in their offline implementation. It is important to note that IQL incorporates a procedurefor rescaling rewards within the dataset, which allows for the use of the same hyperparametersacross datasets that differ in quality. As CLUE generates rewards offline, we similarly apply rewardscaling following the IQL methodology. For the locomotion, adroit, and ant tasks, we rescale rewardswith1000max_return −min_return. To regularize the policy network for the chosen sub-dataset, we similarlyintroduce Dropout with a rate of 0.2.MuJoCo locomotion and Adroit tasks. We set the learning rate 10−3forhopper-medium-expertdataset (K=10) and 3×10−4for the rest of tasks. We run IQL for 1M gradient steps and averagemean returns over 10 random seeds and 10 evaluation trajectories for each seed.Antmaze tasks. We set the learning rate 5×10−4forumaze-diverse dataset (K=1 and K=10) and3×10−4for the rest of tasks. For medium-play dataset (K=1 and K=10), medium-diverse dataset14Figure 7: Qualitative comparison of the learned intrinsic rewards with different temperature factors.(K=1), and large-play dataset (K=10), we set the dropout rate 0.2 to gain a better performance. Werun IQL for 1M gradient steps for the full dataset and 0.3M for the partial dataset, respectively.8.3 Hyperparameters in K-meansWe use CLUE to learn diversity skills on Ant-v2 ,HalfCheetah-v2 , and Walker2d-v2 . 
K-means, an unsupervised learning method, is employed to cluster the offline transitions $\{(\mathbf{s}, \mathbf{a}, \mathbf{s}')\}$ from each dataset into 100 classes, and each class is taken as a separate "expert". Specifically, we use the KMeans method from the sklearn.cluster API. The hyperparameters are set as follows: n_clusters = 100, random_state = 1, n_init = 1, max_iter = 300.
8.4 Offline IL Baselines
SQIL proposes to learn a soft Q-function where the reward labels for the expert transitions are one and the reward labels for the non-expert transitions are zero. The offline implementation of SQIL is adapted from the online SAC agent provided by Garg et al. [58], and we combine it with TD3+BC.
IQ-Learn advocates directly learning a Q-function by contrasting the expert data with the data collected in the replay buffer, thus avoiding the intermediate step of reward learning. In our experiments, we used the official PyTorch implementation5 with the configuration recommended by Garg et al. [58].
ORIL assumes the offline dataset is a mixture of both optimal and suboptimal data and learns a discriminator to distinguish between them. The output of the discriminator is then used as the reward label to optimize the offline policy toward expert behaviors. We borrowed the TD3+BC implementation reproduced by Ma et al. [60] in our experiments.
ValueDICE is the earliest DICE-based IL algorithm that minimizes the divergence of the state-action distribution between the learning policy and the expert data. The code used in the experiments is the official TensorFlow implementation6 released by Kostrikov et al. [37].
DemoDICE proposes to optimize the policy via a state-action distribution matching objective with an extra offline regularization term. We report the performance of DemoDICE using the TensorFlow implementation7 by Kim et al. [59], with the hyperparameters set the same as those in the paper.
SMODICE aims to solve the problem of learning from observation and thus proposes to minimize the divergence of state distributions. In addition, Ma et al. [60] extend the choice of divergence so that the agent is more general. The code and configuration used in our experiments are from the official repository8.
5 https://github.com/Div99/IQ-Learn
6 https://github.com/google-research/google-research/tree/master/value_dice
7 https://github.com/KAIST-AILab/imitation-dice
9 In What Cases Should We Expect CLUE to Help vs. to Hurt?
If the distribution of the expert data is unimodal, our method can learn effectively while providing effective intrinsic rewards. On the contrary, if the distribution of the expert data is multi-modal, explicitly binding the embeddings of the expert data together would instead hinder the learning of $\mathbf{z}$, resulting in an ineffective intrinsic reward for policy learning.
10 Learned Diverse Skills
To encourage diverse skills from reward-free offline data, we cluster the offline transitions into 100 classes using K-means and take each class as a separate "expert". Then, we use the expert data from the different classes to label the original reward-free data and train an IQL policy to learn the corresponding skills.
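For completeness, the clustering step with the settings of Section 8.3 is a single scikit-learn call, as sketched below. The feature construction (concatenating $(\mathbf{s}, \mathbf{a}, \mathbf{s}')$ per transition) is our reading of "cluster the offline transitions" and should be treated as an assumption; only the listed KMeans hyperparameters come from the text.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_transitions(dataset):
    """Assign each (s, a, s') transition to one of 100 classes with the reported K-means settings."""
    feats = np.concatenate(
        [dataset["observations"], dataset["actions"], dataset["next_observations"]],
        axis=-1)  # assumed transition features; the exact encoding is not spelled out in the text
    kmeans = KMeans(n_clusters=100, random_state=1, n_init=1, max_iter=300)
    return kmeans.fit_predict(feats)

# labels = cluster_transitions(reward_free_dataset)  # feed into the skill-discovery loop of Section 5.3
```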
In this section, we illustrate all the learned skills by CLUE.
8 https://github.com/JasonMa2016/SMODICE
10.1 Learned Diverse Skills from Ant-Medium Dataset
Figure 8: Visualization of unsupervised skills learned from the ant-medium dataset.
10.2 Learned Diverse Skills from Ant-Random Dataset
Figure 9: Visualization of unsupervised skills learned from the ant-random dataset.
10.3 Learned Diverse Skills from Halfcheetah-Medium Dataset
Figure 10: Visualization of unsupervised skills learned from the halfcheetah-medium dataset.
10.4 Learned Diverse Skills from Halfcheetah-Random Dataset
Figure 11: Visualization of unsupervised skills learned from the halfcheetah-random dataset.
10.5 Learned Diverse Skills from Walker2d-Medium Dataset
Figure 12: Visualization of unsupervised skills learned from the walker2d-medium dataset.
10.6 Learned Diverse Skills from Walker2d-Random Dataset
Figure 13: Visualization of unsupervised skills learned from the walker2d-random dataset. |
Q8BGLiWn2X | PLEX: Making the Most of the Available Datafor Robotic Manipulation PretrainingGarrett Thomas∗Ching-An Cheng†Ricky Loynd†Felipe Vieira Frujeri†Vibhav Vineet†Mihai Jalobeanu‡Andrey Kolobov†Abstract: A rich representation is key to general robotic manipulation, but ex-isting approaches to representation learning require large amounts of multimodaldemonstrations. In this work we propose PLEX, a transformer-based architec-ture that learns from a small amount of task-agnostic visuomotor trajectories anda much larger amount of task-conditioned object manipulation videos — a typeof data available in quantity. PLEX uses visuomotor trajectories to induce a la-tent feature space and to learn task-agnostic manipulation routines, while diversevideo-only demonstrations teach PLEX how to plan in the induced latent featurespace for a wide variety of tasks. Experiments showcase PLEX’s generalizationon Meta-World and SOTA performance in challenging Robosuite environments.In particular, using relative positional encoding in PLEX’s transformers greatlyhelps in low-data regimes of learning from human-collected demonstrations.Keywords: Robot learning, Robotic manipulation, Visuomotor representations1 IntroductionTransformers [1] have lead to breakthroughs in training large-scale general representations for com-puter vision (CV) and natural language processing (NLP) [2], enabling zero-shot adaptation and fastfinetuning [3]. At the same time, despite impressive progress, transformer-based representationshaven’t shown the same versatility for robotic manipulation. Some attribute this gap to the lack ofsuitable training data for robotics [3]. We argue instead that data relevant to training robotic ma-nipulation models is copious but has important structure that most existing training methods ignoreand fail to leverage. These insights lead us to propose a novel transformer-based architecture, calledPLEX , that is capable of effective learning from realistically available robotic manipulation datasets.We observe that robotics-relevant data falls into three major categories: (1)Video-only data, whichcontain high-quality and potentially description-annotated demonstrations for an immense varietyof tasks but have no explicit action information for a robot to mimic; (2)Data containing matchingsequences of percepts and actions , which are less plentiful than pure videos and don’t necessarilycorrespond to meaningful tasks [4], but capture valuable correlations between a robot’s actions andchanges in the environment and are easy to collect on a given robot; (3)Small sets of high-qualitysensorimotor demonstrations for a target task in a target environment. Thus, a scalable model ar-chitecture for robotic manipulation must be able to learn primarily from videos, while being extradata-efficient on sensorimotor training sequences and the small amount target demonstrations.PLEX, the PLanning- EXecution architecture we propose, is designed to take advantage of datasources of these types. A PLEX model has two major transformer-based components: (I)a task-conditioned observational planner that, given a task specification and an estimate of the current∗Stanford University, gwthomas@stanford.edu . Work done partly while at Microsoft Research.†Microsoft Research, {chinganc,riloynd,fevieira,vivineet,akolobov }@microsoft.com‡dexman.ai, mihai@dexman.ai . 
Work done partly while at Microsoft Research.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.world state, determines the next state to which the robot should attempt to transition, and (II)anexecutor that, having received the desired next state from the planner, produces an action that shouldlead there from the current state. The executor is trained by optimizing an inverse dynamics loss overexploratory sensorimotor data of the aforementioned category (2), while the planner is trained byminimizing a loss of its autoregressive predictions computed with respect to video-only trajectoriesof category (1). The target-task data of category (3)can be optionally used to efficiently finetune theplanner, the executor, or both.We make three design choices that greatly help the data efficiency of PLEX’s training:•Learning to plan in the observation embedding space. Rather than generating videos of proposedtask execution using, e.g., stable diffusion as in Du et al. [5], PLEX learns to plan and execute inthe low-dimensional space of observation embeddings.•Asymmetric learning of the embedding space. The observation embedding space in which the ex-ecutor and the planner operate is induced by training the observation encoder using the executor’slossonly (or even by employing a frozen feature-rich encoder such as R3M [6]). The planner’sgradients don’t affect the encoder, which reduces the cost of PLEX training.•Relative positional encodings. We adopt the relative positional encodings [7] in PLEX. We em-pirically show that in robotic manipulation the relative positional encodings significantly improvetraining efficiency from human-collected data compared with the absolute positional encodings [1]commonly used in the literature on transformers.Most approaches that use video-only demonstrations for pretraining in robotic manipulation pro-duce purely visual representations (see, e.g., [6, 8–10]). The majority of algorithms that producesensorimotor models need most or all of the video demonstrations to be accompanied by actionsequences that generated the videos, a requirement that holds only for a small fraction availablemanipulation data [11–17]. Few approaches have a dedicated trainable planning component; e.g.[16, 18–21] plan in a skill space, which PLEX can be modified to do as well. Conceptually, PLEXfalls under the paradigm of learning from observations (LfO), but existing LfO approaches don’thave multitask zero-shot planning capability [22–25] or demostrate it only in low-dimensional envi-ronments across similar tasks [26]. Of the works that have used transformers for robotic manipula-tion [14, 17, 21, 27, 28], only Brohan et al. [17] have analyzed their data efficiency, and none havelooked at positional embeddings as a way to improve it. Overall, the closest approach to PLEX isthe concurrently proposed UniPi [5]. It also has counterparts of PLEX’s planner and executor, butits planner operates using diffusion in the image space [29], which is expensive both datawise andcomputationally, and may fail to model manipulation-relevant 3D object structure consistently [29].A more extensive discussion of prior work is provided in Appendix A.We experimentally show that PLEX’s planner-executor design can effectively exploit the structureof realistically available robotic manipulation data to achieve efficient learning. 
On the multi-taskMeta-World [30] benchmark, despite pretraining mostly on video data, PLEX exhibits strong zero-shot performance on unseen tasks and can be further improved by finetuning on a small amount ofvideo-only demonstrations. We empirically show on the challenging Robosuite/Robomimic [31, 32]benchmark that, contrary to conclusions from NLP [7], the use of relative positional encodingssignificantly improves the data efficiency of PLEX learning from human-collected demonstrations.2 Problem statement and relevant concepts2.1 Problem statementWe consider the problem of learning a generalist task-conditioned policy for goal-directed objectmanipulation. Namely, we seek a policy that can control a robotic manipulator to successfullyaccomplish tasks that the robot may not have encountered during the policy training process; such apolicy formally can be viewed a solution to a task-conditioned partially observable Markov decisionprocess (POMDP) described in Appendix B. In practice, learning a generalist policy that performswell on a broad distribution of tasks zero-shot is very challenging, as the coverage and amount2of publicly available training data are limited. Therefore, in this work we consider a two-phasedlearning process: (1) pretraining, during which a generalist policy is trained, and (2) finetuning,during which this policy is adapted to a target task.2.2 Data for training robotic manipulation modelsWe consider three broad groups of datasets relevant to training robotic manipulation systems:4Multi-task video demonstrations ( Dmtvd).Being the most abundant category, it comprises datacollections ranging from general YouTube videos to curated benchmarks such as Ego4D [33], EpicKitchens [34, 35], and YouTube-8M [36] showing anagent – either a robot or a person – perform-ing a meaningful object manipulation task with an end-effector. This data contains demonstration-quality sequences of video observations and descriptions of tasks they accomplish, but not the actionsequences whose execution generated these videos.Visuomotor trajectories ( Dvmt).These trajectories consist of paired sequences of observations androbots’ actions. Although some of them may be high-quality demonstrations of specific tasks, e.g.,as in the Bridge Dataset [15], many of these trajectories are generated by activities that most peoplewill not find meaningful, e.g., grabbing random objects in a tray, as in the RoboNet [4]. Since nostrong quality, quantity, or task association requirements are imposed on Dvmtdata, it is relativelyeasy to collect for any target embodiment and environment.Target-task demonstrations ( Dttd). This is the most scarce but also most desirable data category,since it encompasses high-quality trajectories for a specific task in question, ideally collected on thetarget embodiment (robot). Note, however that we don’t require that these demonstrations be visuo-motor. In fact, our experiments show that PLEX needs only video demonstrations for finetuning tolearn a high-quality policy for a target task.A key data assumption we make in this work is that |Dttd| ≪ |D vmt| ≪ |D mtvd|.2.3 Transformers and positional encodingsA transformer-based architecture consists of several specially structured self-attention layers and,in general, maps an input set(often called a context ) ofKelements (called tokens ) to an outputof the same size K[1]. In most applications, such as language translation, transformers need tomap ordered sets (i.e. 
sequences) to other ordered sets, and therefore add special vectors calledpositional encodings to each input element to identify its position in a sequence. These encodingscan be learned as part of transformer’s training or be hand-crafted.The most common scheme is the absolute positional encoding , where each position in the trans-former’s K-sized context gets a positional vector [1]. Some transformers, e.g., Chen et al. [37], usewhat we call a global positional encoding . It is similar to the absolute one, but assigns a separatevector to each position in the entire input sequence rather than just the K-sized context, up to somemaximum length T≫K. Finally, models based on Transformer-XL [7, 14, 17], instead conditionthe attention computation on the relative positions between different pairs of input tokens withina context. In this work, we argue that on robotic manipulation finetuning datasets that consist ofsmall numbers of human-gathered demonstrations, relative positional encoding is significantly moredata-efficient than absolute or global one.3 PLEX architecture and training3.1 IntuitionPLEX (shown in Figure 1) separates the model into two transformer-based submodules: 1)aplannerthat plans in the observation embedding space based on a task specification, and 2)anexecutor thattakes the embeddings of the historical and the planned future observations and outputs an action tocontrol the robot.4Static image datasets, e.g., ImageNet, aren’t treated by PLEX in a special way and we don’t discuss it here,but can be used to pretrain PLEX’s image encoders.3Figure 1: PLEX architecture. This diagram illus-trates the information flow during PLEX training, de-scribed in Section 3.2. PLEX is optimized using theplanner’s loss LPL(computation shown with black ar-rows↑), and the executor’s loss LEX (computationshown with gray arrows ↑). The symbols ‘ =’ and‘=’ denote stopgrads, where backpropagation is halted.Each input modality mis embedded using a modality-specific encoder φm. Video demonstration embed-dings ̃g, ̃I1:T, and (optionally) ̃R1:Tare used to trainthe planner over the embedding space using the pre-diction loss LPL. Visuomotor trajectory embeddings ̃I1:T, ̃p1:T, ̃a1:Tare passed to the executor to computethe inverse dynamics loss LEX. Note that if the imageencoder φIisn’t frozen, LEX’s gradients will updateφI. In contrast, the planner’s own loss LPLnever af-fectsφI(see stopgrad symbol =).This design is motivated by the structure ofDmtvd,Dvmt, andDttddataset categories, whichas we explain below make them suitable forthree complementary learning objectives.1.Learning to execute state transitions. Thevisuomotor trajectories from Dvmt, collectedon the target robotic manipulator or asimilar one, show the robot how to ex-ecute a wide variety of state transitions.By sampling an observation-action tuple⟨ot−H, . . . , o t, at, ot+L⟩, the agent can learnto infer atfrom ot−H, . . . , o t, and ot+Lus-inginverse dynamics , where tis the cur-rent time step, His an observation historylength, and Lis a lookahead parameter.2.Learning to plan for tasks. In order torecommend a meaningful action at eachstep, inverse dynamics inference needs the(embedding of) the desired future observa-tion. 
Determining the desired future obser-vation given a task description is somethingthat can be learned from multi-task video-only data Dmtvd, since this data shows whatprogress towards a successful completion ofa specified task should look like.3.Improving target-task performance.While learning to plan and execute ondiverse DmtvdandDvmtdata can result in arobotic manipulation foundation model [3]with strong zero-shot performance (seeSection 4.2), on many tasks it may befar from perfect. Small datasets Dttdofhigh-quality target-task demonstrations(e.g., through teleoperation) can provideadditional grounding to the target domain tofurther improve a pretrained model.3.2 ArchitectureFollowing the above intuitions, we trainPLEX’s executor using data Dvmtand PLEX’splanner using data Dmtvd, in addition to a small dataset Dttdof target-task trajectories (which,if available, can be used to train both the planner and executor). Specifically, let τ=g, R 1, I1, p1, a1. . . , R T, IT, pT, aT=g, R 1:T, I1:T, p1:T, a1:Tdenote a trajectory. Here, gis atask specification, Itis a tuple of camera image observations, ptis a proprioceptive state, atis anaction, and Rtis a return-to-go at time t, i.e.Rt=PTt′=trt′, where rt′is the instantaneous rewardat time t′. The length Tcan vary across trajectories. As Figure 1 shows, PLEX processes theseinput modalities using corresponding encoders φg,φI,φp,φa, and φRto obtain an embedded se-quence ̃g, ̃R1:T, ̃I1:T, ̃p1:T, ̃a1:T. When a modality is missing, it is replaced by trainable placeholdervectors during embedding. Missing modalities are common in robotic manipulation datasets; e.g.,few datasets have rewards. Since PLEX’s executor and planner are designed to be trainable on task-agnostic visuomotor Dvmtdata and task-conditioned video-only demonstrations Dmtvd, respectively,each of these components is specialized to operate only on the (embeddings of) modalities avail-able in their prevalent training data. Per Figure 1, task description and return embeddings ̃gand4 ̃R1:Tdon’t get routed to the executor, since they are missing from Dvmtdata. Similarly, the planneronly receives ̃g, ̃I1:Tand, optionally, ̃R1:Tembeddings, since they are present in Dmtvddata. Thisseparation holds also at the deployment time, when all modalities are available.Planner The planner’s sole purpose is to determine where the agent should go in the observationembedding space. As shown in Figure 1, given embeddings ̃g, ̃I1:Tof a task-conditioned video-onlytraining demonstration, the planner outputs a sequence ˆI1+L:T+Lof embeddings corresponding tothe observations the agent should ideally see Lsteps in the future from its current time step; Lis ahyperparameter. The planner’s training minimizes the prediction lossLPL( ̃g, ̃R1:T, ̃I1:T) =PT+Lt=1+L∥ ̃It−ˆIt∥22. (1)where we set ̃It= ̃ITfort=T+1, ..., T +L. Crucially, LPL’s gradients don’t backpropagate intothe encoders φgandφI. This is to prevent the collapse of the image embedding space (denoted asEo); note the stopgrad symbols on LPL’s computation paths in Figure 1. The embedding space Eoeither comes from pretrained encoders or is learned with inverse dynamics during executor training.Executor Like the planner, the executor has a specific role at the deployment time. Given theobservation-action sequence o1:t, a1:tso far and the target observation embedding ˆIt+Lproducedby the planner, the executor infers an action ˆatfor the current step. 
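Before turning to the executor in more detail, the prediction loss of Eq. (1) above can be written compactly as follows. The sketch assumes the demonstration's frame embeddings are supplied as a tensor and were computed without letting gradients reach the encoder; tensor shapes and variable names are illustrative.

import torch

def planner_prediction_loss(obs_emb, pred_emb, lookahead):
    # obs_emb:  (T, d) embeddings of the video demonstration's frames
    # pred_emb: (T, d) planner outputs; pred_emb[t] predicts the embedding `lookahead` steps ahead
    T = obs_emb.shape[0]
    # The target for step t is the embedding `lookahead` steps in the future; past the end of
    # the demonstration the target is held at the final embedding (I_t = I_T for t > T).
    idx = torch.arange(lookahead, T + lookahead).clamp(max=T - 1)
    targets = obs_emb[idx].detach()            # stopgrad: this loss never updates the encoder
    return ((pred_emb - targets) ** 2).sum()   # Eq. (1); averaging over time is an equivalent choice of scale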
This inference step should bedone in a task-agnostic way, as the task knowledge is already incorporated in the ˆIt+Lprediction ofthe planner. For a trajectory from Dvmt, we optimize the executor via the inverse dynamics lossLEX(I1:T, p1:T,ˆI1+L:T+L, a1:T,) =PT−1t=1∥at−ˆat∥22 (2)A major difference between LEXandLPLoptimization is that the former’s gradients can backprop-agate into the encoders φI,φo,φp, and φa: the computation path for LEXthrough these encodersin Figure 1 doesn’t have a stopgrad. This allows executor training to shape the embedding space Eo.Relative positional encoding Like the Decision Transformer (DT) [37], PLEX’s planner and ex-ecutor transformers are derived from GPT-2. However, DT’s use of global positional encodingimplicitly assumes that all training trajectories have the same length T. PLEX, in contrast, usesrelative encoding from Dai et al. [7] as the default. As we show empirically, in robotic manipulationsettings where tasks are usually goal-oriented and training demonstrations vary a lot in length, globalpositional embedding performs poorly and even the fixed absolute positional encoding common inNLP [1] performs much better. Especially, for human-collected demonstrations where variability issignificant, our experimental results show that relative encoding [7] perform significantly better.3.3 Training PLEXTraining PLEX generally involves both pretraining and finetuning, though the experiments in Sec-tion 4.2 show that pretraining alone already gives PLEX solid zero-shot performance.Pretraining PLEX consists of two sub-stages:1. Pretraining the executor by optimizing the LEXloss (Equation (2)) over a Dvmtdataset.2. Pretraining the planner by optimizing the LPLloss (Equation (1)) over a Dmtvddataset.If the observation encoders are expected to be trained or finetuned by the inverse dynamics lossLEX, rather than pretrained and frozen beforehand, it is critical for executor pretraining to be donebefore training the planner. Indeed, the planner is expected to make predictions in the observationencoders’ embedding space, which will change if the inverse dynamics loss affects the encoders. Ifthe encoders are frozen from the start, however, the pretraining stages can proceed asynchronously.Finetuning involves adapting PLEX using a target-task demonstration dataset Dttd. As with anyfinetuning, this involves deciding which part of PLEX to adapt.SinceDttdcan be viewed both as a small Dmtvdand a small Dvmtdataset, it can be used to trainany component of PLEX—executor, planner, and observation encoders. As with pretraining, if Dttdis used for finetuning the encoders, it is critical to complete their finetuning before finetuning the5Figure 2: PLEX’s generalization experiments. The confidence intervals are computed with 10 seeds.planner. In Section 4.2, we show that finetuning just the last layer of the planner’s transformer,which constitutes 5% of the parameters of the PLEX instance in the experiment, is sufficient forsignificantly boosting a pretrained PLEX’s performance.Dttdcan also be employed for optimizing a behavior cloning loss LBC. This amounts to trainingthe planner, executor, and encoders simultaneously by having PLEX predict Dttdtrajectories’s ac-tions from the same trajectories’ observations, and allowing the action prediction loss gradients tobackpropagate through the entire PLEX model, to its the inputs. 
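The two pretraining sub-stages above can be summarized by the schematic training loop below. It assumes a model object exposing the two losses and two dataloaders for D_vmt and D_mtvd; those names, and the choice of AdamW, are placeholders rather than the released training code, while the learning rate, weight decay, and epoch counts follow Table 5.

import torch

def pretrain(model, d_vmt_loader, d_mtvd_loader, epochs=10, lr=5e-4, weight_decay=1e-5):
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)

    # Sub-stage 1: executor (inverse dynamics, Eq. 2) on task-agnostic visuomotor data D_vmt.
    # This runs first because, when the observation encoder is trained by this loss, the
    # planner's prediction targets live in the encoder's (still changing) embedding space.
    for _ in range(epochs):
        for batch in d_vmt_loader:
            loss = model.executor_loss(batch)
            opt.zero_grad(); loss.backward(); opt.step()

    # Sub-stage 2: planner (Eq. 1) on task-conditioned video-only demonstrations D_mtvd.
    # Its loss is computed on stop-gradded embeddings, so the encoder stays fixed here.
    for _ in range(epochs):
        for batch in d_mtvd_loader:
            loss = model.planner_loss(batch)
            opt.zero_grad(); loss.backward(); opt.step()

If the observation encoder is frozen from the start (e.g., a pretrained R3M), the two sub-stages no longer need to run in this order and can proceed asynchronously.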
The experiments in Section 4.3demonstrate the efficiency of BC-based finetuning thanks to the use of a relative position encoding.4 ExperimentsWe conduct two sets of experiments to answer the following questions: (i)Does PLEX pretrainedon task-agnostic sensorimotor data and task-annotated video data generalize well to downstreamtasks? (ii)How does the use of relative positional encodings affect PLEX’s policy quality? Ap-pendix C provides the details about our PLEX implementation.54.1 Benchmarks and training dataMeta-World: Meta-World [30] is a collection of 50 tasks featuring a Sawyer arm. We use Meta-World-v2 with image observations (see details in Appendix D.1). We consider the ML45 splitconsisting of 45 training and 5 target tasks ( door-lock ,door-unlock ,hand-insert ,bin-picking , andbox-close ). We use these 5 target tasks for evaluation. Meta-World comes with high-quality scriptedpolicies for all tasks. To get video demonstration data (Dmtvd), we use these scripted policies togenerate 100 successful video-only demonstrations for each of the 45 training tasks, i.e., |Dmtvd|=4500 . To generate visuomotor trajectories (Dvmt), for the 5 target tasks’ environments, we add zero-mean Gaussian noise with standard deviation 0.5to the actions of the scripted policies and recordthe altered actions. We collect 50 trajectories per task, i.e., |Dvmt|= 250 . Finally, for target-taskdemonstrations (Dttd), we employ the original scripted policies to produce 75 demonstrations pertarget task and sample 10 of them in a finetuning experiment run, i.e., |Dttd|= 10 .Robosuite: Robosuite benchmark [31], compared Meta-World, has robotic manipulation tasks witha significantly more complicated dynamics and action space. We use 9 of its tasks involving asingle robot arm (Panda) ( Lift,Stack ,Door ,NutAssemblyRound ,NutAssemblySquare ,PickPlace-Bread PickPlaceCan ,PickPlaceMilk , and PickPlaceCereal ). Robosuite’s details are provided inAppendix D.1. Importantly, the training data for Robosuite was collected from human demonstra-tions, notgenerated by scripted policies as in Meta-World. See Appendix D.4 for details.4.2 Generalization experimentsHere we focus on pretraining PLEX with multi-task Meta-World data. The results are shown inFigure 2. We train a 16,639,149-parameter PLEX instance (including the ResNet-18-based imageencoder) from scratch with random initialization. We use the success rate on the 5 target tasksas the performance metric. For baselines, we experiment with PLEX with a frozen ResNet-50-based R3M [6], an observational representation pretrained on the large Ego4D dataset [33]. Wedenote it as PLEX+R3M ; in Figure 2, Pretr. PLEX+R3M was first pretrained on multitask data andthen finetuned on a target task, while PLEX+R3M, BC was trained only on a single target task’sdata from the start. In addition, we use an adapted Learning from Play (LfP) approach [11]. The5We implement PLEX using the GPT-2 of the DT codebase [37] but without return conditioning.6hyperparameters and details can found Appendices C and D. In summary, the experimental resultsshow that PLEX can perform well without seeing a single sensorimotor expert demonstration.PLEX demonstrates zero-shot generalization capabilities Figure 2 shows that PLEX pretrainedon as few as 4500 video demonstrations ( Dmtvd) from the training environments and 250 dynamicstrajectories ( Dvmt) from the target environments (denoted as Pretr. PLEX, zero-shot in Figure 2)exhibits good downstream performance zero-shot . 
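As a concrete illustration of how the exploratory D_vmt trajectories of Section 4.1 can be produced, the sketch below rolls out a scripted policy while perturbing and recording its actions. Here env and scripted_policy are stand-ins for a Meta-World target-task environment and its scripted expert (a classic Gym-style step interface is assumed), and clipping the perturbed action to the [-1, 1] range is an added assumption.

import numpy as np

def collect_noisy_trajectory(env, scripted_policy, noise_std=0.5, max_steps=500):
    obs = env.reset()
    trajectory = []
    for _ in range(max_steps):
        action = scripted_policy(obs)
        noisy_action = action + np.random.normal(0.0, noise_std, size=np.shape(action))
        noisy_action = np.clip(noisy_action, -1.0, 1.0)   # clipping to the action range is our assumption
        next_obs, reward, done, info = env.step(noisy_action)
        trajectory.append((obs, noisy_action))            # percept-action pairs; no task label or reward needed
        obs = next_obs
        if done:
            break
    return trajectory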
To demonstrate that this performance is reallydue to planning learned from video-only data as opposed to the executor inadvertently exploitingbiases in the data, we consider a PLEX variation (denoted as Pretr. EX only, zero-shot ) where weonly pretrain the executor (onDvmt), not the planner.6The results of Pretr. EX only, zero-shot reflecta level of performance one can get with knowledge contained in the dynamics data Dvmtalone. Pretr.EX only, zero-shot underperforms Pretr. PLEX, zero-shot , which shows the importance of learningfromDmtvdvia PLEX’s planner.Our main baseline for zero-shot generalization is Learning from Play (LfP) [11], one of the fewexisting methods able to generalize zero-shot from data as low-quality as Dmtvd. LfP has planningcapability but doesn’t have a way to use either the video-only data Dmtvdor the target-task demon-strations Dttd, and performs which gives PLEX a large advantage.PLEX can be finetuned effectively using only a few video-only demonstrations We furthershow that finetuning only 5% of PLEX’s parameters (the last transformer layer of the planner) on just10video-only demonstrations for a given task significantly boosts PLEX’s success rate there. For all5 downstream tasks, this policy outperforms Pretr. EX only, zero-shot by≥2×. The improvementis drastic especially in the case of hand-insert-v2 ,bin-picking-v2 , and box-close-v2 .Video-only demonstrations is all PLEX needs during finetuning Interestingly, we find that fulldemonstrations (with both video and action sequences) don’t increase PLEX’s performance beyondvideo-only ones. This can seen from the experimental results of Pretr. PLEX, ft. on 10 full demos ,where we finetune PLEX (the action head and last transformer layer of PLEX’s planner, executor;≈11% of PLEX) on 10 full(sensorimotor) demonstrations for each task. We think this is due toPLEX’s image encoder being pretrained only on observations from Dvmtand frozen during fine-tuning. Because of this, finetuning couldn’t help the encoder learn any extra features for modelinginverse dynamics over the observation space region covered by Dttd, even if such features wouldimprove PLEX’s performance.The issue of impoverished observation coverage in Dvmtdataset can be addressed by using a frozenencoder pretrained on an independent large dataset, as the results of PLEX+R3M, BC and of pre-trained PLEX+R3M in Figure 2 suggest. There, PLEX’s R3M encoder was never trained on anyMeta-World observations but enables PLEX to perform reasonably well.The results of Pretr. PLEX+R3M andPLEX+R3M, BC in Figure 2 illuminate two other as-pects of using observation-only representations like R3M: (1) The sensorimotor representation thatPLEX learns on top of R3M clearly helps generalization – pretrained PLEX+R3M performs muchbetter than PLEX+R3M, BC , which was trained only on a single task’s data, despite pretrainedPLEX+R3M seeing just video-only demonstrations at finetuning. (2) Fully frozen R3M some-what limits PLEX’s performance – PLEX variants that pretrained their own encoder outperformPLEX+R3M on 3 of 5 tasks.4.3 Positional encoding experimentsIn the Meta-World experiments, all training data was generated by scripted policies. In real settings,most such data is generated by people teleoperating robots or performing various tasks themselves.A hallmark of human-generated datasets compared to script-generated ones is the demonstrationvariability in the former: even trajectories for the same task originating in the same state tend to bedifferent. 
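The finetuning regime described above, which adapts only the last transformer layer of the planner (about 5% of the model's parameters), can be implemented with a simple freezing helper such as the one below; model.planner.blocks is a hypothetical attribute name for the planner's list of transformer layers.

def freeze_all_but_last_planner_layer(model):
    # `model.planner.blocks` is a placeholder for the planner's stack of transformer layers.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.planner.blocks[-1].parameters():
        p.requires_grad = True
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"finetuning {trainable} of {total} parameters ({100.0 * trainable / total:.1f}%)")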
In this section, we show that in low-data regimes typical of finetuning on human-generated6At run time we feed the embedding of the task’s goal image as the predictions that the executor conditionson (since no planner is trained).7Figure 3: Data efficiency of PLEX’s relative positional encoding in single-task mode on Robosuite’ssingle-arm tasks with |Dttd|varying from 5 to 75. PLEX (with relative encodings) in most cases sig-nificantly outperforms and at worst matches the performance of its version PLEX-abs with absolutepositional encodings. Both versions significantly outperform DT.demonstrations, PLEX with relative positional encoding yields superior policies for a given amountof training data than using absolute encoding. The results are in Figure 3.Baselines, training and evaluation protocol. To analyze data efficiency and compare to priorresults on Robosuite, we focus on an extreme variant of finetuning – training from scratch. Foreach of the 9 Robosuite tasks and each of the evaluated encodings, we train a separate 36,242,769-parameter PLEX instance using only that task’s Dttddataset of full sensorimotor human-generateddemonstrations. We compare PLEX with relative positional encoding to PLEX with absolute oneand to two flavors of the Decision Transformer (DT) [37], which use global positional embedding.Appendix D.5 and Figure 3 provide more details about model training dataset collection, and thebaselines. For each task/dataset size/approach, we train on 10 seeds.Results. As Figure 3 shows, PLEX learns strong policies using at most 75 demonstrations, despitehaving to train a 36M-parameter model including randomly initialized vision models for tasks, mostof which have complex dynamics and broad initial state distributions. Moreover, PLEX with relativepositional encoding (denoted simply as PLEX in the legend) outperforms the alternatives by as muchas 20 percentage points (pp) on Robosuite’s human-generated demonstration data while never losingto them. In particular, DT-global(+rew) and, especially, DT-global perform far worse of both PLEXandPLEX-abs . Since all models share most of the implementation and are trained similarly whenPLEX andPLEX-abs run in BC mode, we attribute PLEX’s advantage only to the combined effectof using human-generated training data and positional encodings. We have also trained PLEX andPLEX-abs for Meta-World’s 5 target tasks from the previous experiment for various amounts of theavailable – scripted – demonstrations for these tasks and noticed no significant performance differ-ence between PLEX and PLEX-abs on any task. This provides additional evidence that the utility ofrelative positional enconding manifests itself specifically on human-generated demonstration data.In fact, relying on relative positional encoding allows PLEX to achieve state-of-the art performanceon all Robosuite tasks in this experiment, as we show and analyze empirically in Appendix D.4.5 Conclusion and limitationsWe have introduced PLEX, a transformer-based sensorimotor model architecture that can be pre-trained on robotic manipulation-relevant data realistically available in quantity. Our experimentalresults show that PLEX demonstrate strong zero-shot performance and can be effectively finetunedwith demonstrations to further boost its performance. 
In particular, PLEX shows superior perfor-mance on human-collected demonstrations because of its usage of relative positional encoding.Limitations We believe that PLEX has great potential as a model architecture for general roboticmanipulation, but in most of our experiments so far, the training data came from the same roboton which the trained model was ultimately deployed. In reality, most available multi-task videodemonstration data Dmtvdis generated by other robots or even people. This can cause a mismatchbetween the demonstrations and the target robot’s capabilities and setups. Planning hierarchicallyfirst in the skill space as, e.g., in Lynch et al. [38], and then in the observation embedding spacemay address this issue. In addition, so far we have trained PLEX on simulated data. The eventualgoal, and indeed a significant motivation for this work, would be to pretrain on internet-scale “in-the-wild” video datasets [29, 33, 36]. Also, with the rise of powerful LLMs such as Ouyang et al.[39], switching PLEX to language for task specification can facilitate generalization across tasks.8References[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polo-sukhin. Attention is all you need. In NeurIPS , 2017.[2] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan,P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-V oss, G. Krueger, T. Henighan,R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler,M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever,and D. Amodei. Language models are few-shot learners. In NeurIPS , 2020.[3] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein,J. Bohg, A. Bosselut, E. Brunskill, E. Brynjolfsson, S. Buch, D. Card, R. Castellon, N. Chat-terji, A. Chen, K. Creel, J. Q. Davis, D. Demszky, C. Donahue, M. Doumbouya, E. Durmus,S. Ermon, J. Etchemendy, K. Ethayarajh, L. Fei-Fei, C. Finn, T. Gale, L. Gillespie, K. Goel,N. Goodman, S. Grossman, N. Guha, T. Hashimoto, P. Henderson, J. Hewitt, D. E. Ho, J. Hong,K. Hsu, J. Huang, T. Icard, S. Jain, D. Jurafsky, P. Kalluri, S. Karamcheti, G. Keeling, F. Khani,O. Khattab, P. W. Koh, M. Krass, R. Krishna, R. Kuditipudi, A. Kumar, F. Ladhak, M. Lee,T. Lee, J. Leskovec, I. Levent, X. L. Li, X. Li, T. Ma, A. Malik, C. D. Manning, S. Mirchan-dani, E. Mitchell, Z. Munyikwa, S. Nair, A. Narayan, D. Narayanan, B. Newman, A. Nie, J. C.Niebles, H. Nilforoshan, J. Nyarko, G. Ogut, L. Orr, I. Papadimitriou, J. S. Park, C. Piech,E. Portelance, C. Potts, A. Raghunathan, R. Reich, H. Ren, F. Rong, Y . Roohani, C. Ruiz,J. Ryan, C. R ́e, D. Sadigh, S. Sagawa, K. Santhanam, A. Shih, K. Srinivasan, A. Tamkin,R. Taori, A. W. Thomas, F. Tram `er, R. E. Wang, W. Wang, B. Wu, J. Wu, Y . Wu, S. M. Xie,M. Yasunaga, J. You, M. Zaharia, M. Zhang, T. Zhang, X. Zhang, Y . Zhang, L. Zheng, K. Zhou,and P. Liang. On the opportunities and risks of foundation models. arXiv , 2021.[4] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, andC. Finn. Robonet: Large-scale multi-robot learning. In CoRL , 2019.[5] Y . Du, M. Yang, B. Dai, H. Dai, O. Nachum, J. B. Tenenbaum, D. Schuurmans, and P. Abbeel.Learning universal policies via text-guided video generation. arXiv , 2023.[6] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3M: A universal visual represen-tation for robot manipulation. In CoRL , 2022.[7] Z. Dai, Z. Yang, Y . Yang, J. G. 
Carbonell, Q. V . Le, and R. Salakhutdinov. Transformer-xl:Attentive language models beyond a fixed-length context. In A. Korhonen, D. R. Traum, andL. M `arquez, editors, ACL, 2019.[8] L. Yen-Chen, A. Zeng, S. Song, P. Isola, and T.-Y . Lin. Learning to see before learning to act:Visual pre-training for manipulation. In ICRA , 2020.[9] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from ”in-the-wild” human videos. In RSS, 2021.[10] I. Radosavovic, T. Xiao, S. James, P. Abbeel, J. Malik, and T. Darrell. Real world robotlearning with masked visual pre-training. In CoRL , 2022.[11] C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data. InRSS, 2021.[12] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-Z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learn-ing, pages 991–1002, 2021.[13] Z. Mandi, F. Liu, K. Lee, and P. Abbeel. Towards more generalizable one-shot visual imitationlearning. In ICRA , 2022.9[14] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez,Y . Sulsky, J. Kay, J. T. Springenberg, T. Eccles, J. Bruce, A. Razavi, A. Edwards, N. Heess,Y . Chen, R. Hadsell, O. Vinyals, M. Bordbar, and N. de Freitas. A generalist agent, 2022.[15] F. Ebert, Y . Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, andS. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets.InRSS, 2022.[16] S. Nasiriany, T. Gao, A. Mandlekar, and Y . Zhu. Learning and retrieval from prior data forskill-based imitation learning. arXiv preprint arXiv:2210.11435 , 2022.[17] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. J. Joshi,R. Julian, D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manju-nath, I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao,M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran,V . Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich. RT-1:Robotics transformer for real-world control at scale. arXiv , 2022.[18] K. Hakhamaneshi, R. Zhao, A. Zhan, P. Abbeel, and M. Laskin. Hierarchical few-shot imita-tion with skill transition models. arXiv preprint arXiv:2107.08981 , 2021.[19] A. Ren, S. Veer, and A. Majumdar. Generalization guarantees for imitation learning. In Con-ference on Robot Learning , pages 1426–1442. PMLR, 2021.[20] B. Xihan, O. Mendez, and S. Hadfield. Skill-il: Disentangling skill and knowledge in multitaskimitation learning. arXiv preprint arXiv:2205.03130 , 2022.[21] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitationlearning over unstructured data. IEEE Robotics and Automation Letters , 7(4):11205–11212,2022.[22] A. Nair, D. Chen, P. Agrawal, P. Isola, P. Abbeel, J. Malik, and S. Levine. Combining self-supervised learning and imitation for vision-based rope manipulation. In ICRA , 2017.[23] I. Radosavovic, X. Wang, L. Pinto, and J. Malik. State-only imitation learning for dexterousmanipulation. In IROS , 2021.[24] D. Pathak, P. Mahmoudieh, G. Luo, P. Agrawal, D. Chen, Y . Shentu, E. Shelhamer, J. Malik,A. A. Efros, and T. Darrell. Zero-shot visual imitation. In ICLR , 2018.[25] B. Baker, I. Akkaya, P. Zhokhov, J. Huizinga, J. Tang, A. 
Ecoffet, B. Houghton, R. Sampedro,and J. Clune. Video pretraining (vpt): Learning to act by watching unlabeled online videos. InNeurIPS , 2022.[26] H. Xu, L. Jiang, J. Li, and X. Zhan. A policy-guided imitation approach for offline reinforce-ment learning. In arXiv , 2022.[27] S. Dasari and A. Gupta. Transformers for one-shot visual imitation. In CoRL , 2020.[28] H. Kim, Y . Ohmura, and Y . Kuniyoshi. Transformer-based deep imitation learning for dual-arm robot manipulation. In 2021 IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , pages 8965–8972. IEEE, 2021.[29] J. Ho, W. Chan, C. Saharia, J. Whang, R. Gao, A. Gritsenko, D. P. Kingma, B. Poole,M. Norouzi, D. J. Fleet, and T. Salimans. Imagen Video: High definition video generationwith diffusion models. arXiv , 2022.10[30] T. Yu, D. Quillen, Z. He, R. Julian, A. Narayan, H. Shively, A. Bellathur, K. Hausman, C. Finn,and S. Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforce-ment learning. In CoRL , 2019.[31] Y . Zhu, J. Wong, A. Mandlekar, and R. Mart ́ın-Mart ́ın. Robosuite: A modular simulationframework and benchmark for robot learning, 2020.[32] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrationsfor robot manipulation. In CoRL , 2021.[33] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang,M. Liu, X. Liu, M. Martin, T. Nagarajan, I. Radosavovic, S. K. Ramakrishnan, F. Ryan,J. Sharma, M. Wray, M. Xu, E. Z. Xu, C. Zhao, S. Bansal, D. Batra, V . Cartillier, S. Crane,T. Do, M. Doulaty, A. Erapalli, C. Feichtenhofer, A. Fragomeni, Q. Fu, A. Gebreselasie,C. Gonzalez, J. Hillis, X. Huang, Y . Huang, W. Jia, W. Khoo, J. Kolar, S. Kottur, A. Kumar,F. Landini, C. Li, Y . Li, Z. Li, K. Mangalam, R. Modhugu, J. Munro, T. Murrell, T. Nishiyasu,W. Price, P. R. Puentes, M. Ramazanova, L. Sari, K. Somasundaram, A. Southerland, Y . Sug-ano, R. Tao, M. V o, Y . Wang, X. Wu, T. Yagi, Z. Zhao, Y . Zhu, P. Arbelaez, D. Crandall,D. Damen, G. M. Farinella, C. Fuegen, B. Ghanem, V . K. Ithapu, C. V . Jawahar, H. Joo, K. Ki-tani, H. Li, R. Newcombe, A. Oliva, H. S. Park, J. M. Rehg, Y . Sato, J. Shi, M. Z. Shou,A. Torralba, L. Torresani, M. Yan, and J. Malik. Ego4d: Around the world in 3,000 hours ofegocentric video, 2022.[34] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti,J. Munro, T. Perrett, W. Price, and M. Wray. Scaling egocentric vision: The epic-kitchensdataset. In European Conference on Computer Vision (ECCV) , 2018.[35] D. Damen, H. Doughty, G. M. Farinella, , A. Furnari, J. Ma, E. Kazakos, D. Moltisanti,J. Munro, T. Perrett, W. Price, and M. Wray. Rescaling egocentric vision: Collection, pipelineand challenges for epic-kitchens-100. International Journal of Computer Vision (IJCV) , 130:33–55, 2022.[36] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and S. Vijaya-narasimhan. Youtube-8m: A large-scale video classification benchmark, 2016.[37] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas,and I. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. InNeurIPS , 2021.[38] C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and P. Sermanet. Learninglatent plans from play. In coRL , 2019.[39] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,K. 
Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welin-der, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions withhuman feedback. arXiv preprint arXiv:2203.02155 , 2022.[40] E. Chane-Sane, C. Schmid, and I. Laptev. Learning video-conditioned policies for unseenmanipulation tasks. arXiv preprint arXiv:2305.06289 , 2023.[41] S. Karamcheti, S. Nair, A. S. Chen, T. Kollar, C. Finn, D. Sadigh, and P. Liang. Language-driven representation learning for robotics. arXiv preprint arXiv:2302.12766 , 2023.[42] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine. One-shot imitation fromobserving humans via domain-adaptive meta-learning. In RSS, 2018.11[43] A. Zhou, E. Jang, D. Kappler, A. Herzog, M. Khansari, P. Wohlhart, Y . Bai, M. Kalakrishnan,S. Levine, and C. Finn. Watch, try, learn: Meta-learning from demonstrations and reward. InICLR , 2020.[44] J. Li, T. Lu, X. Cao, Y . Cai, and S. Wang. Meta-imitation learning by watching video demon-strations. In International Conference on Learning Representations , 2022.[45] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine. One-shot visual imitation learning viameta-learning. In CoRL , 2017.[46] Z. Mandi, P. Abbeel, and S. James. On the effectiveness of fine-tuning versus meta-reinforcement learning. CoRL 2022 Workshop on Pre-training Robot Learning , 2022.[47] A. Singh, A. Yu, J. Yang, J. Zhang, A. Kumar, and S. Levine. Cog: Connecting new skills topast experience with offline reinforcement learning. In CoRL , 2020.[48] K. Schmeckpeper, O. Rybkin, K. Daniilidis, S. Levine, and C. Finn. Reinforcement learningwith videos: Combining offline observations with interaction. In CoRL , 2020.[49] D. Venuto, S. Yang, P. Abbeel, D. Precup, I. Mordatch, and O. Nachum. Multi-environmentpretraining enables transfer to action limited datasets. arXiv preprint arXiv:2211.13337 , 2022.[50] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. De-hghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth16x16 words: Transformers for image recognition at scale. In ICLR , 2021.[51] M. Janner, Q. Li, and S. Levine. Reinforcement learning as one big sequence modeling prob-lem, 2021.[52] J. Shang, K. Kahatapitiya, X. Li, and M. S. Ryoo. Starformer: Transformer with state-action-reward representations for visual reinforcement learning. arXiv preprint arXiv:2110.06206 ,2021.[53] R. Loynd, R. Fernandez, A. Celikyilmaz, A. Swaminathan, and M. Hausknecht. Workingmemory graphs. In ICML , 2020.[54] E. Parisotto, F. Song, J. Rae, R. Pascanu, C. Gulcehre, S. Jayakumar, M. Jaderberg, R. L.Kaufman, A. Clark, S. Noury, M. Botvinick, N. Heess, and R. Hadsell. Stabilizing transformersfor reinforcement learning. In ICML , 2020.[55] C. R. Dance, J. Perez, and T. Cachet. Conditioned reinforcement learning for few-shot imita-tion. In International Conference on Machine Learning , pages 2376–2387. PMLR, 2021.[56] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as I can, not as I say: Grounding language in roboticaffordances. In CoRL , 2022.[57] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR ,2016.[58] S. Levine, C. Finn, T. Darrell, and P. Abbeel. 
End-to-end training of deep visuomotor policies.The Journal of Machine Learning Research , 17(1):1334–1373, 2016.12AppendixA Related workOur work lies at the intersection of scalable multi-task representation learning for robotic manipula-tion, learning from observations, and decision-making using transformers.Representation learning for robotic manipulation. Most approaches of this kind focus on pre-training purely non-motor , usually visual, representation models (see, e.g., [6, 8, 9, 40, 41], andreferences therein). These models don’t output actions; they are meant to be foundations on top ofwhich a policy network is to be learned. Thus, in contrast to PLEX, by themselves they can’t enablezero-shot generalization to unseen tasks even in the limit of pretraining data coverage and amount.However, they are synergistic with PLEX: PLEX can use them as frozen observation encoders, aswe show in Section 4.2 on the example of R3M [6].Techniques that train sensorimotor models – i.e., full-fledged generalist policies, like PLEX – havealso been rising in prominence. Some of them [42–44] are based on meta learning [45]. However,Mandi et al. [46] have shown multi-task pretraining followed by finetuning to be more effectivewhen the task distribution is broad, and several approaches [11–17] follow this training paradigmas does PLEX. At the same time, most of them need pretraining data consisting of high-qualitydemonstrations in the form of matching videos andaction sequences. While the quality requirementcan be relaxed using offline RL, as, e.g., in Singh et al. [47], in order to enable generalization acrossbroad task distributions these sensorimotor training demonstrations need correspondingly broad taskcoverage. This assumption is presently unrealistic and ignores the vast potential of the availablevideo-only data — the weakness PLEX aims to address.Among the sensorimotor representation learning methods that, like PLEX, try to learn from bothvideo-only and sensorimotor data are Schmeckpeper et al. [48], Lynch and Sermanet [11], and Meeset al. [21]. Schmeckpeper et al. [48] consider single-task settings only and require the video-onlyand sensorimotor data to provide demonstrations for the same tasks. Lynch and Sermanet [11]and Mees et al. [21] allow the sensorimotor data to come from exploratory policies rather thantask demonstrations but insist that this data must be generated from meaningful skills , a strongassumption that PLEX avoids.Architecturally, most aforementioned approaches use monolithic models that don’t have separatecomponents for planning and execution like PLEX. Notable exceptions are methods that mine skillsfrom pretraining data, embed them into a latent space, and use the latent skill space for acceleratedpolicy learning of new tasks after pretraining [16, 18–21]. This is akin to planning in the skillspace. PLEX can accommodate this approach hierarchically by having, e.g., a CV AE-based high-level planning model [38] produce a task-conditioned sequence of skill latents and feeding them intoa skill-conditioned planning model that will plan in the observation embedding space. However, inthis work’s experiments, for simplicity PLEX plans in the observation embedding space directly.Learning and imitation from observations (I/LfO) I/LfO has been used in robotic manipulationboth for single-task tabula-rasa policy learning [22, 23] and pretraining [24]. Pathak et al. [24] isrelated to PLEX in spirit but lacks a counterpart of PLEX’s planner. 
As a result, it can’t completean unseen task based on the task’s goal description alone: it needs either a sequence of subgoalimages starting at the robot’s initial state or a sequence of landmarks common to all initial states ofa given task. Beyond robotics, a type of LfO was also employed by Baker et al. [25] and Venutoet al. [49] to pretrain a large sensorimotor model for Minecraft and Atari, respectively. This model,like Pathak et al. [24]’s, doesn’t have a task-conditioned planning capability and is meant to serveonly as a finetunable behavioral prior. Xu et al. [26] investigate an LfO method akin to PLEX inlow-dimensional environments, where it side-steps the question of choosing an appropriate repre-sentation for planning, the associated efficiency tradeoffs, and pretraining a generalizable planningpolicy.13Overall, the closest approach to PLEX is the concurrently proposed UniPi [5]. It also has a universalplanner meant to be pretrained on a large collection of available videos, as well as an executorthat captures inverse dynamics. However, UniPi ignores the issue of data efficiency and plans inthe space of images (observations), using diffusion [29], rather than in the latent space of theirembeddings. This is expensive to learn and potentially detrimental to plan quality. Latents even fromstatically pretrained image encoders are sufficient to capture object manipulation-relevant detailsfrom videoframes [8], whereas diffusion models can easily miss these details or model their 3Dstructure inconsistently [29]. Indeed, despite being conceptually capable of closed-loop control, forcomputational efficiency reasons UniPi generates open-loop plans, while PLEX interleaves planningand execution in a closed loop.Transformers for decision making and their data efficiency. After emerging as the dominantparadigm in NLP [2] and CV [50], transformers have been recently applied to solving general long-horizon decision-making problems by imitation and reinforcement learning [37, 51–54], includingmulti-task settings [55] and robotic manipulation [14, 17, 21, 27, 28]. Mees et al. [21] provideevidence that in robotic manipulation transformers perform better than RNNs [11] while havingmany fewer parameters. Of all these works, only Reed et al. [14] uses relative positional encoding,and only by “inheriting” it with the overall Transformer-XL architecture [7], without motivating itseffectiveness for decision-making.Task specification formats. Task specification modality can significantly influence the generaliza-tion power of models pretrained on multi-task data. Common task conditioning choices are imagesof a task’s goal [15], videos of a task demonstration by a person [12, 42] or by a robot [13, 45], andlanguage descriptions [11, 12, 17, 21, 56]. PLEX is compatible with any of these formats; in theexperiments, we use goal images.B Problem formalizationFormally, the problem PLEX aims to solve can be described as a partially observable Markov deci-sion process (POMDP) ⟨G,S,O, z,A, p, r⟩with a special structure. Here, Gis the space of possiblemanipulation tasks that we may want to carry out the tasks in G.S=P × W is a state spaceconsisting of a space Pof robots’ proprioceptive states (e.g., poses, joint speeds, etc.) and a spaceWof world states. A state s’s proprioceptive part p∈ P is known at execution time and in someof the training data, whereas the world state w∈ W is never observable directly. 
A latent state scan be probabilistically inferred from its observations o∈ O and a state-conditioned distributionz:S → ∆(O)that describes how latent states in Smanifest themselves through observations,where ∆denotes the space of distributions. For robotic manipulation, each observation can consistof several modalities : camera images (possibly from several cameras at each time step), depth maps,tactile sensor readings, etc. The distribution zis unknown and needs to be learned. Ais an actionspace, e.g., the space of all pose changes the robotic manipulator can achieve in 1 time step, andp:S × A → ∆(S)is a transition function describing how executing an action affects a currentstate, which potentially is stochastic. A reward function r:G × S × A × S → Rcan provide addi-tional detail about task execution by assigning a numeric reward to each state transition, e.g., 0 fortransitions to a task’s goal state and -1 otherwise. Our objective is to learn a policy π:G×O |H→Athat maps a history of observations O|Hover the previous Hsteps to an action so as to lead the robotto accomplish a task g∈ G.C PLEX implementation detailsThe transformers PLEX uses as its planner and executor are derived from the GPT-2-based versionof the Decision Transformer (DT) [37]. Like in DT, we feed inputs into PLEX by embedding eachmodality instance (e.g., an image or an action) as a single unit. This is different to the way, e.g.,Gato [14] and Trajectory Transformer [51] do it, by splitting each input into fragments such as imagepatches and embedding each fragment separately.14We condition PLEX’s planner on embeddings of goal images. Low-dimensional inputs (actions andproprioceptive states) are mapped to Rh, the transformer’s h-dimensional input space, using a 1-layer linear neural network. High-dimensional inputs – videoframes from one or several camerasat each time step as well as goal images – are processed using a ResNet-18-based [57] encoderfrom Robomimic [32]. It applies a random crop augmentation to each camera’s image, passes itthrough a separate ResNet18 instance associated with that camera, then passes the result througha spatial softmax layer [58], and finally through a small MLP. The resulting embedding is fed intoPLEX’s planner. If the robot has several cameras, the encoder has a separate ResNet instance foreach. For each time step, PLEX’s planner outputs an h-dimensional latent state representing thepredicted embedding of PLEX’s visual observations ktime steps into the future, where kis a tunableparameter. These latents are then fed directly into the planner as predictions of future observationembeddings. The output latents from the planner transformer are fed through a tanh non-linearity,which outputs action vectors in the [−1,1]range. The hyperparameters can be bound in Tables 4and 5.Our PLEX implementation is available at https://microsoft.github.io/PLEX.D Additional details about the experimentsD.1 Meta-World and Robosuite detailsMeta-World. In our Meta-World-v2 setup, at each time step the agent receives an 84×84imagefrom the environment’s corner camera and the Sawyer arm’s 18D proprioceptive state. The agent’sactions have 4 dimensions, each scaled to the [−1,1]range. Although Meta-World also providesprivileged information about the state of the environment, including the poses of all relevant objects,our PLEX agent doesn’t access it.Robosuite. The observation and action space in our experiments is exactly as in the best-performinghigh-dimensional setup from the Robomimic paper [32]. 
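For reference, the per-camera observation encoder described in Appendix C (random-crop augmentation, a ResNet-18 trunk, a spatial softmax, and a small MLP) could be sketched as follows. This follows the textual description rather than the Robomimic implementation; the crop size, MLP widths, and other details are illustrative assumptions.

import torch
import torch.nn as nn
import torchvision

class SpatialSoftmax(nn.Module):
    """Returns per-channel expected (x, y) keypoint coordinates of a feature map."""
    def forward(self, feat):                        # feat: (B, C, H, W)
        B, C, H, W = feat.shape
        probs = torch.softmax(feat.flatten(2), dim=-1).view(B, C, H, W)
        xs = torch.linspace(-1, 1, W, device=feat.device)
        ys = torch.linspace(-1, 1, H, device=feat.device)
        ex = (probs.sum(dim=2) * xs).sum(dim=-1)    # (B, C) expected x
        ey = (probs.sum(dim=3) * ys).sum(dim=-1)    # (B, C) expected y
        return torch.cat([ex, ey], dim=-1)          # (B, 2C) keypoint features

class CameraEncoder(nn.Module):
    def __init__(self, emb_dim=256, crop_size=76):
        super().__init__()
        self.augment = torchvision.transforms.RandomCrop(crop_size)   # crop size is an assumption
        trunk = torchvision.models.resnet18(weights=None)             # trained from scratch, as in the paper
        self.backbone = nn.Sequential(*list(trunk.children())[:-2])   # keep the spatial feature map
        self.pool = SpatialSoftmax()
        self.head = nn.Sequential(nn.Linear(2 * 512, emb_dim), nn.ReLU(),
                                  nn.Linear(emb_dim, emb_dim))

    def forward(self, img):                          # img: (B, 3, 84, 84)
        # The spatial softmax makes the head size-agnostic, so uncropped frames also work at evaluation time.
        x = self.augment(img) if self.training else img
        return self.head(self.pool(self.backbone(x)))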
Namely, actions are 7-dimensional: 6 di-mensions for the gripper’s pose control (OSC POSE) and 1 for opening/closing it. Visual observa-tions are a pair of 84×84images from agentview (frontal) and eye-in-hand (wrist) cameras at eachstep. Proprioceptive states consist of a 3D gripper position, a 4D quaternion for its orientation, and2D gripper fingers’ position.D.2 Details of the baselines from prior workPLEX +R3M [6]. We experiment with two combinations of PLEX with a frozen ResNet-50-basedR3M [6], an observational representation pretrained on the large Ego4D dataset [33] In these ex-periments, R3M replaces Robomimic’s ResNet-18, and we use versions of our Meta-World Dvmt,Dmtvd, andDttddatasets with 224x224 image observations instead of the 84x84 ones.One combination, PLEX +R3M, BC in Figure 2, learns a single-task policy on 10 full sensorimo-tor demonstrations for each Meta-World target task. It operates in behavior cloning (BC) mode,whereby PLEX is optimized solely w.r.t. its action predictions’ MSE loss, whose gradients back-propagate though the whole network (except the frozen R3M). The other combination, pretr. PLEX+R3M in Figure 2, follows the same PLEX pretraining and finetuning process as described previ-ously, except the R3M encoder stays frozen throughout.Learning from Play [11]. Our final baseline is an adapted Learning from Play (LfP) approach [11].As in Lynch and Sermanet [11], LfP doesn’t use video-only Dmtvddata or target-task demonstrationsDttd; it trains one model for all target tasks from the “play” dataset Dvmtonly. Instead of usinglanguage annotations to separate “meaningful” subsequences in Dvmt, we give LfP the ground-truthknowledge of where trajectories sampled from different tasks begin and end. Accordingly, we don’tuse language during training either. As n the case of PLEX, We train Learning from Play to planconditioned only on goal images and present it with goal images from successful trajectories of thetarget tasks during evaluation.15D.3 Success rate evaluation protocolIn the generalization experiments on Meta-World , all success rate evaluations are done on 50500-step rollouts starting from initial states sampled from the testdistributions of Meta-World’sML45 target tasks ( door-lock ,door-unlock ,hand-insert ,bin-picking , and box-close ).To evaluate the zero-shot success rate of the pretrained EX and PLEX models, we compute theaverage across 50 rollouts generated by these models on each of the 5 target tasks at the end ofpretraining .To evaluate the success rate of the finetuned models, we adopt the procedure from Mandlekar et al.[32]. The finetuning lasts for Nepochs (see Table 5). After each epoch, we measure the averagesuccess rate of the resulting model across 50 rollouts, and record the maximum average success rateacross all finetuning epochs.In the positional encoding experiments on Robosuite , the evaluation protocol is the same as inMeta-World finetuning and in Robomimic [32]: we train each model for Nepochs (see Table 5),after each epoch compute the success rate across 50 trajectories (with 700-step horizon), and recordthe best average success rate across all epochs.D.4 Robosuite datasets and model trainingTraining data for Robosuite was collected from human demonstrations, not generated by scriptedpolicies. 
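The evaluation protocol of Appendix D.3 amounts to the following: estimate the success rate from 50 rollouts after every training epoch and report the best epoch's average. Here run_episode and train_one_epoch are hypothetical helpers standing in for a policy rollout (500 steps on Meta-World, 700 on Robosuite) and one epoch of optimization.

def success_rate(policy, env, run_episode, episodes=50):
    # run_episode is assumed to roll the policy out for the task's horizon and return True on success.
    successes = sum(int(run_episode(policy, env)) for _ in range(episodes))
    return successes / episodes

def best_success_over_epochs(train_one_epoch, eval_fn, num_epochs):
    best = 0.0
    for _ in range(num_epochs):
        train_one_epoch()
        best = max(best, eval_fn())   # record the maximum average success rate across epochs
    return best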
Robosuite provides a keyboard and SpaceMouse interfaces for controlling the Panda armin its environments, and Robomimic supplies datasets of 200 expert (“professional-human”) trajec-tories collected using the SpaceMouse interface for the NutAssemblySquare ,PickPlaceCan , and Lifttasks. For each of the tasks without pre-collected Robomimic datasets, we gather 75 high-qualitytrajectories via Robosuite’s keyboard interface ourselves. We employ Robosuite tasks only for ex-periments that involve training single-task policies from scratch, so all of these trajectories are usedastarget-task demonstration data ( Dttd). Typical demonstration trajectory lengths vary between 50and 300 time steps.Accordingly, to show the difference between relative and absolute positional encodings’ data ef-ficiency, we train PLEX for |Dttd|= 5,10,25,50,and75, sampling Dttd’s from the set of 75demonstrations without replacement. The results are presented in the main paper in Figure 3.ForLift,PickPlaceCan , and NutAssemblySquare , Robomimic [32] similarly provides 200 high-quality human-collected demonstrations each, as well as the results of BC-RNN on subsets ofthese datasets with |Dttd|= 40 ,100,and200. Therefore, for these problems we train PLEX for|Dttd|= 5,10,25,50,75, as well as 40,100,and200. The results are shown in Table 3 and Table 1.The only difference of PLEX model instances for Robosuite from those for Meta-World is the formerhaving twoResNet-18s in the observation encoder, one for the eye-in-hand and one for the agentviewcamera. As for Meta-World, the encoder in the Robosuite is trained from scratch, in order to makeour results comparable to Robomimic’s [32], where models use an identical encoder and also trainit tabula-rasa. In this experiment, we train PLEX in behavior cloning (BC) mode, like Meta-World’ssingle-task PLEX +R3M, whereby PLEX is optimized solely w.r.t. its action predictions’ MSE loss,whose gradients backpropagate though the whole network. All hyperparameters are in Table 5 inAppendix E.We compare PLEX with relative positional encoding to PLEX with absolute one and to two flavorsof the Decision Transformer (DT) [37], which use global positional embedding. One flavor ( DT-global in Figure 3) is trained to condition only on task specification (i.e., goal images), like PLEX.We note, however, that Chen et al. [37] used rewards and returns when training and evaluating DT.Therefore, we also train a return-conditioned version of DT ( DT-global(+rew) in Figure 3), withreturns uniformly sampled from the range of returns in Dttdduring evaluation.16D.5 Additional Robosuite resultsComparison to BC-RNN. Relying on relative positional encoding allows PLEX to achieve state-of-the art performance on all Robosuite tasks in our experiments. To establish this, in addition to thebaselines in Figure 3, we compare to the results of a BC-RNN implementation from the work thatintroduced some of these Robosuite problems [32]. Interestingly, running BC-RNN on the tasks forwhich we have collected demonstrations ourselves resulted in 0 success rate (Table 2), while runningit on tasks with Robomimic-supplied 200 trajectories ( Lift,PickPlaceCan , and NutAssemblySquare )reproduced Mandlekar et al. [32]’s results. PLEX’s comparison to BC-RNN’s results on those prob-lems are in Table 1 in Appendix D.4. 
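The data-efficiency sweep behind Figure 3 (and the larger-dataset runs in Tables 1 and 3) can be organized as below: for each dataset size, draw that many demonstrations without replacement from the task's pool and train one single-task model per seed. demos and train_single_task are placeholders for the task's demonstration list and its trainer.

import random

def data_efficiency_sweep(demos, train_single_task, sizes=(5, 10, 25, 50, 75), num_seeds=10):
    results = {}
    for n in sizes:
        scores = []
        for seed in range(num_seeds):
            rng = random.Random(seed)
            d_ttd = rng.sample(demos, n)                    # |D_ttd| demos drawn without replacement
            scores.append(train_single_task(d_ttd, seed))   # assumed to return the best success rate
        results[n] = scores
    return results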
PLEX and BC-RNN are at par on the easier problems butPLEX performs better on the harder NutAssemblySquare .Lift PickPlaceCan NutAssemblySquare|Dttd| 40 100 200 40 100 200 40 100 200PLEX 100±0100±0100±082.8±8.995.8±2.896.6±4.140.4±6.969.6±4.186.0±3.1BC-RNN 100±0100±0100±083.3±1.997.3±0.998.0±0.929.3±4.164.7±4.182.0±0.0Table 1: Performance of PLEX and BC-RNN on three Robosuite tasks from Mandlekar et al. [32]on|Dttd|= 40,100,and200demonstrations. BC-RNN’s results come from Figure 3b and Table 27in Mandlekar et al. [32]). On the easier LiftandPickPlaceCan , PLEX and BC-RNN are at par, buton the harder NutAssemblySquare PLEX performs better. On the remaining 6 problems for whichwe have gathered the demonstration data, BC-RNN’s success rate is 0 — see Table 2.Door Stack PickPlaceBread PickPlaceMilk PickPlaceCereal NutAssemblyRound|Dttd| 75 75 75 75 75 75PLEX 78.4±9.297.3±2.992.0±4.65 65.6±4.6 72.2±4.4 49.8±5.5BC-RNN 0±0 0±0 0±0 0±0 0±0 0±0Table 2: Performance of PLEX and BC-RNN on the remaining 6 Robotsuite/Robomimic tasks fromFigure 3. PLEX’s numbers are copied from that Figure.Better data efficiency or higher performance? Given Figure 3, one may wonder: does PLEX-abs’s performance plateau at a lower level than PLEX’s with relative positional encoding, or doesPLEX-abs catch up on datasets with |Dttd|>75? For most tasks we don’t have enough training datato determine this, but Table 3 in Appendix D.4 provides an insight for the tasks with Robomimic-supplied 200 training demonstrations. Comparing the performance gaps between PLEX and PLEX-abs on 75-trajectory and 200-trajectory datasets reveals that the gap tends to become smaller. Thesame can be seen for Stack ,PickPlaceCereal ,NutAssemblyRound already at |Dttd|= 75 in Figure 3,suggesting that with sufficient data PLEX-abs may perform as well as PLEX. However, the amountof data for which this happens may not be feasible to collect in practice.Lift PickPlaceCan NutAssemblySquare|Dttd| 75 200 75 200 75 200PLEX 100±0100±080.4±5.796.6±4.164.0±4.686.0±6.1PLEX-abs 100±0100±072.8±8.093.0±4.745.2±5.776.8±4.9Table 3: Performance of PLEX and PLEX-abs as the amount of training data |Dttd|increases from75 to 200 trajectories. The performance gap between the two is narrower on the larger dataset. ForLiftand several other Robosuite tasks, this trend becomes visible for datasets smaller than 200 (seeFigure 3.E Hyperparameters17Parameter nameMeta-World(PLanner /EXecutor )Robosuite(PLanner /EXecutor )# layers 3/3 3/3context size K 30/30 time steps 30/30 time stepshidden dimension 256/256 256/256# transformer heads 4/4 4/4# evaluation episodes 50 50# max. evaluation episode length 500 700Table 4: Hyperparameters of PLEX’s transformer-based planner and executor components for theMeta-World and Robosuite benchmarks. In each case, the planner and executor use the same param-eters, but for most problems the executor’s context length Kcan be much smaller than the planner’swithout loss of performance, e.g., KEX= 10 . For the Decision Transformer on Robosuite, we use4 transformer layers and otherwise the same hyperparameters as for PLEX.Meta-World RobosuiteParameter namepretraining(PLanner /EXecutor )last-layer finetuning(PLanner /EXecutor )behavior cloning(PLanner /EXecutor )lookahead steps 1/ – 1/ – 1/ –learning rate 5·10−45·10−45·10−4batch size 256 256 256weight decay 10−510−510−5# training epochs 10/10 10/10(?) 10# training steps per epoch 250/250 250/250(?) 
500
Table 5: Hyperparameters of PLEX training for the generalization experiments on Meta-World and positional encoding experiments on Robosuite. The former use PLEX in pretraining and finetuning modes; the latter only in behavior cloning mode (training the entire model from scratch for a single target task). In finetuning mode, we adapt only the last transformer layer of the planner and, in one experiment, of the executor as well. The (?) next to the executor's hyperparameters indicates that they were used only in the experiment where the executor was actually finetuned. For the Decision Transformer on Robosuite we use the same hyperparameters as for PLEX.
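The distinction tested throughout these experiments, relative versus absolute (global) positional encoding, can be made concrete with a minimal sketch. This is not PLEX's or DT's actual implementation; the class names, shapes, and the clipped-bias scheme below are illustrative assumptions: a learned embedding of the global timestep stands in for the absolute/global encoding, and a learned bias on attention logits indexed only by timestep distance stands in for a relative scheme.

```python
# Minimal sketch (not PLEX's code) contrasting the two positional encoding schemes.
import torch
import torch.nn as nn


class AbsolutePositionalEncoding(nn.Module):
    """Adds a learned embedding of the absolute (global) timestep to each token."""
    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, d_model)

    def forward(self, x: torch.Tensor, start: int = 0) -> torch.Tensor:
        # x: (batch, seq_len, d_model); positions indexed from the episode start.
        positions = torch.arange(start, start + x.size(1), device=x.device)
        return x + self.pos_emb(positions)


class RelativeAttentionBias(nn.Module):
    """Adds a learned bias b[i - j] to attention logits, so the model only sees
    the distance between timesteps, never their absolute index."""
    def __init__(self, max_dist: int, n_heads: int):
        super().__init__()
        self.max_dist = max_dist
        self.bias = nn.Embedding(2 * max_dist + 1, n_heads)

    def forward(self, attn_logits: torch.Tensor) -> torch.Tensor:
        # attn_logits: (batch, n_heads, q_len, k_len)
        q_len, k_len = attn_logits.shape[-2:]
        rel = torch.arange(q_len)[:, None] - torch.arange(k_len)[None, :]
        rel = rel.clamp(-self.max_dist, self.max_dist) + self.max_dist
        bias = self.bias(rel.to(attn_logits.device))        # (q_len, k_len, n_heads)
        return attn_logits + bias.permute(2, 0, 1).unsqueeze(0)
```

Under the relative scheme, a demonstration subsequence looks identical regardless of where in the episode it was cut, which is one intuition for the data-efficiency gap reported above; the absolute scheme must see enough data to cover all global positions.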
MnANx01rV2w | CAJun: Continuous Adaptive Jumping using aLearned Centroidal ControllerYuxiang Yang∗, Guanya Shi†, Xiangyun Meng∗, Wenhao Yu‡, Tingnan Zhang‡Jie Tan‡,Byron Boots∗∗University of Washington†Carnegie Mellon University‡Google DeepmindAbstract: We present CAJun, a novel hierarchical learning and control frameworkthat enables legged robots to jump continuously with adaptive jumping distances.CAJun consists of a high-level centroidal policy and a low-level leg controller.In particular, we use reinforcement learning (RL) to train the centroidal policy,which specifies the gait timing, base velocity, and swing foot position for theleg controller. The leg controller optimizes motor commands for the swing andstance legs according to the gait timing to track the swing foot target and basevelocity commands. Additionally, we reformulate the stance leg optimizer in theleg controller to speed up policy training by an order of magnitude. Our systemcombines the versatility of learning with the robustness of optimal control. We showthat after 20 minutes of training on a single GPU, CAJun can achieve continuous,long jumps with adaptive distances on a Go1 robot with small sim-to-real gaps.Moreover, the robot can jump across gaps with a maximum width of 70cm, whichis over 40% wider than existing methods.1Keywords: Jumping, Legged Locomotion, Reinforcement Learning1 IntroductionLegged robots possess a unique capability to navigate some of the earth’s most challenging terrains.By strategically adjusting their foot placement and base pose, legged robots can negotiate steep slopes[1,2,3], traverse uneven surfaces [ 4,5], and crawl through tight spaces [ 6]. However, for terrainswith scarce contact choices, such as gaps or stepping stones, the capability of legged robots remainssomewhat limited. This limitation primarily stems from the fact that most legged robots rely heavilyon walking gaits with continuous foot contacts. As such, options for foot placement are confined towithin one body length from the robot’s current location. Jumping offers a compelling solution tothis problem. By enabling “air phases”, a jumping robot can traverse through long distances withoutterrain contacts. Such a capability could markedly enhance a legged robot’s versatility when dealingwith challenging terrains. In addition, a robot capable of continuous ,adaptive andlong-distancejumps could further boost its speed and efficiency during terrain traversal.Compared with standard walking, jumping is a significantly more challenging control task for bothoptimization-based [ 7,8,9,10,11,6,12,13,14] and learning-based controllers [ 15,5,16,17,18].Optimization-based controllers, despite proving robust in challenging terrains, face computationallimitations that prevent them from planning for long jumping trajectories in real time. Typically, thesecontrollers circumvent this issue by first solving an intricate trajectory optimization problem offline,then utilizing simplified model predictive control (MPC) to track this predetermined fixed trajectoryonline. Consequently, existing works tend to be restricted to non-adaptive, single jumps [ 6,12,13,14].On the other hand, RL controllers have the potential to learn more adaptive and versatile locomotionskills, but they require substantial effort in reward design and sim-to-real transfer [ 19,15,20,4],particularly for dynamic and underactuated tasks such as jumping. 
Therefore, achieving continuousjumping over long distances can be a significant challenge for existing methods.1Video and code at this page. Author Emails: {yuxiangy,xiangyun,bboots }@cs.washington.edu,guanyas@andrew.cmu.edu, {magicmelon,tingnan,jietan }@google.com7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.In this paper, we present CAJun ( Continuous Adaptive Jumping with a Learned Centroidal Policy),which achieves continuous long-distance jumpings with adaptive distances on the real robot. Ourframework seamlessly combines optimization-based control and RL in a hierarchical manner. Specif-ically, a high-level RL-based centroidal policy specifies the desired gait, target base veloctiy, andswing foot positions to the leg controller , and a low-level leg controller solves the optimal motor com-mands given the centroidal policy’s action. Our framework effectively integrates the benefits of bothcontrol and learning. First, the RL-based centroidal policy is able to learn versatile, adaptive jumpingbehaviors without heavy computational burden. Second, the low-level quadratic-programming-based(QP) leg controller optimizes torque commands at high frequency (500Hz), which ensures reactivefeedback to environmental perturbations and significantly reduces the sim-to-real gap. Finally, toresolve the common training speed bottleneck in hierarchical methods [ 21,22,23], we reformulatedthe QP problem in the leg controller to a least-squares problem with clipping so that the entire stackis 10 times faster and can be executed in massive parallel [16].Within 20 mins of training in simulation, we deploy CAJun directly to a Unitree Go1 robot [ 24].Without any fine-tuning, CAJun achieves continuous, long-distance jumping, and adapts its jumpingdistance based on user command. Moreover, using the alternating contact pattern in a boundinggait, the robot is capable of crossing a gap of 70cm, which is at least 40% larger than existingmethods (Fig. 4 and Table 1). To the best of our knowledge, CAJun is the first framework thatachieves continuous, adaptive jumping with such gap-crossing capability on a commercially availablequadrupedal robot. We further conduct ablation studies to validate essential design choices. Insummary, our contribution with CAJun are the following:•We present CAJun, a hierarchical learning and control framework for continuous, adaptive, long-distance jumpings on legged robots.•We demonstrate that jumping policies trained with CAJun can be directly transferred to the realworld with a gap-crossing capability of 70cm.• We show that CAJun can be trained efficiently in less than 20 minutes using a single GPU.2 Related WorksOptimization-based Control for Jumping Using optimization-based controllers, researchers haveachieved a large variety of jumping behaviors, from continuous pronking and bounding [ 7,8,9,10,11]to large single-step jumps [ 6,12,13,14]. By optimizing for control inputs at a high frequency, thesecontrollers can execute robust motions even under severe perturbations [ 8,9]. However, due to thehigh computation cost, they cannot plan ahead for a long horizon during online execution. Therefore,they primarily focus on high-frequency jumps with a short CoM displacement per jump [ 9,10,11].One way to overcome this computation limit is to pre-compute a reference trajectory offline usingtrajectory optimization (TO) [ 6,12,13,14], which can greatly extend the height [ 12] and distance [ 13]of each jump. 
However, it can be challenging to generalize beyond the reference trajectories towardscontinuous, adaptive jumping [ 25,26,27]. Notably, using a multi-level planner, Park et al. [26]achieved continuous bounding with fixed gait and adaptive height to jump over hurdles. Compared tothese approaches, our framework adopts a more general formulation, where the policy can adjust thegait timing, base pose, and swing foot position simultaneously.Learning-based Control for Jumping In recent years, learning-based controllers have significantlyimproved the capability of legged robots, from rapid running [ 28] to traversing over challengingterrains [5]. While standard walking gaits can be learned from scratch using reinforcement learning(RL), more dynamic behaviors such as jumping usually require additional setup in the learningprocess, such as motion imitation [ 19,18,17], curriculum learning [ 16] and multi-stage training[3,29]. Another challenge for learning-based controllers is sim-to-real, especially for dynamicunderactuated behaviors like jumping [ 30]. To overcome the sim-to-real gap, researchers havedeveloped a suite of tools such as domain randomization [ 15], system identification [ 31] and motoradaptation [ 20]. Recently, Smith et al. [19] used motion imitation and transfer learning to jump2State EstimatorStance Leg ControllerPDFeedbackGRFOptimizerCoMAccGRFSwing Leg ControllerInverseKinematicsPDControllerDesired Joint AngleCentroidalPolicyDistanceto GoalGaitGeneratorSteppingFrequency Swing Foot Residuals Desired Base V elocity Nominal Swing Trajectory Motor TorqueBase Position and V elocityFoot PositionGRFCentroidal Policy (100Hz) Leg Controller (500Hz)Figure 1: Overview of the hierarchical framework of CAJun.over a gap of 20cm (0.4 body length) on a Unitree A1 robot, and Caluwaerts et al. [3]used multi-stage training with policy synthesis to jump over a gap of 50cm (1 body length) on a custom-builtquadrupedal robot. Compared to these works, CAJun’s hierarchical setup can jump over wider gaps(70cm / 1.4 body length) continuously , and can adapt its landing position based on user command.Hierarchical RL for Legged Robots Recently, there has been increasing interest in combiningRL with optimization-based control for legged robots [ 22,32,21,23,33,34]. These frameworkstypically follow a hierarchical structure, where a high-level RL-trained policy outputs intermediatecommands to a low-level leg controller. The RL policy can give several forms of instructions to thelow-level controller, such as gait timing [ 22,32], CoM trajectory [ 21,34,30,33] and foot landingpositions [ 23,35,36,37,38]. Our approach uses a similar hierarchical setup but adopts a generalaction space design where the policy specifies the gait, CoM velocity and swing foot locationssimultaneously . One bottleneck of the hierarchical approaches is the slow training time because everyenvironment step involves solving the optimization problem in the low-level controller. We overcomethis bottleneck by relaxing the constraints in foot force optimization [ 39,40,41], so that foot forcecan be solved efficiently in closed form. Compared to existing frameworks which can take hours oreven days to train, CAJun can be trained in 20 minutes using GPU-accelerated simulation [16].3 Overview of CAJunIn order to learn continuous, long-distance, and adaptive jumping behaviors, we design CAJun asa hierarchical framework consisting of a high-level centroidal policy and a low-level leg controller(Fig. 1). 
To specify a jump, The centroidal policy outputs three key actions to the low-level controller,namely, the stepping frequency, the swing foot residual, and the desired base velocity. The modulesin the leg controller then convert these actions into motor commands. Similar to previous works[8,9,32,22,21], the leg controller adopts separate control strategy for swing and stance legs, wherethe desired contact state of each leg is determined by the gait generator . We design the gait generatorto follow a pre-determined contact sequence with timings adjustable by the high-level centroidalpolicy. For swing legs, we first find its desired position based on a heuristically-determined referencetrajectory and learned residuals, and converts that to joint position commands using inverse kinematics.For stance legs, we first determine the desired base acceleration from the policy commands, and thensolves an optimization problem to find the corresponding Ground Reaction Forces (GRFs) to reachthis acceleration. We run the low-level controller at 500Hz for fast, reactive torque control, and thehigh-level controller at 100Hz to ensure stable policy training.4 Low-level Leg ControllerSimilar to prior works [ 22,21,8], the low-level controller of CAJun adopts separate control strategiesfor swing and stance legs, and uses a gait generator to track the desired contact state of each leg.Additionally, we carefully design the interface between the centroidal policy and components in theleg controller to maintain control robustness and policy expressiveness. Moreover, we relaxed theGRF optimization problem in stance leg controller to significantly speed up training.3Air ContactFront Contact Mid Air Rear Contact AirFigure 2: The contact sequence and default timing of the pronking ( left) and bounding ( right ) gait.4.1 Phase-based Gait GeneratorThe gait generator determines the desired contact state of each leg (swing or stance) based on apre-defined contact sequence and the timing information from the centroidal policy. To capturethe cyclic nature of locomotion, we adopt a phase-based gait representation, similar to prior works[22,42]. The gait is modulated by a phase variable φ, which increases monotonically from 0to2πineach locomotion cycle, and wraps back to 0to start the next cycle. The propagation of φis controlledby the stepping frequency f, which is commanded by the centroidal policy:φt+1=φt+ 2πf∆t (1)where ∆tis the control timestep. The mapping from φto the desired contact state is pre-defined.We adopt two types of jumping gaits in this work, namely, bounding andpronking , where boundingalternates between the front and rear leg contacts, and pronking lands and lifts all legs at the sametime (Fig. 2). Note that while the sequence of contacts is fixed, the centroidal policy can flexiblyadjust the timing of contacts to based on the state of the robot.4.2 Stance Leg ControlThe stance leg controller computes the desired joint torque given the velocity command from thecentroidal policy. Since jumping is mostly restricted to the sagittal plane, the policy specifies thevelocity in the forward and upward axis ( vx, vz), as well as the rotational velocity vθ, and the velocityfor the 3 remaining DoF is set to 0. We compute the desired torque following a 3-step procedure. First,we compute the desired CoM acceleration ̈qref∈R6using a PD controller (Appendix. A.2). Next,we optimize for the GRF f= [f1,f2,f3,f4]∈R12to track this desired acceleration, where fiisthe foot force vector of leg i. 
Lastly, we compute the motor torque command using $\tau = J^\top f$, where $J$ is the foot Jacobian. When training a hierarchical controller with a low-level optimization-based controller, a major computation bottleneck lies in the GRF optimization [21, 22, 23]. As such, we re-design this optimization procedure to significantly speed up the training process.

QP-based GRF Optimization To optimize for GRF, prior works typically solve the following quadratic program (QP):

$$\min_{f}\;\|\ddot{q} - \ddot{q}^{\text{ref}}\|_{U} + \|f\|_{V} \tag{2}$$
$$\text{subject to: } \ddot{q} = Af + g \tag{3}$$
$$f_{i,z} = 0 \quad \text{if } i \text{ is a swing leg} \tag{4}$$
$$f_{\min} \leq f_{i,z} \leq f_{\max} \quad \text{if } i \text{ is a stance leg} \tag{5}$$
$$-\mu f_{i,z} \leq f_{i,x} \leq \mu f_{i,z}, \quad -\mu f_{i,z} \leq f_{i,y} \leq \mu f_{i,z}, \quad i = 1, \ldots, 4 \tag{6}$$

Eq. (3) represents the centroidal dynamics model [8], where $A$ is the generalized time-variant inverse inertia matrix, and $g$ is the gravity vector (see Appendix A.2 for details). Eqs. (4) and (5) specify the contact schedule, as computed by the gait generator. Eq. (6) specifies the approximated friction cone constraints, where $\mu$ is the friction coefficient. $U, V \succ 0$ are positive definite weight matrices.

Unconstrained GRF Optimization with Clipping QP-based GRF optimization would require an iterative procedure (e.g., active set method or interior point method), which can be computationally expensive and difficult to parallelize on GPU. Instead of using the QP formulation, CAJun relaxes this optimization problem by solving the unconstrained GRF first and clipping the resulting GRF to be within the friction cone. Since Eq. (3) is linear in $f$, if we ignore the constraints in Eqs. (5) and (6) and eliminate the variables for non-contact legs, the optimal $f$ can be solved in closed form:

$$\hat{f} = (A^\top U A + V)^{-1} A^\top U (\ddot{q}^{\text{ref}} - g) \tag{7}$$

Next, we project the solved ground reaction forces into the friction cone, where we first clip the normal force within actuator limits, and then clip the tangential forces based on the clipped normal force:

$$f_{i,z} = \mathrm{clip}(\hat{f}_{i,z}, f_{\min}, f_{\max}), \qquad (f_{i,x}, f_{i,y}) = (\hat{f}_{i,x}, \hat{f}_{i,y}) \cdot \min\!\Big(1,\; \mu f_{i,z} \big/ \sqrt{\hat{f}_{i,x}^2 + \hat{f}_{i,y}^2}\Big) \tag{8}$$

We design this projection to minimize the force disruption in the gravitational direction, so that the low-level controller can track height commands accurately.

Note that our unconstrained formulation not only reduces computational complexity, but also makes the solving procedure highly parallelizable. Therefore, when paired with a GPU-accelerated simulator like Isaac Gym [16], CAJun can be trained efficiently in massive parallel, which significantly reduces the turn-around time. Additionally, while our unconstrained formulation may yield sub-optimal solutions when the least-squares solution (Eq. (7)) finds a GRF outside the friction cone, the high-level centroidal policy observes this sub-optimality during training and thereby compensates for it by adjusting the desired CoM velocity commands. In practice, we find the policies trained using the constrained and unconstrained optimization to perform similarly (see Sec. 6.6 for details).

4.3 Swing Leg Control
We use position control for swing legs, where the desired position is the sum of a heuristically constructed reference trajectory [8, 43] and a residual output from the centroidal policy. Similar to prior works [22, 21], we generate the reference trajectory by interpolating between key points in the swing phase (see Appendix A.3 for details). On top of the heuristic trajectory ($p_s$ in Fig. 1), the centroidal policy adjusts the swing foot trajectory for higher foot clearance and optimal foot placement by outputting a residual in foot position ($p_r$ in Fig. 1).
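For concreteness, the stance-leg solve-and-clip of Sec. 4.2 (Eqs. (7)–(8), with $A$ built as in Eq. (12) of Appendix A.2) can be sketched in batched PyTorch. This is not the authors' released implementation; the tensor layout, the swing-leg masking, and the force limits and friction coefficient used as defaults are assumptions made for illustration only.

```python
# Minimal batched sketch of the clipped least-squares GRF solve (Eqs. (7)-(8)).
import torch


def skew(r: torch.Tensor) -> torch.Tensor:
    """Skew-symmetric matrix [r]_x for a batch of 3-vectors: (..., 3) -> (..., 3, 3)."""
    zero = torch.zeros_like(r[..., 0])
    return torch.stack([
        torch.stack([zero, -r[..., 2], r[..., 1]], dim=-1),
        torch.stack([r[..., 2], zero, -r[..., 0]], dim=-1),
        torch.stack([-r[..., 1], r[..., 0], zero], dim=-1),
    ], dim=-2)


def solve_grf(I_base_inv, mass, r_feet, contact, qddot_ref, g_vec, U, V,
              f_min=10.0, f_max=130.0, mu=0.5):
    """
    I_base_inv: (B, 3, 3) inverse base inertia; r_feet: (B, 4, 3) foot positions;
    contact: (B, 4) desired contact flags; qddot_ref, g_vec: (B, 6);
    U: (B, 6, 6), V: (B, 12, 12) weight matrices. Returns clipped GRFs (B, 4, 3).
    f_min/f_max/mu are placeholder values, not the paper's settings.
    """
    B = r_feet.shape[0]
    contact = contact.float()
    # Build A as in Eq. (12): top blocks I_base^{-1} [r_i]_x, bottom blocks I_3 / m.
    top = torch.einsum('bij,bkjl->bkil', I_base_inv, skew(r_feet))       # (B, 4, 3, 3)
    bot = torch.eye(3, device=r_feet.device).expand(B, 4, 3, 3) / mass
    A = torch.cat([top, bot], dim=2)                                     # (B, 4, 6, 3)
    A = A.permute(0, 2, 1, 3).reshape(B, 6, 12)                          # (B, 6, 12)
    # Eliminate swing-leg variables (Eq. (4)) by zeroing their columns.
    A = A * contact.repeat_interleave(3, dim=1).unsqueeze(1)
    # Unconstrained least-squares solution, Eq. (7).
    AtU = A.transpose(1, 2) @ U
    f_hat = torch.linalg.solve(AtU @ A + V, AtU @ (qddot_ref - g_vec).unsqueeze(-1))
    f_hat = f_hat.reshape(B, 4, 3)
    # Projection onto the approximate friction cone, Eq. (8).
    fz = f_hat[..., 2].clamp(f_min, f_max) * contact
    tangent_norm = f_hat[..., :2].norm(dim=-1).clamp_min(1e-6)
    scale = torch.minimum(torch.ones_like(fz), mu * fz / tangent_norm)
    return torch.cat([f_hat[..., :2] * scale.unsqueeze(-1), fz.unsqueeze(-1)], dim=-1)
```

Because the whole step is a single batched linear solve followed by elementwise clipping, it can be evaluated for thousands of simulated environments at once, which is precisely the property that lets CAJun replace the iterative QP during training.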
Once the foot position is determined, weconvert it to desired motor angles using inverse kinematics and execute it using joint PD commands.5 Learning a Centroidal Policy for JumpingThe RL problem is represented as a Markov Decision Process (MDP), which includes the statespaceS, action space A, transition probability p(st+1|st, at), reward function r:S × A 7→ R,and initial state distribution p0(s0). We aim to learn a policy π:S 7→ A that maximizesthe expected cumulative reward over an episode of length T, which is defined as J(π) =Es0∼p0(·),st+1∼p(·|st,π(st))PTt=0r(st, at).Environment Overview For maximum expressiveness, we design the environment such that thecentroidal policy directly specifies the contact schedule, base velocity and swing foot position for thelow-level controller. To focus on continuous jumps, we design each episode to contain exactly 10jumping cycles, where termination is determined by the gait generator (Section. 4.1). Additionally,we normalize the reward so that total reward within each jumping cycle is agnostic to its duration. Inorder to learn distance-adaptive jumping, we sample different jumping distances uniformly in [0.3m,1m] before each jump, and compute the desired landing position, which is included in the state space.State and Action Space We design the state space to include the robot’s proprioceptive state, aswell as related information about the current jump. The proprioceptive information includes thecurrent position and velocity of the robot base, as well as the foot positions in the base frame. Thetask information includes the current phase of the jump φ(Sec. 4.1) and the location of the targetlanding position in egocentric frame. The action space includes the desired stepping frequency f, thedesired base velocity in sagittal plane vx, vz, vθ, as well as the desired swing foot residuals, whichare specified to different modules in the low-level controller.5Figure 3: Long-exposure photos visualizing base (green), front foot (blue) and rear foot (red) trajectories ofthe robot when jumping with alternating distance commands. White lines show the foot positions during eachlanding (contact phase for pronking, mid-air phase for bounding). Time shows the duration of “air phase” (Fig. 2)in each jump when all legs are in the air.Reward Function We design a reward function with 9 terms. At a high level, the reward functionensures that the robot maintains an upright pose, follows the desired contact schedule, and lands closeto goal. See Appendix. B.1 for the detailed weights and definitions.Early Termination To speed up training and avoid unnecessary exploration in sub-optimal states,we terminate an episode early if the robot’s base height is less than 15cm, or the base orientationdeviates significantly from the upright pose.Policy Representation and Training We represent policy and value functions using separate neuralnetworks. Each network includes 3 hidden layers of [512,256,128] units respectively with ELUactivations [ 44]. We train our policy using Proximal Policy Optimization (PPO) [ 45]. Please seeAppendix. B.2 for the detailed configuration.6 Results and AnalysisWe design experiments to validate that CAJun can learn continuous and adaptive jumping controllers.In particular, we aim to answer the following questions:1. Can CAJun enable the robot to learn continuous jumping with adaptive jumping distances?2. What is the widest gap that the robot can jump over using CAJun?3. 
How robust is the learned jumping controller against external perturbations?4.What is the advantage of the hierarchical design of CAJun, and what are important design choices?6.1 Experiment SetupWe use the Go1 quadrupedal robot from Unitree [ 24], and build the simulation in IsaacGym [ 16,46].To match the GPU-accelerated simulation environment, we implement the entire control stack,including the centroidal policy and the leg controller, in a vectorized form in PyTorch [ 47]. We adoptthe PPO implementation from rslrl[16]. We train CAJun on a standard desktop with an NvidiaRTX 2080Ti GPU, which takes less than 20 minutes to complete.6.2 Continuous and Adaptive JumpingTo verify that CAJun can learn continuous, dynamic jumping with adaptive jumping distances on thereal robot, we deploy the trained pronking andbounding controllers to the real robot. For each gait,we run it continuously for at least 6 jumps, where the desired jumping distance alternates between0.3 and 1 meter. We put LEDs on the base and feet of the robot and capture the robot’s trajectoryusing long-exposure photography (Fig. 3).We find that both the pronking and the bounding controller can be deployed successfully to thereal robot, and achieve continuous jumping with long jumping distances. Both the base and thefoot trajectories exhibit clear periodicity, which demonstrates the long-term stability of the jumpingcontroller. Moreover, the policy responds to jumping distance commands well, and results in6Figure 4: Using the bounding gait, the robot can jump over a 60cm-wide yoga mat without making foot contact.Method Jumping Style Widest Gap CrossedTWiRL [19] Single 0.2mBarkour [3] Single 0.5mMargolis et al. [30] Continuous 0.26mWalk-These-Ways [17] Single w/ Acceleration 0.6mCAJun (ours) Continuous w/ Adaptive Jumping Distance 0.7mTable 1: Comparison of gap-crossing capability on controllers deployed to similar-sized robots.alternating patterns of further and closer jumps. A closer look at the duration of each air phaseshows that in both the bounding and pronking gait, the centroidal policy reduces the air time byapproximately 20% when switching from longer to shorter jumps. This is achieved by the steppingfrequency output (Section. 4.1) of the centroidal policy. As demonstrated in previous works [ 22,48],such gait adjustments can potentially save energy and extend the robot’s operation time.6.3 Jumping over Wide GapsWhile both the pronking and bounding gait can jump with at least 70cm of base movement in eachstep, we find that the bounding gait offers a unique advantage in traversing through gaps. As seen inthe foot trajectories in Fig. 3, the alternating contact pattern in bounding enables the front and rear ofthe robot to land closely in the world frame, so that the robot can utilize the entire jumping distanceof 70cm for gaps. To further validate this, we place a yoga mat with a width of 60cm in the course ofthe robot, and find that the robot can jump over it with additional buffer space before and after thejump (Fig. 4). To the best of our knowledge, CAJun is the first framework that achieves continuousjumping with such gap-crossing capability on a commercially-available quadrupedal robot (Table. 1).6.4 Validation on RobustnessWe design two experiments to further validate the robustness of CAJun. In the first experiment, weadd a leash to the back of the robot and actively pulled the leash during jumping (Fig. 5). 
Whileboth the pronking and bounding gait experienced a significant drop in forward velocity during thepull, they recovered from the pull and regained momentum for subsequent jumps. In the secondexperiment, we test the robot outdoors, where the robot needs to jump from asphalt to grass (Fig. 6).The uneven and slippery surface of the grass perturbed the robot and broke the periodic patternin pitch angles. However, both policies recovered from the initial perturbation, and resume stable,periodic jumps after around 2 jumping cycles. The robustness of CAJun can be likely attributed tothe high control frequency of the low-level leg controller, which enables the robot to react swiftly tounexpected perturbations, and the online adjustment of the learned centroidal policy.6.5 Comparison with End-to-End RLTo demonstrate the effectiveness of CAJun’s hierarchical setup, we compare it to an end-to-end RLbaseline, where the policy directly outputs motor position commands. Please refer to Appendix. B.3for the setup details. In both simulation and the real world, we run each policy for 6 jumps with adesired distance of 1 meter per jump, and report the total CoM displacement in Table. 2. While CAJunand end-to-end RL achieves comparable performance in simulation, CAJun faces a significantlysmaller sim-to-real gap and outperforms e2e baseline for both gaits in the real world (25% furtherin bounding, 185% in pronking). We further conduct sim-to-sim transfer experiment and validatethe robustness of CAJun under shifted dynamics (Appendix. B.3). While additional efforts such asdomain randomization [ 15], system identification [ 31] or teacher-student training [ 20] could improvethe robustness and reduce the sim-to-real gap for E2E methods, the hierarchical framework of CAJunoffers a simple and efficient alternative that can be deployed zero-shot to the real world.70 1 2 3Time/s1012Speed/(m/s)Pronking0 1 2Time/s1012Speed/(m/s)BoundingFigure 5: Forward velocity of the robot jumping underleash pulling (shaded area shows active pulling).0 1 2 3Time/s0.20.00.2Pitch/radBounding0 1 2 3Time/s0.20.00.2Pitch/radPronkingFigure 6: Pitch angle of the robot jumping from as-phalt to grass (shaded area indicates grass)Pronking BoundingSim Real Sim RealE2E 4.17±0.01 1.67 ±0.18 4.61±0.03 3.47±0.15CAJun (ours) 4.98±0.02 4.76 ±0.11 4.27±0.05 4.34±0.17Table 2: Total distance after 6 jumps achieved by end-to-end RL and CAJun.6.6 Ablation StudyWe design a set of ablation studies to validate the design choices of CAJun. We summarize the resultshere. Please refer to Appendix. B.4 for details.No Gait Modulation The stepping frequency from the centroidal policy is essential for the stabilityof the robot. In no-gait , we disable the stepping frequency output and adopt a fixed stepping frequencyof 1.66Hz for both the pronking and bounding gait, which is the average stepping frequency outputfrom CAJun. While the baseline can achieve a similar reward, the learning process is noisy withfrequent failures. Since the heuristically-designed gait might not match the capability of the robot, itis important for the policy to adjust the gait timing to stabilize each jump.No Swing Leg Residual The swing residuals play a critical role in achieving long-distance jumps.To validate that, we design a baseline, no-swing , where we disable the swing residuals so that swinglegs completely follow the heuristically-designed trajectory from the swing controller. 
We find thatthe baseline policy cannot jump as far as CAJun, and achieves a lower reward for both gaits.No Swing Leg Reference The reference swing leg trajectory improves the overall jumping per-formance. In NoSwingRef , we train a version of CAJun where the centroidal policy directly specifyswing foot position without reference trajectory. While NoSwingRef performs similarly to CAJun forthe pronking gait, it jumps significantly shorter and achieves a lower reward for the bounding gait,because the bounding gait requires more intricate coordination of swing legs.CAJun-QP The clipped QP in GRF optimization significantly reduced training time withoutnoticeable performance drops. To validate this design choice, we compare the training time andpolicy performance of CAJun with a variant, CAJun-QP, where we solve for GRFs using the completeQP setup, where the approximated friction cone is imposed as constraints. We adopt the QP-solverfrom qpth [49], an efficient interior-point-method-based solver that supports GPU acceleration. Forboth the pronking and bounding gait, we find that CAJun achieves a similar reward compared toCAJun-QP. However, because CAJun-QP needs to iteratively optimize GRF at every control step, itstraining time is almost 10 times longer, which is consistent with prior observations [ 21]. Additionally,we find that the training time speed up of CAJun can be extended to other gaits such as crawling,pacing, trotting and fly-trotting. Please refer to Appendix. B.5 for more details.7 Limitations and Future WorkIn this work, we present CAJun, a hierarchical learning framework for legged robots that consists ofa high-level centroidal policy and a low-level leg controller. CAJun can be trained efficiently usingGPU-accelerated simulation and can achieve continuous jumps with adaptive jumping distances of upto 70cm. One limitation of CAJun is that, while it can adapt to changes in jumping distances, it cannot land accurately at the desired location yet. This inaccuracy might be due to a number of factorssuch as unmodeled dynamics and state estimation drifts. Another limitation of CAJun is that it doesnot make use of perception, and only adjusts its jumping distances based on ad-hoc user commands.In future work, we plan to extend CAJun to incorporate perception and achieve more accurate jumps,so that the robot can demonstrate extended agility and autonomy in challenging terrains.8AcknowledgmentsWe thank He Li for helping with the motor characteristic modeling, and Philipp Wu for the design ofthe robot protective shell. In addition, we would like to thank Nolan Wagener, Rosario Scalise, andother friends and colleagues at the University of Washington for their support and advice throughvarious aspects of this project.References[1]C. Gehring, C. D. Bellicoso, S. Coros, M. Bloesch, P. Fankhauser, M. Hutter, and R. Siegwart.Dynamic trotting on slopes for quadrupedal robots. In 2015 IEEE/RSJ International Conferenceon Intelligent Robots and Systems (IROS) , pages 5129–5135. IEEE, 2015.[2]H. Kolvenbach, P. Arm, E. Hampp, A. Dietsche, V . Bickel, B. Sun, C. Meyer, and M. Hutter.Traversing steep and granular martian analog slopes with a dynamic quadrupedal robot. arXivpreprint arXiv:2106.01974 , 2021.[3]K. Caluwaerts, A. Iscen, J. C. Kew, W. Yu, T. Zhang, D. Freeman, K.-H. Lee, L. Lee, S. Saliceti,V . Zhuang, et al. Barkour: Benchmarking animal-level agility with quadruped robots. arXivpreprint arXiv:2305.14654 , 2023.[4]J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. 
Learning quadrupedal locomotionover challenging terrain. Science robotics , 5(47):eabc5986, 2020.[5]A. Agarwal, A. Kumar, J. Malik, and D. Pathak. Legged locomotion in challenging terrainsusing egocentric vision. In Conference on Robot Learning , pages 403–415. PMLR, 2023.[6]S. Gilroy, D. Lau, L. Yang, E. Izaguirre, K. Biermayer, A. Xiao, M. Sun, A. Agrawal, J. Zeng,Z. Li, et al. Autonomous navigation for quadrupedal robots with optimized jumping throughconstrained obstacles. In 2021 IEEE 17th International Conference on Automation Science andEngineering (CASE) , pages 2132–2139. IEEE, 2021.[7]C. D. Bellicoso, F. Jenelten, P. Fankhauser, C. Gehring, J. Hwangbo, and M. Hutter. Dynamiclocomotion and whole-body control for quadrupedal robots. In 2017 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 3359–3365. IEEE, 2017.[8]J. Di Carlo, P. M. Wensing, B. Katz, G. Bledt, and S. Kim. Dynamic locomotion in the mitcheetah 3 through convex model-predictive control. In 2018 IEEE/RSJ international conferenceon intelligent robots and systems (IROS) , pages 1–9. IEEE, 2018.[9]D. Kim, J. Di Carlo, B. Katz, G. Bledt, and S. Kim. Highly dynamic quadruped locomotion viawhole-body impulse control and model predictive control. arXiv preprint arXiv:1909.06586 ,2019.[10] Y . Ding, A. Pandala, and H.-W. Park. Real-time model predictive control for versatile dynamicmotions in quadrupedal robots. In 2019 International Conference on Robotics and Automation(ICRA) , pages 8484–8490. IEEE, 2019.[11] C. Gehring, S. Coros, M. Hutter, C. D. Bellicoso, H. Heijnen, R. Diethelm, M. Bloesch,P. Fankhauser, J. Hwangbo, M. Hoepflinger, et al. Practice makes perfect: An optimization-based approach to controlling agile motions for a quadruped robot. IEEE Robotics & AutomationMagazine , 23(1):34–43, 2016.[12] Q. Nguyen, M. J. Powell, B. Katz, J. Di Carlo, and S. Kim. Optimized jumping on the mitcheetah 3 robot. In 2019 International Conference on Robotics and Automation (ICRA) , pages7448–7454. IEEE, 2019.[13] Z. Song, L. Yue, G. Sun, Y . Ling, H. Wei, L. Gui, and Y .-H. Liu. An optimal motion planningframework for quadruped jumping. In 2022 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 11366–11373. IEEE, 2022.9[14] A. W. Winkler, C. D. Bellicoso, M. Hutter, and J. Buchli. Gait and trajectory optimizationfor legged systems through phase-based end-effector parameterization. IEEE Robotics andAutomation Letters , 3(3):1560–1567, 2018.[15] J. Tan, T. Zhang, E. Coumans, A. Iscen, Y . Bai, D. Hafner, S. Bohez, and V . Vanhoucke.Sim-to-real: Learning agile locomotion for quadruped robots. arXiv preprint arXiv:1804.10332 ,2018.[16] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massivelyparallel deep reinforcement learning. In Conference on Robot Learning , pages 91–100. PMLR,2022.[17] G. B. Margolis and P. Agrawal. Walk these ways: Tuning robot control for generalization withmultiplicity of behavior. In Conference on Robot Learning , pages 22–31. PMLR, 2023.[18] A. Klipfel, N. Sontakke, R. Liu, and S. Ha. Learning a single policy for diverse behaviors on aquadrupedal robot using scalable motion imitation. arXiv preprint arXiv:2303.15331 , 2023.[19] L. Smith, J. C. Kew, T. Li, L. Luu, X. B. Peng, S. Ha, J. Tan, and S. Levine. Learning andadapting agile locomotion skills by transferring experience. arXiv preprint arXiv:2304.09834 ,2023.[20] A. Kumar, Z. Fu, D. Pathak, and J. Malik. 
Rma: Rapid motor adaptation for legged robots.arXiv preprint arXiv:2107.04034 , 2021.[21] Z. Xie, X. Da, B. Babich, A. Garg, and M. v. de Panne. Glide: Generalizable quadrupedallocomotion in diverse environments with a centroidal model. In Algorithmic Foundations ofRobotics XV: Proceedings of the Fifteenth Workshop on the Algorithmic Foundations of Robotics ,pages 523–539. Springer, 2022.[22] Y . Yang, T. Zhang, E. Coumans, J. Tan, and B. Boots. Fast and efficient locomotion via learnedgait transitions. In Conference on Robot Learning , pages 773–783. PMLR, 2022.[23] W. Yu, D. Jain, A. Escontrela, A. Iscen, P. Xu, E. Coumans, S. Ha, J. Tan, and T. Zhang.Visual-locomotion: Learning to walk on complex terrains with vision. In 5th Annual Conferenceon Robot Learning , 2021.[24] Unitree. Go1 Website. URL https://www.unitree.com/products/go1/ .[25] C. Nguyen, L. Bao, and Q. Nguyen. Continuous jumping for legged robots on stepping stonesvia trajectory optimization and model predictive control. In 2022 IEEE 61st Conference onDecision and Control (CDC) , pages 93–99. IEEE, 2022.[26] H.-W. Park, P. M. Wensing, and S. Kim. Jumping over obstacles with mit cheetah 2. Roboticsand Autonomous Systems , 136:103703, 2021.[27] H.-W. Park, P. M. Wensing, S. Kim, et al. Online planning for autonomous running jumps overobstacles in high-speed quadrupeds. 2015.[28] G. B. Margolis, G. Yang, K. Paigwar, T. Chen, and P. Agrawal. Rapid locomotion via reinforce-ment learning. arXiv preprint arXiv:2205.02824 , 2022.[29] Z. Li, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath. Robust and versatile bipedaljumping control through multi-task reinforcement learning. arXiv preprint arXiv:2302.09450 ,2023.[30] G. B. Margolis, T. Chen, K. Paigwar, X. Fu, D. Kim, S. bae Kim, and P. Agrawal. Learning tojump from pixels. In Conference on Robot Learning , pages 1025–1034. PMLR, 2022.10[31] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V . Tsounis, V . Koltun, and M. Hutter. Learningagile and dynamic motor skills for legged robots. Science Robotics , 4(26):eaau5872, 2019.[32] X. Da, Z. Xie, D. Hoeller, B. Boots, A. Anandkumar, Y . Zhu, B. Babich, and A. Garg. Learninga contact-adaptive controller for robust, efficient legged locomotion. In Conference on RobotLearning , pages 883–894. PMLR, 2021.[33] Y . Yang, X. Meng, W. Yu, T. Zhang, J. Tan, and B. Boots. Continuous versatile jumping usinglearned action residuals. arXiv preprint arXiv:2304.08663 , 2023.[34] G. Bellegarda and Q. Nguyen. Robust quadruped jumping via deep reinforcement learning.arXiv preprint arXiv:2011.07089 , 2020.[35] S. Gangapurwala, M. Geisert, R. Orsolino, M. Fallon, and I. Havoutis. Rloc: Terrain-aware legged locomotion using reinforcement learning and optimal control. arXiv preprintarXiv:2012.03094 , 2020.[36] P. Fankhauser, M. Bjelonic, C. D. Bellicoso, T. Miki, and M. Hutter. Robust rough-terrainlocomotion with a quadrupedal robot. In 2018 IEEE International Conference on Robotics andAutomation (ICRA) , pages 5761–5768. IEEE, 2018.[37] O. Villarreal, V . Barasuol, P. M. Wensing, D. G. Caldwell, and C. Semini. Mpc-based controllerwith terrain insight for dynamic legged locomotion. In 2020 IEEE International Conference onRobotics and Automation (ICRA) , pages 2436–2442. IEEE, 2020.[38] F. Jenelten, T. Miki, A. E. Vijayan, M. Bjelonic, and M. Hutter. Perceptive locomotion in roughterrain–online foothold optimization. IEEE Robotics and Automation Letters , 5(4):5370–5376,2020.[39] S.-H. Hyon, J. G. Hale, and G. Cheng. 
Full-body compliant human–humanoid interaction:balancing in the presence of unknown external forces. IEEE transactions on robotics , 23(5):884–898, 2007.[40] M. Chignoli and P. M. Wensing. Variational-based optimal control of underactuated balancingfor dynamic quadrupeds. IEEE Access , 8:49785–49797, 2020.[41] Z. Zhou and Y . Zhao. Accelerated admm based trajectory optimization for legged locomotionwith coupled rigid body dynamics. In 2020 American Control Conference (ACC) , pages5082–5089. IEEE, 2020.[42] A. Iscen, K. Caluwaerts, J. Tan, T. Zhang, E. Coumans, V . Sindhwani, and V . Vanhoucke.Policies modulating trajectory generators. In Conference on Robot Learning , pages 916–926.PMLR, 2018.[43] M. H. Raibert. Legged robots that balance . MIT press, 1986.[44] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning byexponential linear units (elus). arXiv preprint arXiv:1511.07289 , 2015.[45] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[46] V . Makoviychuk, L. Wawrzyniak, Y . Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin,A. Allshire, A. Handa, and G. State. Isaac gym: High performance gpu-based physics simulationfor robot learning, 2021.[47] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,N. Gimelshein, L. Antiga, et al. Pytorch: An imperative style, high-performance deep learninglibrary. Advances in neural information processing systems , 32, 2019.11[48] Z. Fu, A. Kumar, J. Malik, and D. Pathak. Minimizing energy consumption leads to theemergence of gaits in legged robots. arXiv preprint arXiv:2111.01674 , 2021.[49] B. Amos and J. Z. Kolter. Optnet: Differentiable optimization as a layer in neural networks. InInternational Conference on Machine Learning , pages 136–145. PMLR, 2017.12A Details of Low-level ControllerA.1 NotationWe represent the base pose of the robot in the world frame as q= [p,Θ]∈R6.p∈R3is theCartesian coordinate of the base position. Θ= [φ, θ, ψ ]is the robot’s base orientation representedas Z-Y-X Euler angles, where ψis the yaw, θis the pitch and φis the roll. We represent the basevelocity of the robot as ̇q= [v,ω], where vandωare the linear and angular velocity of the base. Wedefine the control input as f= [f1,f2,f3,f4]∈R12, where fidenotes the ground reaction forcegenerated by leg i.rfoot= (r1,r2,r3,r4)∈R12represents the four foot positions relative to therobot base. Indenotes the n×nidentity matrix. [·]×converts a 3d vector into a skew-symmetricmatrix, so that for a,b∈R3,a×b= [a]×b.A.2 Details of the Stance Leg ControllerCoM PD Controller Given the desired CoM velocity in the sagittal planevrefx, vrefz, ωrefy, we firstfind the reference pose qrefand velocity ̇qrefof the robot base. We set qref= [px, py, pz,0, θ, ψ]to bethe current pose of the robot with the roll angle set to 0, and ̇qref=vrefx,0, vrefz,0, ωrefy,0to followthe policy command in the sagittal plane and keep the remaining dimensions to 0. We then find theCoM acceleration using a PD controller: ̈qref=kp(qref−q) +kd( ̇qref− ̇q) (9)where we set kp= [0 ,0,0,50,0,0]to only track the reference roll angle, and kd=[10,10,10,10,10,10]to track reference velocity in all dimensions.Centroidal Dynamics Model Our centroidal dynamics model is based on [ 8] with a few modifica-tions. We assume massless legs, and simplify the robot base to a rigid body with mass mand inertiaIbase(in the body frame). 
The rigid body dynamics in local coordinates are given by:

$$I_{\text{base}} \dot{\omega} = \sum_{i=1}^{4} r_i \times f_i \tag{10}$$
$$m \ddot{p} = \sum_{i=1}^{4} f_i + g \tag{11}$$

where $g$ is the gravity vector transformed to the base frame.

With the above simplifications, we get the linear, time-varying dynamics model:

$$\underbrace{\begin{bmatrix} \dot{\omega} \\ \ddot{p} \end{bmatrix}}_{\ddot{q}} = \underbrace{\begin{bmatrix} I_{\text{base}}^{-1}[r_1]_\times & I_{\text{base}}^{-1}[r_2]_\times & I_{\text{base}}^{-1}[r_3]_\times & I_{\text{base}}^{-1}[r_4]_\times \\ I_3/m & I_3/m & I_3/m & I_3/m \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \end{bmatrix}}_{f} + \underbrace{\begin{bmatrix} 0 \\ g \end{bmatrix}}_{g} \tag{12}$$

as seen in Eq. (3).

A.3 Reference Trajectory for Swing Legs
For swing legs, we design the reference trajectory to always keep the feet tangential to the ground, and use residuals from the centroidal policy to generate vertical movements. To find the reference trajectory, we interpolate between three key frames $(p_{\text{lift-off}}, p_{\text{air}}, p_{\text{land}})$ based on the gait timing. The lift-off position $p_{\text{lift-off}}$ is the foot location at the beginning of the swing phase. The mid-air position $p_{\text{air}}$ is the position of the robot's hip projected onto the ground plane. We use the Raibert Heuristic [43] to estimate the desired foot landing position:

$$p_{\text{land}} = p_{\text{ref}} + v_{\text{CoM}} T_{\text{stance}} / 2 \tag{13}$$

where $v_{\text{CoM}}$ is the robot's CoM velocity projected onto the $x$–$y$ plane, and $T_{\text{stance}}$ is the expected duration of the next stance phase, which is estimated using the stepping frequency from the centroidal policy. Raibert's heuristic ensures that the stance leg will have equal forward and backward movement in the next stance phase, and is commonly used in locomotion controllers [?, 8].

Given these three key points, $p_{\text{lift-off}}$, $p_{\text{air}}$, and $p_{\text{land}}$, we fit a quadratic polynomial and compute the foot's desired position on the curve based on its progress in the current swing phase. Given the desired foot position, we then compute the desired motor position using inverse kinematics and track it using a PD controller. We re-compute the desired foot position of the feet at every step (500Hz) based on the latest velocity estimation.

Table 3: Hyperparameters used for PPO.
Learning rate: 0.001, adaptive
# env steps per update: 98,304
Batch size: 24,576
# epochs per update: 5
Discount factor: 0.99
GAE λ: 0.95
Clip range: 0.2

B Experiment Details
B.1 Reward Function
Our reward function consists of 9 terms. We provide the detail about each term and its corresponding weight below:
1. Upright (0.02) is the projection of a unit vector in the z-axis of the robot frame onto the z-axis of the world frame, and rewards the robot for keeping an upright pose.
2. Base Height (0.01) is the height of the robot's CoM in meters, and rewards the robot for jumping higher.
3. Contact Consistency (0.008) is the sum of 4 indicator variables: $\sum_{i=1}^{4} \mathbb{1}(c_i = \hat{c}_i)$, where $c_i$ is the actual contact state of leg $i$, and $\hat{c}_i$ is the desired contact state of leg $i$ specified by the gait generator. It rewards the robot for following the desired contact schedule.
4. Foot Slipping (0.032) is the sum of the world-frame velocity for contact legs: $\sum_{i=1}^{4} \hat{c}_i \sqrt{v_{i,x}^2 + v_{i,y}^2}$, where $\hat{c}_i \in \{0,1\}$ is the desired contact state of leg $i$, and $v_{i,x}, v_{i,y}$ is the world-frame velocity of leg $i$. This term rewards the robot for keeping contact legs static on the ground.
5. Foot Clearance (0.008) is the sum of foot height (clipped at 2cm) for non-contact legs.
Thisterm rewards the robot to keep non-contact legs high on the ground.6.Knee Contact (0.064) is the sum of knee contact variablesP4i=1kci, where kci∈ {0,1}isthe indicator variable for knee contact of the ith leg.7.Stepping Frequency (0.008) is a constant plus the negated frequency 1.5−clip(f,1.5,4),which encourages the robot to jump at large steps using a low stepping frequency.8.Distance to goal (0.016) is the Cartesian distance from the robot’s current location to thedesired landing position, and encouarges the robot to jump close to the goal.9.Out-of-bound-action (0.01) is the normalized amount of excess when the policy computesan action that is outside the action space. We design this term so that PPO would notexcessively explore out-of-bound actions.140 +1kg +2kg +3kg +4kgPayload4.04.55.0Distance/mPronkingE2ECAJun (ours)0 +1kg +2kg +3kg +4kgPayload4.04.24.44.6Distance/mBoundingE2ECAJun (ours)Figure 7: Comparison of total jumping distance under increased payload.0 1 2 3 4Num Env Steps 1e750607080T otal RewardBounding0 1 2 3 4Num Env Steps 1e750607080T otal RewardPronking0 1 2 3 4Num Env Steps 1e72345Distance / mBounding0 1 2 3 4Num Env Steps 1e72345Distance / mPronkingCAJun (ours) No Gait No Swing NoSwingRef CAJun-QPFigure 8: Reward curve and jumping distance of CAJun compared to the ablated methods.B.2 PPO hyperparametersWe list the hyperparameters used in our PPO algorithm in Table. 3. We use the same set of hyperpa-rameters for all PPO training, including the CAJun policies and baseline policies.B.3 Comparison with End-to-End RLE2E Setup We use a similar MDP setup as CAJun (section. 5) for the end-to-end RL baseline.More specifically, we use the same gait generator as CAJun to generate reference foot contacts, andinclude stepping frequency as part of the action space so that the policy can modify the gait schedule.However, unlike CAJun, this reference gait is only used for reward computation, and does not directlyaffect leg controllers. For reward, we keep the same reward terms and weights (Appendix. B.1).However, since the initial exploration phase of end-to-end RL can lead to a lot of robot failures withnegative rewards, we add an additional alive bonus of 0.02 to ensure that the reward stays positive.Sim-to-Sim Transfer To better understand the robustness of CAJun and end-to-end RL (E2E) underdifferent dynamics, we conduct a sim-to-sim transfer experiment, where we test the performanceof CAJun and E2E under increased body payloads. The result is summarized in Fig. 7. While thedistance of E2E drops quickly with increased payload, CAJun maintains a near-constant distanceeven with a 4kg payload, thanks to the robustness of the low-level centroidal controller.B.4 Ablation StudyLearning Curves For each baseline, we report its total reward and CoM displacement over 6consecutive jumps with a desired distance of 1m per jump (Fig. 8). We train each baseline using 515CAJun No GaitNo SwingNoSwingRefCAJun-QP01234HoursTraining TimeFigure 9: Training Time of CAJun compared to ablated methods.random seeds and report the average and standard deviations. We also report the wall-clock trainingtime in Fig. 9.B.5 Extension to Other GaitsWhile we focus on jumping gaits in this work, CAJun is a versatile locomotion framework that iscapable of learning a wide range of locomotion gaits. By adopting a different contact sequence forthe gait generator (Fig.2), CAJun can learn a wide variety of other locomotion gaits such as crawling,pacing, trotting and fly trotting. 
With GPU parallelization, all these gaits can be trained in less than 20 minutes. Please check our website for videos.
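Since only the contact sequence changes between gaits, the phase-based gait generator of Sec. 4.1 (Eq. (1)) is easy to sketch. The per-leg phase offsets and duty cycles below are illustrative assumptions, not the paper's exact gait tables (Fig. 2 defines the actual pronking and bounding sequences), and the leg ordering is a convention of this sketch.

```python
# Minimal sketch of the phase-based gait generator (Eq. (1)) with swappable gaits.
import numpy as np

# Assumed leg order: [front-right, front-left, rear-right, rear-left].
GAITS = {
    "pronk": {"offsets": np.zeros(4), "duty": 0.3},                      # all legs together
    "bound": {"offsets": np.array([0.0, 0.0, 0.5, 0.5]) * 2 * np.pi,     # front vs. rear pair
              "duty": 0.3},
    "trot":  {"offsets": np.array([0.0, 0.5, 0.5, 0.0]) * 2 * np.pi,     # diagonal pairs
              "duty": 0.5},
}


class GaitGenerator:
    def __init__(self, gait: str):
        self.offsets = GAITS[gait]["offsets"]
        self.duty = GAITS[gait]["duty"]
        self.phase = 0.0

    def step(self, stepping_frequency: float, dt: float) -> np.ndarray:
        """Propagate the phase (Eq. (1)) and return the desired contact flag per leg."""
        self.phase = (self.phase + 2 * np.pi * stepping_frequency * dt) % (2 * np.pi)
        leg_phase = (self.phase + self.offsets) % (2 * np.pi)
        # A leg is in stance for the first `duty` fraction of its cycle, in swing otherwise.
        return leg_phase < 2 * np.pi * self.duty


# Usage: the centroidal policy outputs the stepping frequency (e.g. around 1.66 Hz
# on average, per the no-gait ablation), and the 500 Hz leg controller queries contacts.
gen = GaitGenerator("bound")
contacts = gen.step(stepping_frequency=1.66, dt=0.002)
```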
Tka2U40pHz0 | Tuning Legged Locomotion Controllers viaSafe Bayesian OptimizationDaniel Widmer∗Dongho Kang∗Bhavya SukhijaJonas Hübotter Andreas Krause Stelian CorosETH Zürich{widmdani, kangd, sukhijab, jhuebotter, krausea, scoros}@ethz.chAbstract: This paper presents a data-driven strategy to streamline the deploymentof model-based controllers in legged robotic hardware platforms. Our approachleverages a model-free safe learning algorithm to automate the tuning of controlgains, addressing the mismatch between the simplified model used in the controlformulation and the real system. This method substantially mitigates the risk ofhazardous interactions with the robot by sample-efficiently optimizing parameterswithin a probably safe region. Additionally, we extend the applicability of ourapproach to incorporate the different gait parameters as contexts, leading to a safe,sample-efficient exploration algorithm capable of tuning a motion controller fordiverse gait patterns. We validate our method through simulation and hardwareexperiments, where we demonstrate that the algorithm obtains superior performanceon tuning a model-based motion controller for multiple gaits safely.Keywords: Legged robot, Bayesian optimization, Safe learning, Controller tuning1 IntroductionA model-based control strategy facilitates quick adaptation to various robots and eliminates theneed for offline training, thereby streamlining the design and test phases. However, it requires anaccurate dynamics model of the system, which is often unavailable due to our limited understandingof real-world physics and inevitable simplifications to reduce the computational burden. As aresult, these controllers typically underperform on actual hardware without considerable parameterfine-tuning. This tuning process is not only time-consuming but can also harm the hardware platform.Additionally, it often requires reiteration for diverse environments or movement patterns.This work explores the challenge of determining optimal control gain parameters for a model-basedlegged locomotion controller. In doing so, we aim to bridge the disparity between simplified modelsand actual hardware behavior, consequently improving the controller’s robustness and trackingaccuracy. To this end, we employ a safe learning algorithm , namely GOSAFEOPT[1] to automatethe parameter tuning process, enabling the online identification of optimal control gain parameterswithin a safe region. Furthermore, we extend GOSAFEOPTby incorporating various gait parametersascontexts [2]. This facilitates more sample-efficient learning of control gains tailored for distinctgait patterns and allows for fluid online adjustments of the control gains during operation.We demonstrate our method on the quadruped robot Unitree Go1 [3] in both simulation and hardwareexperiments. In our simulation experiments, we show that contextual GOSAFEOPToutperformsother model-free safe exploration baselines while ensuring zero unsafe interactions. Moreover,when trained across varied gait patterns, the experimental results clearly indicate that our contextualGOSAFEOPTdelivers a considerable performance boost. Moving to our hardware experiments,contextual GOSAFEOPTfinds optimal feedback controller gains for both trotandcrawl gaits in only∗These authors contributed equally.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.50 learning steps, all while avoiding any unsafe interaction with the real robot . 
The resulting controllergains, together with our model-based controller, ensure robust legged locomotion against perturbationsand environmental uncertainties. In addition, our tests reveal that GOSAFEOPTcan effectively suggestreasonably good controller gains for previously unseen gait patterns such as flying trot andpronk .In summary, ( i) we formulate the problem of safe control parameter tuning for a model-based legged lo-comotion controller as constrained optimization, ( ii) we extend GOSAFEOPTto account for contextualscenarios while providing theoretical safety and optimality guarantees, ( iii) we demonstrate the superi-ority of contextual GOSAFEOPTover other state-of-the-art safe exploration algorithms in supportingdiverse gait patterns, and ( iv) we show that our method successfully and safely tunes control gainson the hardware and enhances the robustness and tracking performance of the controller significantly.2 Related WorkBridging the reality gap in legged locomotion tasks Several previous studies have emphasizedthe importance of considering an actuator behavior and identifying the system latency to successfullybridge the reality gap in legged robot systems [ 4,5,6]. These studies develop a simulation model ofa legged robot system incorporating either modeled or learned actuator dynamics and train a controlpolicy that can be effectively deployed to the robot hardware.Incorporating this strategy into a model-based control framework is an area of active investigation.Rather, in the context of model-based control, it is typically more straightforward to introduceadjustable control gain parameters and fine-tune them to align with the real-world behaviors ofthe robot. For instance, Kim et al. [7]use joint position- and velocity-level feedback to joint torquecommand in order to address any discrepancy between the actual torque output and the intendedtorque command for robots with proprioceptive actuators [ 8]. However, the fine-tuning of theseparameters continues to present a significant challenge. Schperberg et al. [9]utilize the unscentedKalman filter algorithm to recursively tune control parameters of a model-based motion controlleronline, and they successfully demonstrate it on the simulated quadrupedal robot in the presence ofsensor noise and joint-level friction. However, their proposed tuning method is inherently unsafeand can therefore lead to arbitrary harmful interactions with the system. In contrast, our methodaims to optimize control gains while avoiding any unsafe interactions with the robot hardware.Safe exploration for controller parameter tuning Training a controller directly on hardwareis a challenging task, as it requires sample efficient and safe exploration to avoid possible damageto the robot. In such settings, Bayesian optimization (BO [ 10]) emerges as a suitable framework dueto its sample efficiency. A notable example in the field of legged robotics comes from Calandra et al.[11], who successfully employed BO to learn optimal gait parameters for a bipedal robot platform.BO methods can be easily adapted to constrained settings for safe learning. Gelbart et al.[12], Hernández-Lobato et al. [13], Marco et al. [14] utilize constrained BO for finding safe optimalcontroller parameters. However, these works do not provide safety assurance during exploration.In contrast, methods such as SAFEOPT[15,16] and its extensions [ 1,17,18,19] guarantee safetythroughout the entire learning and exploration phases. 
SAFEOPTleverages regularity properties ofthe underlying optimization to expand the set of safe controllers. This expansion is inherently local,and accordingly SAFEOPTcan miss the global optimum. For dynamical systems, Baumann et al.[19] introduced GOSAFE, aglobal safe exploration algorithm which, unlike SAFEOPT, is capableof identifying the global optimum. However, the BO routine proposed in GOSAFE is expensive andsample inefficient which limits the scalability of the method. To this end, Sukhija et al. [1]introduceGOSAFEOPT.GOSAFEOPTleverages the underlying Markovian structure of the dynamical systemto overcome the GOSAFE’s restrictions. As a result, it can perform global safe exploration forrealistic and high-dimensional dynamical systems.In this work, we extend GOSAFEOPTto a contextual setting and apply it to systematically tune themodel-based controller of a quadruped robot for various gait patterns. Our proposed method notonly guarantees safety and global optimality but also scales effectively to systems with relativelyhigh-dimensional search space that involves a twenty-four-dimensional state space, six-dimensionalparameter space, and five-dimensional context space.23 Problem SettingSafe learning formulation The dynamics of robotic systems can generally be described as anordinary differential equation (ODE) of the form ̇s=f(s,u)where u∈ U ⊂ Rduis the controlsignal and s∈ S ⊂ Rdsis the state of the robot. Due to the reality gap, disparities can arise betweenthe real-world dynamics and the dynamics model f. This often results in a significant divergencebetween the behaviors of models and actual real-world systems, thereby making the control ofintricate and highly dynamic systems like quadrupeds particularly challenging.A common solution to this problem is using a feedback policy to rectify the model inaccuracies. Givena desired input signal u∗, desired state s∗, and true system state s, we formulate a parameterizedfeedback control policy in the form u=πθ(u∗,s∗,s)that steers sto closely align with s∗. Theparameters θare picked to minimize the tracking error. A common example of such a feedback policyis PD control, where u=u∗+θ(s∗−s), where θ∈Rdu×dscorresponds to the controller gains.Typically, choosing the parameters θinvolves a heuristic process, requiring experimental iterationswith the physical hardware. However, such interactions can be unpredictably risky and could possiblycause damage to the hardware.In this work, we formalize the tuning process as a constrained optimization problem:maxθ∈Θg(θ)such that qi(θ)≥0,∀i∈ Iq, (1)where gis an objective function (or reward function), qiare constraints with Iq={1, . . . , c}, andΘis a compact set of parameters over which we optimize. Since the true dynamics are unknown,we cannot solve Equation (1) directly. Instead, we interact with the robot to learn g(θ)andqi(θ),and solve the optimization problem in a black-box fashion. As we interact directly with the robothardware, it is important that the learning process is sample-efficient and safe, i.e., constraints qiarenot violated during learning.Extension to a contextual setting Our goal is to find optimal control gains specific to individualgait patterns and facilitate seamless online transitions across various gaits. Each gait patterndemonstrates unique dynamic properties. Therefore, the optimal feedback parameters θvarydepending on the gait pattern in question. We consider gaits as contexts zfrom a (not necessarilyfinite) set of contexts Z[2]. 
Contexts are essentially external variables specified by the user. We broaden our initial problem formulation from Equation (1) to accommodate these contexts;
\[ \max_{\theta \in \Theta} g(\theta, z) \quad \text{such that} \quad q_i(\theta, z) \ge 0, \ \forall i \in \mathcal{I}_q, \qquad (2) \]
where z ∈ Z is the context, which in our scenario, is the parameters of the gait of interest.

Assumptions  We reiterate and discuss the assumptions for GOSAFEOPT [1] in Appendix A. To summarize, we assume the following: (i) an initial safe set of parameters is known, (ii) the objective and constraints lie in a reproducing kernel Hilbert space with bounded norm, (iii) measurement noises are i.i.d. sub-Gaussian, (iv) the control frequency is sufficiently high to capture the state evolution, and (v) constraints q_i(θ, z) can be defined as the minimum of a state-dependent function q̄_i(θ, s, z) along the trajectory starting in s_0 with policy π_θ.

4 Control Gain Optimization for Model-based Legged Locomotion Control

4.1 Control Pipeline

[Figure 1 block diagram; components: control gain tuner (safe Bayesian optimization, training time), gait planner, MPC, WBC, joint controller, locomotion controller, robot; signals: body velocity command, gait parameters, contact timeline, base targets, foot targets, desired joint targets, joint torques, control gains, robot state observation.]
Figure 1: Overview of the system. The control gain tuner determines the optimal gains k_p, k_d for the locomotion controller given gait parameters z_g as a context variable. In order to learn the map between the optimal gains and context variable, we use a safe Bayesian optimization algorithm, which finds optimal gains by minimizing the mismatch between desired joint states s̄* and actual joint states s̄ while ensuring no safety breach during the learning process.

Model-based locomotion controller  Our locomotion controller utilizes a combination of the model predictive control (MPC) and the whole-body control (WBC) method following the previous work by Kim et al. [7], Kang et al. [20], and Kang et al. [21]. The MPC generates dynamically consistent base and foot trajectories by finding an optimal solution of a finite-horizon optimal control problem, using a simplified model. To convert these trajectories into joint-level control signals, we implement a WBC method that incorporates a more sophisticated dynamics model and takes into account the physical constraints of the robot. More specifically, we use a WBC formulation similar to the one presented by Kim et al. [7]. This method calculates the desired generalized coordinates x*, speed ẋ*, and acceleration ẍ* on a kinematic level while respecting task priority via the null-space projection [22]. Subsequently, it finds the desired joint torques τ* by solving a quadratic program that aligns with the desired generalized acceleration, adhering to the motion equations of the floating base and other physical constraints. For a more detailed explanation of the WBC formulation, the reader is referred to Appendix D.

We emphasize that the feed-forward torque commands τ* by themselves fail to produce the desired motion on the robot hardware due to model discrepancies. Particularly, we observed the actuator dynamics and joint friction, which are impractical to include in the system model, contribute significantly to this model mismatch. As a practical solution, we compute the final joint torque commands τ_cmd = τ* + k_p(x̄* − x̄) + k_d(ẋ̄* − ẋ̄) with the feedback gains k_p ∈ R^{d_u×d_x̄} and k_d ∈ R^{d_u×d_x̄} and send them to the robot. Here, x̄, ẋ̄ represent the joint angles and speeds (we use s̄ to represent the concatenated vector of x̄ and ẋ̄), while x̄*, ẋ̄* denote their desired values.
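As a small illustration of the feedback term whose gains are being tuned, the sketch below assembles the final joint torque command τ_cmd = τ* + k_p(x̄* − x̄) + k_d(ẋ̄* − ẋ̄) for a 12-joint quadruped; the diagonal gain structure and the numeric values are illustrative assumptions, not the parameterization used in the paper.

```python
import numpy as np

def joint_torque_command(tau_ff, q_des, q, qd_des, qd, Kp, Kd):
    """Final joint torques sent to the robot: the feed-forward torque tau* from the
    whole-body controller plus a joint-space PD correction with the tuned gains."""
    return tau_ff + Kp @ (q_des - q) + Kd @ (qd_des - qd)

# Illustrative diagonal gains for a 12-joint quadruped (assumed values, not the
# paper's); the optimizer adjusts these entries based on hardware rollouts.
Kp = np.diag(np.full(12, 20.0))
Kd = np.diag(np.full(12, 0.5))
tau_cmd = joint_torque_command(tau_ff=np.zeros(12),
                               q_des=np.zeros(12), q=np.zeros(12),
                               qd_des=np.zeros(12), qd=np.zeros(12),
                               Kp=Kp, Kd=Kd)
```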
We treat the feedback gains kpandkdas the parameters θthat we want to optimize using data samples collected from hardware directly.Gait parameterization We parameterize a quadrupedal gait pattern with zg= [dg, tgs, og1, og2, og3],where dgis the duty cycle for gait g,tgsis the gait duration, and ogiare the phase offsets of legs twoto four respectively, starting counterclockwise with the rear left leg. The duty cycle is defined as thecontact duration divided by the stride duration. In general, the optimal feedback parameters (k∗p,k∗d)change with the gait. We show this empirically in Section 5.4.2 Contextual G OSAFEOPTWe model the unknown objective and constraint functions h(·, i)(i= 0for the objective, i∈ Iqforconstraints) through Gaussian Process regression [ 23]. To this end, given a dataset {vj,yj}j≤n, withvj= (θj,zj)and the kernel k, we calculate mean and uncertainty estimations of h(·, i):μn(v, i) =k⊤n(v)(Kn+σ2I)−1yn,i,σ2n(v, i) =k(v,v)−k⊤n(v)(Kn+σ2I)−1kn(v),(3)where yn,i= [yj,i]⊤j≤nare the observations of h(·, i),kn(v) = [ k(v,vj)]⊤j≤n, and Kn=[k(vj,vl)]j,l≤nis the kernel matrix. We leverage these estimates to provide high-probability frequen-tist confidence intervals.Lemma 1 (Confidence intervals, Theorem 2 of [24] and Lemma 4.1 of [16]) .Lethbe defined ash(θ,z, i) =g(θ,z)ifi= 0,qi(θ,z)ifi∈ Iq.(4)For any δ∈(0,1)and under Assumptions 2 and 3 from Appendix A, with probability at least 1−δitholds jointly for all n, i,z,θthat|h(θ,z, i)−μn(θ,z, i)| ≤βn(δ)·σn(θ,z, i) (5)withβn(δ)≤ O(B+ 4σp2(γn|I|+ 1 + log(1 /δ)))whereγn= maxA⊂Θ×Z×I|A|≤nI(yA;hA). (6)4Here, I(yA;hA)denotes the mutual information between hA= [h(v)]v∈A, if modeled with aGP, and the noisy observations yAathA. It quantifies the reduction in uncertainty about huponobserving yAat points A. The quantity γnis a Bayesian construct, however, in the frequentist settingit quantifies the complexity of learning the function h. It is instance-dependent and can be boundeddepending on the domain Θ× Z × I and kernel function k(see Appendix B).Given the confidence interval from Equation (5), we define a confidence set for each context z,parameter θand index 0≤i≤c, asC0(θ,z, i) =[0,∞] ifθ∈ S0(z)andi≥1,[−∞,∞]otherwise,(7)Cn(θ,z, i) =Cn−1(θ,z, i)∩[μn(θ,z, i)±βn(δ)·σn(θ,z, i)], (8)We refer to ln(θ,z, i) = min Cn(θ,z, i)as the lower bound, un(θ,z, i) = max Cn(θ,z, i)theupper bound, and wn(θ,z, i) =un(θ,z, i)−ln(θ,z, i)the width of our confidence set.4.2.1 AlgorithmGiven (user-specified) context zn∈ Z, an episode nof contextual GOSAFEOPTis performed in oneof two alternating stages: local safe exploration (LSE) and global exploration (GE).Local safe exploration During the LSE stage, we explore the subset of the parameter space Θwhich is known to be safe, and learn backup policies for all the states on the trajectories visitedduring LSE. In this stage, the parameters are selected according to the acquisition functionθn= argmaxθ∈Gn−1(zn)∪Mn−1(zn)maxi∈Iwn−1(θ,zn, i) (9)where, Gn(zn)⊆Sn(zn)is a set of expanders (c.f., Equation (16) in Appendix B) andMn(zn)⊆Sn(zn)is a set of maximizers (c.f., Equation (18) in Appendix B). Intuitively,Gn(zn)∪ M n(zn)represents those parameters that can potentially lead to an expansion of the safesetSn(zn)or potentially be a solution to the optimization problem of Equation (2) with context zn.Global exploration Once LSE converges (see Equation (21) in Appendix B), we run the GE stagewhere we evaluate possibly unsafe policies and trigger a backup policy whenever necessary. 
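Both stages are driven by the Gaussian Process posterior of Equation (3) and the confidence bounds of Lemma 1 and Equation (8); the numpy sketch below spells these out for a single query point. The RBF kernel, its lengthscale, and the value of beta in the usage example are illustrative placeholders (the experiments use a Matérn kernel, see Appendix E).

```python
import numpy as np

def gp_posterior(K, k_star, k_star_star, y, noise_var):
    """GP posterior mean and standard deviation at a query point (Equation (3)).
    K           : n x n kernel matrix over observed inputs v_j = (theta_j, z_j)
    k_star      : n-vector of kernel values k(v, v_j) for the query v
    k_star_star : scalar k(v, v)
    y           : n-vector of noisy observations of h(., i)"""
    A = K + noise_var * np.eye(len(y))
    mu = k_star @ np.linalg.solve(A, y)
    var = k_star_star - k_star @ np.linalg.solve(A, k_star)
    return mu, np.sqrt(max(var, 0.0))

def confidence_bounds(mu, sigma, beta):
    """High-probability lower/upper bounds from Lemma 1; intersecting them over
    episodes gives the monotone confidence sets C_n of Equation (8)."""
    return mu - beta * sigma, mu + beta * sigma

# Minimal usage with an RBF kernel (illustrative hyperparameters only).
def rbf(a, b, ls=0.1):
    return np.exp(-np.sum((a - b) ** 2) / (2 * ls ** 2))

X = np.random.rand(5, 2)
y = np.random.rand(5)
K = np.array([[rbf(a, b) for b in X] for a in X])
v = np.array([0.5, 0.5])
mu, sigma = gp_posterior(K, np.array([rbf(v, b) for b in X]), rbf(v, v), y, 0.01)
lower, upper = confidence_bounds(mu, sigma, beta=4.0)
```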
If nobackup policy is triggered, we conclude that the evaluated policy is safe and add it to our safe set.After a new parameter is added to the safe set during GE, we continue with LSE.The parameters are selected according to the acquisition functionθn= argmaxθ∈Θ\(Sn−1(zn)∪E(zn))maxi∈Iwn−1(θ,zn, i) (10)where Edenotes all parameters which have been shown to be unsafe (see line 7 of Algorithm 4 inAppendix B). If all parameters have been determined as either safe or unsafe, i.e., Θ\(Sn(zn)∪E(zn)) =∅, then GE has converged.Summary A detailed description of the contextual GOSAFEOPTalgorithm is provided in Ap-pendix B.2. GOSAFEOPTalternates between local safe exploration and global exploration. Therefore,it can seek for the optimum globally. In Figure 5 of Appendix C, we analyze the algorithm using asimple example for better understanding.The only difference between the contextual and non-contextual variants is that contextualGOSAFEOPTmaintains separate sets Sn, Cn,Bn,Dn,E, andXFailfor each context z∈ Z. Forany given context z∈ Z, the running best guess of contextual GOSAFEOPTfor the optimum isˆθn(z) = argmaxθ∈Sn(z)ln(θ,z,0).4.2.2 Theoretical ResultsIn the following, we state our main theorem, which extends the safety and optimality guarantees fromSukhija et al. [1] to the contextual case.5We say that the solution to Equation (2), θ∗(z), isdiscoverable if there exists a finite ̃nsuch thatθ∗(z)∈ ̄Rzε(S ̃n(z)). Here, ̄Rzε(S)⊆Θrepresents the largest safe set which can be reached safelyfrom S⊆Θup to ε-precision (c.f., Equation (20) in Appendix B).Theorem 1. Consider any ε >0andδ∈(0,1). Further, let Assumptions 1 to 5 from Appendix Ahold and βn(δ)be defined as in Lemma 1. For any context z∈ Z, let ̃n(z)be the smallest integersuch thatn(z)β ̃n(z)(δ)·γn(z)|I|(z)≥C|Θ|2ε2where n(z) = ̃n(z)Xn=11{z=zn} (11)andC= 32 /log(1 + σ−2). Here, γn(z) = max A⊂Θ×I,|A|≤nI(yA,z;hA,z)≤γndenotes themutual information between hA,z= [h(θ,z, i)](θ,i)∈Aand corresponding observations.Then, when running contextual GOSAFEOPTand if θ∗(z)is discoverable, the following inequalitiesjointly hold with probability at least 1−2δ:1.∀n≥0, t≥0, i∈ Iq: ̄qi(θn,s(t),z)≥0, (safety)2.∀z∈ Z, n≥ ̃n(z):g(ˆθn(z),z)≥g(θ∗(z),z)−ε. (optimality)It is natural to start for each i∈ Iwith kernels kZiandkΘion the space of contexts and the space ofparameters, respectively, and to construct composite kernels ki=kZi⊗kΘiorki=kZi⊕kΘias theproduct or sum of the pairs of kernels (see section 5.1 of [ 2]). In this case, the information gain γnissublinear in nfor common choices of kernels kZiandkΘiimplying that n∗(z)is finite.The theorem is proven in Appendix B.3. Comparing to contextual SAFEOPT[16] which is onlyguaranteed to converge to safe optima in ̄Rzε(S0(z)), the global exploration steps of contextualGOSAFEOPTcan also discover a safe optimum which was not reachable from the initial safe seed.We remark that Theorem 1 is a worst-case result and, in particular, disregards a possible statisticaldependence between different contexts. In practice, if a kernel is chosen which does not treat allcontexts as independent, then the convergence can be much faster as knowledge about a particularcontext can be transferred to other contexts.5 Experimental resultsWe evaluate the performance of contextual GOSAFEOPTusing the Unitree Go1 robot in both physicalsimulation and hardware experiments. 
In the experiments, we use the following objective and constraint functions:
\[ g(\theta, z) = -\sum_{t \ge 0} \| \bar{s}^*(t) - \bar{s}(t, z, \theta) \|^2_{Q_g}, \qquad q(\theta, z) = \min_{t \ge 0} \left[ v - \| \bar{s}^*(t) - \bar{s}(t, z, \theta) \|^2_{Q_q} \right], \qquad (12) \]
where s̄ denotes the joint-level state of the system (i.e., joint angles and speeds), s̄* denotes its desired values, and both Q_g and Q_q are positive semi-definite matrices. Additionally, we define an error threshold, v, that the norm of the state error should not surpass throughout the entire duration. Further details of the experimental setup are provided in Appendix E. We also made the implementation available online¹ and uploaded a video showcasing the experiments².

Simulation experiments  In our simulation experiments, we contrasted the learning curves of contextual GOSAFEOPT to SAFEOPT, and GOSAFEOPT without contexts. Additionally, we evaluate GP-UCB [25], an unconstrained BO algorithm. To simulate the model mismatches and uncertainties, we introduced disturbances in the form of joint impedances at every joint (see Appendix E for more details). These disturbances destabilize the system, leading to constraint violations. To adhere to the prerequisites of the safe BO algorithms, we initiate all experiments with roughly hand-tuned control gains that are safe, yet suboptimal.

¹ https://github.com/lasgroup/gosafeopt
² https://youtu.be/zDBouUgegrU

[Figure 2 plots: normalized reward vs. iteration steps for the Trot and Crawl gaits; legend: GOSAFEOPT, Contextual GOSAFEOPT, SAFEOPT, Contextual SAFEOPT, UCB.]
Figure 2: Simulation experiments. On the left, we display the learning curves of the BO algorithms trained with the trot gait. After completing this training, we started new training for the crawl gait and included the contextual variants of GOSAFEOPT and SAFEOPT in the assessment to investigate the impact of contextual settings on learning performance, as illustrated on the right.

[Figure 3 plots: reward for crawl vs. iteration steps (trained for trot, trained for crawl) and tracking error [deg] vs. iteration steps; legend: Trot untrained, Trot trained, Crawl untrained, Crawl trained.]
Figure 3: Hardware experiments. On the left, we present the learning curve of Contextual GOSAFEOPT. It shows that the algorithm successfully tunes the controller gains for trot, and then subsequently for crawl. In the center, we compare the performance of the optimized control gains of trot when applied to crawl, against the gains specifically optimized for crawl. On the right, we present the tracking error of the hip joint for the front-left leg with the trot and crawl gaits at initialization (trot: yellow, crawl: violet) and after optimization (trot: green, crawl: blue).

We optimized joint-level feedback gains for two different gaits, trot and crawl, sequentially. All simulation experiments were conducted using ten different seeds, and we report the mean with one standard error in Figure 2. Throughout our experiments, all of the safe algorithms met safety constraints. In contrast, the standard GP-UCB method violates the constraints in 4.7% and 8% of all evaluations for the trot and crawl gaits, respectively. In Figure 2's left plot, we illustrate the normalized performance of GOSAFEOPT and SAFEOPT w.r.t. our objective for the trot gait. The learning curves clearly indicate that GOSAFEOPT's global exploration facilitates faster identification of better control gains. Notably, the GOSAFEOPT algorithm performs nearly as well as GP-UCB but without any constraint violations.

In the second test, we contrasted the contextual variants of GOSAFEOPT and SAFEOPT with their non-contextual counterparts.
The results, as depicted in the right part of Figure 2, suggest that thecontextual variants yield superior optima, with the contextual GOSAFEOPTalgorithm emerging asthe standout performer. The contextual variants leverage the information collected from the previoustraining with the trotgait, enabling them to identify better optima for the newly introduced crawlgait more efficiently. Additionally, they evade unsafe or unstable evaluations, unlike GP-UCB. Thegait parameters we used for the trotandcrawl gaits are provided in Appendix E.1.Hardware experiments Similarly to the simulation experiments, we first tune the controller for thetrotgait and subsequently for the crawl gait. In Figure 3’s left plot, we report the mean performancewith one standard error, based on experiments conducted using three different seeds. In all ourexperiments, we note that the contextual GOSAFEOPTalgorithm results in zero constraint violations.In our hardware experiments, we confirmed that different gait patterns require distinct sets of controlgains. As shown in the center plot of Figure 3, the best-performing parameters for the trotgait do notperform well on the crawl gait. However, as we train with a context for the new gait pattern crawl ,there is a notable improvement in the reward. Here, we highlight that the contextual GOSAFEOPTcan harness previously gathered data when encountering new gait patterns, accelerating the discoveryof optimal gains for the new gait.7Initial gains Optimized gainsPush Slippery surfaceFigure 4: Robustness test. Compared to the roughly hand-tuned initial gains (left), the optimal gainsderived from our method (right) significantly improve the motion controller’s robustness againstexternal pushes (top) and slippery contacts caused by socks on the robot’s feet (bottom).Additionally, in the right plot of Figure 3, we evaluate the tracking performance of our tuned controller,focusing on the hip joint’s joint-angle tracking. When comparing the initial and the tuned controlleracross both trotandcrawl gaits, it’s evident that the tuned controller has a significantly reducedtracking error. For a comprehensive view, the error plots for other joints are provided in Appendix E.To assess the robustness of the locomotion controller with the optimal control gains, we introduceduncertainties in the form of external forces and simulated slippery conditions by placing socks onthe robot’s feet, as depicted in Figure 4. Our experiments demonstrate that the robustness of ourcontroller is significantly enhanced after the tuning process, and it is able to recover from pushes andretain stability on a slippery surface. On the other hand, the controller with the initial control gains ismore vulnerable to these uncertainties and tends to easily crash.Finally, we also highlight the zero-shot generalization capabilities of our method for unseen gaitpatterns through a learned model. While flying trot andpronk gaits were not presented during thetraining, the learned model effectively suggests reasonably good control gains for these gaits. Weencourage readers to view the accompanying video2for a more in-depth understanding.6 ConclusionIn this work, we extend GOSAFEOPTto the contextual setting and showcase its efficacy in adjustingthe control parameters for a model-based legged locomotion controller through both simulation andhardware experiments. 
In our experiments, contextual GOSAFEOPTdemonstrated superior conver-gence for newly introduced gait patterns by drawing upon information from previous training sessions.Additionally, our results confirmed that contextual GOSAFEOPTcan effectively identify better optimawithout violating safety constraints. Across all of our experiments, contextual G OSAFEOPToutper-forms prior approaches by a large margin and successfully finds optimal control gains for differentquadrupedal gait patterns. After the fine-tuning process, we found that our model-based controllerexhibits a considerable improvement in robustness to various types of uncertainties, significantlyenhancing the system’s reliability. We highlight that the applicability of our proposed algorithmextends beyond our current scenario. For instance, we are interested in applying this method to amore diverse set of quadrupedal gait patterns and extend its scope to encompass non-periodic andunstructured gait patterns. Furthermore, we stress that the algorithm is controller- or robot-agnostic,making it a pivotal tool for addressing the reality gap across various contexts.Limitations While the algorithm provides theoretical safety guarantees, it is often uncertain inreal-world applications whether all theoretical prerequisites are fulfilled. For instance, even thoughthe surrogate model might be Lipschitz-continuous, the Lipschitz constant is generally not knowna priori, i.e., Assumption 2 from Appendix A may not be satisfied. This often results in a tooconservative choice of parameters. Furthermore, a wrong parameter choice for the backup prior canresult in unsafe global exploration or no global exploration at all. In general, while safe explorationmethods such as SAFEOPTandGOSAFEOPThave been successfully applied on several practicaldomains [ 1,17,18,26,27,28], bridging the disparity between theoretical foundations and real-worldapplication remains a topic of active investigation [29, 30, 31].8AcknowledgementsWe thank Lenart Treven and Flavio De Vincenti for their feedback on this work.This project has received funding from the Swiss National Science Foundation under NCCR Automa-tion, grant agreement 51NF40 180545, the European Research Council (ERC) under the EuropeanUnion’s Horizon 2020 research and innovation programme, grant agreement No. 866480, and theMicrosoft Swiss Joint Research Center.References[1]B. Sukhija, M. Turchetta, D. Lindner, A. Krause, S. Trimpe, and D. Baumann. Gosafeopt:Scalable safe exploration for global optimization of dynamical systems. Artificial Intelligence ,2023.[2]A. Krause and C. Ong. Contextual gaussian process bandit optimization. In Advances in NeuralInformation Processing Systems , 2011.[3] Unitree Robotics. https://www.unitree.com/en/go1 .[4]J. Tan, T. Zhang, E. Coumans, A. Iscen, Y . Bai, D. Hafner, S. Bohez, and V . Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. In Proceedings of Robotics: Scienceand Systems , 2018.[5]J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V . Tsounis, V . Koltun, and M. Hutter. Learningagile and dynamic motor skills for legged robots. Science Robotics , 2019.[6]D. Kang, J. Cheng, M. Zamora, F. Zargarbashi, and S. Coros. Rl + model-based control: Usingon-demand optimal control to learn versatile legged locomotion. IEEE Robotics and AutomationLetters , 2023.[7]D. Kim, J. D. Carlo, B. Katz, G. Bledt, and S. Kim. Highly dynamic quadruped locomotion viawhole-body impulse control and model predictive control. arXiv preprint arXiv:1909.06586 ,2019.[8]P. M. 
Wensing, A. Wang, S. Seok, D. Otten, J. Lang, and S. Kim. Proprioceptive actuator designin the mit cheetah: Impact mitigation and high-bandwidth physical interaction for dynamiclegged robots. IEEE Transactions on Robotics , 2017.[9]A. Schperberg, S. D. Cairano, and M. Menner. Auto-tuning of controller and online trajectoryplanner for legged robots. IEEE Robotics and Automation Letters , 2022.[10] J. Mockus, V . Tiesis, and A. Zilinskas. The application of Bayesian methods for seeking theextremum. Towards Global Optimization , 1978.[11] R. Calandra, A. Seyfarth, J. Peters, and M. P. Deisenroth. Bayesian optimization for learninggaits under uncertainty: An experimental comparison on a dynamic bipedal walker. Annals ofMathematics and Artificial Intelligence , 2016.[12] M. Gelbart, J. Snoek, and R. Adams. Bayesian optimization with unknown constraints. Confer-ence on Uncertainty in Artificial Intelligence , 2014.[13] J. M. Hernández-Lobato, M. A. Gelbart, R. P. Adams, M. W. Hoffman, and Z. Ghahramani. Ageneral framework for constrained Bayesian optimization using information-based search. TheJournal of Machine Learning Research , 2016.[14] A. Marco, D. Baumann, M. Khadiv, P. Hennig, L. Righetti, and S. Trimpe. Robot learning withcrash constraints. IEEE Robotics and Automation Letters , 2021.9[15] Y . Sui, A. Gotovos, J. Burdick, and A. Krause. Safe exploration for optimization with Gaussianprocesses. In International Conference on Machine Learning , 2015.[16] F. Berkenkamp, A. Krause, and A. P. Schoellig. Bayesian optimization with safety constraints:safe and automatic parameter tuning in robotics. Machine Learning , 2021.[17] Y . Sui, V . Zhuang, J. Burdick, and Y . Yue. Stagewise safe Bayesian optimization with Gaussianprocesses. In International Conference on Machine Learning , 2018.[18] C. König, M. Turchetta, J. Lygeros, A. Rupenyan, and A. Krause. Safe and efficient model-freeadaptive control via bayesian optimization. In IEEE International Conference on Robotics andAutomation , 2021.[19] D. Baumann, A. Marco, M. Turchetta, and S. Trimpe. GoSafe: Globally optimal safe robotlearning. In IEEE International Conference on Robotics and Automation , 2021. Proofs inextended online version: arXiv 2105.13281 .[20] D. Kang, S. Zimmermann, and S. Coros. Animal gaits on quadrupedal robots using motionmatching and model-based control. In IEEE/RSJ International Conference on Intelligent Robotsand Systems . IEEE, 2021.[21] D. Kang, F. De Vincenti, N. C. Adami, and S. Coros. Animal motions on legged robots usingnonlinear model predictive control. In IEEE/RSJ International Conference on Intelligent Robotsand Systems . IEEE, 2022.[22] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo. Robotics: Modelling, Planning and Control .Springer Publishing Company, Incorporated, 1st edition, 2008.[23] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning (AdaptiveComputation and Machine Learning) . The MIT Press, 2005.[24] S. R. Chowdhury and A. Gopalan. On kernelized multi-armed bandits. In InternationalConference on Machine Learning , 2017.[25] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Gaussian process optimization in the banditsetting: No regret and experimental design. In Proceedings of the 27th International Conferenceon International Conference on Machine Learning . Omnipress, 2010.[26] A. Wischnewski, J. Betz, and B. Lohmann. A model-free algorithm to safely approach thehandling limit of an autonomous racecar. 
In IEEE International Conference on ConnectedVehicles and Expo , 2019.[27] M. Fiducioso, S. Curi, B. Schumacher, M. Gwerder, and A. Krause. Safe contextual Bayesianoptimization for sustainable room temperature PID control tuning. In International JointConference on Artificial Intelligence , 2019.[28] S. E. Cooper and T. I. Netoff. Multidimensional bayesian estimation for deep brain stimulationusing the safeopt algorithm. medRxiv , 2022.[29] C. Fiedler, C. W. Scherer, and S. Trimpe. Practical and rigorous uncertainty bounds for Gaussianprocess regression. AAAI Conference on Artificial Intelligence , 35(8), 2021.[30] F. Berkenkamp, A. P. Schoellig, and A. Krause. No-regret bayesian optimization with unknownhyperparameters. Journal of Machine Learning Research , 2019.[31] J. Rothfuss, C. Koenig, A. Rupenyan, and A. Krause. Meta-learning priors for safe bayesianoptimization. In 6th Annual Conference on Robot Learning , 2022.[32] S. Vakili, K. Khezeli, and V . Picheny. On information gain and regret bounds in gaussianprocess bandits. In International Conference on Artificial Intelligence and Statistics , 2021.10[33] G. Brockman, V . Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba.Openai gym. arXiv preprint arXiv:1606.01540 , 2016.[34] D. Kang, F. De Vincenti, and S. Coros. Nonlinear model predictive control for quadrupedallocomotion using second-order sensitivity analysis. arXiv preprint arXiv: 2207.10465 , 2022.[35] R. Smith et al. Open dynamics engine, 2007.[36] K. Serkan, I. Turker, and G. Moncef. Multidimensional particle swarm optimization for machinelearning and pattern recognition . Springer-Verlag Berlin Heidelberg, 2014.[37] Constrained Bayesian optimization with particle swarms for safe adaptive controller tuning.IFAC-PapersOnLine , 2017.[38] G. Pleiss, J. Gardner, K. Weinberger, and A. G. Wilson. Constant-time predictive distributionsfor Gaussian processes. In Proceedings of the 35th International Conference on MachineLearning . PMLR, 2018.11Contents of AppendixA Assumptions 13B Proofs 13B.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13B.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14B.3 Proof of Theorem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16C Comparison of S AFEOPTand G OSAFEOPT 16D Control Formulation 17E Experimental Details 18E.1 Gait parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18E.2 Bayesian optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19E.3 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19E.4 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19F Practical modifications 19F.1 Boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19F.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21F.3 Fix iterations and discard unpromising new detected safe regions . . . . . . . . . . 21F.4 Posterior estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2112A AssumptionsIn this section, we reiterate the assumptions by Sukhija et al. [1] for G OSAFEOPT.Assumption 1 (Initial safe seed) .For any episode n≥1with (user-specified) context zn∈ Z, anon-empty initial safe set of parameters Sn−1(zn)⊂Θis known. 
That is, for all θ∈ Sn−1(zn)andalli∈ Iq,qi(θ,zn)≥0.Here,Sn(z)⊇ S 0(z)denotes the safe set after episode nfor the given context zas defined inEquation (15) in Appendix B. Given the prior knowledge of the dynamics, a conservative safe setof parameters represents some initial stable feedback controller. Accordingly, this assumption istypically satisfied in practice. The assumption is necessary as, in principle, during each iteration, anadversarial context could be chosen for which the initial safe set does not include any safe parameters.Assumption 2 (Continuity of objective and constraints) .Lethbe defined ash(θ,z, i) =g(θ,z)ifi= 0,qi(θ,z)ifi∈ Iq.(13)We assume that hlies in a reproducing kernel Hilbert space (RKHS) associated with a kernel kand has a bounded norm in that RKHS, that is, ∥h∥k≤B. Furthermore, we assume that gandqi(∀i∈ Iq)are Lipschitz-continuous with known Lipschitz constants.This is a common assumption in the model-free safe exploration literature [ 16,19,1]. Sukhija et al.[1] discuss the practical implications of this assumption in more detail.Assumption 3. We obtain noisy measurements of hwith measurement noise i.i.d. σ-sub-Gaussian.Specifically, for a measurement yiofh(θ,z, i), we have yi=h(θ,z, i) +εiwithεiσ-sub-Gaussianfor all i∈ Iwhere we write I={0, . . . , c}.Assumption 4. We observe the state s(t)every ∆tseconds. Furthermore, for any s(t)andρ∈[0,1],the distance to s(t+ρ∆t)induced by any action is bounded by a known constant Ξ, that is,∥s(t+ρ∆t)−s(t)∥ ≤Ξ.Assumption 4 is crucial to guarantee safety in continuous time even though the state is measured atdiscrete time instances. For highly dynamical systems, such as quadrupeds, the observation frequencyis typically very high, e.g., 500 Hz -1 kHz , and accordingly Ξis small.Assumption 5. We assume that, for all i∈ {1, . . . , c},qiis defined as the minimum of a state-dependent function ̄qialong the trajectory starting in s0with controller πθ. Formally,qi(θ,z) = mins′∈ξ(s0,θ,z) ̄qi(s′,z,θ), (14)withξ(s0,θ,z)={s0+Rt0f(s(τ),πθ(s(τ),z))dτ|t≥0}representing the trajectory of s(t)underpolicy parameter θand context zstarting from s0at time 0.Assumption 5 is an assumption on our choice of the constraint. Many common constraints, such asthe minimum distance to an obstacle along a trajectory, satisfy this assumption.B ProofsB.1 DefinitionsWe begin by re-stating definitions of sets used by GOSAFEOPT[1] with an additional context variable.Fix an arbitrary context z∈ Z. The safe set is defined recursively asSn(z) =\i∈Iq[θ′∈Sn−1(z){θ∈Θ|ln(θ′,z, i)−LΘ(z)∥θ−θ′∥ ≥0} (15)13where LΘ(z)is the joint Lipschitz constant of gand the constraints qiunder context z. The expandersare defined asGn(z) ={θ∈Sn(z)|en(θ,z)>0} with (16)en(θ,z) =|{θ′∈Θ\Sn(z)| ∃i∈ Iq:un(θ,z, i)−LΘ(z)∥θ−θ′∥ ≥0}| (17)and the maximizers are defined asMn(z) ={θ∈Sn(z)|un(θ,0)≥maxθ′∈Sn(z)ln(θ′,0)}. (18)The analysis requires the ε-slacked safe region ̄Rzε(S)given an initial safe seed S⊆Θ, which isdefined recursively asRzε(S) =S∪ {θ∈Θ| ∃θ′∈Ssuch that ∀i∈ Iq:qi(θ′,z)−ε−LΘ(z)∥θ−θ′∥ ≥0},(19) ̄Rzε(S) = limn→∞(Rzε)n(S) (20)where (Rzε)ndenotes the nth composition of Rzεwith itself.B.2 AlgorithmB.2.1 Local Safe ExplorationDuring LSE, we keep track of a set of backup policies B(z)⊆Θ× X and observations of hforeach context z∈ Z , which we denote by D(z)⊆Θ×R|I|. 
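For intuition, the following numpy sketch evaluates the set definitions of Equations (15) to (18) on a finite parameter grid for a single context; the grid discretization, the variable names, and the omission of the context argument are simplifications of ours.

```python
import numpy as np

def update_sets(Theta, safe, l, u, L):
    """Sketch of the set definitions in Equations (15)-(18) on a finite grid Theta,
    for a single fixed context z (the context argument is dropped for brevity).
    l, u : (N, 1 + c) lower/upper confidence bounds; column 0 is the objective,
           columns 1..c are the constraints.  L is the Lipschitz constant L_Theta(z).
    safe : boolean mask of the previous safe set S_{n-1}(z)."""
    N, c = len(Theta), l.shape[1] - 1
    dist = np.linalg.norm(Theta[:, None, :] - Theta[None, :, :], axis=-1)  # pairwise

    # Equation (15): theta is safe if, for every constraint i, some previously safe
    # theta' certifies it, i.e. l_n(theta', i) - L * ||theta - theta'|| >= 0.
    new_safe = np.ones(N, dtype=bool)
    for i in range(1, c + 1):
        certified = (l[safe, i][None, :] - L * dist[:, safe] >= 0).any(axis=1)
        new_safe &= certified

    # Equations (16)-(17): expanders are safe parameters whose optimistic bound could
    # certify at least one currently unsafe parameter for some constraint.
    expanders = np.zeros(N, dtype=bool)
    unsafe = ~new_safe
    for k in np.flatnonzero(new_safe):
        expanders[k] = bool((u[k, 1:][None, :] - L * dist[unsafe, k][:, None] >= 0).any())

    # Equation (18): maximizers are safe parameters whose upper bound on the objective
    # still reaches the best lower bound within the safe set.
    maximizers = np.zeros(N, dtype=bool)
    if new_safe.any():
        maximizers = new_safe & (u[:, 0] >= l[new_safe, 0].max())
    return new_safe, expanders, maximizers
```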
An LSE step is described formally inAlgorithm 1.Algorithm 1 Local Safe Exploration (LSE)Input : Current context zn, safe sets S, sets of backups B, datasets D, Lipschitz constants LΘ1:Recommend parameter θnwith Equation (9)2:Collect R=Sk∈N{(θn,s(k))}andh(θn,zn, i) +εn3:B(zn) =B(zn)∪ R,D(zn) =D(zn)∪ {(θn, h(θn,zn, i) +εn)}4:Update sets S(z),G(z), andM(z)for all z∈ Z ▷Equations (15), (16) and (18)Return :S,B,DThe LSE stage terminates for some given context z∈ Z when the connected safe set is fully exploredand the optimum within the safe set is discovered. This happens when the uncertainty among theexpanders and maximizers is less than εand the safe set is not expandingmaxθ∈Gn−1(z)∪Mn−1(z)maxi∈Iwn−1(θ,z, i)< ε and Sn−1(z) =Sn(z). (21)B.2.2 Global ExplorationA GE step conducts an experiment about a candidate parameter θn∈Θwhich may not be safe. If thesafety boundary is approached, GE conservatively triggers a safe backup policy. If, on the other hand,the experiment is successful, a new (potentially disconnected) safe region was discovered which canthen be explored by LSE in the following steps. A GE step is described formally in Algorithm 2.B.2.3 Boundary ConditionThe boundary condition checks when the system is in the state swhether there is a backup (θs,ss)∈B(z)such that ssis sufficiently close to sto guarantee that θscan steer the system back to safety forany state which may be reached in the next time step. If no such backups exist for the next states, abackup is triggered at the current state. In this case, the backup parameter θ∗swith the largest safetymargin is triggered:θ∗s= max(θs,ss)∈Bn(zn)mini∈Iqln(θs,zn, i)−Lx∥s−ss∥. (22)14Algorithm 2 Global Exploration (GE)Input :zn, safe sets S, confidence intervals C, sets of backups B, datasets D, fail sets EandXFail1:Recommend global parameter θnwith Equation (10)2:θ=θn,sFail=∅, Boundary = False3:while Experiment not finished do ▷Rollout policy4: ifNot Boundary then5: Boundary, θ∗s= B OUNDARY CONDITION (zn,s(k),B)6: ifBoundary then ▷Trigger backup policy7: θ=θ∗s,sFail=s(k)8: E=E ∪ {θn},XFail=XFail∪ {sFail} ▷Update fail sets9: Execute until s(k)10:Collect R=Sk∈N{(θn,s(k))}, and h(θn,zn, i) +εn11:ifNot Boundary then ▷Successful global search12: B(zn) =B(zn)∪ R andD(zn) =D(zn)∪ {(θn, h(θn,zn, i) +εn)}13: S(zn) =S(zn)∪ {θn}14: C(θn,zn, i) =C(θn,zn, i)∩[0,∞]for all i∈ IqReturn :S,C,B,D,E,XFailAlgorithm 3 BOUNDARY CONDITIONInput : context zn, state s, backups B1:if∀(θs,ss)∈ B(zn),∃i∈ Iq:ln(θs,zn, i)−Lx∥s−ss∥+ Ξ<0then2: Boundary = True, Calculate θ∗s(Equation (22))3:else4: Boundary = False, θ∗s=Nullreturn : Boundary, θ∗sB.2.4 Contextual G OSAFEOPTThe algorithm stops for a particular context z∈ Z whenEquation (21) is satisfied| {z }LSE convergedand Θ\(Sn(zn)∪ E(zn)) =∅| {z }GE converged. 
(23)The full algorithm is described in Algorithm 4.Algorithm 4 Contextual G OSAFEOPTInput : Domain Θ, Contexts Z, Sequence of contexts {zn∈ Z} n≥1,k(·,·),S0,C0,D0,ε1:Initialize GP h(θ,z, i),E(z) =∅,XFail(z) =∅,B0(z) ={(θ, x0)|θ∈S0}2:while∃z∈ Z such that G OSAFEOPThas not terminated for z(Equation (23)) do3: ifGOSAFEOPThas terminated for zn(Equation (23)) then ▷Skip finished contexts4: continue5: fors∈ X Fail(zn)do ▷Update fail sets6: ifNot B OUNDARY CONDITION (zn,s,Bn)then7: E(zn) =E(zn)\ {θ},XFail(zn) =XFail(zn)\ {s}8: Update Cn(θ,z, i)∀θ∈Θ,z∈ Z,i∈ I ▷Update confidence intervals, Equation (8)9: ifLSE not converged for context zn(Equation (21)) then10: Sn+1,Bn+1,Dn+1=LSE(zn,Sn,Bn,Dn)11: else12: Sn+1, Cn+1,Bn+1,Dn+1,E,XFail=GE(zn,Sn, Cn,Bn,Dn,E,XFail)return :{ˆθn(z)|z∈ Z}15B.3 Proof of Theorem 1Proof. We first derive the sample complexity bound of non-contextual GOSAFEOPT. Then, weextend this sample complexity bound to contextual GOSAFEOPT. We assume without loss ofgenerality that βnis monotonically increasing with n.Sample complexity Assume first that the context is fixed, that is, ∀n≥1 :zn=z. In this case,the safety guarantee (with probability at least 1−δ) follows directly from Theorem 4.1 of Sukhijaet al. [1]. Thus, it remains to show that the optimality guarantee with the given sample complexityholds also with probability at least 1−δ, as then their union holds jointly with probability at least1−2δusing a union bound.It is straightforward to see (by employing Theorem 4.1 of Berkenkamp et al. [16]) that Theorem 4.2of Sukhija et al. [1] holds for n∗being the smallest integer such thatn∗≥C|Θ|βn∗(δ)γn∗|I|2ε2(24)where we use that | ̄Rz0(S)| ≤ | Θ|for any S⊆Θand|Θ|+ 1≤2|Θ|. Thus, whenever a newdisconnected safe region is discovered by GE, LSE is run for at most n∗steps.It follows from the stopping criterion of GE, Θ\(Sn∪ E) =∅, that GE is run for at most |Θ|consecutive steps (i.e., without an LSE-step in between). Clearly, a new disconnected safe regioncan be discovered by GE at most |Θ|times, and hence, GOSAFEOPTterminates after at most |Θ|iterations of at most n∗LSE steps and at most |Θ|GE steps. Altogether, we have that the optimalityguarantee holds with probability at least 1−δfor ̃nbeing the smallest integer such that ̃n=C|Θ|2β ̃n(δ)γ ̃n|I|ε2≥ |Θ|(n∗+|Θ|), (25)completing the proof of Theorem 1 for non-contextual G OSAFEOPT.Multiple contexts Visiting other contexts Z \ {z}in between results in additional measurementsand increases the constant β, ensuring that the confidence intervals are well-calibrated. The onlydifference in the proofs is the appearance of βn∗(z)rather than βn(z)in Equation (24). In thecontextual setting, n∗(z)is the smallest integer such thatn(z)≥C|Θ|βn∗(z)(δ)γn(z)|I|(z)2ε2(26)wheren(z) =n∗(z)Xn=11{z=zn}counts the number of episodes with context zuntil episode n∗(z). The bound on ̃n(z)then followsanalogously to Equation (25).C Comparison of S AFEOPTand G OSAFEOPTTo visually analyze the different exploration properties of SAFEOPTandGOSAFEOPTwe usethe Pendulum Environment from OpenAI [ 33] as an example. The ideal trajectory is given bysome undisturbed controller. In our toy problem, we use a simple PD control which is sufficientfor the pendulum swing-up problem and various oscillating trajectories. To simulate the sim tohardware gap, we artificially add a disturbance to the applied torque in the form of joint impedancesτ=τ∗−θdp( ̃x∗− ̃x) +θdd ̇ ̃xwhere θdpandθddare unknown disturbance parameters and ̃x∗, ̃xarethe desired and observed motor angles. 
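The sketch below writes out this disturbance model together with one plausible form of the additional PD compensation; the functional form of the compensating controller and all numeric values are assumptions made only for illustration.

```python
import numpy as np

def disturbed_torque(tau_cmd, x_des, x, x_dot, theta_dp, theta_dd):
    """Joint-impedance disturbance of the pendulum toy problem: the torque actually
    applied is the commanded torque minus an unknown position/velocity term."""
    return tau_cmd - theta_dp * (x_des - x) + theta_dd * x_dot

def compensating_pd(tau_ideal, x_des, x, x_dot, kp, kd):
    """One plausible form of the additional PD controller whose gains (kp, kd) are
    tuned so that the disturbed pendulum still follows the ideal trajectory."""
    return tau_ideal + kp * (x_des - x) - kd * x_dot

# Illustrative values only; the disturbance parameters are unknown to the tuner.
tau = compensating_pd(tau_ideal=0.0, x_des=0.5, x=0.3, x_dot=0.1, kp=2.0, kd=0.5)
tau_applied = disturbed_torque(tau, x_des=0.5, x=0.3, x_dot=0.1,
                               theta_dp=1.0, theta_dd=0.2)
```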
We use GOSAFEOPT to tune an additional PD controller which should follow the ideal trajectory and compensate for the artificial disturbance. Figure 5 shows an example run of SAFEOPT and GOSAFEOPT. Whereas SAFEOPT is restricted to expanding the initial safe region, GOSAFEOPT can discover new safe regions, and thus find a better optimum.

Table 1: Here, we summarize different magnitudes of γ_n for composite kernels from Theorems 2 and 3 of Krause and Ong [2] and for individual kernels from Theorem 5 of Srinivas et al. [25] and Remark 2 of Vakili et al. [32]. The magnitudes hold under the assumption that the domain of the kernel is compact. γ^Θ and γ^Z denote the information gain for the kernels k_Θ and k_Z, respectively. B_ν is the modified Bessel function.

Kernel  | k(v, v′)                                                          | γ_n
Product | k_Θ(v, v′) · k_Z(v, v′), if k_Z has rank at most d                | d γ^Θ_n + d log(n)
Sum     | k_Θ(v, v′) + k_Z(v, v′)                                           | γ^Θ_n + γ^Z_n + 2 log(n)
Linear  | v⊤v′                                                              | O(d log(n))
RBF     | exp(−‖v − v′‖² / (2l²))                                           | O(log^{d+1}(n))
Matérn  | (1/(Γ(ν) 2^{ν−1})) (√(2ν)‖v − v′‖/l)^ν B_ν(√(2ν)‖v − v′‖/l)       | O(n^{d/(2ν+d)} log^{2ν/(2ν+d)}(n))

[Figure 5 plots: the safe sets discovered by SafeOpt (left) and GoSafeOpt (right) in the (k_p, k_d) parameter plane.]
Figure 5: Example run of SAFEOPT and GOSAFEOPT. The red circle denotes the initial safe point. The black dots denote observed points. The green circle denotes the true safe optimum and the blue circle denotes the optimal point determined by SAFEOPT and GOSAFEOPT after 150 iterations, respectively. The discovered safe sets are shown in black. GOSAFEOPT gets closer to the true optimum by discovering new safe regions which are not connected to the initial safe region.

D Control Formulation

Our model-based motion controller integrates the MPC and WBC methods to enhance both robustness and maneuverability. The MPC is responsible for generating base and foot trajectories, while the WBC converts these trajectories into joint-level commands. For the MPC component, we employ the model predictive control formulation proposed by Kang et al. [21, 34]. This formulation represents a finite-horizon optimal control problem as a nonlinear program utilizing the variable-height inverted pendulum model. The optimal solution of the nonlinear program is determined by using a second-order gradient-based method. For a more in-depth understanding of the MPC formulation, we direct readers to the prior work by Kang et al. [21, 34].

We employ a slight modification of the WBC formulation introduced by Kim et al. [7], adapting it to align with our MPC method. Following the method proposed by Kim et al. [7], we compute the desired generalized coordinates x*, speed ẋ*, and acceleration ẍ* for a quadruped system at the kinematic level. This process involves translating desired task space (Cartesian space) positions, velocities, and accelerations into configuration space counterparts. Throughout this process, we enforce task priority through iterative null-space projection [22]. The top priority is assigned to the contact foot constraint task, followed by the base orientation tracking task. The base position tracking task is given the third priority, and the swing foot tracking task is assigned the final priority.

Subsequently, we solve the following quadratic program:
\[ \min_{\delta\ddot{x},\, f_c} \ \| \delta\ddot{x} \|^2_Q \qquad (27a) \]
\[ \text{s.t.} \quad S_f (M\ddot{x} + b + g) = S_f J_c^\top f_c \qquad (27b) \]
\[ \ddot{x}^{**} = \ddot{x}^* + [\delta\ddot{x}^\top,\ \mathbf{0}_{n_j}^\top]^\top \qquad (27c) \]
\[ W f_c \ge 0, \qquad (27d) \]
where δẍ denotes a relaxation variable for the floating-base acceleration and f_c denotes contact forces with the contact Jacobian J_c. Equation (27a) is the objective function that penalizes the weighted norm of δẍ with the weight matrix Q.
Equation (27b) corresponds to the equation of motion of the floating base, representing the first six rows of the whole-body equation of motion, with S_f being the corresponding selection matrix. Lastly, Equation (27d) sets forth the Coulomb friction constraints. This procedure refines the desired generalized acceleration ẍ*, which is calculated at the kinematic level, by incorporating the dynamic impacts of the robot's movements.

Once ẍ** is determined, we compute the joint torque commands as follows:
\[ \tau^* = M\ddot{x}^{**} + b + g - J_c^\top f_c. \qquad (28) \]
The final torque commands are calculated using τ_cmd = τ* + k_p(x̄* − x̄) + k_d(ẋ̄* − ẋ̄) and dispatched to the robot with the feedback gains k_p ∈ R^{d_u×d_x̄} and k_d ∈ R^{d_u×d_x̄}. Here, x̄, ẋ̄ are the joint angles and speeds, while x̄*, ẋ̄* denote their desired values. As previously noted, this step is crucial in dealing with model mismatches, specifically, the differences in joint-level behavior stemming from actuator dynamics and joint friction.

E Experimental Details

E.1 Gait parameters
We used the following gait parameters for the simulation and the hardware experiments.

Table 2: Gait parameters.
                Trot   Crawl   Flying trot   Pronk
Duration [s]    0.5    1.2     0.6           0.6
Duty cycle      0.5    0.75    0.4           0.9
Phase offsets   0.5    0.25    0.5           0.0
                0.5    0.5     0.5           0.0
                0      0.75    0             0.0

E.2 Bayesian optimization
For all our experiments, we use a Matérn kernel with ν = 1.5 for the underlying Gaussian Process. The lengthscales are fixed during the whole optimization process and set to

Table 3: Kernel lengthscales.
             lengthscales
Simulation   [0.1, 0.05, 0.1, 0.05, 0.1, 0.05, 0.1, 0.05, 0.1, 0.1, 0.1, 0.1, 0.1]
Hardware     [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 0.5, 0.5, 0.5, 0.5, 0.5]

where the first n parameters correspond to the (k_p, k_d) pairs and the last parameters to the context. For all experiments, we use β = 16 for the LCB on the constraints.

E.3 Simulation
We developed an emulator to simulate the control gain tuning process, utilizing the open-source rigid-body simulation engine, the Open Dynamics Engine (ODE) [35]. To account for model mismatches and uncertainties, we introduced disturbances based on the model detailed in subsequent sections.

Disturbance model  We introduced joint-level disturbances by altering the torque exerted by each motor. This method emulates the torque tracking discrepancies in motors, the damping effects stemming from joint friction, and other model mismatches attributed to inaccuracies in the model. More specifically, for the i-th motor of leg l, the applied motor torque is given by τ^applied_{i,l} = α_l τ^cmd_{i,l} + θ_l⊤ [x̄*_{l,i} − x̄_{l,i}, −ẋ̄_{l,i}]⊤. In this equation, τ^cmd is the torque command computed as described in Appendix D, α_l is a disturbance factor for leg l with α = [0.73, 0.9, 0.73, 0.9]⊤, and x̄, ẋ̄ are the joint angles and speeds, while x̄*, ẋ̄* denote their desired values.

Reward function  The joint state variable s̄ ∈ R^24 of all 12 joints is described as a concatenated vector of joint angles and joint speeds. We set the matrices from Equation (12) to Q_g = I_{24×24} and Q_q^{i,j} = 1{i = j ∧ i ≤ 12}.

E.4 Hardware
We slightly modify the reward function for the hardware experiment and include a penalty term on the joint velocities,
\[ \hat{g}(\theta, z) = g(\theta, z) - \| \bar{s}(t) \|^2_{Q_p}, \]
where the velocity state errors in Q_g and Q_q in Equation (12) are set to zero, since the joint speed measurements are noisy finite difference approximations of the joint angles. Furthermore, we define Q_p to only include the noisy joint speed observations. More specifically, we define Q_g^{i,j} = Q_q^{i,j} = 1{i = j ∧ i ≤ 12} and Q_p^{i,j} = (1/2) · 1{i = j ∧ i > 12}, with Q_g, Q_q, Q_p ∈ R^{24×24}.
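For concreteness, the sketch below evaluates the objective and constraint of Equation (12) from one logged rollout and adds the hardware penalty term of Appendix E.4; summing the velocity penalty over time and reading the scaling of Q_p as 1/2 are our interpretation of the text, and all array names are ours.

```python
import numpy as np

def objective_and_constraint(s_des, s_meas, v, Qg, Qq):
    """Equation (12): negative accumulated weighted tracking error (objective) and the
    worst-case margin to the error threshold v (constraint) from one rollout.
    s_des, s_meas : (T, 24) desired and measured joint states (angles then speeds)."""
    err = s_des - s_meas
    g = -np.einsum('ti,ij,tj->', err, Qg, err)              # -sum_t ||e(t)||^2_Qg
    q = v - np.einsum('ti,ij,tj->t', err, Qq, err).max()    # min_t (v - ||e(t)||^2_Qq)
    return g, q

def hardware_objective(s_des, s_meas, v, Qg, Qq, Qp):
    """Hardware variant of Appendix E.4: penalize noisy joint-speed observations
    (accumulated over the rollout, which is our reading of the text)."""
    g, q = objective_and_constraint(s_des, s_meas, v, Qg, Qq)
    g -= np.einsum('ti,ij,tj->', s_meas, Qp, s_meas)
    return g, q

# Hardware weight matrices: unit weights on the 12 joint angles, zero on speeds for
# Qg and Qq, and a 1/2-weighted speed penalty for Qp (our interpretation).
Qg = Qq = np.diag(np.r_[np.ones(12), np.zeros(12)])
Qp = 0.5 * np.diag(np.r_[np.zeros(12), np.ones(12)])
```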
Experimental results have shown that adding a penalty term on the joint velocities acts as a regulator to prefer solutions where motor vibrations are low. This has been shown to improve overall convergence and to visibly avoid solutions where motor vibrations are high. Figure 6 shows that the optimal feedback control parameters drastically reduce motor vibrations and increase the tracking performance.

[Figure 6 plots: "Tracking performance" (tracking error [deg] and joint angle [deg] vs. iteration steps for the hip, thigh, and calf joints; legend: Trot untrained, Trot trained, Crawl untrained, Crawl trained) and "Joint configurations" (hip, thigh, calf).]
Figure 6: Joint angle tracking performance comparison. Joint angle tracking errors in degrees (left) and joint angle measurements in degrees (right). The yellow and violet regions represent the initial control gains for trot and crawl respectively. Conversely, the green and blue regions indicate the optimized gains for trot and crawl. It is evident from the plots that the refined gains yield a substantially reduced tracking error with diminished jitter.

F Practical modifications

F.1 Boundary conditions
We use the idea from Sukhija et al. [1] to reduce computational complexity by defining an interior and a marginal set. Intuitively, the interior set contains all observed states for which the safety margin is high, and the marginal set includes all states where the safety margin is greater than a certain threshold. More formally, Sukhija et al. [1] define the interior and marginal set as
\[ \Omega_{I,n} = \{ x_s \in \mathcal{X} \mid (\theta, x_s) \in \mathcal{B}_n : \forall i \in \mathcal{I}_q,\ l_n(\theta, i) \ge \eta_u \} \qquad (29) \]
\[ \Omega_{M,n} = \{ x_s \in \mathcal{X} \mid (\theta, x_s) \in \mathcal{B}_n : \forall i \in \mathcal{I}_q,\ \eta_l \le l_n(\theta, i) < \eta_u \} \qquad (30) \]
The boundary condition is defined separately for the interior and marginal set. Firstly, the Euclidean distance d_i between the observed state and all the backup states is calculated. If d_min = min_i d_i = 0, a backup policy for the observed state is known to be safe. Intuitively, the uncertainty about whether a backup policy can safely recover from the observed state increases as d_min grows. If the observed state moves too far away from the set of backup states, the closest backup policy is triggered. More formally, a backup policy is triggered if there exists no d_i such that p(|x| ≥ d_i) > τ. The distribution over x is defined as x ∼ N(0, σ²), and τ_m ≥ τ_i for the interior and marginal set, respectively. With σ² and τ_i there are two adjustable parameters to influence how conservative the backup policy acts.

Table 4: Boundary condition parameters.
             Parameter   Value   Description
Simulation   σ           2       Standard deviation of backup distribution
             τ_i         0.2     Interior lower bound probability
             τ_m         0.6     Marginal lower bound probability
Hardware     σ           2       Standard deviation of backup distribution
             τ_i         0.05    Interior lower bound probability
             τ_m         0.1     Marginal lower bound probability

F.2 Optimization
The solution of the acquisition optimization problem formulated in Equation (10) is approximated with the standard particle swarm [36] algorithm, similar to [37]. At the beginning of each acquisition optimization, n_p particle positions are initialized. Rather than initializing the positions over the whole domain, the positions are sampled from a list of known safe positions in the current safe set. For all experiments, the parameters in Table 5 are used.

Table 5: Swarmopt parameters.
Parameter   Value   Description
Θ_g         1       Social coefficient
Θ_p         1       Cognitive coefficient
w           0.9     Inertial weight
n           100     Number of iterations
n_r         100     Number of restarts if no safe set is found

F.3 Fix iterations and discard unpromising new detected safe regions
In practice, it is not practical to fully explore a safe set before the global exploration phase.
For ourexperiments, the number of iterations for the local and global exploration phase are fixed to nl= 10andng= 5, respectively. To avoid exploring for all nlsteps in unpromising regions, we definend= 5< nland switch to local exploration of the best set if the best reward estimation of the currentset is much less than the best global reward estimate. That is, we switch to the best set if ˆr∗i< cˆr∗andnd=nl.F.4 Posterior estimationEach BO step requires the optimization of the GOSAFEOPTacquisition function to predict the nextparameters to evaluate. This paper uses the standard particle swarm [ 36] algorithm, which requires thecomputation of the posterior distribution at each optimization step for all particles. To speed up thecomputation of the posterior distribution, the paper uses Lanczos Variance Estimates Pleiss et al. [38].21 |
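A compact particle-swarm maximizer in the spirit of Appendix F.2, with particles initialized from known safe positions, is sketched below; the acquisition function in the usage example is a stand-in, the hyperparameters merely mirror Table 5, and none of this code is taken from the released repository.

```python
import numpy as np

def swarm_maximize(acq, safe_positions, bounds, n_particles=100, iters=100,
                   w=0.9, c_social=1.0, c_cognitive=1.0, seed=None):
    """Minimal particle-swarm maximizer for an acquisition function, with particles
    initialized near known safe positions (Appendix F.2).
    `acq` maps a batch of parameters with shape (n, d) to values of shape (n,)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    idx = rng.integers(len(safe_positions), size=n_particles)
    x = np.asarray(safe_positions)[idx] + 0.01 * rng.standard_normal((n_particles, lo.size))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), acq(x)
    g_best = p_best[np.argmax(p_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c_cognitive * r1 * (p_best - x) + c_social * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = acq(x)
        better = val > p_val
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[np.argmax(p_val)]
    return g_best

# Usage with a toy acquisition function (the GP-based acquisition is omitted here).
best = swarm_maximize(lambda X: -np.sum(X ** 2, axis=1),
                      safe_positions=np.full((1, 2), 0.5),
                      bounds=(np.zeros(2), np.ones(2)), seed=0)
```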
0mRSANSzEK | Improving Behavioural Cloning withPositive Unlabeled LearningQiang Wang1, Robert McCarthy2, David Cordova Bulens1,Francisco Roldan Sanchez3,4, Kevin McGuinness3,4, Noel E. O’Connor3,4,Nico Gürtler5, Felix Widmaier5, Stephen J. Redmond†1,41University College Dublin,2University College London,3Dublin City University4Insight SFI Research Centre for Data Analytics,5MPI for Intelligent SystemsAbstract: Learning control policies offline from pre-recorded datasets is a promis-ing avenue for solving challenging real-world problems. However, availabledatasets are typically of mixed quality, with a limited number of the trajecto-ries that we would consider as positive examples; i.e., high-quality demonstrations.Therefore, we propose a novel iterative learning algorithm for identifying ex-pert trajectories in unlabeled mixed-quality robotics datasets given a minimal setof positive examples, surpassing existing algorithms in terms of accuracy. Weshow that applying behavioral cloning to the resulting filtered dataset outper-forms several competitive offline reinforcement learning and imitation learningbaselines. We perform experiments on a range of simulated locomotion tasksand on two challenging manipulation tasks on a real robotic system; in theseexperiments, our method showcases state-of-the-art performance. Our website:https://sites.google.com/view/offline-policy-learning-pubc .Keywords: Offline policy learning, Positive unlabeled learning, Behaviouralcloning1 IntroductionData-driven learning methods can discover sophisticated control strategies with minimal human in-volvement, and have demonstrated impressive performance in learning skills across many challengingdomains [ 1,2,3,4]. Nonetheless, data-driven methods are not often applied in real world applicationsdue to the amount of interactions with the environment needed before an effective policy can belearned [ 5,6,7]. Moreover, data acquisition can be costly and/or unsafe in physical environments.This inefficiency can potentially be improved by learning from previously-collected data; i.e., learninga policy from a historical dataset without needing additional data acquisition from the environment.This is termed offline policy learning.Standard behavioral cloning (BC) is the simplest offline policy learning algorithm, it aims to find apolicy that can mimic the behavior observed in a dataset capturing the performance of a given task.The target behavior to be cloned is usually obtained from an expert; for instance, a human [ 8,9] or awell-performing scripted agent [ 10]. BC performs supervised regression, learning a control policythat maps observations from a dataset to the corresponding actions taken by a behavior policy. Whenhigh-quality expert data is used for training, BC demonstrates high efficiency, and the resulting agenttypically exhibits good performance [11].However, a major drawback of BC is its dependence on a high-quality training dataset. Specifically,the data collected for BC should come from one highly-skilled expert. Moreover, the actionsconditioned on the environment states must display a unimodal distribution to prevent regressionambiguity [ 12], as BC follows a supervised machine learning paradigm. However, real-world/practicaltraining datasets are often of mixed-quality, containing examples of both high-quality and low-quality†is the corresponding author7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.behaviors/data, which can be detrimental to the training process. 
The presence of low-quality data inthese datasets can be attributed to factors such as the involvement of low-skilled agents, non-task-related policies, or environmental noise. If multiple experts are involved, the dataset may furthermoreexhibit multi-modal behavior, leading to regression ambiguity in BC. In this paper, we aim to studymethods allowing the discrimination of a single target expert’s data within a mixed-quality dataset,and utilize this single-expert subset for BC learning.1.1 Our workWe assume we are provided with a mixed-quality dataset, as discussed previously, along with a seed-positive dataset consisting of high-quality data generated by an expert. Our goal is to discriminate datain the mixed-quality dataset that shares the same behavioral patterns as the seed-positive examplesto obtain an expert dataset suitable for BC. In practice, the seed-positive dataset can be quite small,typically constituting only 0.1% to 0.4% of the mixed-quality dataset’s size. This enables users toobtain the seed-positive examples using feasible methods, such as heuristically/manually samplingfrom the mixed-quality raw dataset or by requesting the target expert to collect a small amount ofadditional data. Our approach can be considered an example of Positive Unlabeled (PU) learning,which constitutes an important subfield within conventional semi-supervised learning. Concisely,datasets for PU learning comprise a portion of data labeled as positive, while the rest remainsunlabeled. The objective of PU learning is to leverage the information from positive examples toclassify the unlabeled data as either positive or negative.We establish a supervised learning signal by integrating synthetically-generated negative examples andseed-positive examples. Our training methodology adheres to a traditional semi-supervised learningparadigm; it begins the training with a small, positive dataset and then iteratively discriminatespositive examples from a large, unlabeled dataset. These identified examples are then added to thepositively-labeled subset for the subsequent training cycle, until convergence is achieved. Oncethe dataset has been labeled using this iterative method, during the policy learning phase, we applystandard BC to this positively-labeled subset.In general, we refer to our offline policy learning approach as Positive Unlabeled Behavioural Cloning(PUBC). PUBC stands out for its simplicity in implementation, quick training time, and ease ofparameter tuning. In our experiment, PUBC achieves excellent performance across a wide range ofchallenging robotic tasks, surpassing several state-of-the-art algorithms.2 Related work2.1 Offline policy learningResearch methods for offline policy learning can largely be categorized as offline imitation learning(IL) or offline reinforcement learning (RL); we refer readers to [ 13,14] for comprehensive surveys,and to [15, 16, 17] for research benchmarks.BC is the simplest form of offline IL. In addition to standard BC as mentioned earlier, researchershave recently proposed Implicit BC [ 18], which employs an energy-based model [ 19] to improve BC’sperformance. Furthermore, Inverse RL [ 20] is regarded as an alternative branch of IL, focusing onunderstanding the motivations behind an agent’s actions by inferring the underlying reward structure,which guides the decision-making process in Inverse RL. 
Additionally, a Generative AdversarialNetwork (GAN) [ 21] has been integrated with IL in [ 22], where it employs adversarial training withgenerative and discriminative models to learn the action distribution of the behavior policy.Offline RL[ 14] aims to maximize the expectation of the sum of discounted rewards. However, unlikein online RL, no interactions with the environment are allowed. Most off-policy RL [ 23] algorithmsare applicable offline, but they typically suffer from the issue of outputting out-of-distribution (OOD)actions due to the distribution shift between the action distribution in the training dataset and thatinduced by the learned policy [ 10]. To mitigate this issue, several constraint methods, such as policyregularization [24, 10, 25, 26] and conservative value estimates [27, 28] are proposed.2Recently, a novel offline RL approach, inspired by transformer models [ 29], has been introduced in[30,31]. Unlike conventional RL methods that depend on policy gradients or temporal differences,this approach takes a distinct route by utilising supervised sequence modelling to fit the policy datadistribution. In the policy evaluation phase, the GPT model takes a target return and generates anaction sequence for that goal. This GPT-based method will be a baseline in our paper.Another branch of offline RL perfroms BC while focusing on learning transitions with highersignificance in the datasets. It starts by learning advantage functions and subsequently utilizes them todownweight transitions with lower advantage [ 32,33,34,35,36,37]. Similarly, in [ 38], the authorspropose a GAN-like architecture [ 21] that combines BC and a discriminator. The discriminatorselectively picks high-quality expert data from the dataset for BC learning.2.2 Positive unlabeled learningThe main challenge in PU learning is to acquire negative examples from the unlabeled data tocomplement the available positive examples for training a supervised classifier. The two-step PUlearning approach involves manually labeling a subset of negative examples and using them alongwith positive examples to train the classifier [ 39]. The classifier can then label the remaining unlabeleddata in the dataset. However, this method can be human labor-intensive, and it may not be effective ifthe underlying patterns of the negative examples are difficult to interpret.Another solution is to naively treat the unlabeled data as negative examples during classifier training[40]. The classifier can then assign scores to the unlabeled examples, with positive examples typicallyreceiving higher scores. This method has been improved in [ 41] by using the bagging technique togenerate multiple subsets from the unlabeled dataset, which are combined with positive examples totrain a series of weaker classifiers. Finally, the output from the classifier ensemble is used to producea more accurate prediction. However, if an unsuitable loss function is used, biased errors may occur.This problem is addressed in [ 42] by introducing Unbiased PU learning. More recently, [ 43] proposedNon-negative PU learning to mitigate the overfitting problem associated with unbiased PU learning.We recommend [44] for an in-depth analysis of PU learning.3 Positive Unlabeled Behavioural Cloning3.1 PreliminariesThe offline policy learning problem is formulated in the context of a Markov decision process, M= (S,A,R,P,γ) [45], where Sis the state space, Ais the action space, Ris the reward function,Pis environment dynamic and γis the discount factor. 
At each time step t, the agent observes a state s_t ∈ S and outputs an action a_t ∈ A according to a policy π(a_t|s_t); after applying the action to the environment, the agent receives a reward r_t ∈ R and the environment state transitions to s_{t+1}. We assume that we can obtain or are given a positive dataset, D^+ = {(s^+_t, a^+_t, r^+_t, s^+_{t+1})}_{t=1...m}, with m time steps, and a large mixed-quality offline dataset, D^{mix} = {(s^{mix}_t, a^{mix}_t, r^{mix}_t, s^{mix}_{t+1})}_{t=1...n}, with n time steps. We assume m << n, that D^+ contains only data collected by the target expert, and that D^{mix} includes a proportion of data collected by the target expert. In the following description, we define a positive example as data collected by the target expert, and a negative example as any data not collected by the target expert.

3.2 Generating the training examples

Our approach to generating negative examples for training the PU classifier is similar to the two-step method outlined in Section 2.2. However, rather than depending on manual selection of negative examples from D^{mix}, we create them by randomly mixing states and actions from different sources, including from D^+, from D^{mix}, and random examples from the state-action space (see Figure 1(a)). The set of negative examples can therefore be informally written as:

D^- = { (s^+, a^{mix})_{n_1} ∪ (s^+, ã)_{n_2} ∪ (s^{mix}, a^+)_{n_3} ∪ (s^{mix}, ã)_{n_4} ∪ (s̃, a^+)_{n_5} ∪ (s̃, a^{mix})_{n_6} ∪ (s̃, ã)_{n_7} },

where s̃ and ã refer to random states and actions drawn from uniform distributions over the ranges spanned by their respective minimum and maximum values, and n_1 to n_7 correspond to the number of state-action pairs generated for each combination of sources. We require that each artificially generated state-action pair be distinct from the state-action pairs obtained from the raw dataset.

Figure 1: PUBC learning block diagram. (a) Illustration of the iterative PU learning process. The red rounded rectangle shows how positive and negative training examples are generated, with the negative examples produced by intentionally mismatching actions and states from different time points and/or different data subsets to form state-action pairs that would most likely represent poorly performing behaviors; see Section 3.2 for a more comprehensive explanation. (b) The neural network structure of the classifier includes an encoder that reduces the usually high-dimensional state vector s, followed by a multilayer perceptron (MLP) classifier that takes as input both the state s and action a vectors, and whose sigmoidal output is interpreted as the probability that the example was generated by an expert agent.

In the first training iteration, we treat the examples in the positive dataset as positive, similar to traditional PU learning. Once the classifier is trained, we use it to identify additional positive examples from the unlabeled dataset. These newly identified positive examples then replace the previous ones.

3.3 Classifier structure

Figure 1(b) shows the network structure of the classifier, which is a combination of MLPs; it takes a state-action pair and outputs the probability that this pair was generated by the actions of the same expert agent that generated (we assume) most of the data in the training dataset, D^+. Experimentally, we found that directly inputting a concatenation of raw state-action pairs into the neural network causes the action to be effectively ignored, as the dimension of the observation space is typically much larger than that of the action space in practical task settings.
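As a concrete illustration of the mixing procedure in Section 3.2, the following sketch generates the seven kinds of synthetic negative state-action pairs. It is a simplified version: the helper names and data layout are ours, and the check that a generated pair does not coincide with a real pair from the raw datasets is omitted for brevity.

```python
import numpy as np

def make_negative_examples(pos, mix, n_per_type, rng=None):
    """Sketch of the mixing procedure in Section 3.2.

    pos, mix: dicts with 'states' (N x ds) and 'actions' (N x da) arrays
    drawn from D+ and Dmix; n_per_type plays the role of n1..n7.
    Random states/actions are drawn uniformly within the observed
    per-dimension min/max ranges. The de-duplication against real
    state-action pairs from the raw data is omitted here.
    """
    rng = rng or np.random.default_rng(0)

    def uniform_like(x, n):
        lo, hi = x.min(axis=0), x.max(axis=0)
        return rng.uniform(lo, hi, size=(n, x.shape[1]))

    def pick(x, n):
        return x[rng.integers(0, len(x), size=n)]

    n = n_per_type
    s_pos, a_pos = pos["states"], pos["actions"]
    s_mix, a_mix = mix["states"], mix["actions"]
    pairs = [
        (pick(s_pos, n), pick(a_mix, n)),                   # (s+, a_mix)
        (pick(s_pos, n), uniform_like(a_pos, n)),           # (s+, ~a)
        (pick(s_mix, n), pick(a_pos, n)),                   # (s_mix, a+)
        (pick(s_mix, n), uniform_like(a_pos, n)),           # (s_mix, ~a)
        (uniform_like(s_pos, n), pick(a_pos, n)),           # (~s, a+)
        (uniform_like(s_pos, n), pick(a_mix, n)),           # (~s, a_mix)
        (uniform_like(s_pos, n), uniform_like(a_pos, n)),   # (~s, ~a)
    ]
    states = np.concatenate([s for s, _ in pairs])
    actions = np.concatenate([a for _, a in pairs])
    return states, actions
```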
Therefore, to reduce the dimension of the observation space, we encode the observation vector into a lower-dimensional space before concatenating it with the action vector and inputting it to the MLP to obtain a decision. The final layer of the MLP uses a sigmoid activation function, sigmoid(x) = 1/(1 + e^{-x}), where the output F(s, a) is interpreted as the probability that the state-action pair was generated by an expert. Binary cross-entropy is chosen as the training loss, L:

L = \mathbb{E}_{(s_t, a_t) \sim D^-}\left[-\log\left(1 - F(s_t, a_t)\right)\right] + \mathbb{E}_{(s_t, a_t) \sim D^+}\left[-\log F(s_t, a_t)\right].   (1)

3.4 Additional methods for optimizing PU learning performance

• Classifying per trajectory: The policy used for data collection typically does not change within a trajectory (each interaction episode). The labels assigned to all state-action pairs in a trajectory are therefore aggregated and the same label is applied to all transitions in the trajectory. This is done by soft voting, taking the mean of the individually predicted probabilities of the state-action pairs in the trajectory, F(s_t, a_t) for t a member of the trajectory time interval, to create a confidence score for the trajectory. Subsequently, a threshold, th_conf, is applied to the confidence score to binarize it.
• Adaptive confidence threshold: It is necessary to set a confidence threshold th_conf to discriminate positive trajectories in the unlabeled dataset, D^{mix}. In other words, all trajectories with a confidence score exceeding the threshold are classified as subsets containing entirely positive examples. We discovered that employing an adaptive threshold can enhance the classifier's performance. Briefly, it searches for a local minimum in the confidence score histogram and uses this value as a decision boundary, above which it is assumed the data is dominated by expert-generated trajectories (see Appendix B for details).
• Ensemble learning: Similar to the approach in [41], we employ bagging to enhance the classifier's performance by concurrently learning multiple independent weaker classifiers and combining their individual decisions to determine a final decision. The datasets used to train each classifier are subsets of D^+, sampled with replacement.

In Appendix D, we conducted an ablation study on these three techniques individually.

Algorithm 1: PUBC algorithm
Input: Mixed-quality dataset D^{mix}, positive dataset D^+
while D^+ not converged do
    Randomly sample K subsets {D^k_+}_{k=1...K} from D^+;
    Generate K corresponding negative subsets {D^k_-}_{k=1...K};
    Initialise K classifiers {F_{θ_k}}_{k=1...K} with parameters θ_1, ..., θ_K;
    for k ← 1 to K do
        Update θ_k by minimizing the loss in Equation 1;
    end
    Get confidence threshold th_conf using the trained {F_{θ_k}}_{k=1...K};
    Update the membership of D^+ using Equation 2;
end
Initialise BC network π_δ with parameters δ;
for epoch ← 1 to epochs do
    Update δ by minimizing: \mathbb{E}_{(s_t, a_t) \sim D^+}[-\log π_δ(a_t | s_t)]
end
Output: π_δ

3.5 Using the trained classifier and learning the policy

Considering the above techniques, the label decision for a trajectory can be determined using:

f := 1\left[ \sum_{k=1}^{K} 1\left[ \frac{1}{T}\sum_{t=1}^{T} F_k(s_t, a_t) \ge th_{conf} \right] > \frac{K}{2} \right],   (2)

where 1[·] are indicator functions, T is the number of time steps in each trajectory, and K is the number of classifiers in the ensemble, set to an odd number to avoid ties.

The process iterates until trajectories in D^+ converge. Once D^+ has converged, we train a standard BC model to mimic the behavior in D^+.
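A minimal PyTorch sketch of the classifier in Figure 1(b), the loss in Equation 1, and the per-trajectory decision in Equation 2 could look as follows; the layer sizes and function names are illustrative assumptions rather than the exact configuration reported in Appendix H.

```python
import torch
import torch.nn as nn

class PUClassifier(nn.Module):
    """Sketch of Figure 1(b): a state encoder followed by an MLP over
    [encoded state, action], with a sigmoidal output F(s, a)."""
    def __init__(self, state_dim, action_dim, enc_dim=32, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, enc_dim), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(enc_dim + action_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, s, a):
        return torch.sigmoid(self.head(torch.cat([self.encoder(s), a], dim=-1)))

def pu_loss(model, s_pos, a_pos, s_neg, a_neg):
    # Binary cross-entropy of Equation 1: positives pushed towards 1, negatives towards 0.
    eps = 1e-6
    return (-(model(s_pos, a_pos) + eps).log().mean()
            - (1 - model(s_neg, a_neg) + eps).log().mean())

@torch.no_grad()
def label_trajectory(models, traj_s, traj_a, th_conf):
    # Equation 2: per-trajectory soft vote inside each classifier, then a majority vote.
    votes = sum(int(m(traj_s, traj_a).mean().item() >= th_conf) for m in models)
    return votes > len(models) / 2
```

A full training run simply iterates these pieces as in Algorithm 1: resample D^+, regenerate negatives, retrain the ensemble, and relabel the unlabeled trajectories until the membership of D^+ stops changing.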
The entire semi-supervised training and filtering process for our PU classifier, as well as the training of BC, can be succinctly summarized in Algorithm 1.

4 Experimental results

This section aims to showcase the effectiveness of our proposed PUBC method by conducting experiments on a range of continuous control benchmark tasks, including challenging physical robotic manipulation tasks from the Real Robot Challenge (RRC) III competition¹ and numerous MuJoCo locomotion tasks [47, 48] (see Figure 2).

4.1 Environments, tasks and datasets

4.1.1 Robotic manipulation tasks

Figure 2(a) illustrates the domains of RRC III, which focuses on two tasks: Push and Lift [49]. In the Push task, the cube must be moved to specified positions on the arena floor. The more challenging Lift task requires lifting the cube and maintaining it at a target position and orientation.

¹A robotic manipulation competition featured in the NeurIPS 2022 Competition Track; more details at https://real-robot-challenge.com/. We won the competition [46], with the filter-based technique being one of our key strategies; the proposed PUBC method extends this filtering method.

In our experiment, we use the mixed datasets provided by the competition organizers for each task. Each mixed dataset is collected by a mixture of different policies with varying levels of skill, and a significant portion of the data was collected by domain-specific experts, ensuring high quality. The subsequent discussion will denote these datasets by Lift/mixed and Push/mixed, respectively.

To obtain the seed-positive dataset, D^+, we assume that the target expert has generated the trajectories with the highest returns. Therefore, we select the 0.4% of episodes with the highest returns as our seed-positive dataset. The selection of this value is further investigated in Appendix D through an ablation study. Post-competition, we received trajectory labels from the RRC III organizers, serving as ground truth to assess our PU learning method's accuracy.

4.1.2 MuJoCo locomotion tasks

As shown in Figures 2(b)-2(c), the bodies being controlled in the locomotion tasks comprise segments and joints. Actions are applied to maintain the balance of the body and to move forward. Here, we collected mixed-quality datasets with four different structures: 1. E+E: an Expert+Expert dataset consisting of data from two expert policies with similar performance but different habits/behaviors, with only one expert policy being of interest to us; 2. E+W: an Expert+Weaker dataset containing data from one expert policy and one poorly performing agent; 3. E+N: an Expert+Noise dataset comprising expert data and an equal amount of domain noise; 4. E+E+W+N: a combination of the above four types of data. In addition, we also included an expert dataset, E, consisting only of expert data, as a baseline. The configuration of each mixed dataset is further detailed in Appendix C.

4.2 PU learning results

This section presents the accuracy of our PU learning method, comparing it to traditional PU learning approaches, namely Unbiased PU learning and Non-negative (NN) PU learning. The mathematical formulations for the Unbiased PU and NN PU learning methodologies are provided in Appendix F.1 for reference. To maintain a balanced comparison, we also applied the techniques delineated in Section 3.4 to the baselines.

As illustrated in Table 1, there is no significant difference between the performances of the two baseline methods, while our approach shows a significant increase in accuracy compared to both. The baseline methods perform particularly poorly on the RRC III datasets.
This could potentially be attributed to the relatively complex data distribution produced by the RRC III environment. In locomotion tasks, both the environment and the task are relatively stable, typically involving the operation of a simulated body to complete the single task of moving forward. However, in the RRC III manipulation environment there is a high degree of randomness; both the initial position of the object and the target are randomly initialized, meaning the tasks completed in each trajectory vary. This results in a more complex data distribution. Furthermore, we observed that, in general, the performance of traditional algorithms in real-world environments tends to be inferior to their performance in simulators. This is because non-ideal hardware in physical environments introduces substantial environmental noise, making the data distribution even more complex.

In Unbiased PU learning, introducing a reweighting operation to mitigate positive-label data bias can shift the data distribution, resulting in poor model generalization, especially for complex data distributions such as those of the RRC III environment.

In Non-negative PU learning, the weights of the positive samples are constrained to be non-negative; however, this may hinder the model from capturing essential nuances in the data. For some intricate data distributions, permitting the model to allocate negative weights to positive samples could be instrumental in uncovering the data's inherent structure and relationships more effectively.

In contrast, the method we propose can effectively tackle these issues, demonstrating both high accuracy and strong robustness.

Table 1: Comparing our method's accuracy to baselines in classifying expert vs. non-expert trajectories. Accuracy is computed as (TP+TN)/(TP+TN+FP+FN), where TP (True Positive) and TN (True Negative) denote the correct classification of expert and non-expert trajectories respectively; FP (False Positive) denotes non-expert trajectories incorrectly classified as expert, and FN (False Negative) denotes expert trajectories incorrectly classified as non-expert.

Dataset                  Unbiased PU   NN-PU    Ours
RRC-Sim-Lift/mixed       82.5%         79.2%    99.7%
RRC-Sim-Push/mixed       94.3%         90.5%    100.0%
RRC-Sim Avg              88.4%         84.9%    99.9%
RRC-Real-Lift/mixed      69.4%         64.8%    99.2%
RRC-Real-Push/mixed      88.7%         90.0%    100.0%
RRC-Real Avg             79.1%         77.4%    99.6%
Ant - E+E                98.1%         95.3%    99.0%
Ant - E+W                100.0%        99.2%    100.0%
Ant - E+N                99.3%         100.0%   100.0%
Ant - E+E+W+N            98.1%         96.9%    98.8%
Ant Avg                  98.9%         97.9%    99.5%
Humanoid - E+E           93.4%         94.9%    99.7%
Humanoid - E+W           93.7%         94.6%    100%
Humanoid - E+N           92.1%         93.6%    100%
Humanoid - E+E+W+N       91.0%         89.7%    99.0%
Humanoid Avg             92.6%         93.2%    99.7%
Overall Avg              89.7%         88.3%    99.6%

4.3 Offline policy learning

We present the evaluated policy performance in Table 2, where we compare our PUBC with other relevant baseline algorithms, including a naive reward-based filter before performing BC, CRR [32], DWBC [38] and IQL [24]. We provide a detailed description of these baselines in Appendix F.2 for reference. It is evident that our PUBC consistently outperforms the baseline approaches across all domains. From the overall scores, we can observe that our method demonstrates a performance advantage of over 12% compared to the second-best performing approach.

Employing naive reward-based techniques like 10% BC and 50% BC (sketched below) can extract sufficient expert data from mixed-quality datasets in some cases. This is evident from the performance on the Sim- and Real-Push/mixed datasets, and the E+W dataset for both locomotion tasks.
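For reference, these naive reward-based baselines amount to nothing more than the following filter (a sketch with an assumed trajectory layout), after which standard BC is run on the retained subset.

```python
import numpy as np

def top_fraction_by_return(trajectories, fraction=0.1):
    """Naive reward-based filter used by the 10% BC / 50% BC baselines:
    keep the trajectories whose cumulative return is in the top `fraction`.
    `trajectories` is assumed to be a list of dicts with a 'rewards' array."""
    returns = np.array([t["rewards"].sum() for t in trajectories])
    cutoff = np.quantile(returns, 1.0 - fraction)
    return [t for t, r in zip(trajectories, returns) if r >= cutoff]
```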
However, naive filteringworks well only if there’s a large performance difference between expert and non-expert policies, thefraction of expert data is known, and reward noise is low. Therefore, BC performs poorly on otherdatasets lacking these conditions.The advantage-based CRR algorithm performs relatively poorly. This ineffectiveness stems fromthe challenging nature of estimating advantages in the offline RL setting, particularly when theenvironment is stochastic and/or the given rewards are sparse or noisy. In contrast, by consideringbehaviors over an entire trajectory, our method more accurately identifies target expert trajectories.While IQL is considered one of the top-performing offline RL algorithms, our experiments have shownthat its performance is not satisfactory on the selected problems. Although offline RL theoreticallyhas the ability to handle various types of data, including mixed-quality datasets, it is preferable to usehigh-quality datasets in any scenario. Previous research has demonstrated that offline RL algorithmsgenerally struggle to handle suboptimal robotics data effectively [ 50]. Furthermore, the additionalexperiments in Appendix E show that our PU method can improve the performance of offline RL ona range of D4RL benchmarks by enhancing the quality of the training data.The GPT-based policy generation method DT, demonstrates strong performance in most RRC tasks,but it falls short in the Real Lift/Mixed tasks. We have noticed that DT requires about 6ms to generatean action at each time step. However, considering the demanding dexterity requirements of the RealLift task, it can only tolerate a maximum delay of 2ms. This computational delay presents a limitationthat would hinder the practical extension of DT into real-world scenarios. Another constraint isits reliance on high-quality data; its effectiveness reduces with noisy datasets, though it performsrobustly on the expert’s locomotion tasks.7Table 2: Averaged normalized scores of our method and the baselines. Each result is averaged overthree training seeds, and training lasts 106time steps. We evaluate each learned policy for 100trajec-tories. The score is normalized by score norm = (score−score min)/(score max−score min).Dataset Data 10% BC 50% BC BC DT IQL CRR DWBC PUBC (Ours)RRC-Sim- Lift/mixed 0.83 0.39 0.40 0.56 0.73 0.63 0.49 0.71 0.87RRC-Sim- Push/mixed 0.61 0.64 0.84 0.59 0.80 0.71 0.82 0.80 0.85RRC-Sim total 1.44 1.02 1.24 1.15 1.53 1.34 1.32 1.51 1.72RRC-Real- Lift/mixed 0.60 0.32 0.31 0.31 0.44 0.36 0.40 0.54 0.65RRC-Real- Push/mixed 0.44 0.67 0.85 0.58 0.82 0.79 0.79 0.81 0.83RRC-Real total 1.04 1.00 1.15 0.89 1.26 1.15 1.20 1.36 1.49Ant - E 0.80 - - 0.84 0.87 0.82 0.76 - -Ant - E+E 0.78 0.70 0.68 0.73 0.84 0.80 0.68 0.71 0.79Ant - E+W 0.53 0.82 0.80 0.53 0.73 0.79 0.41 0.80 0.79Ant - E+N 0.42 0.62 0.54 0.47 0.52 0.74 0.73 0.76 0.79Ant - E+E+W+N 0.48 0.70 0.53 0.31 0.67 0.67 0.29 0.69 0.78Ant total 2.22 2.84 2.56 2.03 2.76 3.00 2.12 2.95 3.16Humanoid - E 0.92 - - 0.87 0.91 0.78 0.23 - -Humanoid - E+E 0.90 0.34 0.73 0.69 0.88 0.72 0.26 0.69 0.85Humanoid - E+W 0.58 0.87 0.92 0.29 0.83 0.45 0.46 0.78 0.86Humanoid - E+N 0.63 0.45 0.37 0.25 0.59 0.11 0.60 0.81 0.86Humanoid - E+E+W+N 0.65 0.57 0.45 0.25 0.72 0.29 0.53 0.60 0.82Humanoid total 2.76 2.23 2.48 1.47 3.02 1.57 1.85 2.88 3.40Overall 7.46 7.09 7.43 5.55 8.57 7.06 6.48 8.70 9.76DWBC demonstrates the second-best overall performance in our experiment. Nonetheless, its efficacyis notably limited in challenging RRC Sim- and Real- Lift/mixed datasets. 
The filtering/weightingcomponent of DWBC shares similarities with classical PU learning methods. However, theseapproaches have limitations when dealing with complex data distributions, as mentioned previously.5 DiscussionIn summary, our PUBC method demonstrates superior accuracy and stability compared to conven-tional approaches in filtering the expert trajectories from the mixed-quality datasets. Furthermore, ourPUBC can effectively learn policies from mixed-quality continuous control datasets, outperforming avariety of sophisticated state-of-the-art algorithms.For certain applications, annotating rewards for all transitions can be a costly endeavor, especially incomplex, real-world scenarios. Therefore, an alternative approach is to leverage our method to extracthigh-performing trajectories from a mixed-quality dataset by incorporating a few demonstrationsamples instead of manually crafting a reward function.Of course, our methodology has certain limitations that should be acknowledged. Firstly, it is notapplicable when the initial seed-positive dataset is unattainable. Secondly, when the data is not groupin trajectories but consists of disorganized transitions, the performance of PUBC will be harmed.Lastly, our policy learning algorithm BC is inherently upper bounded by the performance of theexpert behavior policy. Indeed, sophisticated RL related algorithms can often learn policies thatgeneralize better and are able to directly learn from unknown datasets. In our future endeavors,we strive to enhance the performance of the trained policy by integrating the principles of our PUlearning method with subsequent RL paradigms. Furthermore, we have plans to tackle more complexscenarios, including environments with partial observability and non-Markovian dynamics.6 ConclusionThis paper introduces a new offline policy learning method termed PUBC, an effective approach toidentifying transition behaviors generated by a specific expert policy in order to improve the quality ofthe dataset used for subsequent offline policy learning. This approach allows a learning algorithm todisregard low-skill behaviours, hence improving the performance of the learned policy. In our work,the PU learning method allows a naive BC learning algorithm to outperform other state-of-the-artoffline RL algorithms in challenging physical problem domains.8AcknowledgmentsThis publication has emanated from research conducted with the financial support of China Scholar-ship Council under grant number CSC202006540003 and of Science Foundation Ireland under grantnumbers 17 /FRL/4832 and SFI /12/RC/2289 _P2. We are grateful about the invaluable suggestionsand comments that reviewers given to help to imporve the quality of this paper. We extend ourheartfelt gratitude to Dr. Kevin McGuinness for his invaluable contributions and expertise to thisresearch. It is with deep sorrow that we note he did not live to see the completion of this work. Hisexceptional insights and unwavering dedication will forever be etched in our memories.References[1]V . Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller.Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 , 2013.[2]D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre,D. Kumaran, T. Graepel, et al. Mastering chess and shogi by self-play with a general reinforce-ment learning algorithm. arXiv preprint arXiv:1712.01815 , 2017.[3]Q. Wang, F. R. Sanchez, R. McCarthy, D. C. Bulens, K. 
McGuinness, N. O’Connor,M. Wüthrich, F. Widmaier, S. Bauer, and S. J. Redmond. Dexterous robotic manipulation usingdeep reinforcement learning and knowledge transfer for complex sparse reward-based tasks.arXiv preprint arXiv:2205.09683 , 2022.[4]A. Nagabandi, K. Konolige, S. Levine, and V . Kumar. Deep dynamics models for learningdexterous manipulation. In Conference on Robot Learning , pages 1101–1112. PMLR, 2020.[5]D. Yarats, A. Zhang, I. Kostrikov, B. Amos, J. Pineau, and R. Fergus. Improving sampleefficiency in model-free reinforcement learning from images. In Proceedings of the AAAIConference on Artificial Intelligence , volume 35, pages 10674–10681, 2021.[6]Y . Tassa, Y . Doron, A. Muldal, T. Erez, Y . Li, D. d. L. Casas, D. Budden, A. Abdolmaleki,J. Merel, A. Lefrancq, et al. DeepMind control suite. arXiv preprint arXiv:1801.00690 , 2018.[7]R. McCarthy, F. R. Sanchez, Q. Wang, D. C. Bulens, K. McGuinness, N. O’Connor, and S. J.Redmond. Solving the Real Robot Challenge using deep reinforcement learning. arXiv preprintarXiv:2109.15233 , 2021.[8]A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Martín-Martín. What matters in learning from offline human demonstrations forrobot manipulation. arXiv preprint arXiv:2108.03298 , 2021.[9]P. Sermanet, C. Lynch, J. Hsu, and S. Levine. Time-contrastive networks: Self-supervisedlearning from multi-view observation. In 2017 IEEE Conference on Computer Vision andPattern Recognition Workshops (CVPRW) , pages 486–487. IEEE, 2017.[10] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without explo-ration. In International conference on machine learning , pages 2052–2062. PMLR, 2019.[11] J. Merel, L. Hasenclever, A. Galashov, A. Ahuja, V . Pham, G. Wayne, Y . W. Teh, and N. Heess.Neural probabilistic motor primitives for humanoid control. arXiv preprint arXiv:1811.11711 ,2018.[12] S. Levine. Supervised Learning of Behaviors, 2022. URL http://rail.eecs.berkeley.edu/deeprlcourse-fa21/static/slides/lec-2.pdf . (Accessed 2022, Oct 10).[13] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learningmethods. ACM Computing Surveys (CSUR) , 50(2):1–35, 2017.9[14] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review,and perspectives on open problems. arXiv preprint arXiv:2005.01643 , 2020.[15] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4rl: Datasets for deep data-drivenreinforcement learning. arXiv preprint arXiv:2004.07219 , 2020.[16] C. Gulcehre, Z. Wang, A. Novikov, T. Paine, S. Gómez, K. Zolna, R. Agarwal, J. S. Merel, D. J.Mankowitz, C. Paduraru, et al. Rl unplugged: A suite of benchmarks for offline reinforcementlearning. Advances in Neural Information Processing Systems , 33:7248–7259, 2020.[17] S. Fujimoto, E. Conti, M. Ghavamzadeh, and J. Pineau. Benchmarking batch deep reinforcementlearning algorithms. arXiv preprint arXiv:1910.01708 , 2019.[18] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee,I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Conference on Robot Learning ,pages 158–168. PMLR, 2022.[19] Y . LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang. A tutorial on energy-basedlearning. Predicting structured data , 1(0), 2006.[20] A. Y . Ng, S. Russell, et al. Algorithms for inverse reinforcement learning. In ICML , volume 1,page 2, 2000.[21] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. 
Courville, andY . Bengio. Generative adversarial networks. Communications of the ACM , 63(11):139–144,2020.[22] J. Ho and S. Ermon. Generative adversarial imitation learning. Advances in neural informationprocessing systems , 29, 2016.[23] M. Uehara, C. Shi, and N. Kallus. A review of off-policy evaluation in reinforcement learning.arXiv preprint arXiv:2212.06355 , 2022.[24] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit q-learning.arXiv preprint arXiv:2110.06169 , 2021.[25] W. Zhou, S. Bajracharya, and D. Held. Plas: Latent action space for offline reinforcementlearning. In Conference on Robot Learning , pages 1719–1735. PMLR, 2021.[26] S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning. Advancesin neural information processing systems , 34:20132–20145, 2021.[27] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative q-learning for offline reinforcementlearning. Advances in Neural Information Processing Systems , 33:1179–1191, 2020.[28] G. An, S. Moon, J.-H. Kim, and H. O. Song. Uncertainty-based offline reinforcement learningwith diversified q-ensemble. Advances in neural information processing systems , 34:7436–7447,2021.[29] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, andI. Polosukhin. Attention is all you need. Advances in neural information processing systems ,30, 2017.[30] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, andI. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advancesin neural information processing systems , 34:15084–15097, 2021.[31] M. Janner, Q. Li, and S. Levine. Offline reinforcement learning as one big sequence modelingproblem. Advances in neural information processing systems , 34:1273–1286, 2021.10[32] Z. Wang, A. Novikov, K. Zolna, J. S. Merel, J. T. Springenberg, S. E. Reed, B. Shahriari,N. Siegel, C. Gulcehre, N. Heess, et al. Critic regularized regression. Advances in NeuralInformation Processing Systems , 33:7768–7778, 2020.[33] Q. Wang, J. Xiong, L. Han, H. Liu, T. Zhang, et al. Exponentially weighted imitation learningfor batched historical data. Advances in Neural Information Processing Systems , 31, 2018.[34] X. B. Peng, A. Kumar, G. Zhang, and S. Levine. Advantage-weighted regression: Simple andscalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177 , 2019.[35] X. Chen, Z. Zhou, Z. Wang, C. Wang, Y . Wu, and K. Ross. Bail: Best-action imitation learningfor batch deep reinforcement learning. Advances in Neural Information Processing Systems , 33:18353–18363, 2020.[36] N. Y . Siegel, J. T. Springenberg, F. Berkenkamp, A. Abdolmaleki, M. Neunert, T. Lampe,R. Hafner, N. Heess, and M. Riedmiller. Keep doing what worked: Behavioral modelling priorsfor offline reinforcement learning. arXiv preprint arXiv:2002.08396 , 2020.[37] G. Neumann and J. Peters. Fitted q-iteration by advantage weighted regression. Advances inneural information processing systems , 21, 2008.[38] H. Xu, X. Zhan, H. Yin, and H. Qin. Discriminator-weighted offline imitation learning fromsuboptimal demonstrations. In International Conference on Machine Learning , pages 24725–24742. PMLR, 2022.[39] A. Kaboutari, J. Bagherzadeh, and F. Kheradmand. An evaluation of two-step techniquesfor positive-unlabeled learning in text classification. Int. J. Comput. Appl. Technol. Res , 3(9):592–594, 2014.[40] C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. 
In Proceedingsof the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining ,pages 213–220, 2008.[41] F. Mordelet and J.-P. Vert. A bagging svm to learn from positive and unlabeled examples.Pattern Recognition Letters , 37:201–209, 2014.[42] M. Du Plessis, G. Niu, and M. Sugiyama. Convex formulation for learning from positive andunlabeled data. In International Conference on Machine Learning , pages 1386–1394. PMLR,2015.[43] R. Kiryo, G. Niu, M. C. Du Plessis, and M. Sugiyama. Positive-unlabeled learning withnon-negative risk estimator. Advances in Neural Information Processing Systems , 30, 2017.[44] J. Bekker and J. Davis. Learning from positive and unlabeled data: A survey. Machine Learning ,109:719–760, 2020.[45] M. L. Puterman. Markov decision processes. Handbooks in operations research and manage-ment science , 2:331–434, 1990.[46] Q. Wang, R. McCarthy, D. C. Bulens, and S. J. Redmond. Winning solution of real robotchallenge iii. arXiv preprint arXiv:2301.13019 , 2023.[47] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In 2012IEEE/RSJ international conference on intelligent robots and systems , pages 5026–5033. IEEE,2012.[48] G. Brockman, V . Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba.Openai gym. arXiv preprint arXiv:1606.01540 , 2016.11[49] M. Wüthrich, F. Widmaier, F. Grimminger, J. Akpo, S. Joshi, V . Agrawal, B. Hammoud,M. Khadiv, M. Bogdanovic, V . Berenz, et al. Trifinger: An open-source robot for learningdexterity. arXiv preprint arXiv:2008.03596 , 2020.[50] N. Gürtler, S. Blaes, P. Kolev, F. Widmaier, M. Wuthrich, S. Bauer, B. Schölkopf, and G. Mar-tius. Benchmarking offline reinforcement learning on real-robot hardware. In The EleventhInternational Conference on Learning Representations , 2023.[51] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropydeep reinforcement learning with a stochastic actor. In International conference on machinelearning , pages 1861–1870. PMLR, 2018.[52] S. Fujimoto, H. Hoof, and D. Meger. Addressing function approximation error in actor-criticmethods. In International conference on machine learning , pages 1587–1596. PMLR, 2018.[53] A. Raffin, A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann. Stable-baselines3:Reliable reinforcement learning implementations. Journal of Machine Learning Research , 22(268):1–8, 2021. URL http://jmlr.org/papers/v22/20-1364.html .[54] T. Seno and M. Imai. d3rlpy: An offline deep reinforcement learning library. arXiv preprintarXiv:2111.03788 , 2021.12A EnvironmentsThe environments of the tasks considered in our work are illustrated in Figure 2.(a) TriFinger robot (b) Ant (c) HumanoidFigure 2: (a) The physical TriFinger robot from the RRC III competition, where three identicalrobotic fingers are equally spaced 120◦apart around the circular arena; the coloured cube is the objectto be moved. (c)-(b) Illustrate the MuJoCo locomotion task environments.B Description of how the adaptive confidence threshold is setWe use an adaptive mechanism to adjust the confidence threshold, thconf, which is used in Section 3.5to convert the continuous classifier output probability to a discrete binary label. Figure 3 shows anexample of selecting the thconfvalue over the PU training iterations for the RRC III competitionPush/mixed andLift/mixed datasets. 
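The selection mechanism, described in detail in the remainder of this appendix, can be sketched as follows; this is a simplified numpy version in which the bin count, polynomial order and function name are our own choices for illustration.

```python
import numpy as np

def adaptive_threshold(confidences, bins=50, poly_order=10):
    """Fit a polynomial to the histogram of per-trajectory confidence scores
    and return the rightmost local minimum as th_conf (see Figure 3)."""
    counts, edges = np.histogram(confidences, bins=bins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    coeffs = np.polyfit(centers, counts, poly_order)
    fitted = np.polyval(coeffs, centers)
    # Interior points that are lower than both neighbours are local minima.
    is_min = (fitted[1:-1] < fitted[:-2]) & (fitted[1:-1] < fitted[2:])
    minima = centers[1:-1][is_min]
    return minima.max() if len(minima) else 0.5  # fall back to a fixed threshold
```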
Furthermore, we presented the filtering accuracy acrossiterations for these two examples to better depict the convergence of accuracy.We firstly apply the trained classifier to Dmixto get confidence scores (a probability, as the final layeractivation function is sigmoidal) that each trajectory was generated by an expert policy. Secondly, wecalculate the histogram of these confidence scores across all trajectories. Finally, we use a polynomialto fit the confidence score histogram. The threshold is determined by identifying the confidence scoreat which the maximum local minimum on the x-axis occurs. This point is marked with a blue dot inthe bottom row of subplots. As the iterative training process proceeds, the trajectories selected by thePU learning changes and eventually the filter output converges. In our work, we define convergenceas the condition where the change in trajectory memberships between two consecutive iterations iswithin a threshold of 2%.Table 3: Illustration of the convergence process of filtering the RRC Lift/mixed andPush/mixeddataset. Each iteration lasts for 20 epochs. The TP (True Positive) represents an expert-collectedtrajectories that are correctly classified as expert-collected. FP (False Positive) represents a non-expert-collected trajectories that are incorrectly classified as expert-collected. FN (False Negative)represents expert-collected trajectories that are incorrectly classified as non-expert-collected, and TN(True Negative) represents non-expert-collected trajectories that are correctly classified as non-expert-collected.Iteration TP FP FN TNLift/mixed1 200 4 998 11932 1005 97 193 11003 1195 37 3 11604 1194 9 4 1188Push/mixed1 1421 2 429 19182 1915 0 5 11003 1920 0 0 192013(a) Push/mixed(b) Lift/mixedFigure 3: A demonstration of selecting the adaptive confidence threshold, thconf, for the Push/mixedandLift/mixed datasets. The top row of each subplot shows the confidence score counts for alltrajectories in the larger dataset, Dmix. The bottom row displays polynomial curves fitted to thesehistograms. The position of the blue dot indicates the rightmost local minimum of the polynomial,which determines the confidence value, thconf, used for the subsequent dataset filtering iteration.C Configurations used to collect the locomotion datasetsThis section describes the acquisition process of the datasets used in Section 4. We selected two Mu-JoCo locomotion domains, namely Ant-v3 andHumanoid-v3 , which have relatively high dimensionalstate and action spaces, as well as high levels of difficulty.Initially, we trained three different agents: target expert policy ( E+); additional expert policy ( E−);and weaker performing policy ( W). We utilized online RL algorithms for each domain. We trainedeach agent using different random seeds to ensure that the resulting agents can exhibit diversebehaviors or habits[ 3]. These trained agents were then deployed to interact with their correspondingenvironments. The interaction data was recorded in the form of {(st, at, rt, yt)t=1...n}, where yrepresents the ground truth label for each policy, indicating whether the data was collected by thetarget expert or not.14Table 4: Configuration details on the collection of MuJoCo datasets. We utilized the Soft ActorCritic (SAC) [ 51] and Twin Delayed Deep Deterministic Policy Gradient Algorithm (TD3) [ 52], asimplemented in stable-baselines3 [ 53], using recommended hyperparameters. 
In the Humanoid - v3environment, the reset_noise_scale was set to 10−2.SubjectAnt HumanoidE+ E− W N E+ E− W NAlgorithm TD3 TD3 TD3 - SAC SAC SAC -Train length 1061062×105- 1061063×105-Mean return 3034 2910 920 - 5725 5330 1576 -Data amount 5×1055×1055×1055×1055×1055×1055×1055×105Positive example amount 2×103- - - 2×103- - -In addition to these datasets, we also included noise data ( N), in which the state, action, and rewardwere sampled using a uniform distribution within the range of minimum reward to maximum reward.Further details about the online training process, the performance of the trained agents, and the sizeof the collected datasets are provided in Table 4.D Ablation studyIn this section, we conduct an ablation study to examine the impact of various factors on theperformance of our PU learning method. We have structured our study into several groups ofablations. Firstly, we aim to examine the techniques presented in Section 3.4. For each group in thestudy, we remove one technique to assess its effect on the overall performance. Additionally, weconduct an investigation into the size of the dataset of seed-positive dataset D+. In this part of thestudy, we gradually increase the size of the D+starting from 0.1% to 1% of the unlabeled datasetDmix.From the results shown in Table 5, it is easy to see that adaptive thresholds play a significant rolein these relatively challenging physical robotic manipulation domains. Additionally, performingclassification at the trajectory level can boost overall accuracy. The bagging is not the primarydeterminant of accuracy, but it can serve as a beneficial complement. Looking at the impact of thesize of the positive dataset on PU learning, there were several direct failures when the dataset wasextremely small. However, once the size increases to 0.4%, it is sufficient for PU learning to achieveoptimal performance.D.1 Guidance on tuning the parametersBased on our experience, an appropriately chosen set of hyperparameters can result in distinct dualpeaks in the output histogram within as few as three iterations, as depicted in Figure.3. Among thesepeaks, the peak representing a higher probability corresponds to the expert data. Otherwise, it mightindicate a set of poorly chosen hyperparameters; in which case, the following suggestions based onour experience may be helpful:1.Setting an adaptive threshold is a critical technique that we highly recommend enabling, especiallyin scenarios where the expert policy and supposedly non-expert policy (or policies) used to generatethe dataset exhibit similar behaviours. In adaptive thresholding, one crucial hyperparameter toconsider is the order of the polynomial used to fit the histogram. If the order is set too high, it maylead to overfitting the histogram, causing the adaptive threshold to select thresholds at extremelyhigh probabilities, which could fail to effectively distinguish a sufficient amount of expert data.On the other hand, if the order is set too low, it may miss the optimal threshold by not fittingthe shape of the histogram well enough. We have found that a 10th-order polynomial yields thebest results for our case. 
Empirically, when dealing with complex mixed datasets, such as ourreal-life-mixed dataset, we suggest slightly increasing the order to a range between 10 and 20.Conversely, in cases of simpler composite datasets, we recommend reducing the order to a rangebetween 5 and 10.2.If you observe that the histogram generated during training consistently exhibits a unimodal peak,this might be because you have not classified the trajectory by aggregating the state-actions within15Table 5: Results of the ablation study. Group 1 represents the removal of trajectory-based classifica-tion, instead opting for individual state-action pair classification. Group 2 employs a non-adaptiveconfidence threshold, using a fixed threshold set at 0.5 (considering the sigmoid interval of [0,1], themidpoint of 0.5 is arbitrarily chosen). Group 3 involves using a single classifier model instead of anensemble (i.e., no bagging). Lastly, the 0.1% - 1% range refers to varying sizes of the D+, where theDmixsize is a fixed value. Examples of failure in classification are indicated by ✗, which signifiesthat the neural network’s learning process has completely collapsed, rendering it unable to learn anymeaningful information.Datasets Ours Group 1 Group 2 Group 3D+size /Dmixsize0.1% 0.4% 1%RRC-Real- Lift/mixed 99.2% ✗ ✗ 92.8% ✗ 99.2% 99.2%RRC-Real- Push/mixed 100.0% 77.8% ✗ 100.0% 89.8% 100.0% 100.0%Humanoid - E+E 99.7% 89.3% 99.7% 94.8% ✗ 99.7% 99.7%Humanoid - E+W 100.0% 94.8% 100.0% 100.0% 69.7% 100.0% 100.0%Humanoid - E+N 100.0% 98.7% 100.0% 100.0% 100.0% 100.0% 100.0%Humanoid - E+E+W+N 99.0% 83.2% 99.0% 91.0% ✗ 99.0% 99.0%it. Classifying single state-action pairs may not effectively capture the inherent policy behavior.On the other hand, classification based on trajectories can take into account the correlation withinthe data to create a more precise classification relationship. This approach not only improves themodel’s accuracy but also accelerates the iterative process. Therefore, we recommend enablingthis feature when applicable, such as in datasets that follow the D4RL protocol. Another potentialreason might be that the initial amount of seed data you collected/separated was insufficient. Thiscould prevent the neural network from effectively capturing the behavioral characteristics of thetarget expert data.3.If you notice a converging trend in the number of expert trajectories being filtered out, but alsoobserve substantial fluctuations (instability or lack of convergence) over subsequent iterations, youmight consider increasing the number of classifier models in the ensemble. The quantity of modelsin the ensemble is a relatively straightforward hyperparameter that can be adjusted to ascertain theoptimal value. This procedure is similar to the typical strategy of adjusting the learning rate.E Benefits of PU learning in offline RLThis section aims to demonstrate the advantages of employing PU learning in stochastic offline RLalgorithms. We utilize three medium-expert datasets available from the D4RL benchmark, specifically,halfcheetah-medium-expert-v0 ,hopper-medium-expert-v0 , and walker2d-medium-expert-v0 . Thesedatasets have previously served as benchmarks in numerous studies [ 24,26,25] and have beeneffectively addressed by a range of algorithms. 
Our objective is to show that the application of PUlearning can further enhance the performance of offline RL on these datasets.The aforementioned datasets are all of mixed-quality, each of which was collected by two agentsexhibiting different skill levels, specifically, medium and expert. Given that our approach requires avery small-scale seed-positive dataset, we extract the top 0.2% of trajectories based on cumulativereward from each mixed dataset to constitute the seed-positive data subset.Our results, as displayed in Table 6, include comparisons with various state-of-the-art offline RLalgorithms such as IQL [ 24], TD3+BC [ 26], and PLAS [ 25], with BC acting as the baseline. It isevident that the implementation of PU learning enhances agent performance. Examining the learningcurves (shown in Figure 4), we can see that PU learning not only accelerates the learning process butalso enhances the stability of all the investigated offline RL algorithms.F Descriptions of compared algorithmsF.1 PU learning•Unbiased PU learning [42]: Traditional PU methods train classifiers by minimizing empirical risk,wherein unlabeled examples are directly treated as negative examples. This approach, however,16Table 6: Averaged normalized scores [ 15] with PU vs without PU for three D4RL benchmark tasks.Each result includes three random seed values and each model training session lasts for 106timesteps. We evaluate each learned policy for 100environmental trajectories.PU BC PLAS IQL TD3+BC TotalHalfcheetah-medium-expert-v0✓ 0.93 0.92 0.93 0.94 3.72✗ 0.56 0.74 0.69 0.93 2.92Hopper-medium-expert-v0✓ 1.11 0.66 0.64 1.10 3.51✗ 0.51 0.32 0.35 0.89 2.07Walker2d-medium-expert-v0✓ 1.08 1.09 1.10 1.10 4.37✗ 0.76 0.99 1.06 1.11 3.92Figure 4: Training curves of Table 6; curves are averaged over three random seeds, with the shadedareas representing the minimum/maximum values across these three seeds. Each data point refers tothe average normalised score [15] of 10 environmental episodes.may lead to a high empirical risk for the negative class. To address this issue, Du Plessis et al.[42] re-weighted the losses for positive and unlabeled examples. Hence the training objective ofUnbiased PU learning becomes minimizing the following, where δis the proportion of positiveexamples to unlabeled examples in the unlabeled dataset (same as our approach, we set it to 0.4%here):Lunbiased =δE(st,at)∼D +[−log(F(st, at))] + E(st,at)∼D−[−log(1− F(st, at))]−δE(st,at)∼D +[−log(1− F(st, at))],(3)•Non-negative PU learning [43]: While as the complexity of the model increases, the risk (Equa-tion 3) on the training set may approach or even become negative, while the corresponding risk onthe test set increases. This suggests that the model is overfitting. To tackle this problem, Kiryo et al.17[43] introduced non-negative PU learning, in which the training objective is modified as follows,where again δis the proportion of positive examples to unlabeled examples in the unlabeled dataset(same as our approach, we set it to 0.4% here):Lnon−neg=δE(st,at)∼D +[−log(F(st, at))] + max(0 ,E(st,at)∼D−[−log(1− F(st, at))]−δE(st,at)∼D +[−log(1− F(st, at))]),(4)Compared to Unbiased PU learning and Non-negative PU learning, the PU method introduced in thispaper offers a more decent approach for generating negative examples, with improved diversity andlogical correctness. 
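For reference, Equations 3 and 4 can be written in a few lines of PyTorch. This is a sketch with our own function names; δ is set to the 0.4% positive proportion used in our experiments, and the inputs are the classifier outputs on positive and unlabeled state-action pairs.

```python
import torch

def unbiased_pu_loss(f_pos, f_unl, delta=0.004):
    """Equation 3: risk-corrected PU loss, with delta the assumed positive
    fraction in the unlabeled data. f_pos / f_unl are classifier outputs
    F(s, a) in (0, 1) on positive and unlabeled state-action pairs."""
    eps = 1e-6
    pos_risk = -(f_pos + eps).log().mean()
    unl_neg_risk = -(1 - f_unl + eps).log().mean()
    pos_neg_risk = -(1 - f_pos + eps).log().mean()
    return delta * pos_risk + unl_neg_risk - delta * pos_neg_risk

def non_negative_pu_loss(f_pos, f_unl, delta=0.004):
    """Equation 4: the negative-risk term is clamped at zero to curb overfitting."""
    eps = 1e-6
    pos_risk = -(f_pos + eps).log().mean()
    unl_neg_risk = -(1 - f_unl + eps).log().mean()
    pos_neg_risk = -(1 - f_pos + eps).log().mean()
    neg_part = unl_neg_risk - delta * pos_neg_risk
    return delta * pos_risk + torch.clamp(neg_part, min=0.0)
```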
Additionally, our method employs a simpler loss function.

F.2 Policy learning algorithms

• Naive reward-based filter + BC: Reward may be used to filter expert data for policy training, as experts are more likely to achieve higher rewards. Our results show two variations of this approach, 10% BC and 50% BC, which involve selecting the trajectories with the top 10% and 50% highest cumulative (over the trajectory) returns, respectively, and then using the filtered subsets to train BC.
• Critic Regularized Regression (CRR) [32]: A state-of-the-art filter/weight-based BC algorithm. It utilizes a reward-based advantage function to weight the significance of training examples in the dataset. In fact, the CRR method can be viewed as a soft-weighting process, while our method is a hard-weighting process.
• Discriminator-Weighted Behaviour Cloning (DWBC) [38]: This is similar to our approach. DWBC employs a discriminator to distinguish high-quality data from mixed datasets. Additionally, it uses a small seed-positive dataset to initiate the training process. The discriminator used here is similar to Unbiased PU learning, but the authors introduce the policy function as an additional input. This approach combines the training process of BC with the discriminator to create a GAN-like architecture to improve performance. DWBC can be seen as a fusion of Unbiased PU learning and BC, whereas our approach introduces a novel PU learning method to guide BC.
• Implicit Q-learning (IQL) [24]: IQL is an offline reinforcement learning method that addresses the OOD problem by setting implicit constraints. It has previously been used to address the problem of learning policies from mixed-quality data.
• Decision Transformer (DT) [30]: In DT, a causal transformer is employed to model policy trajectories, taking as input the sequence {R_0, s_0, a_0, R_1, s_1, a_1, ..., R_{K-1}, s_{K-1}, a_{K-1}}, where R represents the desired return-to-go in a trajectory; we specifically set this value to the maximum attainable reward for each domain. s denotes the state and a represents the action taken at each time step. At each time step t, the first 3t tokens are fed into the transformer to predict the action at time t, denoted as p(a_t | R_0, s_0, a_0, ..., R_{t-1}, s_{t-1}, a_{t-1}).

G Additional classification function

Here, we introduce an alternative option for the classification function discussed in Section 3.5. In this approach, we use the sum of log-probabilities within each trajectory, i.e., the logarithm of the product of the F outputs:

f := 1\left[ \sum_{k=1}^{K} 1\left[ \sum_{t=1}^{T} \log\left(F_k(s_t, a_t)\right) \ge th_{conf} \right] > \frac{K}{2} \right].   (5)

This introduces a trade-off between disregarding positive data by being overly strict and incorporating non-positive data by being too lenient.

H Implementations and training

The neural network architecture for PU learning in PUBC is illustrated in Figure 5.
The PUtraining for the RRC Sim- and Real- Lift/mixed datasets lasted for 4 iterations; for the Sim- and18Table 7: Hyperparameters of algorithms used in our experimentsAlgorithm HyperparameterPUlearning_rate=0.001; batch_size=1024; optimizer = adam;epochs_per_iteration=20; models_in_ensemble=3; polynomial_order=10BC learning_rate=0.001; batch_size=100; optimizer = adam; epochs=200CRRactor_learning = critic_learning_rate = 0.0003; batch_size=256;optimizer = adam; beta=1.0IQLactor_learning_rate=critic_learning_rate=0.0003; batch_size=256;optimizer = adam; expectile=0.7; weight_temp=3.0DWBC learning_rate=0.0001; batch_size=256; alpha=7.5; eta=0.5; no_pu=FalseTD3+BC actor_learning = critic_learning_rate = 0.0003; batch_size=256; alpha=2.5PLASactor_learning_rate=0.0001; critic_learning_rate=0.001; optimizer = adam;warmup_steps=500000; beta=0.5Real- Push/mixed datasets, 3 iterations; and for each MoJoCo dataset, 3 iterations. In each iteration,the subset of negative examples, D−, is set to have the same size as the subset of positive examples,D+. Furthermore, the number of each type of negative example, (n1−n7), remains consistentthroughout the subset of negative examples, D−.Once our work is accepted, we plan to open-source our full implementations of PUBC on GitHub. Theimplementations of the baseline PU learning algorithms from https://github.com/cimeister/pu-learning . All the offline RL and BC algorithms used in our study are sourced from d3rlpy [ 54].The implementation of DWBC [ 38] is based on the original authors’ work at https://github.com/ryanxhr/DWBC .The key hyperparameters for training each algorithm involved in our experiment are displayed inTable 7. Our experiments ran on a PC with an Intel I9-12900F CPU (2.40 GHz ×16, 32 GB RAM)and an NVIDIA 3090 GPU.Figure 5: The neural network structure of classifier, illustrating the details of Figure 1(b).19 |
44FPaVRWkbl | DORT: Modeling Dynamic Objects in Recurrent forMulti-Camera 3D Object Detection and TrackingQing Lian1,2Tai Wang1,3Dahua Lin1,3Jiangmiao Pang1 1Shanghai AI Laboratory2The Hong Kong University of Science and Technology3The Chinese University of Hong Kongqlianab@connect.ust.hk, {wt019,dhlin }@ie.cuhk.edu.hk, pangjiangmiao@gmail.comAbstract: Recent multi-camera 3D object detectors usually leverage temporalinformation to construct multi-view stereo that alleviates the ill-posed depth es-timation. However, they typically assume all the objects are static and directlyaggregate features across frames. This work begins with a theoretical and em-pirical analysis to reveal that ignoring the motion of moving objects can result inserious localization bias. Therefore, we propose to model Dynamic Objects inRecurrenT (DORT) to tackle this problem. In contrast to previous global Bird-Eye-View (BEV) methods, DORT extracts object-wise local volumes for motionestimation that also alleviates the heavy computational burden. By iteratively re-fining the estimated object motion and location, the preceding features can beprecisely aggregated to the current frame to mitigate the aforementioned adverseeffects. The simple framework has two significant appealing properties. It is flexi-ble and practical that can be plugged into most camera-based 3D object detectors.As there are predictions of object motion in the loop, it can easily track objectsacross frames according to their nearest center distances. Without bells and whis-tles, DORT outperforms all the previous methods on the nuScenes detection andtracking benchmarks with 62.8% NDS and 57.6% AMOTA, respectively. Codesare available at https://github.com/OpenRobotLab/DORT .Keywords: Temporal Modeling, 3D Object Detection1 IntroductionMulti-camera 3D object detection is critical to robotic systems such as autonomous vehicles, hu-manoid robots, and etc.As object depth estimation from a single image is naturally ill-posed, recentworks use large-scale depth pre-trained models [1] and leverage geometric relationships [2, 3, 4, 5]to alleviate the problem. Because stereo correspondence exists in consecutive frames, some worksresort to temporal information for accurate depth predictions. For example, BEVDet4D [6] andBEVFormer [7] warp preceding features to the current frame to enrich the single-frame BEV rep-resentations. DfM [5] constructs temporal cost volumes that explicitly establish the stereo corre-spondence. However, these cross-frame feature aggregations do not consider the motion of movingobjects and assume all the objects are static, which results in serious 3D localization bias.In this paper, we first provide a theoretical and empirical analysis to reveal the negative effects ofinaccurate object motion to object depth (Fig 1). In particular, if the object is moving, the incorrecttemporal correspondence would derive a biased depth. In the driving scenarios, it is critical that amisleading depth is estimated, which might reduce the reaction time of the decision system, leadingto catastrophic collision accidents. This motivates us to devise an explicit mechanism to involveobject motion estimation in the temporal-based 3D detection pipeline.Corresponding author.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Modeling dynamic objects in this context has several challenges: (1) We need a flexible object-wiserepresentation for potential object-wise operations based on motion modeling. 
(2) Jointly estimatingobject location and motion is an inherent chicken and egg problem [8]: The temporal correspondencecan derive accurate object location only when accurate object motion is given and vice versa. (3)Simultaneously predicting object location and motion from only two frames is also an ill-posedproblem theoretically, and thus it is desired to involve right-body assumption and more frames topose reasonable constraints.Frame TFrame T -1Frame T -2InaccurateLocation&Motion(static)AccurateLocation&MotionProgressivelyRefinement(Previous)(Ours)MotionEstimationFigure 1: Visualization of object localization fromtemporal correspondence. Previous works ignore themotion of moving objects, leading to imprecise local-ization. Our work progressively refines the object’slocation and motion so that the preceding features canbe precisely aggregated.To address these problems, we model DynamicObjects in Recurren T(DORT) that simultane-ously estimates object motion and location, andthen progressively refines them for accurate 3Dobject detection. It benefits from a local 3D vol-ume representation that not only extracts object-wise 3D features but also alleviates the heavycomputational costs of global BEV in previousmethods [5, 7, 9]. Based on the object-wise vol-ume, temporal volumes are constructed by warp-ing the volumes from the preceding frame to thecurrent frame according to the object motion.Then the obtained cost volumes act as the featuresfor updating the candidate location and motion.We model this estimation and update pipeline as arecurrent process to alleviate the aforementionedchicken and egg problem. In addition, our frame-work can take into more than two frames andpose constraints to the object motion. It inher-ently provides a feasible solution to avoid the ill-posed dilemma of estimating object location andmotion from only a single pair of correspondence. As there is object motion prediction in the loop,the framework is naturally capable of joint object detection and tracking by utilizing object motionto align the detection results into the same timestamp. It also can be plugged into most camera-based3D object detectors for flexible and practical use.We validate the effectiveness of our framework on the nuScenes detection and tracking benchmarks.Benefiting from the dynamic objects modeling, DORT outperforms all the previous methods witha large margin, leading to 62.8% nuScenes detection metric (NDS) and 57.6% and average multi-object tracking accuracy (AMOTA), respectively.2 Related workMonocular 3D Object Detection Monocular-based 3D object detection was first approached fromthe single-frame scenario and evolved into multi-frame to alleviate the ill-posed depth estimation.(a) Methods with A Single Frame The single-frame-based methods [10, 11, 3, 12, 13] first extend2D object detectors and insert several 3D attribute regression heads to predict 3D bounding boxes.To alleviate the ill-posed depth recovery, several methods improve the model from the perspectivesof loss function [11], network architecture [14, 15], regression objective [3, 4], etc.Besides directlyregressing depth, later approaches [2, 16, 17] further design 2D-3D geometry constraints to betterextract visual cues for depth estimation. To align detection features with the output space, anotherline of methods [18] designs several transformation modules to lift 2D inputs into 3D space. Pseudo-lidar-based methods [19, 20, 21, 21] first predict the per-pixel depth and convert the raw pixel intopoint cloud for 3D detection. 
BEV-based methods [18, 22, 9, 23] propose orthographic feature transformation (OFT) to transform the 2D features into 3D voxels and then adopt a LiDAR-based head to localize objects. Later works improve the OFT by explicit depth distribution modeling [22, 9, 23], incorporating deformable attention module [7] or designing 3D position encoding [24, 25].

(b) Methods with Multiple Frames. Although many techniques are designed in single-frame-based methods, they still suffer from ill-posed depth recovery, leading to unsatisfactory performance for deployment. To augment the single-view observation, recent works [26, 14, 5, 6, 7, 27] leverage previous frames as additional observations for features augmentation. Kinematic3D [14] leverages 3D Kalman Filter to associate objects across frames and refines 3D boxes. Later studies [5, 6, 7] construct cross-frame cost volumes as another visual cue for 3D detection. The cost volumes are based on the multi-view stereo, which assumes objects are static across frames. However, this assumption does not align with the driving scenario, where the objects can move.

Monocular 3D Object Tracking. 3D object tracking associates objects across frames and generates a set of trajectories for motion prediction. Traditional methods adopt a tracking-by-detection paradigm that first detects objects in each frame and then associates them by appearance features [26] or objects' displacement with Kalman filter [28, 29, 30, 31, 32, 33]. Besides the above paradigm, several methods [34, 35] design a two-stage paradigm that first associates objects and then utilizes the temporal motion to improve the detection performance. In this work, we utilize temporal cost volumes to bridge the spatial location and temporal motion and derive a recurrent paradigm that iteratively updates them to obtain tightly coupled results for joint 3D detection and tracking.

3 Object Motion in Temporal Modeling

In this section, we first conduct analysis to demystify the adverse effects of neglecting object motion in temporal modeling, then discuss the challenges of modeling 3D motion in the monocular setting.

Localization Bias from the Static Assumption. In previous methods [5, 6, 7], the object motion is ignored by assuming objects are static and the features are directly aggregated after converting the past frames to the current frame. We first show that the static assumption would derive a biased depth. Without loss of generality, we consider the two-view case, and it can be naturally extended to more than two views. We denote the camera intrinsic as K with focal length and center offset (f, c_u, c_v), and the ego motion and object motion from frame t_0 to frame t_1 as T^{ego}_{t_0 \to t_1} and T^{obj}_{i \to j}:

K = \begin{bmatrix} f & 0 & c_u \\ 0 & f & c_v \\ 0 & 0 & 1 \end{bmatrix}, \quad T^{ego}_{t_0 \to t_1} = \begin{bmatrix} 1 & 0 & 0 & x_{ego} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & z_{ego} \end{bmatrix}, \quad T^{obj}_{i \to j} = \begin{bmatrix} 1 & 0 & 0 & x_{obj} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & z_{obj} \end{bmatrix}. \quad (1)

Figure 2: Empirical analysis of the depth bias on the nuScenes dataset if objects are assumed static. (a) Average depth error vs. object velocity; (b) histogram of object velocities, showing that 51% of the objects are moving.

For simplicity, we assume the ego and object motion only contain the translation (x, 0, z) on the horizontal plane. The analysis can be easily extended to the case that the motion contains rotation. Given the temporal images, we can utilize photometric or featuremetric similarity to find the correspondence of pixel p_{t_0} = (u_{t_0}, v_{t_0}) in the frame t_0 and pixel p_{t_1} = (u_{t_1}, v_{t_1}) in the frame t_1. Then, the depth z_{t_1} can be recovered by:

T^{ego}_{t_0 \to t_1} \cdot \pi(p_{t_0}, K) = \pi(p_{t_1}, K), \qquad z_{t_1} = \frac{z_{ego}(u_{t_0} - c_u) - f x_{ego}}{u_{t_0} - u_{t_1}}, \quad (2)

where π denotes the 2D to 3D projection.
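As a quick numerical check of Eq. (2), the snippet below triangulates a point that additionally undergoes its own object translation; the depth recovered under the static assumption is then off by exactly the gap quantified in Eq. (4) below. All numbers are invented for illustration.

```python
# Illustrative values (not taken from the paper): intrinsics, ego and object
# translation between frames t0 and t1 (metres), and a point on the object.
f, cu = 800.0, 640.0
x_ego, z_ego = 0.0, 5.0          # ego moves 5 m forward
x_obj, z_obj = 0.0, 1.0          # the object also moves 1 m forward
u_t0, z_t0 = 900.0, 20.0         # observed pixel column and true depth at t0

# Where the same object point lands at t1 (composition of ego and object motion).
x_cam_t0 = (u_t0 - cu) * z_t0 / f
u_t1 = f * (x_cam_t0 + x_ego + x_obj) / (z_t0 + z_ego + z_obj) + cu

# Depth recovered under the static assumption (Eq. 2) vs. the true depth,
# and the gap predicted by Eq. (4) in the analysis that follows.
z_static = (z_ego * (u_t0 - cu) - f * x_ego) / (u_t0 - u_t1)
z_true = z_t0 + z_ego + z_obj
delta_z = (z_obj * (u_t0 - cu) - f * x_obj) / (u_t0 - u_t1)
print(z_static, z_true, delta_z)   # ~21.7 m vs 26.0 m; gap ~4.3 m
```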
This derivation assumes that the object is static. However, objects can move with a corresponding motion $T^{obj}_{t_0\to t_1}$. Then, the object depth is revised as follows:
$$T^{obj}_{t_0\to t_1}T^{ego}_{t_0\to t_1}\cdot\pi(p_{t_0}, K)=\pi(p_{t_1}, K),\qquad \hat{z}_{t_1}=\frac{(z_{ego}+z_{obj})(u_{t_0}-c_u)-f(x_{ego}+x_{obj})}{u_{t_0}-u_{t_1}}. \quad (3)$$
Based on Eq. (2) and (3), we can obtain the depth gap:
$$\Delta z=\frac{z_{obj}(u_{t_0}-c_u)-f x_{obj}}{u_{t_0}-u_{t_1}}. \quad (4)$$

Figure 3: Pipeline overview. Given a video sequence, we first extract the 2D features and generate the candidate boxes and motion with a single-frame detector. Then the boxes and motion are progressively refined from the concurrently updated 3D volume features. A fusion process in the recurrent module combines the estimation from each pair of frames. Based on the tightly coupled modeling of object location and motion, the framework can achieve joint 3D detection and tracking during inference.

From Eq. (4), we can observe that the depth bias is linearly correlated with the object motion. In Fig. 2, we also display the empirical statistics of object motion and the corresponding depth bias on the nuScenes dataset. We can observe that the empirical depth error is likewise correlated with the object velocity and increases as the time interval enlarges. Besides, the right part of Fig. 2 also shows that almost 51% of objects are moving across frames, demonstrating the necessity of modeling object motion in a temporal-based framework.

Ill-Posed Problem in Motion Modeling Besides demonstrating the necessity of modeling object motion, we also note that simultaneously estimating object location and motion is nontrivial, especially in the two-frame case. As shown in Fig. 1, the correspondence of two points can come from infinite combinations of object location and motion. This illustrates that joint location and motion estimation from only one correspondence is ill-posed. To alleviate this issue, we first simplify the object motion as a rigid-body movement so that multiple correspondences from points on the object can be used to solve for a shared motion. Furthermore, we leverage more than two frames to constrain the flexibility of the object motion. More details are provided in Sec. 4.3.

4 Methodology

This section describes the details of DORT. DORT is a general joint detection and motion prediction module that estimates coupled object location and motion results across frames. Based on the tightly coupled location and motion results, DORT is also capable of simultaneous 3D object detection and tracking. In principle, it can be built on most temporal 3D detectors [6, 36, 7]. In this work, we select the popular temporal detector BEVDepth [23] as the base detector and extend it to handle both static and moving objects in temporal modeling. We first present an overview of temporal-based frameworks in Sec. 4.1 and then introduce our modifications: the local volume for object-wise representation in Sec. 4.2, the key recurrent dynamic objects modeling in Sec. 4.3, and the object association for monocular 4D object detection in Sec. 4.4.

4.1 Overview of Temporal-Based Frameworks

Current temporal-based methods contain three stages: (1) The 2D feature extraction stage extracts the features from the 2D images. (2) The view transformation and stereo matching stage first lifts the 2D features to a 3D volume and then warps the features in each frame to an aligned canonical space for matching. Depending on the model design, the order of view transformation and stereo matching may be reversed.
(3) The detection stage takes the 3D features to estimate 3D bounding boxes. In this work, we follow previous methods [5, 9, 23] and adopt a widely used 2D backbone (e.g., ResNet [37]) for feature extraction. For the view transformation stage, we design an object-wise local volume that leverages the candidate 3D boxes to obtain the potential foreground regions and only models them with local object-wise 3D volumes. For the stereo matching and detection stages, we propose a recurrent dynamic objects modeling module to progressively refine the detection and motion results for accurate 3D temporal features.

4.2 Object-wise Local Volume

Figure 4: The process of extracting local volumes in the current and past frames according to predicted bounding boxes and object motion.

In previous works, the 2D-3D transformation considers each candidate 3D grid point and constructs a global volume for detection. However, there are several limitations: (1) the global volume contains many background regions, which are not vital for detection but increase the computational burden; (2) modeling a global volume requires pre-defining a detection range during training, making the detectors fail on objects at arbitrary depth; (3) it is inconvenient to manipulate a global volume with object-wise operations. Hence, we replace the global volume with an object-wise local volume. Specifically, we leverage the candidate boxes to determine the 3D region of interest (RoI) and set the local volume center to the bounding box center. To keep the object ratio and achieve cross-view warping, we assign each 3D RoI volume $V\in\mathbb{R}^{W\times H\times L\times C}$ the same 3D dimension $(W, H, L)$ and channel size $C$. Different from 2D detection, object dimensions in 3D space have less variance and empirically rely less on the RoI-Align [38] operation. We display the construction of object-wise local volumes in Fig. 4. For the 2D to 3D transformation, we first follow LiftSplat [22] and lift the images to a 2.5D frustum by weighting with the depth probability. Then we utilize the grid-sample operation to warp the features from the 2.5D frustum to each 3D local volume. Benefiting from the accurate 2D detection performance, the local volume features sampled from the 2.5D frustum have a large overlap with the foreground objects. Hence, even if the proposal 3D location is inaccurate, the later refinement module can still use the features for refinement.

4.3 Recurrent Dynamic Objects Modeling

The pipeline of the recurrent framework is illustrated in Fig. 3. Given candidate 3D bounding boxes and motion as input, each iteration first constructs the temporal cost volumes thereon, and then aggregates these cues to refine the proposal boxes and motion. In particular, we adopt a perspective-view-based 3D detector (i.e., PGD) to generate the initial candidate 3D boxes and motion and only predict their residuals for refinement in the subsequent recurrent updates.

Cross-Frame Cost Volumes Construction Given the initial predictions of 3D boxes and their motion, we first obtain object-wise volume features following Sec. 4.2. Then we construct the temporal cost volumes by warping features from past frames to the current frame coordinates based on the ego-motion. In contrast to previous works [5, 6] that assume objects are static, we further involve the object motion in the warping procedure, as illustrated by the short sketch below.
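The snippet below is a minimal sketch (not from the paper) of this motion-aware warp: for each grid point of the local volume, the query location in the previous frame is obtained by applying the object motion and ego-motion before sampling features. The transforms and the grid point are hypothetical, translation-only examples following the Eq. (1) convention.

```python
import numpy as np

def to_homogeneous(T34: np.ndarray) -> np.ndarray:
    """Lift a 3x4 rigid transform to a 4x4 homogeneous matrix."""
    T44 = np.eye(4)
    T44[:3, :] = T34
    return T44

# Hypothetical translation-only transforms (Eq. (1) convention).
T_ego = np.array([[1.0, 0.0, 0.0,  0.2],
                  [0.0, 1.0, 0.0,  0.0],
                  [0.0, 0.0, 1.0, -5.0]])   # ego-motion between the two frames
T_obj = np.array([[1.0, 0.0, 0.0,  0.5],
                  [0.0, 1.0, 0.0,  0.0],
                  [0.0, 0.0, 1.0,  1.0]])   # predicted object motion

# One grid point of the object-wise local volume, in current-frame coordinates.
p = np.array([1.0, -0.5, 20.0])

# Query location in the previous frame, T_obj . T_ego . p, at which the features
# of the past volume would be sampled (e.g. via trilinear interpolation).
q = (to_homogeneous(T_obj) @ to_homogeneous(T_ego) @ np.append(p, 1.0))[:3]
print("sample past-frame features at", q)
```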
Specifically, for each point $p\in\mathbb{R}^3$ in the object-wise local volume $V$, we query the corresponding features in the previous frame $t-\Delta t$, taking into account the ego-motion $T^{ego}$ and the object motion $T^{obj}$, and construct the cost from the feature pair $\big(V(p),\ V_{t-\Delta t}(T^{obj}T^{ego}p)\big)$. Note that we simplify the point motion as the object motion under a rigid-body assumption, which approximates most cases in driving scenarios, especially for vehicles [26, 39].

3D Boxes and Motion Residual Estimation Given the object-wise temporal features built from the input 3D boxes and motion, we leverage a refinement network to estimate the residual between the input 3D boxes and motion and the ground truth. The refinement network contains several 2D/3D residual convolutional layers to extract the 3D volume and 2D BEV features. The detailed architecture is presented in the supplementary material. Formally, the refinement is formulated as the regression of 3D attribute residuals $B$, including the object's 3D center $x, y, z$, 3D size $w, h, l$, rotation $\theta$, and velocity $v_x, v_y$. Since we use the object velocity in the current frame to represent the object motion and assume constant velocity across frames, the supervision for some frames may contain noise (e.g., inaccurate labels, violation of the rigid-body assumption, etc.). Hence, we model the residual as a Laplacian distribution and design the loss function as:
$$\mathcal{L}_{refine}=\sum_{b\in B}\Big(\frac{\sqrt{2}}{\sigma_b}\|\Delta\hat{b}-\Delta b\|+\log\sigma_b\Big). \quad (5)$$
Here, $\Delta b$, $\Delta\hat{b}$, and $\sigma_b$ denote the ground-truth residual, the estimated residual, and the estimated standard deviation of the residual for each 3D attribute, respectively, where the latter two are network outputs (a short code sketch of this loss is given below).

Table 1: Experimental results of monocular 3D object detection and tracking on the nuScenes test set. The input resolution is 1600×900, using ConvNeXt-Base [40] as the backbone.

(a) 3D detection results.
Method | mAP↑ | mATE↓ | mASE↓ | mAOE↓ | mAVE↓ | mAAE↓ | NDS↑
Ego3RT [41] | 42.5 | 0.55 | 0.26 | 0.43 | 1.01 | 0.14 | 47.3
UVTR [42] | 47.2 | 0.57 | 0.25 | 0.39 | 0.51 | 0.12 | 55.1
BEVFormer [7] | 48.1 | 0.58 | 0.25 | 0.37 | 0.37 | 0.12 | 56.9
PETRv2 [43] | 51.2 | 0.55 | 0.25 | 0.36 | 0.40 | 0.13 | 58.6
BEVDepth [23] | 52.0 | 0.45 | 0.24 | 0.35 | 0.35 | 0.13 | 60.9
BEVStereo [27] | 52.5 | 0.43 | 0.24 | 0.36 | 0.35 | 0.14 | 61.0
SOLOFusion [36] | 54.0 | 0.45 | 0.26 | 0.37 | 0.27 | 0.14 | 61.9
DORT (Ours) | 55.3 | 0.43 | 0.26 | 0.42 | 0.24 | 0.14 | 62.8

(b) 3D tracking results.
Method | AMOTA↑ | AMOTP↓ | MOTAR↑
QD-3DT [34] | 21.7 | 1.550 | 56.3
Time3D [35] | 21.4 | 1.360 | -
PolarDETR [30] | 27.3 | 1.185 | 60.7
MUTR3D [44] | 27.0 | 1.494 | 64.3
SRCN3D [31] | 39.8 | 1.317 | 70.2
QTTrack [32] | 48.0 | 1.100 | 74.7
UVTR [42] | 51.9 | 1.125 | 76.4
DORT (Ours) | 57.6 | 0.951 | 77.1

Multiple Estimation Fusion Given $n$ frames as input, we can obtain $n$ 3D volumes (1 for the local volume from the current view and $n-1$ for the paired cross-view cost volumes) and thus $n$ estimated residuals from the residual estimation module. Then we weigh the importance of each residual by the estimated deviation and fuse them to obtain an ensemble result:
$$\hat{b}_{fused}=\frac{\sum_{i=1}^{n}e^{\sigma_{b_i}}b_i}{\sum_{i=1}^{n}e^{\sigma_{b_i}}}, \quad (6)$$
where $i$ denotes the volume index. For simplicity, we only estimate the velocity measurement for the reference frame, i.e., the fluctuation of object velocity across different frames is not considered explicitly; the multi-frame fusion mechanism is expected to handle this problem adaptively. At the same time, this constraint also provides additional cues when simultaneously estimating object location and motion from more than two frames.

Recurrent Location and Motion Update After each iteration, we obtain the refined bounding boxes with their motion and can thus derive the updated bounding boxes in different frames.
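The following is a small, self-contained sketch of the Laplacian residual loss in Eq. (5); it is not the authors' implementation, and the tensor shapes, the log-sigma parameterization, and the reduction over boxes are illustrative assumptions.

```python
import numpy as np

def laplacian_residual_loss(pred_residual: np.ndarray,
                            gt_residual: np.ndarray,
                            log_sigma: np.ndarray) -> float:
    """Heteroscedastic Laplacian loss of Eq. (5) for a batch of candidate boxes.

    pred_residual, gt_residual: shape (num_boxes, num_attributes), the estimated
        and ground-truth residuals of the 3D attributes (center x, y, z,
        size w, h, l, rotation, velocity vx, vy).
    log_sigma: predicted log standard deviation of the same shape; predicting the
        log keeps sigma positive (an implementation assumption, not stated in the paper).
    """
    sigma = np.exp(log_sigma)
    per_attr = np.sqrt(2.0) / sigma * np.abs(pred_residual - gt_residual) + log_sigma
    # Sum over attributes (the sum over b in Eq. (5)), then average over boxes.
    return float(per_attr.sum(axis=-1).mean())

# Toy usage with random residuals for two candidate boxes and nine attributes.
rng = np.random.default_rng(0)
pred = rng.normal(size=(2, 9))
gt = rng.normal(size=(2, 9))
log_sigma = np.zeros((2, 9))        # sigma = 1 for every attribute
print(laplacian_residual_loss(pred, gt, log_sigma))
```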
With these updated locations and RoIs, we can further update the volume features and proceed to the next round of refinement. Note that any complex or learnable motion model can be integrated into this procedure. Here, to be consistent with the multiple estimation fusion design, we keep the constant-velocity prediction for simplicity to derive the bounding boxes of previous frames.

In the training stage, we follow recurrent methods in other tasks [45, 39] and set the same loss weight for each iteration. The overall loss is represented as:
$$\mathcal{L}=\mathcal{L}_{pv}+\sum_{i=1}^{k}\mathcal{L}^{i}_{refine}, \quad (7)$$
where $\mathcal{L}_{pv}$ is the loss of the perspective-view detector [4], $\mathcal{L}^{i}_{refine}$ is the refinement loss in each iteration, and $k=3$ is the number of iterations. In the inference stage, we first use the perspective-view detector (i.e., PGD [4]) to generate the initial 3D boxes and their motion and then progressively refine them. In each iteration, we first construct the volume features as discussed in Sec. 4.3 and feed them into the refinement module to estimate the 3D box and motion residuals for each paired frame input. Then we utilize the multiple estimation fusion module of Sec. 4.3 to fuse the estimated results and obtain the refined 3D boxes and motion as the input to the next stage.

4.4 Monocular 4D Object Detection

So far, we have introduced our recurrent framework for 3D detection from monocular videos. Based on progressive refinement, our model estimates tightly coupled object location and motion results and can thus easily associate the object detection results across frames, leading to joint 3D detection and tracking. Specifically, we follow [28, 46, 29, 47] and associate the detection results by warping the current detections to the past frames with the object motion. Based on the ego-motion, we first convert the predicted object location to the past frame coordinates and then warp it with the estimated object velocity. We then follow the popular distance-based trackers [28, 29] and associate the objects by closest-distance matching. We provide more details of the tracking pipeline in the Appendix.

Table 2: Experimental results on the nuScenes validation set. The input resolution is 704×256 using ResNet-50 as the backbone. * denotes the re-implementation based on the provided code.
Methods | # frames | mAP↑ | mATE↓ | mASE↓ | mAOE↓ | mAVE↓ | mAAE↓ | NDS↑
PGD* [4] | 1 | 28.8 | 0.75 | 0.27 | 0.52 | 1.13 | 0.18 | 37.0
BEVDet [9] | 1 | 29.8 | 0.73 | 0.28 | 0.59 | 0.86 | 0.24 | 37.9
PETR [25] | 1 | 31.3 | 0.77 | 0.28 | 0.56 | 0.92 | 0.23 | 38.1
DETR3D [24] | 1 | 34.9 | 0.72 | 0.27 | 0.38 | 0.84 | 0.20 | 43.4
BEVDet4D [6] | 2 | 32.2 | 0.70 | 0.28 | 0.50 | 0.35 | 0.21 | 45.7
BEVDepth [23] | 2 | 35.1 | 0.64 | 0.27 | 0.48 | 0.43 | 0.20 | 47.5
DORT (Ours) | 2 | 37.9 | 0.62 | 0.27 | 0.45 | 0.31 | 0.20 | 50.4
BEVDepth* [6] | 8 | 39.8 | 0.57 | 0.27 | 0.49 | 0.27 | 0.18 | 52.3
DORT (Ours) | 8 | 41.8 | 0.57 | 0.26 | 0.43 | 0.25 | 0.19 | 53.4
SOLOFusion [36] | 16 | 42.7 | 0.57 | 0.27 | 0.41 | 0.25 | 0.18 | 53.4
DORT (Ours) | 16 | 43.6 | 0.56 | 0.26 | 0.41 | 0.24 | 0.18 | 54.0

5 Experiments

5.1 Experimental Setup

In this section, we describe the dataset, the evaluation metrics, and the implementation details.

Dataset nuScenes [48] is a large-scale driving dataset, which contains 1,000 video sequences. The official protocol splits the video sequences into 700 for training, 150 for validation, and 150 for testing. Each sequence is annotated with the objects' 3D bounding boxes, velocities, and tracking IDs.

Network Details As discussed in Sec. 4.3, the recurrent module requires a proposal detector to generate candidate foreground regions as the first-stage input. We adopt the popular monocular 3D detector PGD [4] due to its high 2D object detection recall.
Following [9, 6, 23], we adopt ResNet-50 [37] with FPN as the 2D feature extractor and mainly conduct experiments with this setting. The 2D feature extractors in PGD and the recurrent module are shared to save computation time. The grid size of the 3D volume is set to 0.8 m, with a range of [-5 m, 5 m] along the X and Z (depth) axes and [-4 m, 2 m] along the Y (height) axis. For the 2D to 3D feature transformation, we follow [23] and adopt depth-distribution-guided 2D to 3D feature lifting. For the test set submission, we follow [23, 27] and adopt ConvNeXt-Base [40] as the image backbone. The image backbone is initialized with ImageNet pre-trained weights, and no other external data is used. We provide more details about the network architecture of the recurrent module and the training configuration in the supplementary materials.

Training Configurations The model is optimized with the AdamW optimizer and weight decay $10^{-2}$. We first follow [4] to train the proposal detector and then refine the recurrent module for 24 epochs, where the initial learning rate is set to $2\times10^{-4}$ and decreases to $2\times10^{-5}$ and $2\times10^{-6}$ at the 18th and 22nd epochs. Following [9, 23], we use the class-balanced sampling strategy (CBGS) to alleviate the class imbalance problem. We adopt the commonly used 2D data augmentations that randomly flip the image, resize the image within the range [0.36, 0.55], and crop the image to a resolution of 704×256. For the input video sequences, we follow [6, 23] and sample the preceding keyframes to obtain the past video sequences. For the test set submission, we enlarge the input resolution to 1600×640 and reduce the volume grid size to 0.4 m.

5.2 Main Results

In Table 1, we first compare our framework with existing state-of-the-art methods on the nuScenes test benchmarks. We draw the following observations: (i) Benefiting from dynamic objects modeling, our method displays a significant improvement in both object detection (mAP) and motion estimation (mAVE), being 1.3% (absolute) and 11.1% (relative) better than the previous best method, respectively. These localization and motion estimation improvements also contribute to state-of-the-art results in terms of the nuScenes detection score (NDS).

Figure 5: Left and Middle: mAP and mAVE at each recurrent iteration step (1 past frame is used). Right: Experimental results of different motion modeling strategies.
Setting | mAP | NDS | mAVE
Assumed static | 35.0 | 47.1 | 0.37
GT motion | 39.3 | - | -
Pred motion | 37.9 | 52.1 | 0.31

(ii) With strong localization and motion estimation results, our tracking module can better associate the detected objects across timestamps, resulting in superior performance over all other trackers on the different metrics. Specifically, we improve over the second-best tracker [42], another distance-based tracker, with a clear relative improvement on AMOTA and a 7.5% relative improvement on AMOTP. Compared with the joint detection and tracking methods QD-3DT [34] and Time3D [35], the performance gain of our method demonstrates the effectiveness of our dynamic objects modeling framework in jointly modeling object motion and location. (iii) In Table 2, we also report our method on the nuScenes validation set under different settings.
For the detection performance, we can draw the same observation as in the test set that ourmethod can outperform previous temporal-based methods [6, 23, 36] in terms of mAP and mA VE.Note that our method is also compatible with the components designed in the current BEV-basedframeworks, such as the training techniques in BEVDepth [23] and the depth estimation modulein SOLOFusion [36]. Furthermore, the local 3D volume is more friendly to practical applications,which can handle objects with arbitrary depth in the image.5.3 Ablation StudyAblation Study of Object Motion We further validate the influence of different dynamic objectmodeling strategies on the detection performance The first experiment compares the assumed staticcase with that of using the ground truth object motion. As shown in Figure 5, the model with groundtruth object motion outperforms the assumed static with 4.3% mAP, demonstrating the necessity ofobject motion for obtaining accurate temporal correspondence features. When we replace the groundtruth object motion with an estimated one, it still can bring 2.9% mAP improvements, illustratingthe usefulness of our dynamic objects modeling module.Experiments with Different Iterations In Figure 5, we provide the comparison of modeling objectmotion and assumed static with different recurrent iterations. Benefiting from the BEV featuresmodeling, the two configurations display almost 2% mAP improvements in the first iterations. Inthe later iterations, the improvement in assumed static stops was mainly due to the lack of accuratetemporal features. With more and more accurate temporal features, the model with modeling objectmotion can progressively improve the detection and motion estimation results.6 Conclusion and LimitationsThis work proposes a novel framework to better leverage temporal information for camera-only 3Ddetection by modeling dynamic objects. We first design an object-wise local volume to save com-putation time and maintain an object-wise representation for motion and detection modeling. Thenwe propose a recurrent module to tackle the challenging motion and location modeling problem.Specifically, we progressively update the motion and location results from the concurrently updated3D volume features. As the object motion and location results are tightly coupled in the recurrentstage, we also demonstrate the framework can naturally achieve 3D tracking.Although DORT can better handle dynamic objects, we follow previous methods and simplify themotion with a constant velocity assumption. To handle various kinds of scenarios, this assumptioncould be relaxed with modeling acceleration or explicit trajectory prediction. Additionally, similarto current object-wise methods [24, 25, 49] ( i.e.DETR-based, Two-stage-based), the computationis linearly correlated with the number of instances. To overcome this limitation, point-wise motionmodeling and feature extraction with object-wise grouping will be considered in future work.Acknowledgement This project is supported by Shanghai Artificial Intelligence Laboratory.8References[1] D. Park, R. Ambrus, V . Guizilini, J. Li, and A. Gaidon. Is pseudo-lidar needed for monocular3d object detection? In ICCV , 2021.[2] P. Li, H. Zhao, P. Liu, and F. Cao. Rtm3d: Real-time monocular 3d detection from objectkeypoints for autonomous driving. In ECCV , 2020.[3] Y . Zhang, J. Lu, and J. Zhou. Objects are different: Flexible monocular 3d object detection. InCVPR , 2021.[4] T. Wang, X. Zhu, J. Pang, and D. Lin. 
Probabilistic and geometric depth: Detecting objects inperspective. In CoRL , 2021.[5] T. Wang, J. Pang, and D. Lin. Monocular 3d object detection with depth from motion. InECCV , 2022.[6] J. Huang and G. Huang. Bevdet4d: Exploit temporal cues in multi-camera 3d object detection.arXiv preprint arXiv:2203.17054 , 2022.[7] Z. Li, W. Wang, H. Li, E. Xie, C. Sima, T. Lu, Y . Qiao, and J. Dai. Bevformer: Learningbird’s-eye-view representation from multi-camera images via spatiotemporal transformers. InECCV , 2022.[8] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision . Cambridge Uni-versity Press, New York, NY , USA, 2 edition, 2003. ISBN 0521540518.[9] J. Huang, G. Huang, Z. Zhu, Y . Ye, and D. Du. Bevdet: High-performance multi-camera 3dobject detection in bird-eye-view. arXiv preprint arXiv:2112.11790 , 2021.[10] X. Zhou, D. Wang, and P. Kr ̈ahenb ̈uhl. Objects as points. In arXiv preprint arXiv:1904.07850 ,2019.[11] X. Ma, Y . Zhang, D. Xu, D. Zhou, S. Yi, H. Li, and W. Ouyang. Delving into localizationerrors for monocular 3d object detection. In CVPR , 2021.[12] Y . Zhou, Y . He, H. Zhu, C. Wang, H. Li, and Q. Jiang. Monocular 3d object detection: Anextrinsic parameter free approach. In CVPR , 2021.[13] G. Brazil and X. Liu. M3d-rpn: Monocular 3d region proposal network for object detection.InICCV , 2019.[14] G. Brazil, G. Pons-Moll, X. Liu, and B. Schiele. Kinematic 3d object detection in monocularvideo. In ECCV , 2020.[15] T. Wang, X. Zhu, J. Pang, and D. Lin. FCOS3D: Fully convolutional one-stage monocular 3dobject detection. In ICCVW , 2021.[16] Z. Liu, D. Zhou, F. Lu, J. Fang, and L. Zhang. Autoshape: Real-time shape-aware monocular3d object detection. In ICCV , 2021.[17] Q. Lian, P. Li, and X. Chen. Monojsg: Joint semantic and geometric cost volume for monocular3d object detection. In CVPR , 2022.[18] T. Roddick, A. Kendall, and R. Cipolla. Orthographic feature transform for monocular 3dobject detection. arXiv preprint arXiv:1811.08188 , 2018.[19] Y . Wang, W.-L. Chao, D. Garg, B. Hariharan, M. Campbell, and K. Q. Weinberger. Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomousdriving. In CVPR , 2019.9[20] Y . You, Y . Wang, W.-L. Chao, D. Garg, G. Pleiss, B. Hariharan, M. Campbell, and K. Q.Weinberger. Pseudo-lidar++: Accurate depth for 3d object detection in autonomous driving.arXiv preprint arXiv:1906.06310 , 2019.[21] Y . Wang, W.-L. Chao, D. Garg, B. Hariharan, M. Campbell, and K. Q. Weinberger. Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomousdriving. In CVPR , 2019.[22] J. Philion and S. Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs byimplicitly unprojecting to 3d. In ECCV , 2020.[23] Y . Li, Z. Ge, G. Yu, J. Yang, Z. Wang, Y . Shi, J. Sun, and Z. Li. Bevdepth: Acquisition ofreliable depth for multi-view 3d object detection. arXiv preprint arXiv:2206.10092 , 2022.[24] Y . Wang, V . Guizilini, T. Zhang, Y . Wang, H. Zhao, , and J. M. Solomon. Detr3d: 3d objectdetection from multi-view images via 3d-to-2d queries. In CoRL , 2021.[25] Y . Liu, T. Wang, X. Zhang, and J. Sun. Petr: Position embedding transformation for multi-view3d object detection. In ECCV , 2022.[26] P. Li, J. Shi, and S. Shen. Joint spatial-temporal optimization for stereo 3d object tracking. InCVPR , 2020.[27] Y . Li, H. Bao, Z. Ge, J. Yang, J. Sun, and Z. Li. Bevstereo: Enhancing depth esti-mation in multi-view 3d object detection with dynamic temporal stereo. 
arXiv preprintarXiv:2209.10248 , 2022.[28] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft. Simple online and realtime tracking. InICIP , 2016.[29] T. Yin, X. Zhou, and P. Kr ̈ahenb ̈uhl. Center-based 3d object detection and tracking. CVPR ,2021.[30] S. Chen, , X. Wang, T. Cheng, Q. Zhang, C. Huang, and W. Liu. Polar parametrization forvision-based surround-view 3d detection. arXiv:2206.10965 , 2022.[31] Y . Shi, J. Shen, Y . Sun, Y . Wang, J. Li, S. Sun, K. Jiang, and D. Yang. Srcn3d:Sparse r-cnn 3d surround-view camera object detection and tracking for autonomous driving.arXiv:2206.14451 , 2022.[32] J. Yang, E. Yu, Z. Li, X. Li, and W. Tao. Quality matters: Embracing quality clues for robust3d multi-object tracking. arXiv:2208.10976 , 2022.[33] T. Fischer, Y .-H. Yang, S. Kumar, M. Sun, and F. Yu. CC-3DT: Panoramic 3d object trackingvia cross-camera fusion. In CORL , 2022.[34] H.-N. Hu, Y .-H. Yang, T. Fischer, T. Darrell, F. Yu, and M. Sun. Monocular quasi-dense 3dobject tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence , 2022.[35] P. Li and J. Jin. Time3d: End-to-end joint monocular 3d object detection and tracking forautonomous driving. In CVPR , June 2022.[36] J. Park, C. Xu, S. Yang, K. Keutzer, K. M. Kitani, M. Tomizuka, and W. Zhan. Time will tell:New outlooks and a baseline for temporal multi-view 3d object detection. In The EleventhInternational Conference on Learning Representations , 2023.[37] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR ,2016.[38] K. He, G. Gkioxari, P. Doll ́ar, and R. Girshick. Mask r-cnn. In ICCV , 2017.10[39] Z. Teed and J. Deng. Raft-3d: Scene flow using rigid-motion embeddings. In CVPR , 2021.[40] Z. Liu, H. Mao, C.-Y . Wu, C. Feichtenhofer, T. Darrell, and S. Xie. A convnet for the 2020s.CVPR , 2022.[41] J. Lu, Z. Zhou, X. Zhu, H. Xu, and L. Zhang. Learning ego 3d representation as ray tracing.InECCV , 2022.[42] Y . Li, Y . Chen, X. Qi, Z. Li, J. Sun, and J. Jia. Unifying voxel-based representation withtransformer for 3d object detection. In NeurIPS , 2022.[43] Y . Liu, J. Yan, F. Jia, S. Li, A. Gao, T. Wang, X. Zhang, and J. Sun. Petrv2: A unifiedframework for 3d perception from multi-camera images. arXiv:2206.-1256 , 2022.[44] T. Zhang, X. Chen, Y . Wang, Y . Wang, and H. Zhao. Mutr3d: A multi-camera tracking frame-work via 3d-to-2d queries. arXiv preprint arXiv:2205.00613 , 2022.[45] Z. Teed and J. Deng. Raft: Recurrent all-pairs field transforms for optical flow. In ECCV ,2020.[46] X. Zhou, V . Koltun, and P. Kr ̈ahenb ̈uhl. Tracking objects as points. In ECCV , 2020.[47] Z. Pang, Z. Li, and N. Wang. Simpletrack: Understanding and rethinking 3d multi-objecttracking. In CVPR , 2021.[48] H. Caesar, V . Bankiti, A. H. Lang, S. V ora, V . E. Liong, Q. Xu, A. Krishnan, Y . Pan, G. Baldan,and O. Beijbom. nuscenes: A multimodal dataset for autonomous driving. In CVPR , 2020.[49] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection withregion proposal networks. In NeurIPS , 2015.[50] Y . Yan, Y . Mao, and B. Li. Second: Sparsely embedded convolutional detection. Sensors , 18(10):3337, 2018.[51] T. Wang, Q. Lian, C. Zhu, X. Zhu, and W. Zhang. Mv-fcos3d++: Multi-view camera-only4d object detection with pretrained monocular backbones. arXiv preprint arXiv:2207.12716 ,2022.[52] J. Pang, L. Qiu, X. Li, H. Chen, Q. Li, T. Darrell, and F. Yu. Quasi-dense similarity learningfor multiple object tracking. In CVPR , 2021.[53] N. Marinello, M. Proesmans, and L. 
Van Gool. Triplettrack: 3d object tracking using tripletembeddings and lstm. In CVPRW , 2022.[54] K. Wang and S. Shen. MVDepthNet: real-time multiview depth estimation neural network. In3DV, 2018.[55] Y . Yao, Z. Luo, S. Li, T. Fang, and L. Quan. Mvsnet: Depth inference for unstructured multi-view stereo. In ECCV , 2018.[56] T. Taketomi, H. Uchiyama, and S. Ikeda. Visual slam algorithms: a survey from 2010 to2016. IPSJ Transactions on Computer Vision and Applications , 9, 12 2017. doi:10.1186/s41074-017-0027-2.[57] Y . Yao, Z. Luo, S. Li, T. Shen, T. Fang, and L. Quan. Recurrent mvsnet for high-resolutionmulti-view stereo depth inference. In CVPR , 2019.[58] D. Sun, X. Yang, M.-Y . Liu, and J. Kautz. PWC-Net: CNNs for optical flow using pyramid,warping, and cost volume. In CVPR , 2018.[59] J. Sun, L. Chen, Y . Xie, S. Zhang, Q. Jiang, X. Zhou, and H. Bao. Disp r-cnn: Stereo 3d objectdetection via shape prior guided instance disparity estimation. In CVPR , 2020.11[60] H. Wang, S. Sridhar, J. Huang, J. Valentin, S. Song, and L. J. Guibas. Normalized objectcoordinate space for category-level 6d object pose and size estimation. In CVPR , 2019.[61] C. Tang and P. Tan. BA-net: Dense bundle adjustment networks. In ICLR , 2019.[62] P. Li, S. Liu, and S. Shen. Multi-sensor 3d object box refinement for autonomous driving.arXiv preprint arXiv:1909.04942 , 2019.12Supplementary Material for DORT: ModelingDynamic Objects in Recurrent for Multi-Camera 3DObject Detection and TrackingA Evaluation MetricsDetection Metrics We adopt the official evaluation protocol provided by nuScenes benchmark [48].The official protocol evaluates 3D detection performance by the metrics of average translation error(ATE), average scale error (ASE), average orientation error (AOE), average velocity error (A VE),and average attribute error (AAE). Besides, it also measures the mean average precision (mAP) withconsidering different recall thresholds. Instead of using 3D Intersection over Union (IoU) as thecriterion, nuScenes defines the match by 2D center distance don the ground plane with thresholds{0.5,1,2,4}m. The above metrics are finally combined into a nuScenes Detection Score (NDS).Tracking Metrics Regarding the tracking metrics, the nuScenes benchmark mainly measuresthe average multi-object tracking accuracy (AMOTA), average multi-object tracking precision(AMOTP), and tracking recall. In particular, AMOTA and AMOTP are the averages of multi-object tracking accuracy (MOTA) and multi-object tracking precision (MOTP) under different recallthresholds.B Implementation DetailsIn the main paper, we have introduced our overall multi-camera 3D object detection and trackingframework and the details of the proposed components. In this supplemental section, we present thedetails of the other basic modules.B.1 Network ArchitectureOur framework is built based on BEVDet and BEVDepth, and we follow them to design the basicmodules.2D Feature Extraction Given Nmulti-view images I∈ RN×W×H×3in each frame, we use ashared 2D backbone to extract the corresponding features. We adopt the standard ResNet-50 [37] asthe backbone and initialize it with ImageNet pre-trained weights. Then we adopt a modified FeaturePyramid Network (FPN) [50] to extract the multiple-level features and the output 2D features aredownsampled with the ratio of116with channel size 256:Fpv∈ RW16×H16×256.View Transformation Our work is the same as BEVDet and BEVDepth which contains a 2D to3D view transformation module. 
Specifically, we first leverage a depth prediction head to predict thedepth probability for each pixel. Then we lift the 2D features to a 2.5D frustum space via out-productit with the depth probability. The depth probability range is set as [0m,60m]with grid size 0.5m.With the 2.5D frustum features, the 3D features for each local volume are obtained via utilizing thecamera intrinsic to project the 3D grid back to the frustum and bi-linear sample the correspondingfeatures. As mentioned in the main paper, we aggregate the 3D volume features along the heightdimension and obtain the corresponding object-wise BEV features Fobjbev∈ RN×Wobj×Hobj×256,where WobjandHobjare the object features dimension and set as 28 in the main setting.RefineNet Given the object-wise features extracted based on the proposal 3D box and motion,RefineNet takes several convolutional neural networks to extract the object-wise features and esti-mate the bounding box and motion residual. Specifically, we first adopt an average pooling layer toaggregate the 3D features along the height dimension and obtain the BEV features. Then we filtereach object-wise BEV features with 6 basic 2D residual blocks, where each residual block consists13of two 2D convolution layers and a skip connection module as in ResNet. The channel size of theresidual blocks in the first three layers is 256 and decreases to 64 in the last three layers. Thenwe aggregate the features along the spatial dimension via average pooling and take 4 layers MLPnetwork to estimate the bounding box and motion residuals.B.2 The Tracking ModuleIn this section, we provide the details of the tracking module that omit in the paper. Since DORTcan estimate tightly coupled object location and motion, object tracking can be easily achieved vianearest center distances association [28, 29, 47]. Hence, our tracking module is mainly adapted fromthe previous distance-based object tracker [28, 29, 47]. Specifically, the tracking module containsfour parts: Pre-processing, Association, Status Update and Life-cycle Management.Pre-processing Given the detection results, the pre-processing stages mainly focus on filtering falsenegative objects. In our work, we first adopt Non-maximum Suppression to remove the duplicatedbounding boxes with the threshold of 0.1 in terms of 3D IoU. Then we filter out the bounding boxesthat the confidence threshold is lower than 0.25.Association The association stage associates the tracked objects in frame t-1 and the detectionresults in frame t. We don’t use the Kalman filter to predict the location of the trackers from framet-1. Instead, we utilize the predicted velocity to propagate the detection results in frame t backto t-1. Then, we utilize the L2 distances of object centers to compute the similarity between thedetected objects and the tracklets. Finally, the linear greedy matching strategy is adopted to achievemulti-object matching. Status Update After associating the detection results in frame t, we updatethe tracklets from frame t-1 into frame t. For the tracklets in frame t-1 that match with boundingboxes in frame t, we replace their object center location with the corresponding detection resultsto frame t. For the unmatched objects, we utilize the estimated object velocity to update its objectcenter location to frame t. Life-cycle Management The life-cycle management module controlsthe “birth” and “depth” of the tracklets ( i.e.birth, depth). Specifically, for the unmatched boundingboxes, they will be initialized as new tracklets. 
For the unmatched tracklets, we remove them whenthey are consecutive unmatched more than 2 times.Details of depth error calculation in Fig 2. The depth error in Fig2 is calculated as follows. Thedepth error is the l1 distance between the ground truth depth and the depth obtained by assumingobjects are static across frames. Specifically, the object depth that assumes objects are static iscalculated as follows. We first utilize the ego-motion (camera extrinsic) and the ground-truth 2Dlocation of 3D box centers in the past and current frames to obtain the cross-frame correspondence.Then, we utilize Eq2 to obtain the corresponding object depth.C More results on the Waymo datasetIn this section, we further provide the experimental results on the Waymo dataset for reference. Weadopt ResNet-101 as the image backbone and train the model with 1/3 training data. Regardinginitialization methods other than ImageNet pre-trained weights, we present additional results usingthe commonly used FCOS3D++ pre-trained weights on the Waymo dataset.D Ablation StudiesIn this section, we provide the additional ablation studies that are omitted in the main paper. DORTwith Different Proposal Detector We first show that DORT is agnostic with different proposaldetectors ( e.g.PGD [4], BEVDepth [23]). In Table 4, we display the experimental results of DORTwith using PGD and BEVDepth as the proposal detectors. We can observe that the DORT is insen-sitive to the proposal detector and can consistently improve BEVDepth. We Benefiting from the lowcomputation overhead of BEVDepth in the perspective part and the designed local volume, DORTalso can achieve a more lightweight pipeline for dynamic object modeling.14Table 3: Experimental results on the Waymo validation set. ImageNet pre-trained denotes the supervisedclassification pre-training for the network backbone. FCOS3D++ on Waymo denotes the further pre-trainingof the model backbone via FCOS3D++ on the Waymo training dataset.Method Pre-trained mAPL mAPFCOS3D++ [15]ImageNet20.4 28.6DETR3D [24] 26.1 39.0BEVDepth [23] 28.2 39.9MV-FCOS3D++ [51] 28.7 39.9Ours 30.1 42.3MV-FCOS3D++ [51] FCOS3D++on Waymo33.8 46.7Ours 35.0 48.9Table 4: Experimental results on the nuScenes validation set. 1 past frame is adopted in the temporal modeling.∗denotes the BEV FLOPS from the proposal detector.Method mAP NDSFlopsPV BEVBEVDepth 35.1 47.5 120.4 94.5DORT with PGD 37.9 52.1 238.2 40.2DORT with BEVDepth 38.1 52.1 120.4 74.4∗+40.2Tracking with Semantic Embedding or Geometry Distance In this work, DORT achieves 3Dobject tracking via the nearest centerness association. To have a more comprehensive comparisonof the tracking pipeline designed, we further provide the comparison of DORT by using semanticembedding to associate objects. Specifically, we follow previous methods [34] and adopt the widely-used quasi-dense similarity learning [52] to learn the tracking embedding. We extract two kindsof embedding features, one is from the perspective-view (PV) and another is from the bird-eye-view (BEV). In Table 5 and 6, we display the tracking results on the nuScenes tracking set. Wecan observe that DORT with geometry distance association can outperform the embedding-basedmethods by a large margin. Furthermore, it is also much simpler and more efficient and does notneed to maintain an extra object embedding. Besides, the PV embedding is worse than the BEV-based embedding, which may be due to the view change in different cameras.Table 5: Experimental results on the nuScenes validation set. 
1 past frame is adopted in the temporal modeling.Method AMOTA ↑AMOTP ↓MOTAR ↑PV-Embedding 36.8 1.412 44.2BEV-Embedding 40.1 1.356 46.7DORT (Geometry Distance) 42.4 1.264 49.2E Theoretical Analysis of Ignoring Object MotionIn the main paper, we have shown that when ignoring object motion, the temporal correspondencewould derive a biased depth. In this supplementary, we provide the full details of how ignoringobject motion introduces a biased depth. We denote the camera intrinsic as Kand the ego-motionfrom frame t0to frame t1asTegot0→t1:K="f0cu0f cv0 0 1#, Tegot0→t1="1 0 0 xego0 1 0 00 0 1 zego#. (8)Here, fis the camera’s focal length, and (cu, cv)is the camera center coordinates in the image. Forsimplicity, we assume the ego-motion only contains the translation (xego,0, zego)on the horizontalplane. The analysis also can be easily extended to a more complicated case that the motion con-tains rotation. Given the multiple-view images, temporal-based methods can utilize photometric orfeature-metric similarity to find the correspondence of pixel pt0= (ut0, vt0)in the past frame t0and the pixel pt1= (ut1, vt1)in the current frame t1.15Table 6: 3D object tracking results on the nuScenes validation set. We adopt ResNet-50 as the backbone andset the input resolution as 704×256.Method AMOTA ↑AMOTP ↓Recall↑QD-Track3D [34] 24.2 1.518 39.9Time3D [35] 21.4 1.360 N/ATripletTrack [53] 28.5 1.485 N/AMUTR3D [44] 29.4 1.498 42.7QTrack [32] 34.7 1.347 46.2DORT 42.4 1.264 49.2When we ignore the object motion, the depth zt1of pixel pt1can be recovered as:Tegot0→t1·π(pt0, K) =π(pt1, K),zt1ut1+cuf−xego=ut0+cuf(zt1−zego),zt1=zego(ut0−cu)−fxegout0−ut1, (9)where πdenotes the projection from 2D image coordinate to 3D camera coordinate.But as we showed in the main paper, the moving objects occupy large ratios in the driving scenarios.For example, when the object contains the translation (xobj,0, zobj)in the horizontal plane, theobject’s motion can be represented asTobji→j=1 0 0 xobj0 1 0 00 0 1 zobj. (10)With the object motion, the depth zt1of pixel pt1is recovered as:Tobjt0→t1Tegot0→t1·π(pt0, K) =π(pt1, K),zt1ut1+cuf−xego−xobj=ut0+cuf(zt1−zego−zobj)ˆzt1=(zego+zobj)(ut0−cu)−f(xego+xobj)ut0−ut1. (11)From Eq (9) and Eq (11), we can obtain the depth gap for the temporal correspondence with andwithout considering object motion:∆z=zobj(ut0−cu)−fxegout0−ut1. (12)In Figure 6, we also provide a toy example to illustrate that one temporal correspondence can comefrom multiple combinations of object depth and motion ( i.e.inaccurate depth with zero motion andaccurate depth and GT motion). This means that if we inaccurately assume that objects are staticacross frames, the temporal correspondence would derive a misleading depth.E.1 Ill-posed Problem of Simultaneously Estimating 3D Location and MotionAlthough object motion plays a critical role in temporal correspondence, however, it is non-trivialto estimate it from the monocular images. As shown in Figure 6, the one correspondence can comefrom infinite combinations of location and motion (the location can be the point in the ray− − − − →Ot0Pt0and− − − − →Ot1Pt1, and the motion can be the line that connects the points.) Hence, it is an ill-posedproblem that simultaneously estimates the 3D location and motion from the monocular images. 
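The following is a small numeric illustration (not from the paper, with hypothetical intrinsics and motions) of this ill-posedness: a single pixel correspondence is consistent with many different combinations of assumed object motion and implied depth, cf. the derivation above.

```python
f, cu = 1000.0, 800.0              # hypothetical camera intrinsics
x_ego, z_ego = 0.0, -5.0           # known ego translation from t0 to t1 (meters)
u_t0, u_t1 = 880.0, 910.0          # one observed temporal pixel correspondence

def implied_depth(x_obj: float, z_obj: float) -> float:
    """Depth at t1 implied by the correspondence for an assumed object motion."""
    return ((z_ego + z_obj) * (u_t0 - cu) - f * (x_ego + x_obj)) / (u_t0 - u_t1)

# Several different assumed motions are all consistent with the same correspondence
# yet yield very different depths: location and motion cannot be disentangled from
# a single correspondence alone.
for x_obj, z_obj in [(0.0, 0.0), (0.5, -1.0), (0.2, 1.0)]:
    print(f"assumed motion ({x_obj:+.1f}, {z_obj:+.1f}) m  ->  "
          f"depth {implied_depth(x_obj, z_obj):.2f} m")
```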
Toalleviate this issue, we leverage the rigid-body assumption for the objects in the driving scenariosand elaborate more temporal frames with constant velocity regularization to further constrain themotion.16P0 (u0, v0)P1 (u1, v1)Origin O1Candidate DepthCurrent FrameOrigin O0Static AssumptionInaccurate MotionGT MotionPast FrameMotionFigure 6: Different object motions can make the same temporal correspondence derive different depth.F More Related WorkMulti-View 3D Perception Leveraging multi-view images to recover 3D information is a fun-damental topic, such as structure from motion [54], multi-view stereo [55], simultaneous lo-calization and mapping [56], etc. One line of methods develop neural-network-based cost vol-umes [55, 57, 58, 45, 39, 59] to construct cross-frame visual cues for 3D perception. Another lineof methods [60, 61, 62] constructs geometry constraints and leverages optimization techniques toobtain a tight-coupled 3D structure. However, most of the work assumes the scene and objects arestatic, making them fail to handle the moving objects in driving scenarios.17 |
JkFeyEC6VXV | Finetuning Offline World Models in the Real WorldYunhai Feng1∗Nicklas Hansen1∗Ziyan Xiong12∗Chandramouli Rajagopalan1Xiaolong Wang11University of California San Diego2Tsinghua UniversityAbstract: Reinforcement Learning (RL) is notoriously data-inefficient, whichmakes training on a real robot difficult. While model-based RL algorithms ( worldmodels ) improve data-efficiency to some extent, they still require hours or days ofinteraction to learn skills. Recently, offline RL has been proposed as a frameworkfor training RL policies on pre-existing datasets without any online interaction.However, constraining an algorithm to a fixed dataset induces a state-action dis-tribution shift between training and inference, and limits its applicability to newtasks. In this work, we seek to get the best of both worlds: we consider the prob-lem of pretraining a world model with offline data collected on a real robot, andthen finetuning the model on online data collected by planning with the learnedmodel. To mitigate extrapolation errors during online interaction, we propose toregularize the planner at test-time by balancing estimated returns and (epistemic)model uncertainty. We evaluate our method on a variety of visuo-motor controltasks in simulation and on a real robot, and find that our method enables few-shotfinetuning to seen and unseen tasks even when offline data is limited. Videos areavailable at https://yunhaifeng.com/FOWM .Keywords: Model-Based Reinforcement Learning, Real-World Robotics1 IntroductionReinforcement Learning (RL) has the potential to train physical robots to perform complex tasksautonomously by interacting with the environment and receiving supervisory feedback in the formof rewards. However, RL algorithms are notoriously data-inefficient and require large amounts(often millions or even billions) of online environment interactions to learn skills due to limitedsupervision [1, 2, 3]. This makes training on a real robot difficult. To circumvent the issue, priorwork commonly rely on custom-built simulators [4, 5, 1] or human teleoperation [6, 7] for behaviorlearning, both of which are difficult to scale due to the enormous cost and engineering involved.Additionally, these solutions each introduce additional technical challenges such as the simulation-to-real gap [8, 1, 9, 10] and the inability to improve over human operators [11, 12], respectively.Recently, offline RL has been proposed as a framework for training RL policies from pre-existinginteraction datasets without the need for online data collection [13, 14, 15, 16, 17, 18, 19, 20].Leveraging existing datasets alleviates the problem of data-inefficiency without suffering from theaforementioned limitations. However, any pre-existing dataset will invariably not cover the entirestate-action space, which leads to (potentially severe) extrapolation errors, and consequently forcesalgorithms to learn overly conservative policies [21, 22, 23, 24]. We argue that extrapolation errorsare less of an issue in an online RL setting, since the ability to collect new data provides an in-trinsic self-calibration mechanism: by executing overestimated actions and receiving (comparably)negative feedback, value estimations can be adjusted accordingly.In this work, we seek to get the best of both worlds. We consider the problem of pretraining an RLpolicy on pre-existing interaction data, and subsequently finetuning said policy on a limited amount∗Equal contribution. 
Correspondence to Yunhai Feng <yuf020@ucsd.edu >.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.onlinebufferofflinebufferbalancedsamplingOnline InteractionModel LearningQahorizona0a1aHPlanninguncertaintys, raFigure 1. Approach. We propose a framework for offline pretraining and online finetuning of worldmodels directly in the real world, without reliance on simulators or synthetic data. Our methoditeratively collects new data by planning with the learned model , and finetunes the model on acombination of pre-existing data and newly collected data. Our method can be finetuned few-shoton unseen task variations in ≤20trials by leveraging novel test-time regularization during planning.of data collected by online interaction. Because we consider only a very limited amount of onlineinteractions ( ≤20trials ), we shift our attention to model-based RL (MBRL) [25] and the TD-MPC[26] algorithm in particular, due to its data-efficient learning. However, as our experiments willreveal, MBRL alone does not suffice in an offline-to-online finetuning setting when data is scarce.In particular, we find that planning suffers from extrapolation errors when queried on unseen state-action pairs, and consequently fails to converge. While this issue is reminiscent of overestimationerrors in model-free algorithms, MBRL algorithms that leverage planning have an intriguing prop-erty: their (planning) policy is non-parametric and can optimize arbitrary objectives without anygradient updates. Motivated by this key insight, we propose a framework for offline-to-online fine-tuning of MBRL agents ( world models ) that mitigates extrapolation errors in planning via noveltest-time behavior regularization based on (epistemic) model uncertainty. Notably, this regularizercan be applied to both purely offline world models andduring finetuning.We evaluate our method on a variety of continuous control tasks in simulation, as well as visuo-motor control tasks on a real xArm robot. We find that our method outperforms state-of-the-artmethods for offline and online RL across most tasks, and enables few-shot finetuning to unseentasks and task variations even when offline data is limited. For example, our method improves thesuccess rate of an offline world model from22%to67%in just 20 trials for a real-world visualpick task with unseen distractors. We are, to the best of our knowledge, the first work to investigateoffline-to-online finetuning with MBRL on real robots, and hope that our encouraging few-shotresults will inspire further research in this direction.2 Preliminaries: Reinforcement Learning and the TD-MPC AlgorithmWe start by introducing our problem setting and MBRL algorithm of choice, TD-MPC, which to-gether form the basis for the technical discussion of our approach in Section 3.Reinforcement Learning We consider the problem of learning a visuo-motor control policy byinteraction, formalized by the standard RL framework for infinite-horizon Partially ObservableMarkov Decision Processes (POMDPs) [27]. Concretely, we aim to learn a policy πθ:S×A 7→ R+that outputs a conditional probability distribution over actions a∈ A conditioned on a state s∈ Sthat maximizes the expected return (cumulative reward) R=Eπθ[P∞t=0γtrt], where tis a discretetime step, rtis the reward received by executing action atin state stat time t, and γ∈[0,1)is adiscount factor. We leverage an MBRL algorithm in practice, which decomposes πθinto multiplelearnable components (a world model ), and uses the learned model for planning. 
For brevity, we use subscript $\theta$ to denote learnable parameters throughout this work. In a POMDP, environment interactions obey an (unknown) transition function $\mathcal{T}:\mathcal{S}\times\mathcal{A}\mapsto\mathcal{S}$, where states $\mathbf{s}$ themselves are assumed unobservable. However, we can define approximate environment states $\mathbf{s}\doteq(\mathbf{o}_1,\mathbf{o}_2,\dots,\mathbf{o}_n)$ from sensory observations $\mathbf{o}_{1:n}$ obtained from, e.g., cameras or robot proprioceptive information.

Figure 2. Architecture. Our world model encodes an observation $\mathbf{s}_0$ into its latent representation $\mathbf{z}_0$, and then recurrently predicts future latent states $\mathbf{z}_{1:h}$ as well as optimal actions $\hat{\mathbf{a}}_{0:h}$, rewards $\hat{r}_{0:h}$, and values $\hat{q}_{0:h}$. Future states $\mathbf{s}_{1:h}$ provide supervision for learning but are not required for planning.

TD-MPC Our work extends TD-MPC [26], an MBRL algorithm that plans using Model Predictive Control (MPC) with a world model and terminal value function that are jointly learned via Temporal Difference (TD) learning. TD-MPC has two intriguing properties that make it particularly relevant to our setting: (i) it uses planning, which allows us to regularize action selection at test-time, and (ii) it is lightweight relative to other MBRL algorithms, which allows us to run it in real time. We summarize the architecture in Figure 2. Concretely, TD-MPC learns five components: (1) a representation $\mathbf{z}=h_\theta(\mathbf{s})$ that maps high-dimensional inputs $\mathbf{s}$ to a compact latent representation $\mathbf{z}$, (2) a latent dynamics model $\mathbf{z}'=d_\theta(\mathbf{z},\mathbf{a})$ that predicts the latent representation at the next timestep, and three prediction heads: (3) a reward predictor $\hat{r}=R_\theta(\mathbf{z},\mathbf{a})$ that predicts the instantaneous reward, (4) a terminal value function $\hat{q}=Q_\theta(\mathbf{z},\mathbf{a})$, and (5) a latent policy guide $\hat{\mathbf{a}}=\pi_\theta(\mathbf{z})$ that is used as a behavioral prior for planning. We use $\mathbf{z}',\mathbf{s}'$ to denote the successor (latent) states of $\mathbf{z},\mathbf{s}$ in a subsequence, and use $\hat{\mathbf{a}},\hat{r},\hat{q}$ to differentiate predictions from observed (ground-truth) quantities $\mathbf{a},r,q$. In its original formulation, TD-MPC is an online off-policy RL algorithm that maintains a replay buffer $\mathcal{B}$ of interactions, and jointly optimizes all components by minimizing the objective
$$\mathcal{L}(\theta)=\mathbb{E}_{(\mathbf{s},\mathbf{a},r,\mathbf{s}')_{0:h}\sim\mathcal{B}}\sum_{t=0}^{h}\Big(\underbrace{\|\mathbf{z}'_t-\mathrm{sg}(h_\phi(\mathbf{s}'_t))\|_2^2}_{\text{Latent dynamics}}+\underbrace{\|\hat{r}_t-r_t\|_2^2}_{\text{Reward}}+\underbrace{\|\hat{q}_t-q_t\|_2^2}_{\text{Value}}-\underbrace{Q_\theta(\mathbf{z}_t,\hat{\mathbf{a}}_t)}_{\text{Action}}\Big) \quad (1)$$
where $(\mathbf{s},\mathbf{a},r,\mathbf{s}')_{0:h}$ is a subsequence of length $h$ sampled from the replay buffer, $\phi$ is an exponentially moving average of $\theta$, $\mathrm{sg}$ is the stop-grad operator, $q_t=r_t+\gamma Q_\phi(\mathbf{z}'_t,\pi_\theta(\mathbf{z}'_t))$ is the TD-target, and gradients of the last term (action) are taken only w.r.t. the policy parameters. Constant coefficients balancing the losses are omitted. We refer to Hansen et al. [26] for additional implementation details, and instead focus our discussion on details pertaining to our contributions.

During inference, TD-MPC plans actions using a sampling-based planner (MPPI) [28] that iteratively fits a time-dependent multivariate Gaussian with diagonal covariance over the space of action sequences such that the return (as evaluated by simulating actions with the learned model) is maximized. For a (latent) state $\mathbf{z}_0=h_\theta(\mathbf{s}_0)$ and a sampled action sequence $\mathbf{a}_{0:h}$, the estimated return $\hat{R}$ is given by
$$\hat{R}=\gamma^{h}\underbrace{Q_\theta(\mathbf{z}_h,\mathbf{a}_h)}_{\text{Value}}+\sum_{t=0}^{h-1}\gamma^{t}\underbrace{R_\theta(\mathbf{z}_t,\mathbf{a}_t)}_{\text{Reward}},\qquad \mathbf{z}_{t+1}=\underbrace{d_\theta(\mathbf{z}_t,\mathbf{a}_t)}_{\text{Latent dynamics}},\qquad \mathbf{z}_0=\underbrace{h_\theta(\mathbf{s}_0)}_{\text{Encoder}}. \quad (2)$$
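To make the return estimate of Eq. (2) concrete, here is a minimal sketch (not TD-MPC's actual code) of a latent rollout with toy stand-ins for the dynamics, reward, and value heads; in the real method, z0 would come from the encoder $h_\theta(\mathbf{s}_0)$ and the heads would be learned networks.

```python
import numpy as np

def estimate_return(z0, actions, d, R, Q, gamma=0.99):
    """Eq. (2): discounted rewards along a latent rollout plus a terminal value.

    actions has h+1 entries a_0..a_h; rewards are accumulated for t = 0..h-1
    while rolling the latent dynamics forward, and Q scores the final latent state.
    d, R, Q are hypothetical callables standing in for the learned heads.
    """
    z, total = z0, 0.0
    for t, a in enumerate(actions[:-1]):
        total += (gamma ** t) * R(z, a)
        z = d(z, a)
    total += (gamma ** (len(actions) - 1)) * Q(z, actions[-1])
    return total

# Toy linear "world model", purely for illustration.
d = lambda z, a: 0.9 * z + 0.1 * a              # latent dynamics
R = lambda z, a: float(-np.sum(z ** 2))         # reward head
Q = lambda z, a: 10.0 * float(-np.sum(z ** 2))  # terminal value head
z0 = np.ones(4)
actions = [np.zeros(4) for _ in range(6)]       # horizon h = 5
print(estimate_return(z0, actions, d, R, Q))
```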
To improve the rate of convergence in planning, a fraction of sampled action sequences are generated by the learned policy $\pi_\theta$, effectively inducing a behavioral prior over possible action sequences. While $\pi_\theta$ is implemented as a deterministic policy, Gaussian action noise can be injected for stochasticity. TD-MPC has demonstrated excellent data-efficiency in an online RL setting, but suffers from extrapolation errors when naïvely applied to our problem setting, which we discuss in Section 3.

3 Approach: A Test-Time Regularized World Model and Planner

We propose a framework for offline-to-online finetuning of world models that mitigates extrapolation errors in the model via novel test-time regularization during planning. Our framework is summarized in Figure 1, and consists of two stages: (1) an offline stage where a world model is pretrained on pre-existing offline data, and (2) an online stage where the learned model is subsequently finetuned on a limited amount of online interaction data. While we use TD-MPC [26] as our backbone world model and planner, our approach is broadly applicable to any MBRL algorithm that uses planning. We start by outlining the key source of model extrapolation errors when used for offline RL, then introduce our test-time regularizer, and conclude the section with additional techniques that we empirically find helpful for the few-shot finetuning of world models.

3.1 Extrapolation Errors in World Models Trained by Offline RL

All methods suffer from extrapolation errors when trained on offline data and evaluated on unseen data due to a state-action distribution shift between the two datasets. In this context, value overestimation in model-free Q-learning methods is the most well-understood type of error [14, 16, 17, 19]. However, MBRL algorithms like TD-MPC face unique challenges in an offline setting: state-action distribution shifts are present not only in value estimation, but also in (latent) dynamics and reward prediction when estimating the return of sampled trajectories as in Equation 2. We will first address value overestimation, and then jointly address other types of extrapolation errors in Section 3.2.

Inspired by Implicit Q-learning (IQL; [19]), we choose to mitigate the overestimation issue by applying TD-backups only on in-sample actions. Specifically, consider the value term in Equation 1, which computes a TD-target $q$ by querying $Q_\phi$ on a latent state $\mathbf{z}'_t$ and a potentially out-of-sample action $\pi_\theta(\mathbf{z}'_t)$. To avoid out-of-sample actions in the TD-target, we introduce a state-conditional value estimator $V_\theta$ and reformulate the TD-target as $q_t=r_t+\gamma V_\theta(\mathbf{z}'_t)$. This estimator can be optimized by an asymmetric $l_2$-loss (expectile regression):
$$\mathcal{L}_V(\theta)=\big|\tau-\mathbb{1}\{Q_\phi(\mathbf{z}_t,\mathbf{a}_t)-V_\theta(\mathbf{z}_t)<0\}\big|\,\big(Q_\phi(\mathbf{z}_t,\mathbf{a}_t)-V_\theta(\mathbf{z}_t)\big)^2, \quad (3)$$
where $\tau\in(0,1)$ is a constant expectile. Intuitively, we approximate the maximization $V_\theta(\mathbf{z}_t)=\max_{\mathbf{a}_t}Q_\phi(\mathbf{z}_t,\mathbf{a}_t)$ for $\tau\to1$, and are increasingly conservative for smaller $\tau$. Note that $\mathbf{a}_t$ is the action from the dataset (replay buffer), and thus no out-of-sample actions are needed.
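Below is a minimal, self-contained sketch of the expectile (asymmetric l2) loss of Eq. (3); it is not the authors' implementation, and the batch shape and the value tau = 0.7 are illustrative assumptions.

```python
import numpy as np

def expectile_value_loss(q_target: np.ndarray, v_pred: np.ndarray, tau: float = 0.7) -> float:
    """Asymmetric l2 (expectile) regression loss of Eq. (3).

    q_target: Q_phi(z_t, a_t) evaluated on in-sample actions from the buffer.
    v_pred:   V_theta(z_t), the state-conditional value estimate.
    tau:      expectile in (0, 1); as tau -> 1 the loss approximates a max over actions.
    """
    diff = q_target - v_pred
    weight = np.where(diff < 0.0, 1.0 - tau, tau)   # |tau - 1{diff < 0}|
    return float(np.mean(weight * diff ** 2))

# Toy usage on a random mini-batch of Q-targets and value predictions.
rng = np.random.default_rng(0)
q = rng.normal(size=16)
v = rng.normal(size=16)
print(expectile_value_loss(q, v, tau=0.7))
```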
For thesame purpose of avoiding out-of-sample actions, we replace the action term for policy learningin Equation 1 by an advantage weighted regression (AWR) [29, 30, 21] loss exp(β(Qφ(zt,at)−Vθ(zt))) log πθ(at|zt), where β≥0is a temperature parameter.3.2 Uncertainty Estimation as Test-Time Behavior RegularizationWhile only applying TD-backups on in-sample actions is effective at mitigating value overestimationduring offline training , the world model (including dynamics, reward predictor, and value function)may still be queried on unseen state-action pairs during planning ,i.e., when estimating returns usingEquation 2. This can result in severe extrapolation errors despite a cautiously learned value function.To address this additional source of errors, we propose a test-time behavior regularization techniquethat balances estimated returns and (epistemic) model uncertainty when evaluating Equation 2 forsampled action sequences. By regularizing estimated returns, we retain the expressiveness of plan-ning with a world model despite imperfect state-action coverage. Avoiding actions for which theoutcome is highly uncertain is strongly desired in an offline RL setting where no additional data canbe collected, but conservative policies likewise limit exploration in an online RL setting. Unlikeprior offline RL methods that predominantly learn an explicitly and consistently conservative valuefunction and/or policy, regularizing planning based on model uncertainty has an intriguing property:as planning continues to cautiously explore and the model is finetuned on new data, the epistemicuncertainty naturally decreases. This makes test-time regularization based on model uncertaintyhighly suitable for both few-shot finetuning and continued finetuning over many trajectories.To obtain a good proxy for epistemic model uncertainty [31] with minimal computational over-head and architectural changes, we propose to utilize a small ensemble of value functionsQ(1)θ, Q(2)θ, . . . , Q(N)θsimilar to Chen et al. [32]. We optimize the Q-functions with TD-targetsdiscussed in Section 3.1, and use a random subset of (target) Q-networks to estimate the Q-valuesin Equation 3. Although ensembling all components of the world model may yield a better estimateof epistemic uncertainty, our choice of ensembling is lightweight enough to run on a real robot andis empirically a sufficiently good proxy for uncertainty. We argue that with a Q-ensemble, we can4Simulation tasks Real-world tasks Robot setup Transfer tasks Figure 3. Tasks. We consider diverse tasks in simulation and on a real robot. Our real-world tasksuse raw pixels as input. Our method achieves high success rates in offline-to-online transfer to bothseen and unseen tasks in just 20 online trials on a real robot.Table 1. Real-world offline-to-online results.Success rate (%) as a function of online finetun-ing trials. Mean of 18 trials and 2 seeds.online offline-to-onlineTrials TD-MPC TD-MPC OursReach0 0 ±0 50±18 72±610 0 ±0 67±12 94±620 0±0 78±12 89±0Pick0 0 ±0 0±0 0±010 0 ±0 28±6 33±020 0±0 28±6 50±6Kitchen0 0 ±0 0±0 11±1110 0 ±0 33±11 56±1120 0±0 61±17 78±0Table 2. Finetuning to unseen real-worldtasks. Success rate (%) of our method for eachtask variation shown in Figure 3. We include4 successful transfers and 1 failure. See Ap-pendix B for task descriptions. 
3.2 Uncertainty Estimation as Test-Time Behavior Regularization

While applying TD-backups only on in-sample actions is effective at mitigating value overestimation during offline training, the world model (including dynamics, reward predictor, and value function) may still be queried on unseen state-action pairs during planning, i.e., when estimating returns using Equation 2. This can result in severe extrapolation errors despite a cautiously learned value function. To address this additional source of errors, we propose a test-time behavior regularization technique that balances estimated returns and (epistemic) model uncertainty when evaluating Equation 2 for sampled action sequences. By regularizing estimated returns, we retain the expressiveness of planning with a world model despite imperfect state-action coverage. Avoiding actions for which the outcome is highly uncertain is strongly desired in an offline RL setting where no additional data can be collected, but conservative policies likewise limit exploration in an online RL setting. Unlike prior offline RL methods that predominantly learn an explicitly and consistently conservative value function and/or policy, regularizing planning based on model uncertainty has an intriguing property: as planning continues to cautiously explore and the model is finetuned on new data, the epistemic uncertainty naturally decreases. This makes test-time regularization based on model uncertainty highly suitable for both few-shot finetuning and continued finetuning over many trajectories.

To obtain a good proxy for epistemic model uncertainty [31] with minimal computational overhead and architectural changes, we propose to utilize a small ensemble of value functions Q^(1)_θ, Q^(2)_θ, ..., Q^(N)_θ, similar to Chen et al. [32]. We optimize the Q-functions with the TD-targets discussed in Section 3.1, and use a random subset of (target) Q-networks to estimate the Q-values in Equation 3. Although ensembling all components of the world model may yield a better estimate of epistemic uncertainty, our choice of ensembling is lightweight enough to run on a real robot and is empirically a sufficiently good proxy for uncertainty. We argue that with a Q-ensemble we can not only benefit from better Q-value estimation, but also leverage the standard deviation of the Q-values to measure the uncertainty of a state-action pair, i.e., whether it is out-of-distribution or not. This serves as a test-time regularizer that equips the planner with the ability to balance exploitation and exploration without explicitly introducing conservatism in training. By penalizing actions that lead to high uncertainty, we prioritize actions that are more likely to achieve a reliably high return.

[Figure 3 (panels: simulation tasks, real-world tasks, robot setup, transfer tasks).]
Figure 3. Tasks. We consider diverse tasks in simulation and on a real robot. Our real-world tasks use raw pixels as input. Our method achieves high success rates in offline-to-online transfer to both seen and unseen tasks in just 20 online trials on a real robot.

Table 1. Real-world offline-to-online results. Success rate (%) as a function of online finetuning trials. Mean of 18 trials and 2 seeds.

                        online      offline-to-online
Task       Trials       TD-MPC      TD-MPC      Ours
Reach      0            0 ± 0       50 ± 18     72 ± 6
           10           0 ± 0       67 ± 12     94 ± 6
           20           0 ± 0       78 ± 12     89 ± 0
Pick       0            0 ± 0       0 ± 0       0 ± 0
           10           0 ± 0       28 ± 6      33 ± 0
           20           0 ± 0       28 ± 6      50 ± 6
Kitchen    0            0 ± 0       0 ± 0       11 ± 11
           10           0 ± 0       33 ± 11     56 ± 11
           20           0 ± 0       61 ± 17     78 ± 0

Table 2. Finetuning to unseen real-world tasks. Success rate (%) of our method for each task variation shown in Figure 3. We include 4 successful transfers and 1 failure. See Appendix B for task descriptions. Mean of 18 trials and 2 seeds.

                                 online trials
Task       Variation             0           10          20
Reach      distractor            22 ± 0      22 ± 11     62 ± 6
           object shape          44 ± 11     44 ± 0      78 ± 11
Pick       distractors           22 ± 11     56 ± 0      67 ± 11
           object color          0 ± 0       0 ± 0       0 ± 0
Kitchen    distractor            0 ± 0       50 ± 28     67 ± 11

Formally, we modify the estimated return in Equation 2 of an action sequence a_{0:h} to be

\hat{R} = \gamma^{h} \big( Q_\theta(z_h, a_h) - \lambda u_h \big) + \sum_{t=0}^{h-1} \gamma^{t} \big( R_\theta(z_t, a_t) - \lambda u_t \big), \qquad u_t = \operatorname{std}\big\{ Q^{(i)}_\theta(z_t, a_t) \big\}_{i=1}^{N},   (4)

where u_t is the uncertainty regularizer. Here, std denotes the standard deviation operator, and λ is a constant coefficient that controls the regularization strength. We use the same value of λ for both the offline and online stages in practice, but it need not be equal.

To facilitate rapid propagation of information acquired during finetuning, we maintain two replay buffers, B_off and B_on, for offline and online data, respectively, and optimize the objective in Equation 1 on mini-batches of data sampled in equal parts from B_off and B_on; i.e., online interaction data is heavily oversampled early in finetuning. Balanced sampling has been explored in various settings [33, 34, 35, 22, 36, 37], and we find that it consistently improves finetuning of world models as well.
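Putting Sections 3.1 and 3.2 together, the planner scores each sampled action sequence with the uncertainty-penalized return of Equation 4. The sketch below illustrates this; the interfaces for the latent dynamics, reward predictor, Q-ensemble, and terminal value estimate are assumptions about the world model rather than the exact TD-MPC API.

import torch

def penalized_return(dynamics, reward, Qs, Q_estimate, z0, actions, gamma=0.99, lam=1.0):
    # dynamics(z, a) -> next latent state; reward(z, a) -> predicted reward
    # Qs: list of ensemble Q-networks operating on concatenated (z, a)
    # Q_estimate(Qs, z, a) -> terminal value estimate (e.g., min over two random ensemble members)
    # actions: planned sequence [a_0, ..., a_h], evaluated entirely in latent space
    def uncertainty(z, a):
        x = torch.cat([z, a], dim=-1)
        return torch.stack([q(x) for q in Qs], dim=0).std(dim=0)  # epistemic proxy u_t

    ret, z, h = 0.0, z0, len(actions) - 1
    for t in range(h):
        a = actions[t]
        ret = ret + gamma ** t * (reward(z, a) - lam * uncertainty(z, a))
        z = dynamics(z, a)
    a_h = actions[h]
    return ret + gamma ** h * (Q_estimate(Qs, z, a_h) - lam * uncertainty(z, a_h))

Because the penalty only enters the return estimate used by the planner, λ can be changed at test time without retraining any component.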
4 Experiments & Discussion

We evaluate our method on diverse continuous control tasks from the D4RL [38] and xArm [39] task suites and quadruped locomotion in simulation, as well as three visuo-motor control tasks on a real xArm robot, as visualized in Figure 3. Our experiments aim to answer the following questions:
− Q1: How does our approach compare to state-of-the-art methods for offline RL and online RL?
− Q2: Can our approach be used to finetune world models on unseen tasks and task variations?
− Q3: How do individual components of our approach contribute to its success?

Table 3. Offline-to-online results in simulation. Success rate (xArm) and normalized return (D4RL and quadruped) of methods before and after online finetuning. See Appendix B for task explanations. Mean of 5 seeds.

                 online                     offline-to-online
Task             TD-MPC    TD-MPC (+d)      TD-MPC (+o)      IQL              Ours
Push (m)         69.0      76.0             14.0 → 77.0      39.0 → 51.0      35.0 → 79.0
Push (mr)        69.0      81.0             59.0 → 80.0      22.0 → 22.0      45.0 → 64.0
Pick (mr)        0.0       84.0             0.0 → 96.0       0.0 → 0.0        0.0 → 88.0
Pick (m)         0.0       52.0             0.0 → 58.0       0.0 → 1.0        0.0 → 66.0
Hopper (m)       4.8       2.8              0.8 → 11.0       66.3 → 76.1      49.6 → 100.7
Hopper (mr)      4.8       6.1              13.4 → 13.7      76.5 → 101.4     84.4 → 93.5
AntMaze (mp)     0.0       52.0             0.0 → 68.0       54.0 → 80.0      58.0 → 96.0
AntMaze (md)     0.0       72.0             0.0 → 91.0       62.0 → 88.0      75.0 → 89.0
Walk             8.8       9.2              18.5 → 1.2       19.1 → 19.2      67.2 → 85.8
Average          17.4      48.3             11.8 → 55.1      37.7 → 48.7      46.0 → 84.7

[Figure 4 (curves: success rate vs. environment steps on xArm transfer tasks; Ours vs. TD-MPC).]
Figure 4. Finetuning to unseen tasks. Success rate (%) aggregated across 9 transfer tasks in simulated xArm environments. Mean of 5 seeds.

In the following, we first detail our experimental setup, and then proceed to address each of the above questions based on our experimental results.

Real robot setup Our setup is shown in Figure 3 (right). The agent controls an xArm 7 robot with a jaw gripper using positional control, and 224×224 RGB image observations are captured by a static third-person Intel RealSense camera (an additional top-view camera is used for kitchen; see Appendix B.1 for details); the agent also has access to robot proprioceptive information. Our setup requires no further instrumentation. We consider three tasks, reach, pick, and kitchen, and several task variations derived from them. Our tasks are visualized in Figure 3. The goal in reach is to reach a target with the end-effector, the goal in pick is to pick up and lift a target object above a height threshold, and the goal in kitchen is to grasp a pot and put it in a sink. We use manually designed detectors to determine task success and automatically provide sparse (albeit noisy) rewards for both offline and online RL. We use 120 offline trajectories for reach, 200 for pick, and 216 for kitchen; see Section 4.1 (Q3.2) for details and ablations.

Simulation tasks and datasets We consider a diverse set of tasks and datasets, including four tasks from the D4RL [38] benchmark (Hopper (medium), Hopper (medium-replay), AntMaze (medium-play), and AntMaze (medium-diverse)), two visuo-motor control tasks from the xArm [40] benchmark (push and pick), and a quadruped locomotion task (Walk); the two xArm tasks are similar to our real-world tasks except that they use a lower image resolution (84×84) and dense rewards. See Figure 3 (left) for task visualizations. We also consider two dataset variations for each xArm task: medium, which contains 40k transitions (800 trajectories) sampled from a suboptimal agent, and medium-replay, which contains the first 40k transitions (800 trajectories) from the replay buffer of a TD-MPC agent trained from scratch. See Appendix B for more details.

Baselines We compare our approach against strong online RL and offline RL methods: (i) TD-MPC [26] trained from scratch with online interaction only, (ii) TD-MPC (+data), which utilizes the offline data by appending it to the replay buffer but is still trained online only, (iii) TD-MPC (+offline), which naïvely pretrains on offline data and is then finetuned online, but without any of our additional contributions, and (iv) IQL [19], a state-of-the-art offline RL algorithm which has strong offline performance and also allows for policy improvement with online finetuning. See Appendix C and D for extensive implementation details on our method and baselines, respectively.

4.1 Results

Q1: Offline-to-online RL We benchmark methods across all tasks considered; real robot results are shown in Table 1, and simulation results are shown in Table 3. We also provide aggregate curves in Figure 5 (top) and per-task curves in Appendix A. Our approach consistently achieves strong zero-shot and online finetuning performance across tasks, outperforming offline-to-online TD-MPC and IQL by a large margin in both simulation and the real world in terms of asymptotic performance. Notably, the performance of our method is more robust to variations in dataset and task than the baselines, as evidenced by the aggregate results.

Q2: Finetuning to unseen tasks A key motivation for developing learning-based (and model-based in particular) methods for robotics is the potential for generalization and fast adaptation to unseen scenarios.
To assess the versatility of our approach, we conduct additional offline-to-online finetuning experiments where the offline and online tasks are distinct, e.g., transferring a reach policy to a push task, introducing distractors, or changing the target object. We design 5 real-world transfer tasks and 11 in simulation (9 for xArm and 2 for locomotion). See Figure 3 (center) and Appendix B for task visualizations and descriptions. Our real robot results are shown in Table 2, and aggregated simulation results are shown in Figure 4. We also provide per-task results in Appendix A. Our method successfully adapts to unseen tasks and task variations – both in simulation and in the real world – and significantly outperforms TD-MPC trained from scratch on the target task. However, as evidenced by the object color experiment in Table 2, transfer might not succeed if the initial model does not achieve any reward.

[Figure 5 (curves: normalized score on D4RL tasks and success rate on xArm tasks vs. environment steps; top legend: Ours, IQL, TD-MPC, TD-MPC (+data), TD-MPC (+offline); bottom legend: Ours, w/o in-sample learning, w/o value ensemble, w/o uncertainty penalty, w/o balanced sampling).]
Figure 5. Aggregate results. Top: comparison to baselines. Bottom: ablations. Offline pretraining is shaded gray. 5 seeds.

Q3: Ablations To understand how individual components contribute to the success of our method, we conduct a series of ablation studies that exhaustively ablate each design choice. We highlight our key findings based on aggregate task results but note that per-task curves are available in Appendix A.

Q3.1: Algorithmic components We ablate each component of our proposed method in Figure 5 (bottom), averaged across all D4RL tasks. Specifically, we ablate (1) learning Qθ with in-sample actions and expectile regression as described in Section 3.1, (2) using an ensemble of 5 value functions instead of 2 as in the original TD-MPC, (3) regularizing planning with our uncertainty penalty described in Section 3.2, and (4) using balanced sampling, i.e., sampling offline and online data in equal parts within each mini-batch. The results highlight the effectiveness of our key contributions. Balanced sampling improves the data-efficiency of online finetuning, and all other components contribute to both offline and online performance.

Q3.2: Offline dataset Next, we investigate how the quantity and source of the offline dataset affect the success of online finetuning. We choose to conduct this ablation with real-world data to ensure that our conclusions generalize to realistic robot learning scenarios. We experiment with two data sources and two dataset sizes: Base, which consists of 50/100 trajectories (depending on the dataset size) generated by a BC policy with added noise, and Diverse, which consists of the same 50/100 trajectories as Base, but with an additional 20 exploratory trajectories from a suboptimal RL agent. In fact, the additional trajectories correspond to a 20-trial replay buffer from an experiment conducted in the early stages of the research project. Results for this experiment are shown in Table 4.

Table 4. Real-world ablation on offline data. Success rate (%) as a function of online finetuning trials for two data sources and sizes from our real-world reach task. Mean of 18 trials and 2 seeds.

           TD-MPC                    Ours
           Base       Diverse        Base                  Diverse
Trials     100        120            50         100        70         120
0          0 ± 0      50 ± 18        0 ± 0      0 ± 0      33 ± 0     72 ± 6
10         44 ± 12    67 ± 12        50 ± 0     61 ± 6     61 ± 6     94 ± 6
20         83 ± 6     78 ± 12        61 ± 6     89 ± 0     89 ± 0     89 ± 0
We find that results improve with more data regardless of source, but that exploratory data holds far greater value: neither TD-MPC nor our method succeeds zero-shot when trained on the Base dataset – regardless of data quantity – whereas our method obtains 33% and 72% success rates with 70 and 120 trajectories, respectively, from the Diverse dataset. This result demonstrates that replay buffers from previously trained agents can be valuable data sources for future experiments.

[Figure 6 (curves: normalized score vs. environment steps for Hopper (medium) and AntMaze (medium-diverse); legend: λ = 0, 1, 3, 10, 20).]
Figure 6. Regularization (λ). Score as a function of regularization strength for two tasks from D4RL. Gray shade indicates offline RL. Mean of 5 seeds.

Q3.3: Uncertainty regularization We seek to understand how the uncertainty regularization strength λ influences the test-time performance of our method. Results for two tasks are shown in Figure 6; see Appendix A for more results. While λ > 0 almost always outperforms λ = 0 in both offline and online RL, large values, e.g., λ = 20, can be detrimental to online RL in some cases. We use the same λ for the offline and online stages in this work, but remark that our uncertainty regularizer is a test-time regularizer, which can be tuned at any time at no cost.

5 Related Work

Offline RL algorithms seek to learn RL policies solely from pre-existing datasets, which results in a state-action distribution shift between training and evaluation. This shift can be mitigated by applying explicit regularization techniques to conventional online RL algorithms, commonly Soft Actor-Critic (SAC; [41]). Most prior works constrain policy actions to be close to the data [13, 14, 15, 16], or regularize the value function to be conservative (i.e., underestimating) [17, 18, 42, 43]. While these strategies can be highly effective for offline RL, they often slow down convergence when finetuned with online interaction [21, 22, 24]. Lastly, Agarwal et al. [44] and Yarats et al. [23] show that online RL algorithms are sufficient for offline RL when data is abundant.

Finetuning policies with RL Multiple works have considered finetuning policies obtained by, e.g., imitation learning [45, 12, 36], self-supervised learning [46], or offline RL [21, 47, 22, 48, 24]. Notably, Rajeswaran et al. [45] learns a model-free policy and constrains it to be close to a set of demonstrations, and Lee et al. [22] shows that finetuning an ensemble of model-free actor-critic agents trained with conservative Q-learning on an offline dataset can improve sample-efficiency in simulated control tasks. By instead learning a world model on offline data, our method regularizes actions at test time (via planning) based on model uncertainty, without explicit loss terms, which is particularly beneficial for few-shot finetuning. While we do not compare to [22] in our experiments, we compare to IQL [19], a concurrent offline RL method that is conceptually closer to our method.

Real-world RL Existing work on real robot learning typically trains policies on large amounts of data in simulation, and transfers learned policies to real robots without additional training (simulation-to-real). This introduces a domain gap, for which an array of mitigation strategies has been proposed, including domain randomization [8, 4, 1], data augmentation [39], and system identification [49]. Additionally, building accurate simulation environments can be a daunting task.
At present, only a limited number of studies have considered training RL policies in the realworld without any reliance on simulators [50, 21, 51]. For example, researchers have proposed toaccelerate online RL with human demonstrations [50] or offline datasets [21, 34] for model-freealgorithms. Most recently, Wu et al. [51] demonstrates that MBRL can be data-efficient enough tolearn diverse real robot tasks from scratch. Our work is conceptually similar to [21, 51] but is notdirectly comparable, as we consider an order of magnitude less online data than prior work.6 LimitationsSeveral open problems remain: offline data quality and quantity heavily impact few-shot learning,and we limit ourselves to sparse rewards since dense rewards are difficult to obtain in the real world.We also find that the optimal value of λcan differ between tasks and between offline and online RL.We leave it as future work to automate this hyperparameter search, but note that doing so is relativelycheap since it can be adjusted at test-time without any overhead. Lastly, we consider pretraining ona single task and transferring to unseen variations. Given such limited data for pretraining, somestructural similarity between tasks is necessary for few-shot learning to be successful.8AcknowledgmentsThis work was supported, in part, by the Amazon Research Award, gifts from Qualcomm and theTechnology Innovation Program (20018112, Development of autonomous manipulation and grip-ping technology using imitation learning based on visual and tactile sensing) funded by the Ministryof Trade, Industry & Energy (MOTIE), Korea.References[1] M. Andrychowicz, B. Baker, M. Chociej, R. J ́ozefowicz, B. McGrew, J. W. Pachocki,A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder,L. Weng, and W. Zaremba. Learning dexterous in-hand manipulation. The International Jour-nal of Robotics Research (IJRR) , 39:20 – 3, 2018.[2] C. Berner, G. Brockman, B. Chan, V . Cheung, P. Debiak, C. Dennison, D. Farhi, Q. Fischer,S. Hashme, C. Hesse, R. J ́ozefowicz, S. Gray, C. Olsson, J. W. Pachocki, M. Petrov, H. P.de Oliveira Pinto, J. Raiman, T. Salimans, J. Schlatter, J. Schneider, S. Sidor, I. Sutskever,J. Tang, F. Wolski, and S. Zhang. Dota 2 with large scale deep reinforcement learning. ArXiv ,abs/1912.06680, 2019.[3] J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lock-hart, D. Hassabis, T. Graepel, T. P. Lillicrap, and D. Silver. Mastering atari, go, chess and shogiby planning with a learned model. Nature , 588:604 – 609, 2019.[4] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of roboticcontrol with dynamics randomization. 2018 IEEE International Conference on Robotics andAutomation (ICRA) , pages 1–8, 2017.[5] L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba, and P. Abbeel. Asymmetric actor criticfor image-based robot learning. Proceedings of Robotics: Science and Systems (RSS) , 2018.[6] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning(CoRL) , 2022.[7] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. J. Joshi, R. C.Julian, D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manjunath,I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. 
Pertsch, J. Quiambao, K. Rao, M. S.Ryoo, G. Salazar, P. R. Sanketi, K. Sayed, J. Singh, S. A. Sontakke, A. Stone, C. Tan, H. Tran,V . Vanhoucke, S. Vega, Q. H. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich.RT-1: Robotics transformer for real-world control at scale. ArXiv , abs/2212.06817, 2022.[8] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomiza-tion for transferring deep neural networks from simulation to the real world. 2017 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 23–30, 2017.[9] W. Zhao, J. P. Queralta, and T. Westerlund. Sim-to-real transfer in deep reinforcement learningfor robotics: a survey. 2020 IEEE Symposium Series on Computational Intelligence (SSCI) ,pages 737–744, 2020.[10] N. Hansen, R. Jangir, Y . Sun, G. Aleny `a, P. Abbeel, A. A. Efros, L. Pinto, and X. Wang. Self-supervised policy adaptation during deployment. In International Conference on LearningRepresentations (ICLR) , 2021.[11] A. Kumar, J. Hong, A. Singh, and S. Levine. Should I run offline reinforcement learning orbehavioral cloning? In International Conference on Learning Representations (ICLR) , 2022.9[12] B. Baker, I. Akkaya, P. Zhokov, J. Huizinga, J. Tang, A. Ecoffet, B. Houghton, R. Sampedro,and J. Clune. Video pretraining (VPT): Learning to act by watching unlabeled online videos.InAdvances in Neural Information Processing Systems (NeurIPS) , 2022.[13] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without explo-ration. In International Conference on Machine Learning (ICML) , 2018.[14] Y . Wu, G. Tucker, and O. Nachum. Behavior regularized offline reinforcement learning. ArXiv ,abs/1911.11361, 2019.[15] R. Kidambi, A. Rajeswaran, P. Netrapalli, and T. Joachims. MOReL: Model-based offlinereinforcement learning. Advances in Neural Information Processing Systems (NeurIPS) , 33:21810–21823, 2020.[16] S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning. Advancesin neural information processing systems (NeurIPS) , 34:20132–20145, 2021.[17] A. Kumar, J. Fu, M. Soh, G. Tucker, and S. Levine. Stabilizing off-policy Q-learning viabootstrapping error reduction. Advances in Neural Information Processing Systems (NeurIPS) ,32, 2019.[18] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative q-learning for offline reinforce-ment learning. Advances in Neural Information Processing Systems (NeurIPS) , 33:1179–1191,2020.[19] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit q-learning.InInternational Conference on Learning Representations (ICLR) , 2022.[20] S. M. Richards, N. Azizan, J.-J. Slotine, and M. Pavone. Adaptive-control-oriented meta-learning for nonlinear systems. Proceedings of Robotics: Science and Systems (RSS) , 2021.[21] A. Nair, A. Gupta, M. Dalal, and S. Levine. AWAC: Accelerating online reinforcement learningwith offline datasets. arXiv preprint arXiv:2006.09359 , 2020.[22] S. Lee, Y . Seo, K. Lee, P. Abbeel, and J. Shin. Offline-to-online reinforcement learning viabalanced replay and pessimistic Q-ensemble. In Conference on Robot Learning (CoRL) , pages1702–1712. PMLR, 2022.[23] D. Yarats, D. Brandfonbrener, H. Liu, M. Laskin, P. Abbeel, A. Lazaric, and L. Pinto. Don’tchange the algorithm, change the data: Exploratory data for offline reinforcement learning.ArXiv , abs/2201.13425, 2022.[24] M. Nakamoto, Y . Zhai, A. Singh, M. S. Mark, Y . Ma, C. Finn, A. Kumar, and S. Levine. 
Cal-QL: Calibrated offline RL pre-training for efficient online fine-tuning. ArXiv , abs/2303.05479,2023.[25] D. Ha and J. Schmidhuber. Recurrent world models facilitate policy evolution. In Advances inNeural Information Processing Systems 31 , pages 2451–2463. Curran Associates, Inc., 2018.[26] N. Hansen, X. Wang, and H. Su. Temporal difference learning for model predictive control. InInternational Conference on Machine Learning (ICML) , 2022.[27] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observ-able stochastic domains. Artificial Intelligence , 1998.[28] G. Williams, A. Aldrich, and E. A. Theodorou. Model predictive path integral control usingcovariance variable importance sampling. ArXiv , abs/1509.01149, 2015.[29] J. Peters and S. Schaal. Reinforcement learning by reward-weighted regression for operationalspace control. In International Conference on Machine Learning (ICML) , 2007.10[30] X. B. Peng, A. Kumar, G. Zhang, and S. Levine. Advantage-weighted regression: Simple andscalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177 , 2019.[31] B. Charpentier, R. Senanayake, M. Kochenderfer, and S. G ̈unnemann. Disentangling epistemicand aleatoric uncertainty in reinforcement learning. arXiv preprint arXiv:2206.01558 , 2022.[32] X. Chen, C. Wang, Z. Zhou, and K. W. Ross. Randomized ensembled double q-learning:Learning fast without a model. In International Conference on Learning Representations(ICLR) , 2021.[33] R. C. Julian, B. Swanson, G. S. Sukhatme, S. Levine, C. Finn, and K. Hausman. Efficientadaptation for end-to-end vision-based robotic manipulation. ArXiv , abs/2004.10190, 2020.[34] D. Kalashnikov, J. Varley, Y . Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine, andK. Hausman. MT-Opt: Continuous multi-task robotic reinforcement learning at scale. ArXiv ,abs/2104.08212, 2021.[35] N. Hansen, H. Su, and X. Wang. Stabilizing deep Q-learning with convnets and visiontransformers under data augmentation. Advances in Neural Information Processing Systems(NeurIPS) , 34:3680–3693, 2021.[36] N. Hansen, Y . Lin, H. Su, X. Wang, V . Kumar, and A. Rajeswaran. Modem: Acceleratingvisual model-based reinforcement learning with demonstrations. In International Conferenceon Learning Representations (ICLR) , 2023.[37] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez,Y . Sulsky, J. Kay, J. T. Springenberg, T. Eccles, J. Bruce, A. Razavi, A. D. Edwards, N. M. O.Heess, Y . Chen, R. Hadsell, O. Vinyals, M. Bordbar, and N. de Freitas. A generalist agent.Transactions on Machine Learning Research , 2022.[38] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4RL: Datasets for deep data-drivenreinforcement learning. arXiv preprint arXiv:2004.07219 , 2020.[39] R. Jangir, N. Hansen, S. Ghosal, M. Jain, and X. Wang. Look closer: Bridging egocentric andthird-person views with transformers for robotic manipulation. IEEE Robotics and AutomationLetters , pages 1–1, 2022. doi:10.1109/LRA.2022.3144512.[40] Y . Ze, N. Hansen, Y . Chen, M. Jain, and X. Wang. Visual reinforcement learning with self-supervised 3D representations. IEEE Robotics and Automation Letters , pages 1–1, 2023.[41] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V . Kumar, H. Zhu, A. Gupta,P. Abbeel, and S. Levine. Soft actor-critic algorithms and applications. ArXiv , abs/1812.05905,2018.[42] T. Yu, G. Thomas, L. Yu, S. Ermon, J. Y . Zou, S. Levine, C. Finn, and T. Ma. MOPO:Model-based offline policy optimization. 
Advances in Neural Information Processing Systems (NeurIPS), 2020.

[43] A. Kumar, A. Singh, F. Ebert, Y. Yang, C. Finn, and S. Levine. Pre-training for robots: Offline RL enables learning new tasks from a handful of trials. arXiv preprint arXiv:2210.05178, 2022.

[44] R. Agarwal, D. Schuurmans, and M. Norouzi. Striving for simplicity in off-policy deep reinforcement learning. ArXiv, abs/1907.04543, 2019.

[45] A. Rajeswaran, V. Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. In Proceedings of Robotics: Science and Systems (RSS), 2018.

[46] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Gray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems (NeurIPS), 2022.

[47] C. Wang, X. Luo, K. W. Ross, and D. Li. VRL3: A data-driven framework for visual deep reinforcement learning. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems (NeurIPS), 2022.

[48] Y. Xu, N. Hansen, Z. Wang, Y.-C. Chan, H. Su, and Z. Tu. On the feasibility of cross-task transfer with model-based reinforcement learning. In International Conference on Learning Representations (ICLR), 2023.

[49] A. Kumar, Z. Fu, D. Pathak, and J. Malik. RMA: Rapid motor adaptation for legged robots. ArXiv, abs/2107.04034, 2021.

[50] A. Zhan, P. Zhao, L. Pinto, P. Abbeel, and M. Laskin. A framework for efficient robotic manipulation. ArXiv, abs/2012.07975, 2020.

[51] P. Wu, A. Escontrela, D. Hafner, K. Goldberg, and P. Abbeel. Daydreamer: World models for physical robot learning. In Conference on Robot Learning (CoRL), 2022.

[52] I. Kostrikov, D. Yarats, and R. Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In International Conference on Learning Representations (ICLR), 2021.

[53] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.

A Additional Results

In addition to the aggregated results in the main paper, we also provide per-task results for the experiments and tasks in simulation. Our benchmark results are shown in Figure 7, and task transfer results are shown in Figure 10. Per-task results for the ablations are shown in Figure 8 and Figure 9.

[Figure 7 (per-task learning curves: Hopper (medium), Hopper (medium-replay), AntMaze (medium-play), AntMaze (medium-diverse), Push (medium), Push (medium-replay), Pick (medium), Pick (medium-replay), and Walk; methods: Ours, TD-MPC, TD-MPC (+data), TD-MPC (+offline), IQL).]
Figure 7. Comparison of our method against baselines. Offline pretraining is shaded gray. Mean of 5 seeds; shaded area indicates 95% CIs.
[Figure 8 (per-task ablation curves: Push (medium-replay), Pick (medium-replay), Hopper (medium), Hopper (medium-replay), AntMaze (medium-play), AntMaze (medium-diverse); legend: Ours, w/o in-sample learning, w/o value ensemble, w/o uncertainty penalty, w/o balanced sampling).]
Figure 8. Per-task ablation results. Offline pretraining is shaded gray. Mean of 5 seeds; shaded area indicates 95% CIs.

[Figure 9 (per-task curves for the same six tasks; legend: λ = 0, 1, 3, 10, 20).]
Figure 9. Ablation study on uncertainty coefficient (λ). Offline pretraining is shaded gray. Mean of 5 seeds; shaded area indicates 95% CIs.

[Figure 10 (per-task transfer curves: Reach → Push Cube, Reach → Push Cylinder, Push Cube → Push Cylinder, Reach → Push Sphere w/ Increased Lighting, Push Cube → Push Sphere w/ Increased Lighting, Reach → Push Cylinder w/ Altered Target Color, Push Cube → Push Cylinder w/ Altered Target Color, Reach → Push Cube w/ Obstacle, Push Cube → Push Cube w/ Obstacle, Walk → Walk Fast, Walk → Walk on Rugged Terrain; legend: Ours, TD-MPC).]
Figure 10. Task transfer results. Success rate (%) of our method and TD-MPC trained from scratch on all simulated transfer tasks. The first nine are designed based on the xArm [40] task suite, and the last two are quadruped locomotion. Offline pretraining is shaded gray. Mean of 5 seeds; shaded area indicates 95% CIs.

B Tasks and Datasets

B.1 Real-World Tasks and Datasets

Figure 11. Real-world workspace. The moving range of the end-effector and the initialization range of the target/object are shaded in the image. The positions used for evaluation are labeled by crosses.

We implement three visuo-motor control tasks, reach, pick, and kitchen, on a UFactory xArm 7 robot arm. Here we first introduce the setup for reach and pick, which share the same workspace. We use an Intel RealSense Depth Camera D435 as the only external sensor. The observation space contains a 224×224 RGB image and an 8-dimensional robot proprioceptive state, including the position, rotation, and opening of the end-effector and a boolean value indicating whether the gripper is stuck. Both tasks are illustrated in Figure 3 (second from the left). For safety reasons, we limit the moving range of the gripper to a 30cm×30cm×30cm cube, whose projection onto the table is illustrated in Figure 11. To promote consistency between experiments, we evaluate agents on a set of fixed positions, visualized as red crosses in the aforementioned figure. The setup for kitchen is shown in Figure 12. We use two D435 cameras for this task, providing both a front view and a top view.
The observation space thus contains two224×224RGB images and the 8-dimensional robot proprio-ceptive state.(a)Kitchen setup (b)Front view (c)Top viewFigure 12. Real-world kitchen task setup. (a)Setup of the kitchen workspace with the xArmrobot. (b)-(c) Sample images from the front view and the top view, respectively.Below we describe each task and the data used for offline pretraining in detail. Figure 13 showssample trajectories for these tasks.Reach The objective of this task is to accurately position the red hexagonal prism, held by thegripper, above the blue square target. The action space of this task is defined by the first two dimen-sions, which correspond to the horizontal plane. The agent will receive a reward of 1 when the objectis successfully placed above the target, and a reward of 0 otherwise. The offline dataset for reachcomprises 100 trajectories collected using a behavior-cloning policy, which exhibits an approximatesuccess rate of 50%. Additionally, there are 20 trajectories collected through teleoperation, wherethe agent moves randomly, including attempts to cross the boundaries of the allowable end-effectormovement. These 20 trajectories are considered to be diverse and are utilized for conducting anablation study around the quality of the offline dataset.Pick The objective of this task is to grasp and lift a red hexagonal prism by the gripper. The actionspace of this task contains the position of the end-effector and the opening of the gripper. The agentwill receive a reward of 1 when the object is successfully lifted above a height threshold, 0.5 whenthe object is grasped but not lifted, and 0 otherwise. The offline dataset for pick comprises 200trajectories collected using a BC policy that has an approximate success rate of 50%.16Kitchen This task requires the xArm robot to grasp a pot and put it into a sink in a toy kitchenenvironment. The agent will receive a reward of 1 when the pot is successfully placed in the sink, 0.5when the pot is grasped, and 0 otherwise. The offline dataset for kitchen consists of 216 trajectories,of which 100 are human teleoperation trajectories, 25 are from BC policies, and 91 are from offlineRL policies.Real-world transfer tasks We designed two transfer tasks for both reach andpick, and one forkitchen as shown in Figure 3 (the second from right) . As the red hexagonal prism is an importantindicator of the end-effector position in reach , we modify the task by (1) placing an additional redhexagonal prism on the table, alongside the existing one, and (2) replacing the object with a smallred ketchup bottle, whose bottom is not aligned with the end-effector. In pick, the red hexagonalprism is regarded as a target object. Therefore we (1) add two distractors, each with a distinct shapeand color compared to the target object, and (2) change the color and shape of the object (from a redhexagonal prism to a green octagonal prism). For kitchen , we also add a teapot with a similar coloras the pot in the scene as a distractor. We’ve shown by experiments that different modificationswill have different effects on subsequent performance in finetuning, which demonstrates both theeffectiveness and limitation of the offline-to-online pipeline we discussed.ReachPickKitchenFigure 13. Sample trajectories. We include eleven trajectories from the offline dataset or evaluationresults, which illustrate all real-world tasks considered in this work. 
Successful trajectories are marked green, while failed trajectories are marked red.

B.2 Simulation Tasks and Datasets

xArm Push and pick are two visuo-motor control tasks in the xArm robot simulation environment [40] implemented in MuJoCo. The observations consist of an 84×84 RGB image and a 4-dimensional robot proprioceptive state including the position of the end-effector and the opening of the gripper. The action space is the control signal for this 4-dimensional robot state. The tasks are visualized in Figure 3 (left). Push requires the robot to push a green cube to the red target. The goal in pick is to pick up a cube and lift it above a height threshold. Handcrafted dense rewards are used for these two tasks. We collected the offline data for the offline-to-online finetuning experiments by training TD-MPC agents from scratch on these tasks. The medium datasets contain 40k transitions (800 trajectories) sampled from a sub-optimal agent, and the medium-replay datasets contain the first 40k transitions (800 trajectories) from the replay buffers. Figure 14 gives an overview of the offline data distribution for the two tasks.

[Figure 14 (histograms of episode returns for the Push (medium), Push (medium-replay), Pick (medium), and Pick (medium-replay) datasets).]
Figure 14. Offline dataset statistics for xArm tasks in simulation. We plot the distribution of episode returns for trajectories in the two offline datasets. The red line indicates the mean performance achieved by our method after online finetuning.

Quadruped locomotion Walk is a state-only continuous control task with a 12-DoF Unitree Go1 robot, as visualized in Figure 3 (left). The policy takes robot states as input and outputs control signals for the 12 joints. The goal of this task is to control the robot to walk forward at a specific velocity. Rewards consist of a major velocity component that is maximized when the forward velocity matches the desired velocity, and a minor component that penalizes unsmooth actions.

Transfer tasks We designed nine transfer tasks based on reach (the same task as real-world reach, but simplified because ground-truth positions are known) and push with the xArm, and two transfer tasks with the legged robot in simulation, to evaluate the generalization capability of offline-pretrained models. Compared to the real-world tasks, the online budget is abundant in simulation, so we increase the disparity between offline and online tasks, such as finetuning on an entirely different task. As the target point for both xArm tasks is a red circle, we directly use reach as the offline pretraining task and finetune online on different instances of push, including push cube, push sphere, push cylinder, and push cube with an obstacle. For quadruped locomotion, we require the robot to walk at a higher target speed (twice the pretrained speed) and to walk on new rugged terrain. The tasks are illustrated in Figure 15.

D4RL We consider four representative tasks from two domains (Hopper and AntMaze) in the D4RL [38] benchmark. Each domain contains two data compositions. Hopper is a Gym locomotion domain where the goal is to make hops that move in the forward (right) direction. Observations contain the positions and velocities of different body parts of the hopper. The action space is a 3-dimensional space controlling the torques applied to the three joints of the hopper.
Hopper (medium) uses 1M samples from a policy trained to approximately 1/3 the performance of the expert, while Hopper (medium-replay) uses the replay buffer of a policy trained up to the performance of the medium agent. AntMaze is a navigation domain with a complex 8-DoF quadruped robot. We use the medium maze layout, which is shown in Figure 3 (left). The play dataset contains 1M samples generated by commanding specific hand-picked goal locations from hand-picked initial positions, and the diverse dataset contains 1M samples generated by commanding random goal locations in the maze and navigating the ant to them. This domain is notoriously challenging because of the need to "stitch" suboptimal trajectories. These four tasks are officially named hopper-medium-v2, hopper-medium-replay-v2, antmaze-medium-play-v2, and antmaze-medium-diverse-v2 in the D4RL benchmark.

C Implementation Details

Q-ensemble and uncertainty estimation We provide PyTorch-style pseudo-code for the implementation of the Q-ensemble and uncertainty estimation discussed in Section 3.2. Here Qs is a list of Q-networks. We use the minimum value of two randomly selected Q-networks for Q-value estimation, and the uncertainty is estimated by the standard deviation of all Q-values. We use five Q-networks in our implementation.

import numpy as np
import torch

def Q_estimate(Qs, z, a):
    x = torch.cat([z, a], dim=-1)  # concatenate (latent) state and action
    idxs = np.random.choice(len(Qs), 2, replace=False)  # randomly select two distinct Q-networks
    q1, q2 = Qs[idxs[0]](x), Qs[idxs[1]](x)
    return torch.min(q1, q2)  # return the minimum of the two as the Q-value estimate

def Q_uncertainty(Qs, z, a):
    x = torch.cat([z, a], dim=-1)  # concatenate (latent) state and action
    qs = torch.stack([q(x) for q in Qs], dim=0)  # evaluate all Q-networks in the ensemble
    uncertainty = qs.std(dim=0)  # compute the standard deviation as uncertainty
    return uncertainty

Network architecture For the real robot tasks and simulated xArm tasks where observations contain both an RGB image and a robot proprioceptive state, we separately embed them into feature vectors of the same dimension with a convolutional neural network and a 2-layer MLP, respectively, and apply element-wise addition to obtain a fused feature vector. For the real-world kitchen task, where observations include two RGB images and a proprioceptive state, we use separate encoders to embed them into three feature vectors and again apply element-wise addition. For D4RL and quadruped locomotion tasks where observations are state features, only the state encoder is used. We use five Q-networks to implement the Q-ensemble for uncertainty estimation. All Q-networks have the same architecture. An additional V network is used for state value estimation, as discussed in Section 3.1.
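A minimal sketch of such a fused encoder is shown below. The exact convolutional stack and layer sizes are assumptions chosen only to illustrate the element-wise fusion; they are not the architecture used in the paper, apart from the 8-dimensional proprioceptive state and the 50-dimensional latent listed in Table 5.

import torch
import torch.nn as nn

class FusedEncoder(nn.Module):
    # Embeds an RGB image and a low-dimensional proprioceptive state into the same
    # latent dimension and fuses them by element-wise addition.
    def __init__(self, state_dim=8, latent_dim=50):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
        self.mlp = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ELU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, image, state):
        return self.cnn(image) + self.mlp(state)  # element-wise addition of the two embeddings

For the kitchen task, a second image encoder would be added and its output summed in the same way.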
Hyperparameters We list the hyperparameters of our algorithm in Table 5. The hyperparameters related to our key contributions (the expectile τ, AWR temperature β, uncertainty coefficient λ, and Q-ensemble size) appear at the top of the table.

Other details We apply image shift augmentation [52] to image observations, and use Prioritized Experience Replay (PER; [53]) when sampling from the replay buffers.

D Baselines

TD-MPC We use the same architecture and hyperparameters for our method and our three TD-MPC baselines as in the public TD-MPC implementation from https://github.com/nicklashansen/tdmpc, except that multiple encoders are used to accommodate both visual inputs and robot proprioceptive information in the real robot and xArm tasks, as described in Appendix C. For the TD-MPC (+data) baseline, we append the offline data to the replay buffer at the beginning of online training so that it can be sampled together with the newly collected data for model updates. For the TD-MPC (+offline) baseline, we naïvely pretrain the model on offline data and then finetune it with online RL without any changes to hyperparameters.

IQL We use the official implementation from https://github.com/ikostrikov/implicit_q_learning for the IQL baseline. We use the same hyperparameters that the authors used for the D4RL tasks. For the xArm tasks, we perform a grid search over the hyperparameters τ ∈ {0.5, 0.6, 0.7, 0.8, 0.9, 0.95} and β ∈ {0.5, 1.0, 3.0, 10.0}, and we find that expectile τ = 0.95 and temperature β = 10.0 achieve the best results. We add the same image encoder as ours to the IQL implementation for the visuo-motor control tasks.

Table 5. Hyperparameters.

Hyperparameter                      Value
Expectile (τ)                       0.9 (AntMaze, xArm, Walk); 0.7 (Hopper)
AWR temperature (β)                 10.0 (AntMaze); 3.0 (Hopper, xArm); 1.0 (Walk)
Uncertainty coefficient (λ)         1 (xArm, Walk); 3 (AntMaze); 20 (Hopper)
Q-ensemble size                     5
Batch size                          256
Learning rate                       3e-4
Optimizer                           Adam (β1 = 0.9, β2 = 0.999)
Discount                            0.99 (D4RL, Walk); 0.9 (xArm)
Action repeat                       1 (D4RL, Walk); 2 (xArm)
Value loss coefficient              0.1
Reward loss coefficient             0.5
Latent dynamics loss coefficient    20
Temporal coefficient                0.5
Target network update frequency     2
Polyak                              0.99
MLP hidden size                     512
Latent state dimension              50
Population size                     512
Elite fraction                      50
Policy fraction                     0.1
Planning iterations                 6 (xArm, Walk); 1 (D4RL)
Planning horizon                    5
Planning temperature                0.5
Planning momentum coefficient       0.1

[Figure 15 (trajectory visualizations for the transfer tasks: (a) Reach, (b) Push, (c) Push sphere with increased lighting, (d) Push cylinder, (e) Push with obstacle, (f) Walk on rugged terrain).]
Figure 15. Transfer tasks in simulation. We consider a total of eleven transfer settings in simulation. We here visualize a trajectory for each of the tasks used in our xArm experiments, and a trajectory for walking on rugged terrain with the legged robot. Task labels correspond to those shown in Figure 10.
EXQ0eXtX3OW

Dexterity from Touch: Self-Supervised Pre-Training of Tactile Representations with Robotic Play

Irmak Guzey1,† Ben Evans1 Soumith Chintala2 Lerrel Pinto1
New York University1, Meta2
tactile-dexterity.github.io

Abstract: Teaching dexterity to multi-fingered robots has been a longstanding challenge in robotics. Most prominent work in this area focuses on learning controllers or policies that either operate on visual observations or on state estimates derived from vision. However, such methods perform poorly on fine-grained manipulation tasks that require reasoning about contact forces or about objects occluded by the hand itself. In this work, we present T-DEX, a new approach for tactile-based dexterity that operates in two phases. In the first phase, we collect 2.5 hours of play data, which is used to train self-supervised tactile encoders. This is necessary to bring high-dimensional tactile readings to a lower-dimensional embedding. In the second phase, given a handful of demonstrations for a dexterous task, we learn non-parametric policies that combine the tactile observations with visual ones. Across five challenging dexterous tasks, we show that our tactile-based dexterity models outperform purely vision- and torque-based models by an average of 1.7X. Finally, we provide a detailed analysis of factors critical to T-DEX, including the importance of play data, architectures, and representation learning.

Keywords: Tactile, Dexterity, Manipulation

1 Introduction

Humans are able to solve novel and complex manipulation tasks with small amounts of real-world experience. Much of this ability can be attributed to our hands, which allow for redundant contacts and multi-finger manipulation. Endowing multi-fingered robotic hands with such dexterous capabilities has been a long-standing problem, with approaches ranging from physics-based control [1] to simulation-to-real (sim2real) learning [2, 3]. More recently, the prevalence of improved hand-pose estimators has enabled imitation learning approaches to teach dexterity, which in turn improves sample efficiency and reduces the need for precise object and scene modelling [4, 5, 6].

Even with improved algorithms, teaching dexterous skills is still quite inefficient, requiring large amounts of demonstration data and training [5, 2]. While algorithmic improvements in control will inevitably lead to improvements in dexterity over time, an often overlooked source of improvement lies in the sensing modality. Current dexterous robots either use high-dimensional visual data or compact states estimated from them. Both suffer significantly either when the task requires reasoning about contact forces, or when the fingers occlude the object being manipulated. In contrast to vision, tactile sensing provides rich contact information while being unaffected by occlusions.

So, why is tactile-based dexterity hard to achieve? There are three significant challenges. First, tactile sensors are difficult to simulate, which limits the applicability of sim2real-based methods [2, 7] to binary contact sensors [8]. Second, for many commonly available tactile sensors, precise calibration of analog readings to physical forces is difficult to achieve [9].
This limits the applicability of physics-based control. Third, for multi-fingered hands, tactile sensors need to cover a larger area compared to two-fingered hands. This increases the dimensionality of the tactile observation, which in turn makes learning-based approaches inefficient. A common approach to alleviate this challenge in vision-based learning is to use pretrained models that encode high-dimensional images into low-dimensional representations. However, such pretrained models do not exist for tactile data.

In this work, we present T-DEX, a new approach to teach tactile-based dexterous skills on multi-fingered robot hands. To overcome issues in simulating tactile sensors and calibration, we use an imitation framework that trains directly on raw tactile data obtained by teleoperating the robot. However, directly reasoning about actions from raw tactile data would still require collecting large amounts of demonstrations. To address this, we take inspiration from recent works on robot play [10] and pretrain our own tactile representations. This is done by collecting 2.5 hours of aimless data of object manipulation. Tactile data collected through this play is used to train tactile encoders through standard self-supervised techniques, mitigating the need for exact force calibration.

Given this pretrained tactile encoder, we use it to solve tactile-rich dexterous tasks with just a handful of demonstrations: 6 demonstrations per task, corresponding to under 10 minutes of demonstration time. To achieve imitation with so few demonstrations, we employ a non-parametric policy that retrieves nearest-neighbor actions from the demonstration set. Such a policy provides significant improvements over fully parametric policies (Table 1). Importantly, this enables combining tactile encodings with other sensor modalities such as vision without additional training or sensor fusion. This ability to combine touch with vision makes T-DEX compatible with tasks that require visual sensing for coarse-grained manipulation and tactile sensing for fine-grained manipulation.

We evaluate T-DEX across five challenging tasks such as opening a book, bottle cap opening, and precisely unstacking cups (see Figure 1). Through a large-scale experimental study of over 50 hrs of robot evaluation, we present the following insights:
1. T-DEX improves upon vision-only and torque-only imitation models with over a 170% improvement in average success rate (Section 4.2).
2. Play data significantly improves tactile-based imitation, with an average of 58% improvement over tactile models that do not use play data (Section 4.3).
3. Ablations on different tactile representations and architectures show that the design decisions in T-DEX are important for high performance (Section 4.4).
Robot videos and qualitative studies of T-DEX are best viewed on tactile-dexterity.github.io.

∗† Correspondence to irmakguzey@nyu.edu.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

[Figure 1 (panels: (a) framework for tactile-based dexterity — tactile observation, tactile pads, tactile encoder, policy; (b) T-Dex rollouts on cup unstacking, bowl unstacking, and book opening).]
Figure 1: T-DEX learns dexterous policies from high-dimensional tactile sensors on a multi-fingered robot hand. Combined with vision, our tactile representations are crucial to learn fine-grained manipulation tasks.

2 Related Work

Our work builds on several prior ideas in tactile sensing, representation learning and imitation learning.
For brevity, we describe the most relevant below and present more discussion in Appendix A.2Tactile Sensing in Dexterity: Multi-fingered robot control has been studied extensively [11, 12,13]. Initial work focuses on physics-based modelling of grasping [14, 15] that often used contactforce estimates to compute grasp stability. However, contact estimates derived from motor torqueonly give point estimates and are susceptible to noise due to coupling with the hand’s controller.To give robots a sense of touch, many tactile sensors have been created for enhancing robotic sens-ing [16, 17, 18]. Prominently, the GelSight sensor has been used for object identification [19],geometry sensing [20], and pose estimation [21]. However, since GelSight requires a large formfactor, it is difficult to cover an entire multifingered hand with it. Instead, ‘skin’-like sensors [22]and tactile pads can cover entire hands, yielding high-dimensional tactile observations for dexterity.In this work, we use the XELA uSkin [23] sensors to cover our Allegro hand.Learning-based approaches have been employed to leverage high-dimensional readings from tactilesensors for a variety of applications such as grasping and manipulation with two-fingered grippers[24, 25, 26], object classification [27] and 3d shape detection [28]. However, these prior worksdiffer from T-D EXin two key ways. First, such tactile learning methods have not been appliedto multifingered hands. Second, the tactile representations learned in these works require largeamounts of task-centric data for each task. On the other hand, T-D EXuses a large amount of task-agnostic play data, which enables learning tasks with small amounts of data per task. Concurrent toour work, binary tactile sensing has shown success for in-hand manipulation [8] through sim2realtraining. However, its application to high-dimensional tactile data is under explored.Representations in Offline Imitation Imitation Learning (IL) allows for efficient training of skillsfrom data demonstrated by an expert. Given a set of demonstrations, offline imitation methods suchas Behavior Cloning (BC) use supervised learning to learn a policy that outputs actions similar to theexpert data [29, 30, 31, 32, 33]. However, such methods often require demonstrations on the orderof hundreds to thousands trajectories and collecting the same quantity of data for dexterous tasksis difficult due to cognitive and physical demands of teleoperating multi-fingered hands. To learnwith fewer demonstrations in high-dimensional action spaces, non-parametric approaches such asnearest neighbors have shown to be more effective than parametric ones [4, 5]. Although efficient,non-parametric learning approaches require good representations of the environment. Several priorwork have looked at learning visual features using self-supervision [34, 35, 5, 36]. We build on thisidea to tactile observations and train tactile features using our human-generated robot play data.Exploratory and Play Data: Since task-specific data can be expensive to collect, a number ofworks have examined leveraging off-policy data to improve task performance. Previous work hasused play data to learn latent plan representations [37] and to learn a goal-conditioned policy [38].Recent work in offline RL has noted that including exploratory data improves downstream perfor-mance [39] and that actively straying away from the task improves robustness [40]. These findingsare paralleled by studies on motor development in humans. 
3–5-month-old infants spontaneously explore novel objects [41], and 15-month-old infants produce the same quantity of locomotion in a room with or without toys [42]. Given these motivating factors, we opt to leverage a play dataset of imperfect, tactile-rich interactions in order to improve our representations and task performance.

3 Tactile-Based Dexterity (T-DEX)

T-DEX operates in two phases: pretraining from task-agnostic play data and downstream learning from a few task-specific demonstrations. In the pretraining phase, we begin by collecting a diverse, contact-rich play dataset from a variety of objects by teleoperating the robot (see Appendix B.1 for the setup). Once collected, we use self-supervised learning (SSL) [43] algorithms on the play data to learn an encoder for tactile observations. In the downstream learning phase, a teleoperator collects demonstrations of solving a desired task. Given these demonstrations, non-parametric imitation learning is combined with the pretrained tactile encoder to efficiently learn dexterous policies. See Figure 3 for a high-level overview of our framework. Details of the individual phases follow.

Figure 2: Visualization of some of the play tasks, which use grasping, pinching, and other manipulation skills.

3.1 Phase I: Pre-Training Tactile Representations from Play

Play Data Collection: The play data is collected from a variety of contact-rich tasks, including picking up objects, grasping a steering wheel, and in-hand manipulation, which are illustrated in Figure 2. We collect a total of 2.5 hours of play data, including failed examples and random behavior. Because the image and tactile sensors operate at 30Hz and 100Hz, respectively, we sub-sample the data to about 10Hz to reduce the size of the dataset. We only include observations whenever the total changed distance of the fingertips and robot end-effector exceeds 1cm, reducing the dataset from 450k frames to 42k. This filters out the similar states when the robot is still, which could potentially bias the SSL phase. All this data, along with camera streams, is publicly released on our project website.

Feature Learning: To extract useful representations from the play data, we use Bootstrap Your Own Latent (BYOL) [44], an SSL method that learns informative representations from raw observations. We treat the tactile sensor data as an image with one channel for each axis of force. Each finger's 3-axis 4x4 sensors are stacked into a column to produce a 16x4 image for the fingers and a 12x4 image for the thumb. These images are then concatenated to produce a three-channel 16x16 image with constant padding for the shorter thumb, as sketched below. A visualization of the tactile images can be seen in Figure 3. We pre-train an AlexNet [45] encoder on these tactile images from the play data using BYOL and use that encoder to extract tactile features in downstream tasks. More information on BYOL and our use of it on tactile readings can be found in Appendix C.1 and D.1.
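The construction of the tactile image can be sketched as follows. The exact pad counts, channel ordering, and placement of the thumb column are assumptions based on the description above, not the released implementation.

import torch
import torch.nn.functional as F

def tactile_image(finger_pads, thumb_pads):
    # finger_pads: list of 3 tensors, each (4, 4, 4, 3): four 4x4 pads per finger, 3 force axes
    # thumb_pads: tensor (3, 4, 4, 3): three 4x4 pads on the shorter thumb
    columns = [pads.reshape(-1, 4, 3) for pads in finger_pads]  # stack pads vertically: 16x4x3
    thumb = thumb_pads.reshape(-1, 4, 3)                        # 12x4x3
    thumb = F.pad(thumb, (0, 0, 0, 0, 0, 4))                    # constant-pad to 16x4x3
    image = torch.cat(columns + [thumb], dim=1)                 # concatenate columns -> 16x16x3
    return image.permute(2, 0, 1)                               # channels-first (3, 16, 16) for the encoder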
Success of a task is determined by visual inspection of the robot scene by the teleoperator. This highlights the importance of using tactile feedback for dexterous manipulation, along with the necessity of learning from few task-specific demonstrations.

Visual Feature Learning: Many dexterous tasks require coarse-grained information about the location of the object to be manipulated. This necessitates incorporating vision feedback, since tactile observations are not meaningful when the hand is not touching the object. To do this, we extract visual features using standard BYOL augmentations on the images collected from demonstration data. The views for each task are significantly different, so we did not observe a benefit from including the play data in the visual representation learning. Similar to prior work [4, 5], we start with a ResNet-18 [46] architecture that has been pre-trained on ImageNet [47] classification.

Figure 3: An overview of the T-DEX framework. Left: we train tactile representations using BYOL on a large play dataset. Right: we leverage the learned representations using nearest neighbors imitation.

Downstream Imitation Learning: Our action space consists of both the hand pose, specified by 16 absolute joint angles, and the robot end effector position and orientation, specified by a 3-dimensional position and a 4-dimensional quaternion. Due to both the high-dimensional action and observation spaces, parametric methods struggle to learn quality policies in the low-data regime. To mitigate this, we use a nearest neighbors-based imitation learning policy [35] to leverage our demonstrations. For each tuple of observations and actions in the demonstrations $(o^V_i, o^T_i, a_i)$, we compute visual and tactile features $(y^V_i, y^T_i)$ and store them alongside the corresponding actions. Since the two feature modalities have different scales, we scale both features such that the maximum distance in the dataset for each feature is 1. At test time $t$, given $o_t$, we compute $(y^V_t, y^T_t)$, find the datum with the lowest total distance, and execute the action associated with it (a short sketch of this retrieval step is given below).

4 Experiments

We evaluate T-DEX on a range of tasks that are designed to answer the following questions: (a) Does tactile information improve policy performance? (b) How important is play data to our representations? (c) What are important design choices when learning dexterity from tactile information?

4.1 Experimental Overview

Description of Dexterous Tasks: We examine five dexterous contact-rich tasks that require precise multi-finger control. We show the robot rollouts for three of the tasks in Figure 1. We describe them in detail in Appendix B.2, showcase the contribution of each taxel to the tasks in Appendix C.2, and visualize the rollouts in Figure 9. To evaluate various models for dexterity, we first collect six demonstrations for each task in which the object's configuration varies inside a 10x15 cm box. Models are then evaluated on new configurations in the convex hull of the demonstrated ones. This follows the standard practice of evaluating representations for robotics [48, 49, 50].
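Returning to the non-parametric policy of Section 3.2, below is a minimal sketch of the retrieval step under the stated scaling rule (maximum pairwise distance per modality normalized to 1). The class and variable names are our own; the per-modality weights default to the equal weighting described in Appendix D.2, which also notes that the bottle task weighted tactile distances twice as much as visual ones.

```python
import numpy as np

class NearestNeighborPolicy:
    """Non-parametric policy sketch: retrieve the demonstration frame whose
    (visual, tactile) features are closest to the current observation and
    replay its action.  Feature extractors and the demo buffer are assumed
    to be given."""

    def __init__(self, visual_feats, tactile_feats, actions):
        # visual_feats: (N, Dv), tactile_feats: (N, Dt), actions: (N, 23)
        self.yv, self.yt, self.actions = visual_feats, tactile_feats, actions
        # Scale each modality so that the maximum pairwise distance in the
        # demonstration set is 1 (Sec. 3.2).
        self.sv = self._max_pairwise_dist(visual_feats)
        self.st = self._max_pairwise_dist(tactile_feats)

    @staticmethod
    def _max_pairwise_dist(y):
        d = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
        return d.max() + 1e-8

    def act(self, yv_t, yt_t, w_visual=1.0, w_tactile=1.0):
        # Per-modality normalized distances to every stored demo frame.
        dv = np.linalg.norm(self.yv - yv_t, axis=-1) / self.sv
        dt = np.linalg.norm(self.yt - yt_t, axis=-1) / self.st
        i = np.argmin(w_visual * dv + w_tactile * dt)
        return self.actions[i]
```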
Ad-ditional experimental details exist in Appendix B.Baselines for Dexterity: We study the impact of tactile information on policies learned throughimitation, comparing against a number of baselines. Here, we briefly describe these baselines. Ad-ditional baseline and model details can be found in Appendix B.3-D.1.BC [51] : We train a neural network end-to-end to map from visual and tactile features to actions.2.NN-Image [35] : We perform nearest neighbors with the image features only.3.NN-Torque [52] : We perform nearest neighbors with the output torques from our PD controllerand visual representations.4.NN-Tactile : Nearest neighbors with only the tactile features trained on play data.5Table 1: Real-world success rate of the learned policiesBC NN-Im NN-Tac NN-Task NN-Tor BET IBC GMM T-D EXJoystick 0/10 4/10 6/10 8/10 7/10 0/10 0/10 0/10 8/10Bottle 0/10 0/10 6/10 3/10 3/10 0/10 0/10 0/10 6/10Cup 0/10 0/10 0/10 4/10 2/10 0/10 0/10 0/10 8/10Bowl 0/10 2/10 2/10 3/10 4/10 0/10 0/10 0/10 7/10Book 0/10 5/10 0/10 6/10 3/10 0/10 0/10 0/10 9/10Average 0/10 2.2/10 2.8/10 4.8/10 3.8/10 0/10 0/10 0/10 7.6/10Robot observationImage-onlyTactile-onlyT-DexFigure 4: Visualization of the camera image, top-two activated tactile sensors, and their nearest neighbors forT-D EXand baselines. We observe that baselines fail to either recognize the contact with the object (image-only) or to capture the position of the robot (tactile-only).5.NN-Task : Instead of training the tactile encoder on the play data, we train it on the 6 task-specificdemonstrations. Representation retrieval is same as T-D EX.6.Behavior Transformer (BeT) [31] : Transformer based behavior cloning baseline that focuses oncapturing distributionally multi-modal behaviors from a diverse set of demonstration data.7.Implicit Behavioral Cloning (IBC) [30] : We fit an energy based model (EBM) on the joint ob-servation and action space and choose actions that minimize the output of this EBM.8.Gaussian Mixture Models [53] on top of BC (BC-GMM) [32] : A probabilistic model that repre-sents the action space as a combination of multiple Gaussian distributions given observations.9.T-D EX: This is our proposed method with the tactile encoder pre-trained on all the play datafollowed by nearest neighbor retrieval with image and tactile features on task data.Quantitative results can be found in Table 1, while robot rollouts and qualitative comparisons of thebaselines are shown our website tactile-dexterity.github.io .4.2 How important is tactile sensing for dexterity?T-DEXImage onlyT-DEXImage onlyFigure 5: Visualization of the failure modes of ourImage only baseline. Without tactile information,the robot applies either too much force or does notcorrectly contact with the objects.In Table 1 we report success rates of T-D EXalongwith baseline methods. We observe that amongthe nearest neighbor based methods, we find thattactile-only (NN-tactile) struggles on Book Openingand Cup Unstacking since the hand fails to local-ize the objects to make first contact. On the otherhand, the image-only (NN-Image) struggles on Bot-tle Opening and Cup Unstacking as severe occlu-sions caused by the hand result in poor retrievals.Using torque targets (NN-Torque) instead of tactilefeedback proved useful, improving over NN-Image,but did not match using tactile.6We find that combining the coarse-grained localization ability of NN-Image along with the fine-grained manipulation of NN-Tactile results in the strongest results across all tasks. 
To further analyzewhy T-D EXperforms so well, we visualize the nearest neighbors of states for the image-only andtactile-only methods in Figure 4. And show additional failure modes for NN-Image in Figure 5. Ourmethod produces neighbors that seem to capture the state of the world better than other baselines.More details of the robot policy can be found in Appendix B.2 and E.4.3 Does pre-training on play improve tactile representations?1 5 20 80 150Amount of Play Data (Minutes)020406080100Success Rate (%)Success Rate vs Amount of Play DataBook OpeningCup UnstackingFigure 6: Success rate on unstacking tasks withvarying amount of play data. Training only ontask data performs moderately well, but is out-performed with just 20 minutes of play.To understand the importance of pre-training, we runNN-Task, which pre-trains tactile representations ontask data. As seen in Table 1, this baseline does quitewell on the simpler Joystick Movement task. However,on harder tasks, particularly the Unstacking tasks andBottle Opening, we find that NN-Task struggles signif-icantly. This can be attributed to poor representationalmatches when trained on limited task data. To mitigatethis, we also try training the encoder with a combi-nation of successful and failed demonstrations on theBowl Unstacking task, getting a success rate of 30%,which shows no improvement in task performance.To provide further evidence for the usefulness of tac-tile pretraining, we plot the gains in performance across varying amounts of play data in Figure 6.We see that for easier tasks like Book Opening, even small amounts of play data (20 mins) is suffi-cient to achieve a 90% success rate. However, for harder tasks like Cup Unstacking, we see steadyimprovements in success rate with larger amounts of play data.4.4 Analysis of the design decisions in T-D EXChoice of Policy Class: T-D EXuses a non-parametric class instead of commonly used parametricones. To study the importance of this choice, we compare across a variety of fully parametricpolicy architectures including BC [29], IBC [30], BeT [31], BC-GMM [32] in Table 1. We observethat parametric methods fail to learn high-quality policies and significantly overfits on the smallamount of data with the high-dimensional action space. This emphasizes the need for retrieval-based policies, which have shown success with few demonstrations [35, 5].Choice of the encoder architecture: A critical component in T-D EXis the architectural details inrepresenting and processing tactile data. In this section, we examine various approaches to representtactile features. For simplicity, we study a subset of the tasks, Book Opening and Cup Unstackingand show the results in Table 2. Each encoder is trained using BYOL on the play dataset withthe same augmentations used in the main method. We compare our main encoder, AlexNet withfew different architectures: (a) ResNet: A standard Resnet-18 [46] with weights pre-trained onthe ImageNet [47] classification task. (b) 3-layer CNN: A CNN with three layers initialized withrandom weights. (c) Stacked CNN: Rather than laying out the sensor data of the fingers spatially inthe image, we consider stacking the sensor output into one 45-channel image. (d) Shared CNN: Wepass individual pad values to the same network and concatenate the outputs. 
(e) Raw Tactile: Weflatten the raw tactile data into a 720-dimensional vector instead of using an encoder.Table 2: Success rates of various representations for tactile data on two of our tasks.T-D EX ResNet 3-layer Stacked Shared RawBook Opening 9/10 9/10 6/10 5/10 2/10 5/10Cup Unstacking 8/10 6/10 3/10 1/10 1/10 3/107We find that both T-D EXand ResNet perform similarly on Book Opening, although ResNet takessignificantly more computation for the same results. On Cup Unstacking we find that ResNet per-forms a little worse than T-D EX, which further informs our architectural choice. While, one mayconclude that smaller architectures are better, we see that a simpler 3-layered CNN also performspoorly and does not reach the performance of either of the larger models.Structure of the tactile input: Apart from the architecture, we find that the structure of inputtingtactile data from individual tactile pads is also important. For example, we find that stacking tactilepads channel-wise is substantially worse than T-D EXthat stacks the tactile pads spatially. Similarlywe find that using a shared encoder for each tactile pad is also poor. This is perhaps because of thenoise that exists in high-dimensional raw tactile data, which is difficult to filter out with the stackedand shared encoders. Hence, one spurious reading in an unused tactile pad could yield an incorrectneighbor, producing a bad action. This hypothesis is further substantiated by the result of the RawTactile method, which is roughly on par with the Stacked method.Alternative representations: We additionally run three experiments with different tactile repre-sentations on the Bowl Unstacking task to analyze our choice of representation. We run PCA on theRaw Tactile features on the play dataset and use the top 100 components as features, achieving asuccess rate of 40%. When PCA fails, it is not able to capture fine-grained tactile information that isnecessary to solve the task. Next, we sum the activations of each 4x4 tactile sensor in each dimen-sion to create a 45-dimensional feature, which does not succeed on any task. Finally, we shuffle theorder of the pads in the tactile image, which achieves 20% success, which is much lower than usingthe structured layout (Section 3.1), showing that the layout of tactile data is crucial for performance.4.5 Generalization to Unseen Objects3/10CupBowlT-DEX7/103/100/105/106/105/104/10Figure 7: We show success rates of T-D EXon objects not seenduring demonstration collection.To examine the generalization abilityof T-D EX, we run the Bowl Unstack-ing and Cup Unstacking tasks withunseen crockery. T-D EXpolicies foreach of these tasks were trained usingonly one set of objects, seen in Fig-ure 1 (b). Without any modifications,the policy is then run on new objects,with the objects placed in 10 differentconfigurations. As seen in Figure 7,T-D EXis able to generalize to thesenew objects in varying degree. For objects of similar shapes and size as the training object, T-D EXdoes quite well. However, it begins to fail when the objects change significantly. For example, theinner bowl in the right-most column for Bowl Unstacking is too small for the hand to pick up.5 Limitations and ConclusionIn this work, we have presented an approach for tactile-based dexterity (T-D EX) that combinestactile pretraining on play data along with efficient downstream learning on a small amount of task-specific data. 
Our results indicate that T-D EXcan significantly improve over prior approaches thatuse images, torque and tactile data. However, we recognize two key limitations. Although T-D EXsucceeds on several out-of-distribution examples, the success rate is lower than the training setting.The second, is that our approach is currently limited to offline imitation, which limits the ability ofour policies to learn from failures. Both limitations could be addressed by integrating online learningand better tactile-vision fusion algorithms. While these aspects are out of scope to this work, wehope that the ideas introduced in T-D EXcan spur future work in these directions.8AcknowledgmentsWe thank Vaibhav Mathur, Jeff Cui, Ilija Radosavovic, Wenzhen Yuan and Chris Paxton for valuablefeedback and discussions. This work was supported by grants from Honda, Meta, Amazon, andONR awards N00014-21-1-2758 and N00014-22-1-2773.References[1] V . Kumar, E. Todorov, and S. Levine. Optimal control with learned local models: Applicationto dexterous manipulation. In 2016 IEEE International Conference on Robotics and Automa-tion (ICRA) , pages 378–383, 2016. doi:10.1109/ICRA.2016.7487156.[2] OpenAI, M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki,A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder,L. Weng, and W. Zaremba. Learning dexterous in-hand manipulation, 2019.[3] OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron,A. Paino, M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder,L. Weng, Q. Yuan, W. Zaremba, and L. Zhang. Solving rubik’s cube with a robot hand. arXiv ,2019.[4] S. P. Arunachalam, S. Silwal, B. Evans, and L. Pinto. Dexterous imitation madeeasy: A learning-based framework for efficient dexterous manipulation. arXiv preprintarXiv:2203.13251 , 2022.[5] S. P. Arunachalam, I. G ̈uzey, S. Chintala, and L. Pinto. Holo-dex: Teaching dexterity withimmersive mixed reality, 2022. URL https://arxiv.org/abs/2210.06463 .[6] A. Handa, K. Van Wyk, W. Yang, J. Liang, Y .-W. Chao, Q. Wan, S. Birchfield, N. Ratliff, andD. Fox. Dexpilot: Vision-based teleoperation of dexterous robotic hand-arm system. In 2020IEEE International Conference on Robotics and Automation (ICRA) , pages 9164–9170, 2020.doi:10.1109/ICRA40945.2020.9197124.[7] Y . Wang, W. Huang, B. Fang, F. Sun, and C. Li. Elastic tactile simulation towards tactile-visualperception. In Proceedings of the 29th ACM International Conference on Multimedia , pages2690–2698, 2021.[8] Z.-H. Yin, B. Huang, Y . Qin, Q. Chen, and X. Wang. Rotating without seeing: Towards in-handdexterity through touch. arXiv preprint arXiv:2303.10880 , 2023.[9] H. Lee, H. Park, G. Serhat, H. Sun, and K. J. Kuchenbecker. Calibrating a soft ert-based tactilesensor with a multiphysics model and sim-to-real transfer learning. In 2020 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 1632–1638. IEEE, 2020.[10] S. Young, J. Pari, P. Abbeel, and L. Pinto. Playful interactions for representation learning.arXiv preprint arXiv:2107.09046 , 2021.[11] M. T. Ciocarlie, C. Goldfeder, and P. K. Allen. Dexterous grasping via eigengrasps : A low-dimensional approach to a high-complexity problem. In Dexterous Grasping via Eigengrasps: A Low-dimensional Approach to a High-complexity Problem , 2007.[12] V . Kumar, Y . Tassa, T. Erez, and E. Todorov. Real-time behaviour synthesis for dynamic hand-manipulation. 
In 2014 IEEE International Conference on Robotics and Automation (ICRA) ,pages 6808–6815. IEEE, 2014.[13] S. Shigemi. ASIMO and Humanoid Robot Research at Honda , pages 1–36. Springer Nether-lands, Dordrecht, 2018. ISBN 978-94-007-7194-9. doi:10.1007/978-94-007-7194-9 9-2.URL https://doi.org/10.1007/978-94-007-7194-9_9-2 .9[14] A. M. Okamura, N. Smaby, and M. R. Cutkosky. An overview of dexterous manipulation. InProceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Roboticsand Automation. Symposia Proceedings (Cat. No. 00CH37065) , volume 1, pages 255–262.IEEE, 2000.[15] L. U. Odhner, L. P. Jentoft, M. R. Claffee, N. Corson, Y . Tenzer, R. R. Ma, M. Buehler, R. Ko-hout, R. D. Howe, and A. M. Dollar. A compliant, underactuated hand for robust manipulation.The International Journal of Robotics Research , 33(5):736–752, 2014.[16] S. Wang, Y . She, B. Romero, and E. Adelson. Gelsight wedge: Measuring high-resolution 3dcontact geometry with a compact robot finger, 2021. URL https://arxiv.org/abs/2106.08851 .[17] R. M. Bhirangi, T. L. Hellebrekers, C. Majidi, and A. Gupta. Reskin: versatile, replaceable,lasting tactile skins. CoRR , abs/2111.00071, 2021. URL https://arxiv.org/abs/2111.00071 .[18] A. Alspach, K. Hashimoto, N. Kuppuswamy, and R. Tedrake. Soft-bubble: A highly com-pliant dense geometry tactile sensor for robot manipulation. In 2019 2nd IEEE InternationalConference on Soft Robotics (RoboSoft) , pages 597–604. IEEE, 2019.[19] R. Patel, R. Ouyang, B. Romero, and E. Adelson. Digger finger: Gelsight tactile sensor forobject identification inside granular media. In Experimental Robotics: The 17th InternationalSymposium , pages 105–115. Springer, 2021.[20] S. Dong, W. Yuan, and E. H. Adelson. Improved gelsight tactile sensor for measuring geom-etry and slip. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 137–144. IEEE, 2017.[21] T. Kelestemur, R. Platt, and T. Padir. Tactile pose estimation and policy learning for unknownobject manipulation. arXiv preprint arXiv:2203.10685 , 2022.[22] R. Dahiya, N. Yogeswaran, F. Liu, L. Manjakkal, E. Burdet, V . Hayward, and H. J ̈orntell.Large-area soft e-skin: The challenges beyond sensor designs. Proceedings of the IEEE , 107(10):2016–2033, 2019. doi:10.1109/JPROC.2019.2941366.[23] T. P. Tomo, M. Regoli, A. Schmitz, L. Natale, H. Kristanto, S. Somlor, L. Jamone, G. Metta,and S. Sugano. A new silicone structure for uskin—a soft, distributed, digital 3-axis skin sensorand its integration on the humanoid robot icub. IEEE Robotics and Automation Letters , 3(3):2584–2591, 2018. doi:10.1109/LRA.2018.2812915.[24] A. Murali, Y . Li, D. Gandhi, and A. Gupta. Learning to grasp without seeing. In J. Xiao,T. Kr ̈oger, and O. Khatib, editors, Proceedings of the 2018 International Symposium on Ex-perimental Robotics , pages 375–386, Cham, 2020. Springer International Publishing. ISBN978-3-030-33950-0.[25] R. Calandra, A. Owens, D. Jayaraman, J. Lin, W. Yuan, J. Malik, E. H. Adelson, and S. Levine.More than a feeling: Learning to grasp and regrasp using vision and touch. IEEE Robotics andAutomation Letters , 3(4):3300–3307, 2018.[26] Y . She, S. Wang, S. Dong, N. Sunil, A. Rodriguez, and E. Adelson. Cable manipulation with atactile-reactive gripper, 2019. URL https://arxiv.org/abs/1910.02860 .[27] M. Zambelli, Y . Aytar, F. Visin, Y . Zhou, and R. Hadsell. Learning rich touch representationsthrough cross-modal self-supervision. In Conference on Robot Learning , pages 1415–1425.PMLR, 2021.[28] S. 
Wang, J. Wu, X. Sun, W. Yuan, W. T. Freeman, J. B. Tenenbaum, and E. H. Adelson. 3dshape perception from monocular vision, touch, and shape priors. In 2018 IEEE/RSJ Interna-tional Conference on Intelligent Robots and Systems (IROS) , pages 1606–1613. IEEE, 2018.10[29] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In NeurIPS , pages305–313, 1989.[30] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mor-datch, and J. Tompson. Implicit behavioral cloning. In Conference on Robot Learning , pages158–168. PMLR, 2022.[31] N. M. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning$k$ modes with one stone. In Advances in Neural Information Processing Systems , 2022. URLhttps://openreview.net/forum?id=agTr-vRQsa .[32] Y . Zhu, A. Joshi, P. Stone, and Y . Zhu. Viola: Imitation learning for vision-based manipulationwith object proposal priors. arXiv preprint arXiv:2210.11339 , 2022.[33] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrationsfor robot manipulation. arXiv preprint arXiv:2108.03298 , 2021.[34] P. Sermanet, C. Lynch, Y . Chebotar, J. Hsu, E. Jang, S. Schaal, S. Levine, and G. Brain.Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE internationalconference on robotics and automation (ICRA) , pages 1134–1141. IEEE, 2018.[35] J. Pari, N. M. Shafiullah, S. P. Arunachalam, and L. Pinto. The surprising effectiveness ofrepresentation learning for visual imitation, 2021.[36] A. Zhan, R. Zhao, L. Pinto, P. Abbeel, and M. Laskin. A framework for efficient roboticmanipulation. In Deep RL Workshop NeurIPS 2021 , 2020.[37] C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and P. Sermanet. Learninglatent plans from play, 2019.[38] Z. J. Cui, Y . Wang, N. Muhammad, L. Pinto, et al. From play to policy: Conditional behaviorgeneration from uncurated robot data. arXiv preprint arXiv:2210.10047 , 2022.[39] D. Yarats, D. Brandfonbrener, H. Liu, M. Laskin, P. Abbeel, A. Lazaric, and L. Pinto. Don’tchange the algorithm, change the data: Exploratory data for offline reinforcement learning.arXiv preprint arXiv:2201.13425 , 2022.[40] D. Brandfonbrener, S. Tu, A. Singh, S. Welker, C. Boodoo, N. Matni, and J. Varley. Visualbacktracking teleoperation: A data collection protocol for offline image-based reinforcementlearning, 2022. URL https://arxiv.org/abs/2210.02343 .[41] P. Rochat. Object manipulation and exploration in 2-to 5-month-old infants. DevelopmentalPsychology , 25(6):871, 1989.[42] J. E. Hoch, S. M. O'Grady, and K. E. Adolph. It's the journey, not the destination: Locomotorexploration in infants. Developmental Science , 22(2), Oct. 2018. doi:10.1111/desc.12740.URL https://doi.org/10.1111/desc.12740 .[43] L. Ericsson, H. Gouk, C. C. Loy, and T. M. Hospedales. Self-supervised representation learn-ing: Introduction, advances, and challenges. IEEE Signal Processing Magazine , 39(3):42–62,2022.[44] J.-B. Grill, F. Strub, F. Altch ́e, C. Tallec, P. Richemond, E. Buchatskaya, C. Doersch,B. Avila Pires, Z. Guo, M. Gheshlaghi Azar, et al. Bootstrap your own latent-a new approachto self-supervised learning. NeurIPS , 2020.[45] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutionalneural networks. In Advances in neural information processing systems , pages 1097–1105,2012.11[46] K. He, X. Zhang, S. Ren, and J. Sun. 
Deep residual learning for image recognition. In Pro-ceedings of the IEEE conference on computer vision and pattern recognition , pages 770–778,2016.[47] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierar-chical image database. In 2009 IEEE conference on computer vision and pattern recognition ,pages 248–255. Ieee, 2009.[48] G. Zhou, V . Dean, M. K. Srirama, A. Rajeswaran, J. Pari, K. B. Hatch, A. Jain, T. Yu,P. Abbeel, L. Pinto, C. Finn, and A. Gupta. Train offline, test online: A real robot learn-ing benchmark. In Deep Reinforcement Learning Workshop NeurIPS 2022 , 2022. URLhttps://openreview.net/forum?id=eqrVnNgkYWZ .[49] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. arXiv preprint arXiv:2203.12601 , 2022.[50] I. Radosavovic, T. Xiao, S. James, P. Abbeel, J. Malik, and T. Darrell. Real-world robotlearning with masked visual pre-training, 2022. URL https://arxiv.org/abs/2210.03109 .[51] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In NIPS , 1989.[52] L. Sievers, J. Pitz, and B. B ̈auml. Learning purely tactile in-hand manipulation with a torque-controlled hand. In 2022 International Conference on Robotics and Automation (ICRA) , pages2745–2751, 2022. doi:10.1109/ICRA46639.2022.9812093.[53] D. Reynolds. Gaussian Mixture Models , pages 659–663. Springer US, Boston, MA, 2009.ISBN 978-0-387-73003-5. doi:10.1007/978-0-387-73003-5 196. URL https://doi.org/10.1007/978-0-387-73003-5_196 .[54] I. Mordatch, Z. Popovi ́c, and E. Todorov. Contact-invariant optimization for hand manipula-tion. In Proceedings of the ACM SIGGRAPH/Eurographics symposium on computer anima-tion, pages 137–144, 2012.[55] K. Lowrey, A. Rajeswaran, S. Kakade, E. Todorov, and I. Mordatch. Plan online, learn offline:Efficient learning and exploration via model-based control. arXiv preprint arXiv:1811.01848 ,2018.[56] A. Nagabandi, K. Konoglie, S. Levine, and V . Kumar. Deep dynamics models for learningdexterous manipulation. arXiv , 2019.[57] W. Huang, I. Mordatch, P. Abbeel, and D. Pathak. Generalization in dexterous manipulationvia geometry-aware multi-task learning. arXiv preprint arXiv:2111.03062 , 2021.[58] T. Chen, J. Xu, and P. Agrawal. A system for general in-hand object re-orientation. Conferenceon Robot Learning , 2021.[59] H. Zhu, A. Gupta, A. Rajeswaran, S. Levine, and V . Kumar. Dexterous manipulation with deepreinforcement learning: Efficient, general, and low-cost. In ICRA , 2019.[60] K. Lowrey, S. Kolev, J. Dao, A. Rajeswaran, and E. Todorov. Reinforcement learning fornon-prehensile manipulation: Transfer from simulation to physical system. In 2018 IEEEInternational Conference on Simulation, Modeling, and Programming for Autonomous Robots(SIMPAR) , pages 35–42. IEEE, 2018.[61] A. Handa, A. Allshire, V . Makoviychuk, A. Petrenko, R. Singh, J. Liu, D. Makoviichuk,K. Van Wyk, A. Zhurkevich, B. Sundaralingam, Y . Narang, J.-F. Lafleche, D. Fox, and G. State.Dextreme: Transfer of agile in-hand manipulation from simulation to reality. arXiv , 2022.12[62] A. Rajeswaran, V . Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine.Learning complex dexterous manipulation with deep reinforcement learning and demonstra-tions. In RSS, 2018.[63] C. Finn, X. Y . Tan, Y . Duan, T. Darrell, S. Levine, and P. Abbeel. Deep spatial autoencodersfor visuomotor learning. In 2016 IEEE International Conference on Robotics and Automation(ICRA) , pages 512–519. IEEE, 2016.[64] D. 
Ha and J. Schmidhuber. World models. arXiv preprint arXiv:1803.10122 , 2018.[65] L. Pinto, D. Gandhi, Y . Han, Y .-L. Park, and A. Gupta. The curious robot: Learning visualrepresentations via physical interactions. In ECCV , 2016.[66] P. R. Florence, L. Manuelli, and R. Tedrake. Dense object nets: Learning dense visual objectdescriptors by and for robotic manipulation. In Conference on Robot Learning , pages 373–385.PMLR, 2018.[67] B. Chen, A. Sax, G. Lewis, I. Armeni, S. Savarese, A. Zamir, J. Malik, and L. Pinto. Ro-bust policies via mid-level visual representations: An experimental study in manipulation andnavigation. arXiv preprint arXiv:2011.06698 , 2020.[68] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learningof visual representations. arXiv preprint arXiv:2002.05709 , 2020.[69] M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, and A. Joulin. Unsupervised learning ofvisual features by contrasting cluster assignments. Advances in neural information processingsystems , 33:9912–9924, 2020.[70] A. Bardes, J. Ponce, and Y . LeCun. Vicreg: Variance-invariance-covariance regularization forself-supervised learning. arXiv preprint arXiv:2105.04906 , 2021.[71] D. Yarats, R. Fergus, A. Lazaric, and L. Pinto. Reinforcement learning with prototypical rep-resentations. In International Conference on Machine Learning , pages 11920–11931. PMLR,2021.[72] M. Laskin, A. Srinivas, and P. Abbeel. Curl: Contrastive unsupervised representations forreinforcement learning. In International Conference on Machine Learning , pages 5639–5650.PMLR, 2020.[73] D. Niizumi, D. Takeuchi, Y . Ohishi, N. Harada, and K. Kashino. BYOL for Audio: Exploringpre-trained general-purpose audio representations. IEEE/ACM Transactions on Audio, Speech,and Language Processing , 31:137–151, 2023. ISSN 2329-9304. doi:10.1109/TASLP.2022.3221007. URL http://dx.doi.org/10.1109/TASLP.2022.3221007 .[74] M. Afham, I. Dissanayake, D. Dissanayake, A. Dharmasiri, K. Thilakarathna, and R. Rodrigo.Crosspoint: Self-supervised cross-modal contrastive learning for 3d point cloud understanding.InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 9902–9912, 2022.13AppendixA Related WorkA.1 Learning in Dexterous ManipulationThere are several methodologies for using learning in dexterity. Model-based reinforcement learn-ing (RL) methods have been shown to work in both simulation [54, 55] and the real world [56, 1].Model-free RL has been used to train policies both in simulation [57, 58] and directly on hard-ware [59]. Simulation to real transfer has also shown success [60, 2, 61, 52], though it often requiresextensive randomization, significantly increasing training time. The use of expert demonstrationscan reduce the amount of real-world interactions needed to learn a dexterous policy [62, 59, 4].The works mentioned above either use visual observations or estimates of object state, which sufferduring heavy occlusion of the object.A.2 Representation Learning for RoboticsLearning concise representations from high-dimensional observations is an active area of research inrobotics. A wide variety of approaches using auto-encoders [63, 50, 64], physical interaction [65],dense descriptors [66], and mid-level features [67] have been studied.In computer vision, self-supervised learning (SSL) is often used to pre-train visual features fromunlabeled data, improving downstream task performance. 
Contrastive methods learn features bymoving features of similar observations closer to one another and features of dissimilar observationsfarther from one another [68, 69]. These methods require sampling negative pairs of datapoints,which adds an additional layer of complexity. Non-contrastive methods typically try to learn fea-tures by making augmented versions of the same observation close [44, 70] and do not requiresampling negative examples. Self-supervision has been adopted for visual RL [71, 72, 49, 50] androbotics [34, 35, 5, 36] to improve sample efficiency and asymptotic performance. SSL methodshave also been applied to other sensory inputs like audio [73] and depth [74]. We build on this ideaof self-supervision and extend it to tactile observations. However, unlike visual data, for which largepretrained models or Internet data exists, neither are available for tactile data. This necessitates thecreation of large tactile datasets, which we generate through robot play.B Experiment DetailsB.1 System Details and Robot SetupRealsense CamerasOculus HeadsetKinova JoystickKinova Robot ArmAllegro HandFigure 8: Hardware setting of T-D EX. We use an Oculus Headsetto teleoperate the Allegro hand and the built in Kinova joystickto control the arm. Visual observations are streamed through twodifferent Realsense cameras and tactile observations are saved withXELA touch sensors on the Allegro hand.Our robotic system, visualized in Fig-ure 8, consists of a robotic arm andhand. The arm is a 6-dof Kinova Jacoand the hand is a 16-dof Allegro handwith four fingers. The arm can beteleoperated through the built-in Ki-nova joystick, while the the hand canbe teleoperated using the the Holo-Dex framework [5]. Here, our teleop-erator uses a virtual reality headset toboth visualize robot images and con-trol the hand in real time. The headsetreturns a pose estimate for each fin-ger of the hand which is re-targetedto the Allegro Hand. Inverse Kine-matics is then used to translate targetCartesian positions in space to joint14Cup UnstackingBowl UnstackingBottle OpeningBook OpeningJoystick MovementFigure 9: Visualization of all robot rollouts from T-D EXpolicies. Note the severe visual occlusions when therobot makes contact with the object.angles, which are fed into the low-level hand controller. To achieve robust position control, we usea low-level PD joint position controller with gravity compensation to allow the robot to maintain ahand pose at different orientations in space. Our action space is Cartesian position and orientationof the arm (3D position and 4D quaternion for orientation) and the 16-dimensional joint state of thehand for a total of 23 dimensions.The Allegro hand is fitted with 15 XELA uSkin tactile sensors, 4 on each finger and 3 on thethumb. Each sensor has a 4x4 resolution output of tri-axial force reading (forces in translationalx, y, z) information, which amounts to a 720-dimensional tactile reading. The force readings areuncalibrated, susceptible to hysterisis, and can change when strong magnets or metals are in thevicinity. Due to this, we opt against explicit calibration of the 720 sensor units. To supplement thetactile sensors, we also use two RGB cameras with 640x480 resolution to capture visual informationin the scene, though our policies only uses information from one to execute. 
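As a small aside, the 23-dimensional action described above can be organized as in the sketch below. The dataclass and the ordering of the flattened vector are our own illustrative assumptions, not the authors' interface.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TDexAction:
    """23-D action described in Appendix B.1: 16 Allegro joint targets plus a
    3-D Kinova end-effector position and a 4-D orientation quaternion."""
    hand_joints: np.ndarray    # (16,) absolute joint angles
    ee_position: np.ndarray    # (3,) Cartesian position
    ee_quaternion: np.ndarray  # (4,) unit quaternion orientation

    def flatten(self) -> np.ndarray:
        # Ordering within the flattened vector is an assumption.
        return np.concatenate([self.hand_joints, self.ee_position, self.ee_quaternion])

    @classmethod
    def from_vector(cls, a: np.ndarray) -> "TDexAction":
        assert a.shape == (23,)
        return cls(a[:16], a[16:19], a[19:23])
```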
Our choice of camerafor tasks depends on which one captures the most visual information about the objects to ensurefairness when comparing to baselines and enable better joint vision and tactile control.B.2 Task DetailsHere we explain the details of our tasks and visualize the robot rollouts for each of them in Figure 9.1.Joystick Movement: Starting over an arcade gamepad, the hand is tasked with movingdown and pulling a joystick backwards. This task is difficult because the hand occludes thegamepad when manipulating it. We collect demonstrations of the joystick in two differentpositions and evaluate on different positions and orientations not seen during training. Atrial is successful if the joystick has been pulled within 60 seconds.2.Bottle Opening: This task requires the hand to open the lid of a bottle. We collect threedemonstrations with the bottle orientation requiring the use of the thumb, and three otherrequiring the use of the middle finger. The task is considered successful if the lid is openwithin 120 seconds.3.Cup Unstacking: Given two cups stacked inside one another, the tasks is to remove thesmaller cup from the inside of the larger one. In addition to occlusion, this task requiresmaking contact both the inner and outer cups before lifting the inner cup with the indexfinger. It is considered a success if the smaller cup is raised outside the larger cup withoutdropping it or knocking the cup off the table within 240 seconds.154.Bowl Unstacking: This task is similar to the previous, but with bowls instead of cups.Since the bowls are larger, multiple fingers are required to lift and stabilize them. A run issuccessful if it has lifted the bowl within 100 seconds.5.Book Opening: This task requires opening a book with three fingers. After making contactwith the cover, the hand must pull up with an arm movement, remaining in contact until itis fully open. The task is considered a success if the book is open within 300 seconds.B.3 Baseline DetailsHere we explain the details of all our baselines. Unless explicitly explained all the image featuresare received by an image encoder trained with BYOL on the task-specific demonstrations and allthe tactile features are received by a tactile encoder pre-trained on all the play data. Success rates ofeach of the baselines are shown in 1.1.Behavior Cloning (BC) [51] : We train a neural network end-to-end to map from visual andtactile features to actions.2.Nearest Neighbors with Image only (NN-Image) [35] : We perform nearest neighbors withthe image features only. During evaluation, to ensure fairness we use viewpoints that canconvey maximal information about the scene.3.Nearest Neighbors with Torque and Image (NN-Torque) [52] : We perform nearest neigh-bors with the output torques from our PD controller and visual observations. The torquetargets can be used as a proxy for force, providing some tactile information.4.Nearest Neighbors with Tactile only (NN-Tactile) : Nearest neighbors with the tactile fea-tures trained on play data. Unlike T-D EXwe do not use vision data for this baseline.5.Nearest Neighbors with Tactile Trained on Task Data (NN-Task) : Instead of training thetactile encoder on the play data, we train it on the 6 task-specific demonstrations.6.Behavior Transformer (BET) [31] : We train a transformer to predict action modes and off-sets given image and tactile features. 
This method is known to capture the multi-modalityof the environment given unlabeled multi-task data.7.Implicit Behavioral Cloning (IBC) [30] : We train a model that outputs the energy of therepresentation space given image and tactile features and optimize for actions that minimizethe energy landscape. This method is proven to capture the stochastic nature of the envi-ronments and recover better in out-of distribution modes compared to the explicit behaviorcloning (BC) approach.8.Gaussian Mixture Models [53] on top of BC (BC-GMM) : We fit a Gaussian Mixture Modelon the action space and sample actions from a weighted sum of Gaussian component den-sities from given image and tactile features.9.Nearest Neighbors with Tactile Trained on Play Data ( T-D EX): This is our main methodwith the tactile encoder pre-trained on all the play data followed by nearest neighbor re-trieval on task data.For simplicity, we secure the joystick, bottle, and book to the table. This mimics having anothermanipulator keep the object in place while the hand manipulates the object. The bowls and cups arenot secured, making the problem of unstacking much more difficult.To ensure fair evaluation, we start each method with the object in the same configuration for eachindex trial. This corresponds to 10 different starting positions, each of which is used at the start ofeach baseline run.C Tactile Data DetailsIn this section we give more details regarding the mapping of the tactile readings to our robotic setupand the importance of each of the sensor readings to our tasks.16C.1 Mapping of the tactile imagesFigure 10 showcase where each tactile pad is on the Allegro hand and how they are mapped to tactilereadings and images.Tactile ReadingsTactile ImageHeatmap of Usage of Each Tactile PadBowl UnstackingBottle Cap OpeningCup UnstackingJoystick MovementBook Opening45678910111213141512323456789101112131415123456789101112131415Figure 10: We show the location of each tactile pad and how they are mapped to the corresponding tactilereadings and images. Locations could be observed by following the numbered pads and visualizations.C.2 Contribution of each tactile reading to tasksIn order to explore the contribution of each tactile reading to our tasks we created a figure to showthe heatmaps for the variances of each taxel through robot trajectories for different tasks. We observethat the lower segments of the fingers have very high variance due to the lower segments touchingthe palm of the hand in most of the tasks. The variance in the other segments show the importanceof each pad as follows:•Bowl Unstacking : The 2nd and 3rd segment tactile pads of the thumb.•Bottle Cap Opening : Tips of middle, index and the thumb.•Cup Unstacking : Tip, 2nd and 3rd segment of the thumb.•Joystick Movement : Tips of index, middle and ring finger.•Book Opening : Tips of index, middle and ring finger and the 3rd segment of the thumb.Heatmaps for the variances can be seen in Figure 11.D Model DetailsHere we provide additional details about our method and baselines for easier reproduction.For all image-based models, we normalize the inputs based on the mean and standard deviation ofthe data seen during training. For the tactile-based models, we normalize the inputs to be within therange [0,1].D.1 BYOL DetailsBootstrap your own Latent: BYOL has both a primary encoder fθ, and a target encoder fξ, whichis an exponential moving average of the primary. 
Two augmented views of the same observation oando′are fed into each to produce representations yandy′, which are passed through projectors gθ17Tactile ReadingsTactile ImageHeatmap of Usage of Each Tactile PadBowl UnstackingBottle Cap OpeningCup UnstackingJoystick MovementBook Opening45678910111213141512323456789101112131415123456789101112131415Figure 11: Heatmaps of the variances of each taxels through all of our robot trajectories.andgξto produce zandz′, which are higher dimensional. The primary encoder and projector arethen tasked with predicting the output of the target projector. After training, we use fθto extractfeatures from observations.Using Tactile Readings with BYOL: After getting the tactile images as mentioned in 3.1, wescale the tactile image up to 224x224 to work with standard image encoders. For the majorityof our experiments, we use the AlexNet [45] architecture, also starting with pre-trained weights.Unlike SSL techniques in vision [44], we only apply the Gaussian blur and small random resizedcrop augmentations, since other augmentations such as color jitter and grayscale would violate theassumption that augmentations do not change the tactile signal significantly. Importantly, unlikevision, since all of the tactile data is collected in the frame of the hand, the sensor readings areinvariant to hand pose and can be easily reused between tasks.The complete list of BYOL hyperparameters has been provided in Table 3. We take the model withthe lowest training loss out of all the epochs.D.2 Nearest Neighbors DetailsWe give equal weight to visual and tactile distances for all of the tasks except bottle cap, wheretactile and image features were given weights of 2 and 1, respectively. We do this because thequality of the neighbors on image data was poor and emphasizing the tactile data slightly vastlyimproves performance.While executing NN imitation, we keep a buffer of recently executed neighbors that we call thereject buffer. Given a new observation, we pick the first nearest neighbor not in the reject buffer.This prevents the policy from getting stuck in loops if a chain of neighbors and actions are cyclical.We set the reject buffer size to 10 for every task except Joystick Pulling, which is set to 3. Thebuffer, combined with the 2cm spatial subsampling are critical for the success of NN policies.D.3 BC DetailsWe train BC end-to-end using standard MSE loss on the actions with the same learning rate asBYOL and a batch size of 64.18Parameter ValueOptimizer AdamLearning rate 1e−3Weight decay 1e−5Max epochs 1000Batch size (Tactile) 1024Batch size (Image) 64Aug. (Tactile) Gaussian Blur (3x3) (1.0, 2.0) p= 0.5Random Resize Crop (0.9, 1.0) p= 0.5Aug. (Image) Color Jitter (0.8, 0.8, 0.8, 0.2) p= 0.2Gaussian Blur (3x3) (1.0, 2.0) p= 0.2Random Grayscale p= 0.2Table 3: BYOL Hyperparameters.0 100 200 300 400 500 600 700Number of Components0.50.60.70.80.91.0Explained Variance RatioExplained Variance vs Number of ComponentsFigure 12: Explained variance ratio for PCA on the play tactile data. Most variance is captured in the first 100components.D.4 NN-Torque DetailsOur hand does not have torque sensors, but is actuated by torque targets from a low-level PD positioncontroller. We use the torque targets as a proxy for torque information since the desired torque willbe higher when the finger is in contact with an object, but trying to move further inside, and lowerwhen it is not in contact.D.5 PCA DetailsWe run PCA on the tactile play data and take the top 100 components for use as features. 
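A minimal sketch of this PCA baseline, assuming scikit-learn and a hypothetical `play_tactile.npy` dump of flattened 720-D readings:

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of the PCA baseline in Appendix D.5: fit 100 components on the
# flattened 720-D play-data readings, then use the projections in place of
# the learned tactile encoder for nearest-neighbor retrieval.
play_tactile = np.load("play_tactile.npy")      # hypothetical (N, 720) array
pca = PCA(n_components=100).fit(play_tactile)
print("explained variance:", pca.explained_variance_ratio_.sum())  # ~0.95 reported

def pca_tactile_features(raw_720d):
    """Project raw tactile readings onto the top-100 principal components."""
    return pca.transform(np.atleast_2d(raw_720d))
```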
Thecaptured variance is about 95% and the entire explained variance ratio can be seen in Figure 12.By visualizing the reconstructions (Figure 13), we can see that it retains a majority of the tactileinformation.D.6 Shuffled Pad DetailsFor this experiment, we permute the position of the 15 4x4 tactile sensors using the same permutationfor both pretraining and deployment. This ensures that we’re inputting the same data from eachsensor to each location in the tactile image, but does not leverage the spatial locations of the pads on19Original Tactile Data PCA ReconstructionOriginal Tactile Data PCA ReconstructionFigure 13: Tactile data and the PCA reconstruction of two using 100 components for two tactile readings. Mostof the information is preserved, but we can see minor differences in magnitude and offset.Bottle OpeningNN-Image NN-tactile T-DexFigure 14: Additional rollouts for the Bottle Opening task.the hand. If spatial layout had no effect, we would expect no difference in the performance betweenthis and T-D EX.E Additional RolloutsWe visualize extra rollouts for each task in Figures 14-18.F Tactile Image VisualizationWe visualize tactile images for each task in Figures 19-23.20Book OpeningNN-Image NN-tactile T-DexFigure 15: Additional rollouts for the Book Opening task.Bowl UnstackingNN-Image NN-tactile T-DexFigure 16: Additional rollouts for the Bowl Unstacking task.21Cup UnstackingNN-Image NN-tactile T-DexFigure 17: Additional rollouts for the Cup Unstacking task.Joystick PullingNN-Image NN-tactile T-DexFigure 18: Additional rollouts for the Joystick Pulling task.22Figure 19: Tactile Image for the Bottle Opening task.Figure 20: Tactile Image for the Book Opening task.23Figure 21: Tactile Image for the Bowl Unstacking task.Figure 22: Tactile Image for the Cup Unstacking task.24Figure 23: Tactile Image for the Joystick Pulling task.25 |
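To close out the appendix, the BYOL objective described in Appendix D.1 can be summarized in a compact PyTorch-style sketch. The projector/predictor sizes, the EMA rate, and the module names below are illustrative assumptions rather than the exact training code; see Table 3 for the actual hyperparameters used.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class BYOL(nn.Module):
    """Compact BYOL-style objective: an online encoder/projector/predictor is
    trained to match the projection produced by an EMA target network on a
    second augmentation of the same tactile image."""

    def __init__(self, encoder: nn.Module, feat_dim: int, proj_dim: int = 256, tau: float = 0.99):
        super().__init__()
        self.online_enc = encoder
        self.online_proj = nn.Sequential(nn.Linear(feat_dim, proj_dim), nn.ReLU(),
                                         nn.Linear(proj_dim, proj_dim))
        self.predictor = nn.Sequential(nn.Linear(proj_dim, proj_dim), nn.ReLU(),
                                       nn.Linear(proj_dim, proj_dim))
        # Target network: a frozen copy updated by exponential moving average.
        self.target_enc = copy.deepcopy(encoder)
        self.target_proj = copy.deepcopy(self.online_proj)
        for p in list(self.target_enc.parameters()) + list(self.target_proj.parameters()):
            p.requires_grad = False
        self.tau = tau

    @torch.no_grad()
    def update_target(self):
        online = list(self.online_enc.parameters()) + list(self.online_proj.parameters())
        target = list(self.target_enc.parameters()) + list(self.target_proj.parameters())
        for o, t in zip(online, target):
            t.data = self.tau * t.data + (1.0 - self.tau) * o.data

    def loss(self, view1, view2):
        # Symmetrized negative cosine similarity between the online prediction
        # and the stop-gradient target projection of the other view.
        def one_side(a, b):
            p = F.normalize(self.predictor(self.online_proj(self.online_enc(a))), dim=-1)
            with torch.no_grad():
                z = F.normalize(self.target_proj(self.target_enc(b)), dim=-1)
            return 2.0 - 2.0 * (p * z).sum(dim=-1)
        return (one_side(view1, view2) + one_side(view2, view1)).mean()
```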
psLlVbTFBua | FlowBot++: Learning Generalized ArticulatedObjects Manipulation via Articulation ProjectionHarry Zhang, Ben Eisner, David HeldRobotics Institute, School of Computer ScienceCarnegie Mellon University, United States{haolunz, baeisner, dheld }@andrew.cmu.eduAbstract: Understanding and manipulating articulated objects, such as doors anddrawers, is crucial for robots operating in human environments. We wish to de-velop a system that can learn to articulate novel objects with no prior interaction,after training on other articulated objects. Previous approaches for articulated ob-ject manipulation rely on either modular methods which are brittle or end-to-endmethods, which lack generalizability. This paper presents FlowBot++, a deep 3Dvision-based robotic system that predicts dense per-point motion and dense ar-ticulation parameters of articulated objects to assist in downstream manipulationtasks. FlowBot++ introduces a novel per-point representation of the articulatedmotion and articulation parameters that are combined to produce a more accurateestimate than either method on their own. Simulated experiments on the PartNet-Mobility dataset validate the performance of our system in articulating a widerange of objects, while real-world experiments on real objects’ point clouds anda Sawyer robot demonstrate the generalizability and feasibility of our system inreal-world scenarios. Videos are available on our anonymized website here.Keywords: Articulated Objects, 3D Learning, ManipulationFigure 1: FlowBot++ in Action. The system first observes a point cloud observation of an articulated objectand estimates the object’s Articulation Flow and Articulation Projection to infer the articulation axis. Then theinferred axis is used to output a smooth trajectory to actuate the object.1 IntroductionUnderstanding and manipulating articulated objects such as doors and drawers is a key skill forrobots operating in human environments. Unlike previous work [1, 2] that learns to build the kine-matic structure of an object through experience, we aim to teach a robot to manipulate novel artic-ulated objects, transferring knowledge from prior experience with other articulated objects. Whilehumans can manipulate novel articulated objects, constructing robotic manipulation agents that cangeneralize in the same way poses significant challenges, since the complex structure of such objectsrequires three-dimensional reasoning of their parts and functionality. Due to the large number ofcategories of such objects and intra-class variations of the objects’ structure and kinematics, it isdifficult to train perception and manipulation systems that can generalize across such variations.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.One approach used in previous work estimates the kinematic tree of an articulated object, using amodular pipeline of segmentation, connectivity estimation, and articulation-parameter estimation [3,4, 5]; however, such a modular pipeline can be brittle for unseen objects, since a failure in any onecomponent will cause failures in downstream modules. To learn a generalizable perception andmanipulation pipeline, robots need to be robust to variations in the articulated objects’ geometriesand kinematic structures. Another recent approach [6] predicts a dense per-point estimate of howeach point would move under articulated motion, without needing to predict the part’s articulationparameters. 
Experiments demonstrate that this approach shows superior generalization to unseenarticulated objects. However, the motion estimation needs to be run at each time step, which oftenyields jerky motion, costly computation, and poor performance in the face of heavy occlusions.Our approach combines the advantages of both of these approaches: we jointly predict dense per-point motion of how that point would move if the object were to be articulated1(similar to Eisneret al. [6]) as well as a dense per-point representation of the corresponding part’s articulation param-eters. These two predictions are then integrated to produce a more accurate estimate of the object’smotion than either method on their own. Each per-point prediction is grounded in a reference framecentered on that point, which avoids the need for the network to make a prediction relative to anunknown, global coordinate system. By estimating per-point predictions, we leverage the advan-tages of prior work [6] that has shown that per-point predictions enable enhanced generalization todifferent object geometries and kinematics.Our system, FlowBot++, leverages these predictions to produce a smooth sequence of actions thatarticulate the desired part on the object. We train a single 3D perception module across many objectcategories; we evaluate the system on zero-shot manipulation of novel objects, without requiringadditional videos or interactions with each test object. We show that the trained model generalizesto a wide variety of objects – both in seen categories as well as unseen object categories. Thecontributions of this paper include:1. A novel per-point 3D representation of articulated objects that densely models the objects’ in-stantaneous motion under actuation as well as its articulation parameters.2. A novel approach to integrate the per-point motion prediction and per-point articulation predic-tion into a single prediction that outperforms either prediction individually.3. Demonstrations of a robot manipulation system (FlowBot++) that uses the aforementioned per-point predictions to manipulate novel articulated objects in a zero-shot manner. Our experimentsevaluate the performance of our system in articulating a wide range of objects in the PartNet-Mobility dataset as well as in the real-world.2 Related WorkArticulated Object Manipulation : Manipulation of unseen articulated objects and other objectswith non-rigid properties remains an open research area due to the objects’ complex geometries andkinematics. Previous work proposed manipulating such objects using analytical methods, such asthe immobilization of a chain of hinged objects, constraint-aware planning, and relational represen-tations [7, 3, 8]. With the development of larger-scale datasets of articulated objects [9, 10], severalworks have proposed learning methods based on large-scale simulation, supervised visual learningand visual affordance learning [11, 12, 13]. Several works have focused on visual recognition andestimation of articulation parameters, learning to predict the pose [4, 5, 14, 15, 16] and identify ar-ticulation parameters [1, 17] to obtain action trajectories. Statistical motion planning methods couldalso be applied in the scenarios of complex hinged objects manipulation [18, 19, 20]. More closelyrelated to our work is that of Eisner et al. [6], which learns to predict per-point motion of articulatedobjects under instantaneous motion. 
While the per-point motion affordance learning of Eisner et al.[6] outperforms previous approaches, it does not explicitly model the articulation parameters. Thus,the affordance needs to be estimated in each time step, yielding jerky robot trajectories. Concur-1Following the convention of Eisner et al. [6], the system predicts the motion under the “opening” direction,although it can be reversed to perform closing.2rently with our work, Nie et al. [2] proposed a similar idea of estimating articulation parametersfrom interactions to guide downstream manipulation tasks. However, our method does not require apriori interactions with the manipulated objects.Flow for Policy Learning : Optical flow [21] is used to estimate per-pixel correspondences be-tween two images for object tracking and motion prediction and estimation. Current state-of-the-art methods for optical flow estimation leverage convolutional neural networks [22, 23, 24]. Donget al. [25], Amiranashvili et al. [26] use optical flow as an input representation to capture objectmotion for downstream manipulation tasks. Weng et al. [27] use flow to learn a policy for fabricmanipulation. Previous work generalized optical flow beyond pixel space to 3D for robotic manip-ulation [6, 28, 29]. We learn a 3D representation for articulated objects, Articulation Flow , whichdescribes per-point correspondence between two point clouds of the same object and ArticulationProjection , a dense projection representation of the object’s articulation parameters.3 BackgroundIn this paper, we study the task of manipulating an articulated object. We assume that an articulatedobject could be of two classes: prismatic, which is parameterized by a translational axis, or revolute,which is parameterized by a rotational axis. Eisner et al. [6] derived the optimal instantaneousmotion to articulate objects using physics and we review their reasoning here to better motivate ourmethod. Suppose we are able to attach a gripper to some point p∈ P on the surface P ⊂R3of achild link with mass m. At this point, the policy can apply a 3D force F, with constant magnitude||F||to the object at that point. We wish to choose a contact point and force direction that maximizethe acceleration of the articulation’s child link. Both articulation types are parameterized via axesv(t) =ωt+v, where ωis a unit vector that represents the axis direction and vrepresents the originof the axis2(dashed lines in Fig. 2). Based on Newton’s Second Law, Eisner et al. [6] derived thatforprismatic joints , a force Fmaximizes the articulated part’s acceleration if the force’s directionis parallel to the part’s axis ω; the force’s point of exertion does not matter as long as the directionis parallel to ω. For revolute joints , a force Fmaximizes the articulation part’s acceleration whenFis tangent to the circle defined by axis vand radius r, which connects the point of exertion to thenearest point on the axis. Selecting the point that maximizes rproduces maximal linear acceleration.Thus for revolute joints, the optimal choice for the force Fis to pick a point on the articulated partthat is farthest from the axis of rotation vand apply a force tangent to the circle defined by randaxisv. Similar to Eisner et al. 
[6], we predict each point's Articulation Flow, defined as follows: for each point p on each link of the object, define a vector f_p in the direction of the motion of that point caused by an infinitesimal displacement δθ of the joint in the opening direction, normalized by the largest such displacement on the link:

Articulation Flow:  f_p = ω  if prismatic;  f_p = (ω × r) / ||ω × r_max||  if revolute,   (1)

where r is the radius for a contact point and r_max represents the longest radius to the axis on a revolute object. Eisner et al. [6] showed how the predicted Articulation Flow can be used to derive a policy to manipulate previously unseen articulated objects. Below, we will treat the notation for the articulation axis v(t) and v interchangeably.

4 FlowBot++: From New Representations to Smooth Trajectories

Eisner et al. [6] predict the per-point Articulation Flow, which allows the method to avoid needing to directly predict the overall kinematic structure. However, the Articulation Flow needs to be re-predicted at each timestep, since it captures the ideal instantaneous motion of the articulated part under an infinitesimal displacement δθ; after the robot takes a small action, the Articulation Flow f_p for revolute joints will change direction, since the direction of motion caused by an infinitesimal displacement δθ will have changed. Since the motion derived from Articulation Flow by construction only models small movements, this representation lacks the ability to predict a full, multi-step trajectory from the initial observation.

Figure 2: For each point p on the object, Articulation Flow f_p [6] represents its instantaneous motion under a force in the opening direction; our new representation, Articulation Projection r_p, represents the displacement that projects p to the articulation axis v. We train a network to predict both f_p and r_p and combine their predictions to get a smoother and more robust estimate. The purple points represent an interpolated prismatic trajectory of length l_g and an interpolated revolute trajectory of φ_g angle rotation. This corresponds to the trajectory prediction described in Sec. 4.2. (Panels: (a) Prismatic, (b) Revolute.)

4.1 A New Representation of Articulated Objects

In order to overcome the limitations of previous work [6, 12], if we could estimate the articulation parameters v (or, equivalently, ω and v), then we would be able to derive the full trajectory along which any arbitrary point p on the object would move under articulation, subject to the kinematic constraints of v. To achieve this, we formally define a dense 3D visual representation of articulation parameters on top of Articulation Flow; we illustrate each representation graphically in Fig. 2. In this work, we introduce another 3D representation that densely represents the object's articulation parameters, Articulation Projection: for each point on the articulated part of the object, we define a vector r_p that represents the displacement from the point itself to its projection onto the part's articulation axis v(t). For each point p ∈ R^3 of articulated object P, we define the Articulation Projection r_p mathematically as follows:

Articulation Projection:  r_p = proj_v(p) − p = (ωω^T − I)(p − v),   (2)

where ω is a unit vector that represents the axis direction, v represents the origin of the axis, and I represents a 3×3 identity matrix.
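For concreteness, a minimal NumPy sketch of how these two quantities can be computed for a part with a known axis (ω, v); the function and variable names are illustrative rather than taken from the authors' code, and the sign of the "opening" direction depends on the dataset convention:

```python
import numpy as np

def articulation_labels(points, omega, v, joint_type="revolute"):
    """Per-point Articulation Flow (Eq. 1) and Articulation Projection (Eq. 2).

    points: (N, 3) points on the articulated part; omega: (3,) axis direction;
    v: (3,) a point on the axis. Sign conventions may need flipping depending
    on which joint direction counts as "opening".
    """
    omega = omega / np.linalg.norm(omega)
    # Eq. 2: r_p = (omega omega^T - I)(p - v), the displacement from each
    # point to its orthogonal projection onto the articulation axis.
    proj_mat = np.outer(omega, omega) - np.eye(3)
    r = (points - v) @ proj_mat

    if joint_type == "prismatic":
        # Eq. 1, prismatic case: every point moves along the axis direction.
        f = np.tile(omega, (len(points), 1))
    else:
        # Eq. 1, revolute case: motion is tangent to the circle around the axis;
        # radius vectors point from the axis out to each point (= -r_p here).
        radii = -r
        f = np.cross(omega, radii)
        # normalize by the tangential motion of the farthest point on the link
        f = f / (np.linalg.norm(f, axis=1).max() + 1e-9)
    return f, r
```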
Mathematically, r_p is calculated as the difference between the vector from v to p and its projection onto the axis v; equivalently, it is a perpendicular vector from each point to the axis of rotation defined by direction ω and origin v, shown as the blue vectors in Fig. 2. As we will see, Articulation Projection can be used to predict a longer articulation trajectory while still inheriting the generalization properties of prior work [6].

4.2 Manipulation via Learned Articulation Flow and Articulation Projection

As mentioned above, previous work [6] used the Articulation Flow f_p to compute the instantaneous motion direction, which needs to be re-predicted every time step. In contrast, if we were able to infer the articulation parameters (rotation axis ω and its origin v), we could then analytically compute a multi-step articulation trajectory using the initial observation of the object, without the need to stop and re-compute each step to actuate the articulated part as in previous work [6]. However, Eisner et al. [6] have shown that estimating the Articulation Flow results in more generalizable predictions than estimating the articulation parameters directly. We propose to estimate an articulation trajectory from the Articulation Projection. Because the Articulation Projection is a dense per-point prediction with a reference frame defined at each point (similar to Articulation Flow [6]), it inherits the generalization properties of Articulation Flow that allow it to be used for unseen articulated objects, thus hopefully obtaining the best of both worlds. Further, we will combine both predictions in Section 4.3 to obtain an even more accurate prediction.

Figure 3: FlowBot++ System Overview. Our system in deployment first takes as input a partial point cloud observation of an articulated object (a microwave shown here) and uses FlowProjNet to jointly estimate the object's Articulation Flow (top) and Articulation Projection (bottom, points displaced by AP shown). The estimates are then used in the downstream manipulation pipeline, which interpolates and follows the planned trajectory smoothly. Unlike FlowBot3D [6], we do not repeat the estimation every step. Instead, we repeat this loop at a much lower frequency (once every H steps) to improve the smoothness of the planned trajectory.

For revolute objects, we infer the rotation axis using both the predicted Articulation Flow (Eq. 1) and the Articulation Projection (Eq. 2). We then estimate an H-step trajectory of the contact point under articulated motion in the opening direction. To obtain the articulation parameter ω, we calculate the cross product between the articulation flow f_p and the articulation projection r_p, as the articulation axis ω should be perpendicular to both vectors. To compute the origin of the rotation, we use the point displaced by the articulation projection vector: for each point p, p + r_p projects the point onto the rotation axis v, effectively giving us an estimate of the origin of the rotation:

ω̂ = (r_p × f_p) / ||r_p × f_p||,   v̂ = p + r_p.   (3)

These predictions of v̂ are visualized as the blue points in the bottom output branch of Fig. 3.
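A minimal sketch of this estimation step (Eq. 3), including a simple per-part averaging for robustness that mirrors the aggregation described in this section; array shapes and names are illustrative:

```python
import numpy as np

def estimate_revolute_axis(points, flow_pred, proj_pred, part_mask):
    """Estimate the rotation axis direction and origin from network outputs.

    points:    (N, 3) observed point cloud.
    flow_pred: (N, 3) predicted Articulation Flow f_p.
    proj_pred: (N, 3) predicted Articulation Projection r_p.
    part_mask: (N,) boolean mask of the part of interest.
    """
    f = flow_pred[part_mask]
    r = proj_pred[part_mask]
    p = points[part_mask]

    # Eq. 3: the axis direction is perpendicular to both f_p and r_p ...
    omega = np.cross(r, f)
    omega = omega / (np.linalg.norm(omega, axis=1, keepdims=True) + 1e-9)
    # ... and p + r_p lands on the axis, giving a per-point origin estimate.
    origins = p + r

    # Average the per-point estimates over the segmented part.
    omega_hat = omega.mean(axis=0)
    omega_hat = omega_hat / (np.linalg.norm(omega_hat) + 1e-9)
    v_hat = origins.mean(axis=0)
    return omega_hat, v_hat
```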
Using the inferred axis direction ω̂ and origin v̂, we can interpolate a smooth trajectory: given a contact point p, the proposed trajectory should lie on a circle in a plane perpendicular to the inferred axis, parameterized by the normalized direction ω̂ and the origin v̂. Formally, with Rodrigues' rotation formula, given an angle of rotation φ about the normalized vector ω̂, we are able to define a rotation matrix as follows:

R(φ) = I + sin(φ) [ω̂]_× + (1 − cos(φ)) [ω̂]_×^2,   (4)

where I is the identity matrix and [ω̂]_× is the skew-symmetric matrix of ω̂; the point p rotated about this inferred axis by angle φ is given by p' = R(φ)(p − v̂) + v̂. In order to obtain a trajectory, we interpolate K angles between 0 and a goal angle φ_g, and the smooth, K-step trajectory that rotates the contact point p by angle φ_g about the estimated axis parameterized by ω̂ and v̂ is calculated via:

τ_revolute = { R((i/K) φ_g)(p − v̂) + v̂ | i ∈ [0, K] }.   (5)

(In practice, the goal translation distance and goal angle are set large enough that the termination condition of the policy, the part being fully open, can be triggered.)

We learn a separate articulation type classifier f_ψ, which takes as input the point cloud observation of the object and classifies whether the object is prismatic or revolute, to guide the trajectory planning step. For prismatic objects, since the motion direction and the axis direction should be the same, we simply use the normalized flow prediction f_p to compute the prismatic articulation axis, and we use the estimated axis to compute a smooth, K-step trajectory indexed by i that translates the contact point p by a goal translation distance l_g:

ω̂ = f_p / ||f_p||,   τ_prismatic = { p + (i/K) l_g ω̂ | i ∈ [0, K] }.   (6)

In practice, to make the prediction more robust, we use a segmentation mask to average the predictions over all points in a given part to get an aggregated version of ω̂ and v̂. We also deploy an MPC-style controller that replans after taking the first H steps of the trajectory for better performance.

Figure 4: Gram-Schmidt Correction & Performance of Different Methods. In (a), without Gram-Schmidt, the inferred articulation axis ω̂ (green) is not accurate; blue points show the displacement of points by the Articulation Projection r_p. Using Gram-Schmidt, we use the Articulation Flow f_p (red vectors) to correct the axis direction; ω̂ is now perpendicular to the Articulation Flow and aligns better with the ground-truth axis direction. In (b), we show a bar plot of our method compared with several baseline methods on training and testing prismatic/revolute objects using normalized distance (↓). Note the performance gain after correction via Gram-Schmidt (AP Only vs. Combined). Some values are not visible because they are < 0.05.

4.3 Jointly Learning Articulation Flow and Articulation Projection

Empirically, the Articulation Projection prediction is sometimes less accurate than the Articulation Flow prediction. On the other hand, the Articulation Flow represents the instantaneous motion and thus needs to be predicted at each timestep, which prevents multi-step planning. To obtain the best of both worlds, we correct the Articulation Projection prediction r_p with the flow estimate by applying Gram-Schmidt to r_p to make it perpendicular to the Articulation Flow f_p; the correction is computed as r̃_p = r_p − proj_{f_p}(r_p), which is used in place of r_p in Eq. 3. The effect of using Gram-Schmidt is shown in Fig. 4a.

We learn to estimate the Articulation Flow f_p and the Articulation Projection r_p jointly using a deep neural network, which we refer to as FlowProjNet, denoted as f_θ.
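Before turning to how f_θ is trained, the geometry of Eq. 4-6 together with the Gram-Schmidt correction above can be summarized in a short NumPy sketch; the goal angle φ_g and distance l_g below are illustrative values, and the per-part averaging of the predictions is omitted for brevity:

```python
import numpy as np

def rodrigues(omega, phi):
    """Rotation matrix for angle phi about unit axis omega (Eq. 4)."""
    wx = np.array([[0.0, -omega[2], omega[1]],
                   [omega[2], 0.0, -omega[0]],
                   [-omega[1], omega[0], 0.0]])
    return np.eye(3) + np.sin(phi) * wx + (1 - np.cos(phi)) * (wx @ wx)

def interpolate_trajectory(p, f_p, r_p, joint_type, K=20,
                           phi_goal=np.pi / 2, l_goal=0.4):
    """K-step contact-point trajectory from one FlowProjNet prediction."""
    if joint_type == "prismatic":
        # Eq. 6: translate p along the (normalized) flow direction.
        omega = f_p / np.linalg.norm(f_p)
        return [p + (i / K) * l_goal * omega for i in range(K + 1)]

    # Sec. 4.3: make r_p perpendicular to f_p via one Gram-Schmidt step.
    r_tilde = r_p - (np.dot(r_p, f_p) / np.dot(f_p, f_p)) * f_p
    # Eq. 3 with the corrected projection.
    omega = np.cross(r_tilde, f_p)
    omega = omega / np.linalg.norm(omega)
    v = p + r_tilde
    # Eq. 5: rotate the contact point about the inferred axis.
    return [rodrigues(omega, (i / K) * phi_goal) @ (p - v) + v
            for i in range(K + 1)]
```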
We assume that the robot has a depth camera and records point cloud observations O_t ∈ R^{3×N}, where N is the total number of observable points from the sensor. The task is for the robot to articulate a specified part through its entire range of motion. For each configuration of the object, there exists a unique ground-truth Articulation Flow f_t and ground-truth Articulation Projection r_t, given by Equations 1 and 2. Thus, our learning objective is to find a function f_θ(O_t) that predicts Articulation Flow and Articulation Projection directly from point cloud observations. We define the objective as minimizing the joint L2 error of the predictions:

L_MSE = ||(f_t ⊕ r_t) − f_θ(O_t)||_2^2,   (7)

where ⊕ represents concatenation. We train FlowProjNet via supervised learning with this loss. We describe the method to obtain ground-truth labels in Appendix E. We implement FlowProjNet using a segmentation-style PointNet++ [30] as a backbone, and we train a single model across all categories, using a dataset of synthetically generated data tuples based on the ground-truth kinematics provided by the PartNet-Mobility dataset [10]. Details of our dataset construction and model architecture can be found in Appendix E.

5 Experiments

We conduct a wide range of simulated and real-world experiments to evaluate FlowBot++.

Table 1: Normalized Distance Metric Results (↓): normalized distances to the target articulation joint angle after a full rollout across different methods; lower is better. The table reports per-category results for novel instances in the training categories (with average) and for the unseen test categories (with average), comparing UMP-Net [12], Normal Direction, Screw Parameters [1], DAgger E2E [31], FlowBot3D (AF Only) [6], AP Only (Ours), and FlowBot++ (Ours - Combined).

5.1 Simulation Results

To evaluate our method in simulation, we implement a suction gripper in PyBullet. We consider the same subset of PartNet-Mobility as in previous work [12, 6]. Each object starts in the "closed" state (one end of its range of motion), and the goal is to actuate the joint to its "open" state (the other end of its range of motion). During our experiments, we use the same Normalized Distance metric defined in prior work [6, 12].

Baseline Comparisons: We compare our proposed method with several baseline methods: UMP-Net [12], Normal Direction, Screw Parameters [1], DAgger End2End [31], FlowBot3D (AF Only) [6], which only uses Articulation Flow, and our model without the Gram-Schmidt correction ("AP Only"), which only uses the inferred Articulation Parameters. Please refer to Appendix B for more details.
Each method above consists of a single model trained across all PartNet-Mobility training categories.

Figure 5: Comparison of Angular Acceleration and Opening Trajectory between FlowBot3D and FlowBot++. In (a), we show scatter plots of two 20-step trajectories of the contact point on a revolute object using FlowBot3D (left) and FlowBot++ (right), colored by the signed intensity of the angular acceleration. In (b), we plot the average angular acceleration and opening fraction over 20 steps across 300 trials involving 15 revolute objects. Both plots show that FlowBot++ is able to produce more consistent motions and open the objects further within the same number of steps.

Analysis: Results are shown in Table 1; please refer to Appendix D for comparisons using other metrics. Overall, FlowBot++ outperforms all of the baselines, including FlowBot3D [6], in terms of the normalized distance across many object categories, including unseen test categories. Moreover, without the Gram-Schmidt correction (AP Only), the performance degrades but is still slightly better than FlowBot3D [6]. To illustrate why, we show an example in Fig. 5a in which we plot the xy locations of the contact point in the first 20 execution steps as well as the angular acceleration when using each method for articulation. As shown, both FlowBot3D and FlowBot++ perform reasonably well in the first 5 steps due to the absence of occlusions. However, after 5 steps, when the robot starts to heavily occlude the object, FlowBot3D's per-step predictions begin to make the contact point go back and forth, yielding little progress in the opening direction. In contrast, FlowBot++ plans a longer, multi-step trajectory at the start of the motion, interpolated via Eq. 5. This trajectory is much more consistent and smooth in terms of the direction of the motion. This trend is further shown in Fig. 5b, where we show the average angular acceleration and opening fraction across 15 revolute objects that were challenging for FlowBot3D [6]; FlowBot++ is able to achieve a larger opening fraction and a smoother trajectory due to its lower replanning frequency, which also leads it to suffer less from occlusions since it can make a longer-scale prediction before contact.

Execution Time Comparisons: Here, we provide a comparison of the execution time to open objects between FlowBot3D and FlowBot++. In simulation, under the same setup, the execution wall-clock time is reduced from 17.1 seconds per object on average to 1.2 seconds per object on average. The reason for the increased speed is twofold. First, FlowBot++ deploys an MPC-style controller, which only replans after H steps (H = 7 in our experiments), compared to FlowBot3D's closed-loop controller, which requires replanning every step, since only the instantaneous motion is predicted. Second, with fewer replanning steps, FlowBot++ is less prone to be affected by occlusions because, in each replanning step, it rolls out a longer trajectory, as opposed to FlowBot3D, in which occlusions inevitably affect each step's prediction quality. The reduced prediction quality of FlowBot3D causes the robot to move the object back and forth, extending the execution time.

5.2 Real-World Experiments
Figure 6: Real-World Experiments. We show two examples of FlowBot++ trained in simulation predicting and executing a full opening trajectory on real-world objects without any fine-tuning or retraining.

We conduct qualitative real-world experiments to assess the sim2real capabilities of FlowBot++. We run FlowBot++ trained in simulation directly on denoised point clouds collected on real-world objects [32] without any retraining. We test our method on 6 real-world prismatic (Drawer) and revolute (Fridge, Oven, Toilet, Microwave, Trashcan) objects. In real-world trials, we provide hand-labeled segmentation masks to the network, which are used to filter and aggregate the results. We use a heuristically defined grasping policy to select a grasp point and execute the planned trajectories on a Sawyer Black robot equipped with a parallel-jaw gripper. Following the robot controller design in [28], once a contact point is chosen, we can transform the contact point using Eq. 5, because the gripper is now modeled as rigidly attached to the contact point, forming a full robot end-effector trajectory. Example predictions are shown in Fig. 6, in which all points on the segmented part are collectively transformed using Eq. 5, showing a full trajectory of the articulation. The real-world experiments show promising results, indicating that our networks transfer the learned 3D geometry information to real-world data. We show side-by-side comparisons between FlowBot++ and FlowBot3D on our website. FlowBot3D [6] completely failed on the real-world oven, while FlowBot++ is able to predict a feasible full trajectory to open the oven door with only one planning step. We also observe that FlowBot++ is able to produce smoother trajectories without undesired motions and movements of the objects and to finish the execution in a shorter time frame, corroborating our findings in simulation. Details of the real-world experiments and system implementation are documented in Appendix F.

6 Conclusions and Limitations

In this work, we propose a novel visual representation for articulated objects, namely Articulation Projection, as well as a policy, FlowBot++, which leverages this representation, combined with Articulation Flow, to successfully manipulate articulated objects, outperforming previous state-of-the-art methods. We demonstrate the effectiveness of our method in both simulated and real environments and observe strong generalization to unseen objects.

Limitations: While our method shows strong performance on a range of object classes, there is still room for improvement. When both the Articulation Flow and Articulation Projection predictions are incorrect, the predicted trajectory cannot be further corrected, causing failures. Furthermore, in order to aggregate all the per-point predictions on a part, a segmentation mask is required, adding another potential point of failure. Still, this work represents a step forward in the manipulation of unseen articulated objects, and we hope it provides a foundation for future work in this direction.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. IIS-1849154. We are grateful to Prof. Daniel Seita for his helpful feedback and discussion on the paper.

References

[1] A. Jain, R. Lioutikov, C. Chuck, and S. Niekum. ScrewNet: Category-independent articulation model estimation from depth images using screw theory. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13670-13677, May 2021.

[2] N. Nie, S. Y. Gadre, K. Ehsani, and S. Song.
Structure from action: Learning interactions for articulated object 3D structure discovery. arXiv preprint arXiv:2207.08997, 2022.

[3] D. Berenson, S. Srinivasa, and J. Kuffner. Task space regions: A framework for pose-constrained manipulation planning. The International Journal of Robotics Research, 30(12):1435-1460, 2011.

[4] L. Yi, H. Huang, D. Liu, E. Kalogerakis, H. Su, and L. Guibas. Deep part induction from articulated object pairs. arXiv preprint arXiv:1809.07417, 2018.

[5] Z. Yan, R. Hu, X. Yan, L. Chen, O. Van Kaick, H. Zhang, and H. Huang. RPM-Net: Recurrent prediction of motion and parts from point cloud. arXiv preprint arXiv:2006.14865, 2020.

[6] B. Eisner, H. Zhang, and D. Held. FlowBot3D: Learning 3D articulation flow to manipulate articulated objects. Robotics: Science and Systems (RSS), 2022.

[7] J.-S. Cheong, A. F. Van Der Stappen, K. Goldberg, M. H. Overmars, and E. Rimon. Immobilizing hinged polygons. Int. J. Comput. Geom. Appl., 17(01):45-69, Feb. 2007.

[8] D. Katz, Y. Pyuro, and O. Brock. Learning to manipulate articulated objects in unstructured environments using a grounded relational representation. In Robotics: Science and Systems IV. Robotics: Science and Systems Foundation, June 2008.

[9] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 909-918, 2019.

[10] F. Xiang, Y. Qin, K. Mo, Y. Xia, H. Zhu, F. Liu, M. Liu, H. Jiang, Y. Yuan, H. Wang, and others. SAPIEN: A simulated part-based interactive environment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11097-11107, 2020.

[11] K. Mo, L. J. Guibas, M. Mukadam, A. Gupta, and S. Tulsiani. Where2Act: From pixels to actions for articulated 3D objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6813-6823, 2021.

[12] Z. Xu, H. Zhanpeng, and S. Song. UMPNet: Universal manipulation policy network for articulated objects. IEEE Robotics and Automation Letters, 2022.

[13] T. Mu, Z. Ling, F. Xiang, D. Yang, X. Li, S. Tao, Z. Huang, Z. Jia, and H. Su. ManiSkill: Learning-from-demonstrations benchmark for generalizable manipulation skills. arXiv e-prints, pages arXiv-2107, 2021.

[14] X. Wang, B. Zhou, Y. Shi, X. Chen, Q. Zhao, and K. Xu. Shape2Motion: Joint analysis of motion parts and attributes from 3D shapes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8876-8884, 2019.

[15] R. Hu, W. Li, O. Van Kaick, A. Shamir, H. Zhang, and H. Huang. Learning to predict part mobility from a single static snapshot. ACM Trans. Graph., 36(6):1-13, Nov. 2017.

[16] X. Li, H. Wang, L. Yi, L. J. Guibas, A. Lynn Abbott, and S. Song. Category-level articulated object pose estimation, 2020.

[17] V. Zeng, T. E. Lee, J. Liang, and O. Kroemer. Visual identification of articulated object parts. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2443-2450. IEEE, 2020.

[18] V. Narayanan and M. Likhachev. Task-oriented planning for manipulating articulated mechanisms under model uncertainty. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 3095-3101, May 2015.

[19] F. Burget, A. Hornung, and M. Bennewitz. Whole-body motion planning for manipulation of articulated objects.
In 2013 IEEE International Conference on Robotics and Automation, pages 1656-1662, May 2013.

[20] S. Chitta, B. Cohen, and M. Likhachev. Planning for autonomous door opening with a mobile manipulator. In 2010 IEEE International Conference on Robotics and Automation, pages 1799-1806, May 2010.

[21] B. K. P. Horn and B. G. Schunck. Determining optical flow. Artif. Intell., 17(1):185-203, Aug. 1981.

[22] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox. FlowNet: Learning optical flow with convolutional networks, 2015.

[23] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2462-2470, 2017.

[24] Z. Teed and J. Deng. RAFT: Recurrent all-pairs field transforms for optical flow. In Computer Vision - ECCV 2020, pages 402-419. Springer International Publishing, 2020.

[25] S. Dong, D. K. Jha, D. Romeres, S. Kim, D. Nikovski, and A. Rodriguez. Tactile-RL for insertion: Generalization to objects of unknown geometry. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6437-6443. IEEE, 2021.

[26] A. Amiranashvili, A. Dosovitskiy, V. Koltun, and T. Brox. Motion perception in reinforcement learning with dynamic objects. In A. Billard, A. Dragan, J. Peters, and J. Morimoto, editors, Proceedings of The 2nd Conference on Robot Learning, volume 87 of Proceedings of Machine Learning Research, pages 156-168. PMLR, 2018.

[27] T. Weng, S. M. Bajracharya, Y. Wang, K. Agrawal, and D. Held. FabricFlowNet: Bimanual cloth manipulation with a flow-based policy. In Conference on Robot Learning, pages 192-202. PMLR, 2022.

[28] C. Pan, B. Okorn, H. Zhang, B. Eisner, and D. Held. TAX-Pose: Task-specific cross-pose estimation for robot manipulation. In Conference on Robot Learning, pages 1783-1792. PMLR, 2023.

[29] D. Seita, Y. Wang, S. J. Shetty, E. Y. Li, Z. Erickson, and D. Held. ToolFlowNet: Robotic manipulation with tools via predicting tool flow from point clouds. In Conference on Robot Learning, pages 1038-1049. PMLR, 2023.

[30] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652-660, 2017.

[31] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627-635. JMLR Workshop and Conference Proceedings, 2011.

[32] H. Zhang, J. Ichnowski, Y. Avigal, J. Gonzales, I. Stoica, and K. Goldberg. Dex-Net AR: Distributed deep grasp planning using a commodity cellphone and augmented reality app. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 552-558, May 2020.

[33] Q.-Y. Zhou, J. Park, and V. Koltun. Open3D: A modern library for 3D data processing. arXiv preprint arXiv:1801.09847, 2018.

[34] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. June 2017.

[35] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Appendix

Results are better shown visually in videos.
Please refer to our website for video results.

Table of Contents
A Full FlowBot++ Manipulation Policy
B Ablations
  B.1 Controller H Values (Replanning Frequency)
  B.2 Mean Aggregation with Segmentation Masks
C Baselines
D Metrics
E Simulation and Training Details
  E.1 Datasets
  E.2 Network Architecture
  E.3 Ground Truth Labels Generation
  E.4 Using Segmentation Masks
  E.5 Hyperparameters
F Real-World Experiments
  F.1 Experiments Details
  F.2 Robotic System Implementation
  F.3 Reducing Unwanted Movements
  F.4 Failure Case

A Full FlowBot++ Manipulation Policy

Given an articulated object, we first obtain an initial observation O_0, which is used to classify the object's articulation type. We then predict the initial flow f_0 and projection r_0, where f_0 is used to select a contact pose and grasp the object. The system then infers the articulation parameters based on Eq. 5 or 6 and follows the first H steps. This process repeats at a low frequency if re-planning is needed, until the object has been fully articulated, a maximum number of steps has been exceeded, or the episode is otherwise terminated. See Algorithm 1 for a full description of the generalized policy. The while loop runs at a much lower frequency than in FlowBot3D, which further bypasses the potential error from heavy occlusions.

B Ablations

We document a variety of ablation studies in this section. Specifically, we investigate the effect of using different H values (i.e., the replanning frequency), the Gram-Schmidt correction, and mean aggregation using part segmentation masks.

B.1 Controller H Values (Replanning Frequency)

H represents how many steps of the interpolated trajectory we aim to execute after each prediction. Thus, it also represents the replanning frequency, where a higher H value means a lower replanning frequency, and vice versa. As shown in Fig. 7 and Table 2, when the replanning frequency is too low or too high, the performance becomes suboptimal. When H = 1, the system effectively reduces to FlowBot3D, which replans every step.

Algorithm 1: The FlowBot++ articulation manipulation policy
Require: θ ← parameters of a trained flow-projection prediction network; H ← controller look-ahead horizon; ψ ← articulation type classifier parameters
  O_0 ← initial observation
  artType ← f_ψ(O_0)                  // classify articulation type
  f_0, r_0 ← f_θ(O_0)                 // predict the initial flow and projection
  g_0 ← SelectContact(O_0, f_0)       // select a contact pose and grasp the object as shown in [6]
  done ← False
  while not done do
    O_t ← observation
    f_t, r_t ← f_θ(O_t)               // predict the current articulation flow and articulation projection
    τ_t ← TrajCalculation(f_t, r_t)   // calculate the trajectory using Eq. 5 or 6 based on artType
    follow the first H steps in τ_t (MPC)
    done ← EpisodeComplete()

Figure 7: Ablation studies on the lookahead horizon. The plot shows the normalized-distance performance of different H values on both training and testing objects.
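For readers who prefer code, Algorithm 1 corresponds roughly to the following Python sketch; the helpers get_observation, select_contact, traj_calculation, execute, and episode_complete are placeholders for the components described elsewhere in the paper, not actual API names:

```python
def flowbot_pp_policy(get_observation, f_theta, f_psi, select_contact,
                      traj_calculation, execute, episode_complete, H=7):
    """MPC-style FlowBot++ loop (Algorithm 1): predict, interpolate, follow H steps."""
    O0 = get_observation()
    art_type = f_psi(O0)                       # prismatic vs. revolute
    f0, r0 = f_theta(O0)                       # initial flow / projection prediction
    select_contact(O0, f0)                     # choose a contact pose and grasp, as in [6]

    done = False
    while not done:                            # low-frequency replanning loop
        Ot = get_observation()
        ft, rt = f_theta(Ot)                   # re-predict flow and projection
        tau = traj_calculation(ft, rt, art_type)   # Eq. 5 or Eq. 6
        execute(tau[:H])                       # follow only the first H steps
        done = episode_complete()
```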
Another interesting comparison in this experiment is that when we do not use the MPC controller (i.e., we do not replan and instead trust the one-shot open-loop plan), the performance degrades substantially. This suggests that we do need to replan at a certain frequency to correct ourselves. The experiments suggest that the optimal value here is H = 7, and we use this value in our final system.

Table 2: Ablation studies of the lookahead horizon (H) via Normalized Distance (↓); lower is better. The table reports per-category results on novel instances in the training categories and on the test categories for FlowBot++ with H = 1, 3, 5, 7, 9 and without the MPC controller (no replanning).

B.2 Mean Aggregation with Segmentation Masks

We also ablate the choice of using a segmentation mask to aggregate the articulation parameter estimates in Table 3. The results suggest the effectiveness of using segmentation masks to aggregate multiple results into a robust estimate. The effect is more visible on revolute objects, as flow directions alone suffice to produce a good motion for prismatic objects.

Table 3: Ablation studies of mean aggregation with segmentation via Normalized Distance (↓); lower is better. The table compares FlowBot++ with and without segmentation-based aggregation ("FlowBot++ No Seg") across training and test categories.

C Baselines

Baseline Comparisons: We compare our proposed method with several baseline methods:

• UMP-Net: This is the same implementation/code base as provided in [12].
• Normal Direction: We use off-the-shelf normal estimation to estimate the surface normals of the point cloud using Open3D [33]. To break symmetry, we align the normal direction vectors toward the camera (see the sketch after this list). At execution time, we first choose the ground-truth maximum-flow point and then follow the direction of the estimated normal vector of the surface.
• Screw Parameters: We predict the screw parameters for the selected joint of the articulated object. We then generate 3D Articulation Flow from these predicted parameters and use the FlowBot3D policy on top of the generated flow.
• DAgger E2E: We also conduct behavioral cloning experiments with DAgger [31] on the same expert dataset, trained end-to-end (E2E).
• FlowBot3D: We also call this baseline AF Only, since it only uses the Articulation Flow f_p during planning [6].
• Without Gram-Schmidt Correction: Also called AP Only. This is our model without the Gram-Schmidt correction via f_p; it only uses the inferred articulation parameters during planning, without the f_p correction.
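The Normal Direction baseline above can be sketched with Open3D [33] roughly as follows; the search radius and neighborhood size are illustrative choices, not the values used in our experiments:

```python
import numpy as np
import open3d as o3d

def normal_direction_baseline(points, camera_location=np.zeros(3)):
    """Estimate per-point surface normals and orient them toward the camera."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # off-the-shelf normal estimation from local neighborhoods
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    # break the sign ambiguity by pointing normals toward the camera
    pcd.orient_normals_towards_camera_location(camera_location)
    return np.asarray(pcd.normals)
```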
Table 4: Success Rate Metric Results (↑): fraction of successful trials (normalized distance less than 0.1) for different object categories after a full rollout across different methods; higher is better. The table reports per-category success rates for novel instances in the training categories (with average) and for the test categories (with average), comparing UMP-Net [12], Normal Direction, Screw Parameters [1], DAgger E2E [31], FlowBot3D (AF Only) [6], AP Only (Ours), and FlowBot++ (Ours - Combined).

D Metrics

We specify the metrics used for our simulated experiments. First, as shown in Table 1, we use Normalized Distance, which is defined as the normalized distance traveled by a specific child link through its range of motion. The metric is computed based on the final configuration after a policy rollout (j_end) and the initial configuration (j_init):

E_goal = ||j_end − j_goal|| / ||j_goal − j_init||.

We also conduct experiments using the Success Rate: we define a binary success metric, which is computed by thresholding the final resulting normalized distance at δ:

Success = 1(E_goal ≤ δ).

We set δ = 0.1, meaning that we define success as articulating a part by more than 90%.

We show the success rate performance of our method and the baselines in Table 4. Similar to the results using Normalized Distance, FlowBot++ outperforms previous methods.

E Simulation and Training Details

In simulation, the suction is implemented using a strong force between the robot gripper and the target part. During training step t, we randomly select an object from the dataset, randomize the object's configuration, and compute a new training example, which we use to compute the loss using Eq. 7. During training, each object is seen in 100 different randomized configurations.

E.1 Datasets

To evaluate our method in simulation, we implement a suction gripper in the PyBullet environment, which serves as a simulation interface for interacting with the PartNet-Mobility dataset [10]. The PartNet-Mobility dataset contains 46 categories of articulated objects; following UMPNet [12], we consider a subset of PartNet-Mobility containing 21 classes, split into 11 training categories (499 training objects, 128 testing objects) and 10 entirely unseen object categories (238 unseen objects). Several objects in the original dataset contain invalid meshes, which we exclude from evaluation. We train our models (FlowProjNet and baselines) exclusively on the training instances of the training object categories, and evaluate by rolling out the corresponding policies for every object in the dataset. Each object starts in the "closed" state (one end of its range of motion), and the goal is to actuate the joint to its "open" state (the other end of its range of motion).
For experiments in simulation, we include in the observation O_t a binary part mask indicating which points belong to the child joint of interest.

E.2 Network Architecture

FlowProjNet, the joint Articulation Flow and Articulation Projection prediction model in FlowBot++, is based on the PointNet++ [34, 30] architecture. The architecture largely remains the same as the original, except for the output head: instead of a segmentation output head, we use a regression head. The FlowProjNet architecture is implemented using PyTorch Geometric, a graph-learning framework based on PyTorch. Since we are doing regression, we use a standard L2 loss optimized by the Adam optimizer [35].

The articulation type classifier in FlowBot++, which is used to predict prismatic vs. revolute objects, is also based on the PointNet++ architecture. This architecture uses a classification head, which outputs a global binary label representing the articulation type. We use a standard binary cross-entropy loss optimized by the Adam optimizer [35] and achieve 97% accuracy on test objects.

E.3 Ground Truth Labels Generation

E.3.1 Ground Truth Articulation Flow

We implement efficient ground-truth Articulation Flow generation. At each timestep, the system reads the current state of the object of interest in simulation as a URDF file and parses it to obtain a kinematic chain. The system then uses the kinematic chain to analytically calculate each point's location after a small, given amount of displacement. In simulation, since we have access to part-specific masks, the calculated point locations are masked out such that only the part of interest is articulated. We then take the difference between the calculated new points and the current time step's points to obtain the ground-truth Articulation Flow.

E.3.2 Ground Truth Articulation Projection

We also implement efficient ground-truth Articulation Projection generation. For each object, the system reads the current state of the object of interest in simulation as a URDF file and parses it to obtain the origin v and direction ω of the axis of articulation. The system then uses Eq. 2 to calculate the Articulation Projection label. Since we have access to part-specific masks in PyBullet, the calculated point locations are masked out such that only the points on the part of interest are labeled.

E.4 Using Segmentation Masks

As mentioned above, we use part-specific segmentation masks to define tasks. Specifically, we follow the convention in [12, 6, 13], where a segmentation mask is provided to give us the part of interest. Thus, it is possible that an object (e.g., a cabinet) has multiple doors and drawers at the same time. By the construction of the dataset [12, 6, 13], in each data point (object), a mask is used to define which part of the object needs to be articulated. For the cabinet example, if the mask is provided for a drawer, then the cabinet is classified as a prismatic object; if the mask is provided for a door, then it is classified as a revolute object. We use segmentation masks in the following steps of the FlowBot++ pipeline:

1. Articulation Flow Ground Truth: During the generation of articulation flow labels, we use segmentation masks to mask out irrelevant parts on objects so that those parts' articulation flow values are zeroed out.
In this way, FlowProjNet learns to predict all-zero values on irrelevant parts.

2. Articulation Projection Ground Truth: Similar to how we use the mask in Articulation Flow ground-truth generation, we only produce the projection vectors for the relevant masked points.

3. Articulation Flow and Articulation Projection Prediction: During the learning step of FlowProjNet, we use segmentation masks as an additional per-point channel into the network, where 1 represents relevant points and 0 represents irrelevant points. In this way, the network output learns to be conditioned on this extra channel such that it does not output values on irrelevant parts.

4. Articulation Parameters Estimation: When estimating ω and v, we first obtain a per-point estimate. To make the estimate more robust, we aggregate all points on the relevant part, which is selected using the segmentation mask, and average them to get a robust estimate.

E.5 Hyperparameters

We use a batch size of 64 and a learning rate of 1e-4. We use the standard set of hyperparameters from the original PointNet++ paper.

F Real-World Experiments

F.1 Experiments Details

We experiment with 6 different objects in the real world. Specifically, we choose 5 revolute objects (Oven, Fridge, Toilet, Trashcan, and Microwave), for which the predicted trajectories are generated using Eq. 5, and 1 prismatic object (Drawer), for which the predicted trajectories are generated via Eq. 6. For each object, we conducted 5 trials of each method. For each trial, the object is placed in the scene at a random position and orientation. For each trial, we visualize the point clouds beforehand and hand-label the segmentation masks using bounding boxes. We then pass the segmentation masks as the auxiliary input channel to FlowProjNet and use them to aggregate the final output to improve robustness. We then qualitatively assess the prediction by visualizing the points belonging to the segmented part under the predicted trajectory's transformation. We show some examples of the predictions in Fig. 8.

Figure 8: Successful predictions on 6 real-world objects. We show FlowBot++'s prediction quality on 6 different real-world objects. For better readability, we provide an alternative view for each prediction. From top to bottom, the objects are: Oven, Fridge, Toilet, Trashcan, Microwave, and Drawer.

F.2 Robotic System Implementation

We provide the details of the physical robot system implementation of FlowBot++. The setup remains straightforward and largely similar to previous works [28, 6].

F.2.1 Hardware

In all of our real-world experiments, we deploy our system on a Rethink Sawyer robot, and the sensory data (point clouds) come from an Intel RealSense depth camera. The robot's end effector is an official Sawyer parallel-jaw gripper. We set up our workspace in a 1.2 m by 1.00 m space put together using the official Sawyer robot mount and a regular desk. We set up the RealSense camera such that it points toward the center of the workspace and has minimal interference with the robot's arm-reach trajectory.

F.2.2 Solving for the Robot's Trajectory

The choice of the contact point is similar to the procedure described in Eisner et al. [6]. Once the contact point's full trajectory is predicted using Eq. 5 or 6 based on the part's articulation type, the robot simply plans motions to make its end-effector follow the predicted trajectory.
Once a successful contact is made, the robot end-effector is rigidly attached to the action object; we then use the same predicted trajectory waypoints as the end positions of the robot end effector and feed the end-effector positions to MoveIt! to get a full trajectory in joint space using Inverse Kinematics. For prismatic objects, this is convenient because the robot gripper does not need to change its orientation throughout the predicted trajectory. For revolute objects, we propose a method to efficiently calculate the robot's orientations in tandem with the positions in the planned trajectory. Concretely, the trajectory of Eq. 5 gives us the end-effector's xyz positions along the trajectory; since the end-effector is rigidly attached to the contact point, we can treat their xyz positions as the same in this trajectory. We are then interested in obtaining the orientation of the end-effector at each step. Assume the end-effector's orientation (obtained via forward kinematics, in the form of a rotation matrix in SO(3)) is q_0 when making a successful contact with the part of interest. By definition, each step in τ_revolute corresponds to a unique rotation matrix R(φ_g/K), representing the incremental rotation due to the increase of the opening angle in each step. We then calculate the robot end-effector's orientation at each step i:

q_i = R(φ_g/K) q_{i−1},   (8)

by applying the incremental rotation to the orientation's rotation matrix iteratively. Thus, the robot end-effector's full SE(3) trajectory is obtained as:

τ_ee = { (τ^i_revolute, q_i) | i ∈ [0, K] }.   (9)

We then obtain the robot's joint-space trajectory using Inverse Kinematics (IK):

τ_joint = IK(τ_ee).   (10)

F.3 Reducing Unwanted Movements

With the ability to control the full 6D pose of the robot end-effector along the trajectory, we are also able to reduce unwanted movements of the object itself. In FlowBot3D [6], the suction gripper's rotation is controlled using a heuristic based on the flow direction prediction, which is often off. Thus, an incorrect rotation could cause the gripper to yank the object too hard in a wrong direction, causing unwanted motion of the articulated object or even detaching the gripper from the object surface. With the full 6D gripper trajectory produced by FlowBot++ as a byproduct, the relative pose between the gripper and the articulated part remains the same as when contact was made, throughout the trajectory. This largely eliminates the unwanted-movement problem of [6]. We illustrate this property in Fig. 9.

Figure 9: FlowBot++ reducing unwanted movements of the object. Top: FlowBot++ opens the left door of the fridge; the position and pose of the body of the fridge remain unchanged. Bottom: FlowBot3D opens the right door of the fridge; due to wrong flow predictions at intermediate steps, the gripper yanks the fridge so hard that it tips over, causing unwanted motions of the fridge and opening the wrong door by accident.

A disadvantage of deploying FlowBot3D in the real world is that each step is prone to error, causing the gripper to move in wrong directions, which can unexpectedly move the object and potentially cause damage. Using the full gripper trajectory derived from Eq. 9, FlowBot++ is more likely to be compliant with respect to the object's kinematic constraints without using hand-designed heuristics based on Articulation Flow predictions. The position and pose of the body of the articulated object are then able to remain unchanged. In contrast, FlowBot3D has more points of failure due to its closed-loop nature.
When a single step's Articulation Flow prediction is off, namely non-parallel to the ground-truth flow direction, the gripper would move against the object's kinematic constraint, moving the other parts of the object unexpectedly. Please note that this is better understood by watching the video comparisons on our website.

F.4 Failure Case

We illustrate a failure case of FlowBot++ deployed in the real world. The failure is caused by predictions that are off, which results in an off-axis rotation. The failure case is shown in Fig. 10.

Figure 10: Failed FlowBot++ prediction on a microwave. Imperfect articulation parameter prediction caused the rotation to be off-axis.

In this prediction, the network produces incorrect articulation parameters. From the visualization, the predicted axis is off, causing the rotated part to go "off the hinge." If a real robot were to execute this, the planned motion would either be infeasible or make the robot lose contact with the grasp point.
o82EXEK5hu6 | Parting with Misconceptions aboutLearning-based Vehicle Motion PlanningDaniel Dauner1,2Marcel Hallgarten1,3Andreas Geiger1,2Kashyap Chitta1,21University of Tübingen2Tübingen AI Center3Robert Bosch GmbHhttps://github.com/autonomousvision/tuplan_garageAbstract: The release of nuPlan marks a new era in vehicle motion planningresearch, offering the first large-scale real-world dataset and evaluation schemesrequiring both precise short-term planning and long-horizon ego-forecasting. Exist-ing systems struggle to simultaneously meet both requirements. Indeed, we find thatthese tasks are fundamentally misaligned and should be addressed independently.We further assess the current state of closed-loop planning in the field, revealingthe limitations of learning-based methods in complex real-world scenarios and thevalue of simple rule-based priors such as centerline selection through lane graphsearch algorithms. More surprisingly, for the open-loop sub-task, we observe thatthe best results are achieved when using only this centerline as scene context (i.e.,ignoring all information regarding the map and other agents). Combining theseinsights, we propose an extremely simple and efficient planner which outperformsan extensive set of competitors, winning the nuPlan planning challenge 2023.Keywords: Motion Planning, Autonomous Driving, Data-driven Simulation1 IntroductionDespite learning-based systems’ success in vehicle motion planning research [ 1,2,3,4,5], a lackof standardized large-scale datasets for benchmarking holds back their transfer from research toapplications [ 6,7,8]. The recent release of the nuPlan dataset and simulator [ 9], a collection of1300 hours of real-world vehicle motion data, has changed this, enabling the development of a newgeneration of learned motion planners, which promise reduced manual design effort and improvedscalability. Equipped with this new benchmark, we perform the first rigorous empirical analysison a large-scale, open-source, and data-driven simulator for vehicle motion planning, including acomprehensive set of state-of-the-art (SoTA) planners [ 10,11,12] using the official metrics. Ouranalysis yields several surprising findings:Open- and closed-loop evaluation are misaligned. Most learned planners are trained throughthe supervised learning task of forecasting the ego vehicle’s future motion conditioned on a desiredgoal location. We refer to this setting as ego-forecasting [ 2,3,13,14]. In nuPlan, planners canbe evaluated in two ways: (1) in open-loop evaluation, which measures ego-forecasting accuracyusing distance-based metrics or (2) in closed-loop evaluation, which assesses the actual drivingperformance in simulation with metrics such as progress or collision rates. Open-loop evaluationlacks dynamic feedback and can have little correlation with closed-loop driving, as previously shownon the simplistic CARLA simulator [ 15,16]. Our primary contribution lies in uncovering a negativecorrelation between both evaluation schemes. Learned planners excel at ego-forecasting but struggleto make safe closed-loop plans, whereas rule-based planners exhibit the opposite trend.Rule-based planning generalizes. We surprisingly find that an established rule-based planningbaseline from over twenty years ago [ 17] surpasses all SoTA learning-based methods in terms ofclosed-loop evaluation metrics on our benchmark. 
This contradicts the prevalent motivating claimused in most research on learned planners that rule-based planning faces difficulties in generalization.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.This was previously only verified on simpler benchmarks [ 4,10,11]. As a result, most current workon learned planning only compares to other learned methods, ignoring rule-based baselines [ 3,5,18].A centerline is all you need for ego-forecasting. We implement a naïve learned planning baselinewhich does not incorporate any input about other agents in the scene and merely extrapolates theego state given a centerline representation of the desired route. This baseline sets the new SoTA foropen-loop evaluation on our benchmark. It does not require intricate scene representations (e.g. lanegraphs, vectorized maps, rasterized maps, tokenized objects), which have been the central subject ofinquiry in previous work [ 10,11,12]. None of these prior studies considered a simple centerline-onlyrepresentation as a baseline, perhaps due to its extraordinary simplicity.Our contributions are as follows: (1) We demonstrate and analyze the misalignment between open-and closed-loop evaluation schemes in planning. (2) We propose a lightweight extension of IDM[17]with real-time capability that achieves state-of-the-art closed-loop performance. (3) We conductexperiments with an open-loop planner, which is only conditioned on the current dynamic state and acenterline, showing that it outperforms sophisticated models with complex input representations. (4)By combining both models into a hybrid planner, we establish a simple baseline that outperformed 24other, often learning-based, competing approaches and claimed victory in the nuPlan challenge 2023.2 Related WorkRule-based planning. Rule-based planners offer a structured, interpretable decision-makingframework [ 17,19,20,21,22,23,24,25,26]. They employ explicit rules to determine an autonomousvehicle’s behavior (e.g., brake when an object is straight ahead). A seminal approach in rule-basedplanning is the Intelligent Driver Model ( IDM[17]), which is designed to follow a leading vehicle intraffic while maintaining a safe distance. There exist extensions of IDM[27] which focus on enablinglane changes on highways. However, this is not the goal of our work. Instead, we extend IDMbyexecuting multiple policies with different hyperparameters, and scoring them to select the best option.Prior work also combines rule-based decision-making with learned components, e.g., with learnedagent forecasts [ 28], affordance indicators [ 23,24], cost-based imitation learning [ 4,29,30,31,32],or learning-based planning with rule-based safety filtering [ 33]. These hybrid planners often forecastfuture environmental states, enabling informed and contingent driving decisions. This forecasting caneither be agent-centric [ 34,35,36], where trajectories are determined for each actor, or environment-centric [ 4,31,30,29,37,38], involving occupancy or cost maps. Additionally, forecasting can beconditioned on the ego-plan, modeling the ego vehicle’s influence on the scene’s future [ 39,40,41,42].We employ an agent-centric forecasting module that is considerably simpler than existing methods,allowing for its use as a starting point in the newly released nuPlan framework.Ego-forecasting. Unlike predictive planning, ego-forecasting methods use observational datato directly determine the future trajectory. 
Ego-forecasting approaches include both end-to-endmethods [ 43] that utilize LiDAR scans [ 44,45], RGB images [ 46,47,48,49,14,50] or both [ 13,51,5,52], as well as modular methods involving lower-dimensional inputs like bird’s eye view (BEV)grids or state vectors [ 24,53,11,54,55,56]. A concurrent study introduces a naive MLP inputtingthe current dynamic state, yielding competitive ego-forecasting results on the nuScenes dataset [ 57]with no scene context input [ 58]. Our findings complement these results, differing by evaluatinglong-term (8s) ego-forecasting in the challenging 2023 nuPlan challenge scenario test distribution [ 9].We show that in this setting, completely removing scene context (as in [ 58]) is harmful, whereas asimple centerline representation of the context is sufficient for strong open-loop performance.3 Ego-forecasting and Planning are MisalignedIn this section, we provide the relevant background regarding the data-driven simulator nuPlan [ 9].We describe two baselines for a preliminary experiment to demonstrate that although ego-forecastingand planning are often considered related tasks, they are not well-aligned given their definitions onnuPlan. Improvements in one task can often lead to degradation in the other.2Figure 1: Planning vs. ego-forecasting. We present a nuPlan scenario, highlighting the driveablearea in grey and the original human trajectory as a dashed black line. In each snapshot, we display theego agent with its prediction. (Left) observe the significant displacement between the IDMprediction(constrained to a rule-based centerline) and the human trajectory, resulting in low open-loop scores.(Mid + right) after 0.5 seconds of simulation, the learned PDM-Open planner extrapolates its ownerrors and eventually veers off-road, leading to suboptimal closed-loop scores.3.1 BackgroundnuPlan. The nuPlan simulator is the first publicly available real-world planning benchmark andenables rapid prototyping and testing of motion planners. nuPlan constructs a simulated environmentas closely as possible to a real-world driving setting through data-driven simulation [ 59,60,61,62,63,64,65]. This method extracts road maps, traffic patterns, and object properties (positions,orientations, and speeds) from a pre-recorded dataset consisting of 1,300 hours of real-world driving.These elements are then used to initialize scenarios, which are 15-second simulations employed toassess open-loop and closed-loop driving performance. Hence, in simulation, our methods rely onaccess to detailed HD map information and ground-truth perception. i.e., no localization errors, mapimperfections, or misdetections are considered. In open-loop simulation, the entire log is merelyreplayed (for both the ego vehicle and other actors). Conversely, in closed-loop simulation, the egovehicle operates under the control of the planner being tested. There are two versions of closed-loopsimulation: non-reactive, where all other actors are replayed along their original trajectory, andreactive, where other vehicles employ an IDMplanner [17], which detail in the following.Metrics. nuPlan offers three official evaluation metrics: open-loop score (OLS), closed-loop scorenon-reactive (CLS-NR), and closed-loop score reactive (CLS-R). Although CLS-NR and CLS-R arecomputed identically, they differ in background traffic behavior. Each score is a weighted average ofsub-scores that are multiplied by a set of penalties. 
In OLS, the sub-scores account for displacement and heading errors, both average and final, over an extended period (8 seconds). Moreover, if the prediction error is above a threshold, the penalty results in an OLS score of zero for that scenario. Similarly, sub-scores in CLS comprise time-to-collision, progress along the experts' route, speed-limit compliance, and comfort. Multiplicative CLS penalties are at-fault collisions, driveable area or driving direction infringements, and not making progress. These penalties result in substantial CLS reductions, mostly to a zero scenario score, e.g., when colliding with a vehicle. Notably, the CLS primarily relies on short-term actions rather than on consistent long-term planning. All scores (incl. OLS/CLS) range from 0 to 100, where higher scores are better. Given the elaborate composition of nuPlan's metrics, we refer to the supplementary material for a detailed description.

Intelligent Driver Model. The simple planning baseline IDM [17] not only simulates the non-ego vehicles in the CLS-R evaluation of nuPlan, but also serves as a baseline for the ego-vehicle's planning. The nuPlan map is provided as a graph, with centerline segments functioning as nodes. After choosing a set of such nodes to follow via a graph search algorithm, IDM infers a longitudinal trajectory along the selected centerline. Given the current longitudinal position x, velocity v, and distance to the leading vehicle s along the centerline, it iteratively applies the following policy to calculate a longitudinal acceleration:

\frac{dv}{dt} = a \left( 1 - \left( \frac{v}{v_0} \right)^{\delta} - \left( \frac{s^*}{s} \right)^{2} \right). \quad (1)

The acceleration limit a, target speed v_0, safety margin s^*, and exponent \delta are manually selected. Intuitively, the policy uses an acceleration a unless the velocity is already close to v_0 or the leading vehicle is at a distance of only s^*. Additional details and our exact hyper-parameter choices can be found in the supplementary material.

3.2 Misalignment

Centerline-conditioned ego-forecasting. We now propose the Predictive Driver Model (Open), i.e., PDM-Open, which is a straightforward multi-layer perceptron (MLP) designed to predict future waypoints. The inputs to this MLP are the centerline (c) extracted by IDM and the ego history (h). To accommodate the high speeds (reaching up to 15 m/s) and ego-forecasting horizons (extending to 8 seconds) observed in nuPlan, the centerline is sampled with a resolution of 1 meter up to a length of 120 meters. Meanwhile, the ego history incorporates the positions, velocities, and accelerations of the vehicle over the previous two seconds, sampled at a rate of 5 Hz. Both c and h are linearly projected to feature vectors of size 512, concatenated, and input to the MLP \phi_{Open}, which has two 512-dimensional hidden layers. The output is the set of future waypoints for an 8-second horizon, spaced 0.5 seconds apart, expressed as w_{Open} = \phi_{Open}(c, h). The model is trained using an L_1 loss on our training dataset of 177k samples (described in Section 4). By design, PDM-Open is considerably simpler than existing learned planners [10, 12].
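To make the PDM-Open description above concrete, the following is a minimal PyTorch sketch. The class name, the exact input dimensionalities (120 centerline points with two coordinates each, and ten past ego states with 2D position, velocity and acceleration) and the output shape are assumptions for illustration; only the two 512-dimensional projections, the two hidden layers, the 16 output waypoints and the L1 loss come from the text.

```python
import torch
import torch.nn as nn

class PDMOpen(nn.Module):
    """Minimal sketch of the centerline-conditioned MLP, under assumed input sizes."""

    def __init__(self, centerline_dim=120 * 2, history_dim=10 * 6,
                 hidden=512, n_waypoints=16):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.centerline_proj = nn.Linear(centerline_dim, hidden)  # linear projection of c
        self.history_proj = nn.Linear(history_dim, hidden)        # linear projection of h
        self.mlp = nn.Sequential(                                 # two 512-d hidden layers
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_waypoints * 2),                   # (x, y) per waypoint
        )

    def forward(self, centerline, history):
        # centerline: (B, 240) flattened points, history: (B, 60) flattened ego states
        feat = torch.cat([self.centerline_proj(centerline),
                          self.history_proj(history)], dim=-1)
        return self.mlp(feat).view(-1, self.n_waypoints, 2)

# Training is a plain L1 regression to the logged human waypoints, e.g.:
# loss = torch.nn.functional.l1_loss(model(c, h), gt_waypoints)
```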
Method           | Centerline | History | CLS-R | CLS-NR | OLS
IDM [17], a=1.0  | yes        | -       | 77    | 76     | 38
IDM [17], a=0.1  | yes        | -       | 54    | 66     | 48
PDM-Open         | -          | -       | 51    | 50     | 69
PDM-Open         | -          | yes     | 38    | 34     | 72
PDM-Open         | yes        | -       | 54    | 53     | 85
PDM-Open         | yes        | yes     | 54    | 50     | 86
Table 1: OLS-CLS Tradeoff. Baseline scores on nuPlan with different inputs.

OLS vs. CLS. In Table 1, we benchmark the IDM and PDM-Open baselines using the nuPlan metrics. We present two IDM variants with different maximum acceleration values (the default a = 1.0 m/s^2 and a = 0.1 m/s^2) and four PDM-Open variants based on different inputs. We observe that reducing IDM's acceleration improves OLS but negatively impacts CLS. While IDM demonstrates strong closed-loop performance, PDM-Open outperforms IDM in open-loop even if it only uses the current ego state as input (first row). The past ego states (History) only yield little improvement and lead to a drop in CLS. Most importantly, adding the centerline significantly contributes to ego-forecasting performance. A clear trade-off between CLS and OLS indicates a misalignment between the goals of ego-forecasting and planning. This sort of inverse correlation on nuPlan is unanticipated, considering the increasing use of ego-forecasting in current planning literature [3, 10, 12, 11]. While ego-forecasting is not necessary for driving performance, the nuPlan challenge requires both a high OLS and CLS.

In Fig. 1, we illustrate the misalignment between the OLS and CLS metrics. In the depicted scenario, the rule-based IDM selects a different lane in comparison to the human driver. However, it maintains its position on the road throughout the simulation. This results in a high CLS yet a low OLS. Conversely, the learned PDM-Open generates predictions along the lane chosen by the human driver, thereby obtaining a high OLS. Nonetheless, as errors accumulate in its short-term predictions during the simulation [66, 67], the model's trajectory veers off the drivable area, culminating in a subpar CLS.

3.3 Methods

We now extend IDM by incorporating several concepts from model predictive control, including forecasting, proposals, simulation, scoring, and selection, as illustrated in Fig. 2 (top). We call this model PDM-Closed. Note that as a first step, we still require a graph search to find a sequence of lanes along the route and extract their centerline, as in the IDM planner.

Figure 2: Architecture. PDM-Closed selects a centerline, forecasts the environment, and creates varying trajectory proposals, which are simulated and scored for trajectory selection. The PDM-Hybrid module predicts offsets using the PDM-Closed centerline, trajectory, and ego history, correcting only long-term waypoints and thereby limiting the learned model's influence in closed-loop simulation.

Forecasting. In nuPlan, the simulator provides an orientation vector and speed for each dynamic agent such as a vehicle or pedestrian. We leverage a simple yet effective constant velocity forecasting [68, 69, 70] over the horizon F of 8 seconds at 10 Hz.

Proposals. In the process of calibrating the IDM planner, we observed a trade-off when selecting a single value for the target speed hyperparameter (v_0), which either yielded aggressive driving behavior or insufficient progress across various scenarios. Consequently, we generate a set of trajectory proposals by implementing IDM policies at five distinct target speeds, namely, {20%, 40%, 60%, 80%, 100%} of the designated speed limit. For each target speed, we also incorporate proposals with three lateral centerline offsets (±1 m and 0 m), thereby producing N = 15 proposals in total. To circumvent computational demands in subsequent stages, the proposals have a reduced horizon of H steps, which corresponds to 4 seconds at 10 Hz.
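As a rough illustration of the proposal stage, the sketch below rolls out the IDM policy of Eq. (1) at the five target speeds and three lateral offsets described above. The hyperparameter values, the hypothetical `lead_dist_fn` helper and the output format are illustrative assumptions, not the exact implementation.

```python
def idm_step(x, v, lead_dist, v0, a=1.0, s_star=4.0, delta=4.0, dt=0.1):
    # One integration step of the IDM policy from Eq. (1); a, s_star and delta
    # are manually chosen hyperparameters (values here are illustrative).
    dv = a * (1.0 - (v / v0) ** delta - (s_star / max(lead_dist, 1e-3)) ** 2)
    v_next = max(0.0, v + dv * dt)
    return x + v_next * dt, v_next

def generate_proposals(ego_x, ego_v, lead_dist_fn, speed_limit,
                       horizon_steps=40, dt=0.1):
    # 15 proposals: 5 target speeds x 3 lateral centerline offsets (4 s at 10 Hz).
    # lead_dist_fn(x, t) is a hypothetical helper returning the gap to the
    # leading agent forecast at longitudinal position x and time step t.
    proposals = []
    for frac in (0.2, 0.4, 0.6, 0.8, 1.0):       # fractions of the speed limit
        for lateral_offset in (-1.0, 0.0, 1.0):  # metres from the centerline
            x, v, longitudinal = ego_x, ego_v, []
            for t in range(horizon_steps):
                x, v = idm_step(x, v, lead_dist_fn(x, t),
                                v0=frac * speed_limit, dt=dt)
                longitudinal.append(x)
            proposals.append({"lateral_offset": lateral_offset,
                              "longitudinal_positions": longitudinal})
    return proposals
```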
Simulation. Trajectories in nuPlan are simulated by iteratively retrieving actions from an LQR controller [71] and propagating the ego vehicle with a kinematic bicycle model [72, 73]. We simulate the proposals with the same parameters and a faster re-implementation of this two-stage pipeline. Thereby, the proposals are evaluated based on the expected movement in closed-loop.

Scoring. Each simulated proposal is scored to favor traffic-rule compliance, progress, and comfort. By considering proposals with lateral and longitudinal variety, the planner can avoid collisions with agent forecasts and correct drift that may arise when the controller fails to accurately track the intended trajectory. Furthermore, our scoring function closely resembles the nuPlan evaluation metrics. We direct the reader to the supplementary material for additional details.

Trajectory selection. Finally, PDM-Closed selects the highest-scoring proposal, which is extended to the complete forecasting horizon F with the corresponding IDM policy. If the best trajectory is expected to collide within 2 seconds, the output is overwritten with an emergency brake maneuver.

Enhancing long-horizon accuracy. To integrate the accurate ego-forecasting capabilities of PDM-Open with the precise short-term actions of PDM-Closed, we now propose a hybrid version of PDM, i.e., PDM-Hybrid. Specifically, PDM-Hybrid uses a learned module PDM-Offset to predict offsets to waypoints from PDM-Closed, as shown in Fig. 2 (bottom).

Method            | Rep.       | CLS-R | CLS-NR | OLS | Time
Urban Driver [10] | Polygon    | 50    | 53     | 82  | 64
GC-PGP [12]       | Graph      | 55    | 59     | 83  | 100
PlanCNN [11]      | Raster     | 72    | 73     | 64  | 43
IDM [17]          | Centerline | 77    | 76     | 38  | 27
PDM-Open          | Centerline | 54    | 50     | 86  | 7
PDM-Closed        | Centerline | 92    | 93     | 42  | 91
PDM-Hybrid        | Centerline | 92    | 93     | 84  | 96
PDM-Hybrid*       | Graph      | 92    | 93     | 84  | 172
Log Replay        | GT         | 80    | 94     | 100 | -
Table 2: Val14 benchmark. We show the closed-loop score reactive/non-reactive (CLS-R/CLS-NR), open loop score (OLS) and runtime in ms for several planners. We specify the input representation (Rep.) used by each planner. PDM-Hybrid accomplishes strong ego-forecasting (OLS) and planning (CLS). *This is a preliminary version of PDM-Hybrid that combined PDM-Closed with GC-PGP [12], and was used in our online leaderboard submission (Table 3).

In practice, the LQR controller used in nuPlan relies exclusively on the first 2 seconds of the trajectory when determining actions in closed-loop. Therefore, applying the correction only to long-term waypoints (i.e., beyond 2 seconds by default, which we refer to as the correction horizon C) allows PDM-Hybrid to maintain closed-loop planning performance. The final planner outputs waypoints (up to the forecasting horizon F) \{w_t^{Hybrid}\}_{t=0}^{F} that are given by:

w_t^{Hybrid} = w_t^{Closed} + \mathbb{1}_{[t > C]} \, \phi_{Offset}^{t}(w_{Closed}, c, h), \quad (2)

where c and h are the centerline and history (identical to the inputs of PDM-Open), \{w_t^{Closed}\}_{t=0}^{F} are the PDM-Closed waypoints added to the hybrid approach, and \phi_{Offset} is an MLP. Its architecture is identical to \phi_{Open} except for an extra linear projection to accommodate w_{Closed} as an additional input.
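As a small illustration of Eq. (2), the sketch below applies the learned offsets only beyond the correction horizon. It assumes 0.5 s waypoint spacing, so that the default C = 2 s corresponds to index 4; the function name and tensor shapes are illustrative.

```python
import torch

def hybrid_waypoints(w_closed, offsets, correction_start_idx=4):
    # w_closed: (B, T, 2) PDM-Closed waypoints; offsets: (B, T, 2) output of the
    # phi_Offset MLP given (w_closed, c, h). With 0.5 s spacing, index 4 stands
    # in for the indicator 1[t > C] with C = 2 s, so the first 2 s consumed by
    # the LQR controller are left untouched.
    mask = torch.zeros_like(w_closed)
    mask[:, correction_start_idx:, :] = 1.0
    return w_closed + mask * offsets
```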
It is important to note that PDM-Hybrid is designed with high modularity, enabling the substitution of individual components with alternative options when diverse requirements emerge. For example, we show results with a different open-loop module in the supplementary material. Given its overall simplicity, one interesting approach to explore involves incorporating modular yet differentiable algorithms as components, as seen in [34]. Exploring the integration of these modules within unified multi-task architectures is another interesting direction. We reserve such exploration for future work.

4 Experiments

We now outline our proposed benchmark and highlight the driving performance of our approach.

Val14 benchmark. We offer standardized data splits for training and evaluation. Training uses all 70 scenario types from nuPlan, restricted to a maximum of 4k scenarios per type, resulting in ∼177k training scenarios. For evaluation, we use 100 scenarios of the 14 scenario types considered by the leaderboard, totaling 1,118 scenarios. Despite minor imbalance (all 14 types do not have 100 available scenarios), our validation split aligns with the online leaderboard evaluation (Table 2 and Table 3), confirming the suitability of our Val14 benchmark as a proxy for the online test set.

Baselines. We include several additional SoTA approaches adopting ego-forecasting for planning in our study. Urban Driver [10] encodes polygons with PointNet layers and predicts trajectories with a linear layer after a multi-head attention block. Our study uses an implementation of Urban Driver trained in the open-loop setting. GC-PGP [12] clusters trajectory proposals based on route-constrained lane-graph traversals before returning the most likely cluster center. PlanCNN [11] predicts waypoints using a CNN from rasterized grid features without an ego state input. It shares several similarities to ChauffeurNet [8], a seminal work in the field. A preliminary version of PDM-Hybrid, which won the nuPlan competition, used GC-PGP as its ego-forecasting component, and we include this as a baseline. We provide a complete description of this version in the supplementary.

Results. Our results are presented in Table 2. PlanCNN achieves the best CLS among learned planners, possibly due to its design choice of removing ego state from input, trading OLS for enhanced CLS. Contrary to the community's growing preference for graph- and vector-based scene representations in prediction and planning [74, 11, 75, 76], these results show no clear disadvantage of raster representations for the closed-loop task, with PlanCNN also offering a lower runtime. Surprisingly, the simplest rule-based approach in our study, IDM, outperforms the best learned planner, PlanCNN. Moreover, we observe PDM-Closed's advantages over IDM in terms of CLS: an improvement from 76-77 to 92-93 as a result of the ideas from Section 3. Surprisingly, PDM-Open achieves the highest OLS of 86 with a runtime of only 7 ms using only a centerline and the ego state as input. We observe that PDM-Open improves on other methods in accurate long-horizon lane-following, as detailed further in our supplementary material. Next, despite PDM-Closed's unsatisfactory 42 OLS, PDM-Hybrid successfully combines PDM-Closed with PDM-Open. Both the centerline and graph versions of PDM-Hybrid achieve identical scores in our evaluation. However, the final centerline version, using PDM-Open instead of GC-PGP, is more efficient during inference. Finally, the privileged approach of outputting the ground-truth ego future trajectory (log replay) fails to achieve a perfect CLS, in part due to the nuPlan framework's LQR controller occasionally drifting from the provided trajectory.
PDM-Hybrid compensates for this by evaluating proposals based on the expected controller outcome, causing it to match/outperform log replay in closed-loop evaluation.

Method             | CLS-R | CLS-NR | OLS | Score
PDM-Hybrid*        | 93    | 93     | 83  | 90
hoplan             | 89    | 88     | 85  | 87
pegasus_multi_path | 82    | 85     | 88  | 85
Urban Driver [10]  | 68    | 70     | 86  | 75
IDM [17]           | 72    | 75     | 29  | 59
Table 3: 2023 nuPlan Challenge.

Challenge. The 2023 nuPlan challenge saw the preliminary (graph) version of PDM-Hybrid rank first out of 25 participating teams. The leaderboard considers the mean of CLS-R, CLS-NR, and OLS. While open-loop performance lagged slightly, closed-loop performance excelled, resulting in an overall SoTA score. Unfortunately, due to the closure of the leaderboard, our final (centerline) version of PDM-Hybrid that replaces GC-PGP with the simpler PDM-Open module could not be benchmarked. All top contenders combined learned ego-forecasting with rule-based post-solvers or post-processing to boost CLS performance for the challenge [77, 78, 79]. Thus, we expect to see more hybrid approaches in the future.

Importantly, near identical scores were recorded for our submission on both our Val14 benchmark (Table 2) and the official leaderboard (Table 3). Note that the Urban Driver and IDM results on the leaderboard are provided by the nuPlan team, so they likely use different training data and hyper-parameters than our implementations from Table 2.

Ablation Study. We delve into our design choices through an ablation study in Table 4. Table 4a displays PDM-Hybrid's closed-loop score reactive (CLS-R) and open-loop score (OLS) with varied correction horizons (C) from 0s to 3s. Applying the waypoint correction to all waypoints (i.e., C = 0) outperforms PDM-Open in OLS (87 vs. 86, see Table 2) but leads to a substantial drop in CLS-R compared to the default value of C = 2. On the other hand, a noticeable OLS decline occurs when initiating corrections deeper into the trajectory (e.g., C = 3), with minimal impact on CLS-R. For PDM-Closed (Table 4b), we compare CLS-R and runtime (ms) with the base planner across three scenarios: removing lateral centerline offsets ("lat."), longitudinal IDM proposals ("lon."), and environment forecasting ("cast."). Our analysis reveals that eliminating proposals diminishes CLS-R effectiveness but accelerates runtimes. Performance significantly drops when excluding the forecasting used for creating and evaluating proposals. However, the runtime remains nearly identical, showing the effectiveness of the simple forecasting mechanism. As for PDM-Open (Table 4c), we test three variations: a shorter centerline (30m vs. 120m), a coarser centerline (every 10m vs. 1m), and a smaller MLP with a reduced hidden dimension (from 512 to 256). Both a smaller MLP and a reduced centerline length lead to performance degradation, but the impact remains relatively minor compared to disregarding the centerline altogether (Table 1, OLS=72). Meanwhile, the impact of a coarser centerline is negligible.

(a) PDM-Hybrid
C    | CLS-R | OLS
0.0s | 58    | 87
2.0s | 92    | 84
2.5s | 92    | 84
3.0s | 92    | 72

(b) PDM-Closed
Method   | CLS-R | Time
Base     | 92    | 91
No lat.  | 89    | 55
No lon.  | 88    | 64
No cast. | 86    | 90

(c) PDM-Open
Method             | OLS
Baseline           | 86
Shorter centerline | 84
Coarser centerline | 86
Smaller MLP        | 84

Table 4: Ablation Study. We show the closed-loop score reactive (CLS-R), open loop score (OLS) and runtime in ms. We investigate (a) varying correction horizons for PDM-Hybrid, (b) ignoring sub-modules of PDM-Closed, and (c) the effects of input and architecture choices on PDM-Open.
The default configuration, highlighted in gray, achieves the best trade-offs.

5 Discussion

Although rule-based planning is often criticized for its limited generalization, our results demonstrate strong performance in the closed-loop nuPlan task, which best resembles real-world evaluation. Notably, open-loop success in part requires a trade-off in closed-loop performance. Consequently, imitation-trained ego-forecasting methods fare poorly in closed-loop. This suggests that rule-based planners remain promising and warrant further exploration. At the same time, given their poor performance out-of-the-box, there is room for improvement in imitation-based methods on nuPlan. Integrating the strengths of closed-loop planning and open-loop ego-forecasting, we present a hybrid model. However, this does not enhance closed-loop driving performance; instead, it boosts open-loop performance while executing identical driving maneuvers. We conclude that considering precise open-loop ego-forecasting as a prerequisite for achieving long-term planning goals is misleading. Acknowledging the potential importance of ego-forecasting for interpretability and assessing human-like behavior, we propose focusing this evaluation on the short horizon (e.g., 2 seconds) relevant for closed-loop driving. The current nuPlan OLS definition, requiring a unimodal 8-second ego-forecast, may only be useful for alternate applications, like setting goals for background agents in data-driven traffic simulations or allocating computational resources better, e.g., to prioritize perception or prediction in areas the ego-vehicle is expected to traverse. We discourage the use of open-loop metrics as a primary indicator of planning performance [18].

Limitations. While we significantly improve upon the established IDM model, PDM still does not execute lane-change maneuvers. Lane change attempts often lead to collisions when the ego-vehicle is between two lanes, resulting in a high penalty as per the nuPlan metrics. PDM relies on HD maps and precise offboard perception [80, 9] that may be unavailable in real-world driving situations. While real-world deployment was demonstrated for learning-based methods [81, 33, 8], it remains a significant challenge for rule-based approaches. Moreover, our experiments, aside from the held-out test set, have not specifically evaluated the model's generalization capabilities when encountering distributional shifts, such as unseen towns or novel scenario types. They were all conducted on a single simulator, nuPlan. Therefore, it is important to recognize the limitations inherent in nuPlan's data-driven simulation approach. When a planner advances more rapidly than the human driving log, objects materialize abruptly in front of the ego-vehicle during simulation. For CLS-NR, vehicles move independently as observed in reality, disregarding the ego agent, leading to excessively aggressive behavior. Conversely, CLS-R background agents rely on IDM and adhere strictly to the centerline, leading to unrealistically passive behavior. We see high value in developing a more refined reactive environment for future work.

Conclusion. In this paper, we identify prevalent misconceptions in learning-based vehicle motion planning. Based on our insights, we introduce PDM-Hybrid, which builds upon IDM and combines it with a learned ego-forecasting component.
It surpassed a comprehensive set of competitors andclaimed victory in the 2023 nuPlan competition.8AcknowledgementsAndreas Geiger was supported by the ERC Starting Grant LEGO-3D (850533), the BMWi in theproject KI Delta Learning (project number 19A19013O) and the DFG EXC number 2064/1 - projectnumber 390727645. Kashyap Chitta was supported by the German Federal Ministry of Education andResearch (BMBF): Tübingen AI Center, FKZ: 01IS18039A. We thank the International Max PlanckResearch School for Intelligent Systems (IMPRS-IS) for supporting Kashyap Chitta. We also thankPat Karnchanachari and Gianmarco Bernasconi for helping with technical issues in our leaderboardsubmissions, Niklas Hanselmann for proofreading, and Andreas Zell for helpful discussions.References[1]F. Codevilla, M. Miiller, A. López, V . Koltun, and A. Dosovitskiy. End-to-end driving viaconditional imitation learning. In Proc. IEEE International Conf. on Robotics and Automation(ICRA) , 2018.[2]F. Codevilla, E. Santana, A. M. López, and A. Gaidon. Exploring the limitations of behaviorcloning for autonomous driving. In Proc. of the IEEE International Conf. on Computer Vision(ICCV) , 2019.[3]N. Rhinehart, R. McAllister, K. Kitani, and S. Levine. PRECOG: prediction conditioned ongoals in visual multi-agent settings. In Proc. of the IEEE International Conf. on ComputerVision (ICCV) , 2019.[4]W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, and R. Urtasun. End-to-end interpretableneural motion planner. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition(CVPR) , 2019.[5]K. Chitta, A. Prakash, B. Jaeger, Z. Yu, K. Renz, and A. Geiger. Transfuser: Imitation withtransformer-based sensor fusion for autonomous driving. IEEE Trans. on Pattern Analysis andMachine Intelligence (PAMI) , 2022.[6]M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Mon-fort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba. End to end learning for self-drivingcars. arXiv.org , 1604.07316, 2016.[7]A. Kendall, J. Hawke, D. Janz, P. Mazur, D. Reda, J. M. Allen, V . D. Lam, A. Bewley, andA. Shah. Learning to drive in a day. arXiv.org , abs/1807.00412, 2018.[8]M. Bansal, A. Krizhevsky, and A. S. Ogale. Chauffeurnet: Learning to drive by imitating thebest and synthesizing the worst. In Proc. Robotics: Science and Systems (RSS) , 2019.[9]H. Caesar, J. Kabzan, K. S. Tan, W. K. Fong, E. M. Wolff, A. H. Lang, L. Fletcher, O. Beijbom,and S. Omari. nuplan: A closed-loop ml-based planning benchmark for autonomous vehicles.InProc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) Workshops , 2021.[10] O. Scheel, L. Bergamini, M. Wolczyk, B. Osi ́nski, and P. Ondruska. Urban driver: Learning todrive from real-world demonstrations using policy gradients. In Proc. Conf. on Robot Learning(CoRL) , 2021.[11] K. Renz, K. Chitta, O.-B. Mercea, A. S. Koepke, Z. Akata, and A. Geiger. Plant: Explainableplanning transformers via object-level representations. In Proc. Conf. on Robot Learning(CoRL) , 2022.[12] M. Hallgarten, M. Stoll, and A. Zell. From Prediction to Planning With Goal Conditioned LaneGraph Traversals. arXiv.org , 2302.07753, 2023.9[13] A. Prakash, K. Chitta, and A. Geiger. Multi-modal fusion transformer for end-to-end au-tonomous driving. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) ,2021.[14] K. Chitta, A. Prakash, and A. Geiger. Neat: Neural attention fields for end-to-end autonomousdriving. In Proc. of the IEEE International Conf. on Computer Vision (ICCV) , 2021.[15] F. 
Codevilla, A. M. Lopez, V . Koltun, and A. Dosovitskiy. On offline evaluation of vision-baseddriving models. In Proc. of the European Conf. on Computer Vision (ECCV) , 2018.[16] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V . Koltun. CARLA: An open urban drivingsimulator. In Proc. Conf. on Robot Learning (CoRL) , 2017.[17] M. Treiber, A. Hennecke, and D. Helbing. Congested traffic states in empirical observationsand microscopic simulations. Physical review E , 2000.[18] Y . Hu, J. Yang, L. Chen, K. Li, C. Sima, X. Zhu, S. Chai, S. Du, T. Lin, W. Wang, L. Lu, X. Jia,Q. Liu, J. Dai, Y . Qiao, and H. Li. Planning-oriented autonomous driving. In Proc. IEEE Conf.on Computer Vision and Pattern Recognition (CVPR) , 2023.[19] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale,M. Halpenny, G. Hoffmann, K. Lau, C. M. Oakley, M. Palatucci, V . R. Pratt, P. Stang, S. Stro-hband, C. Dupont, L. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen,P. Alessandrini, G. R. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. V . Nefian, and P. Mahoney.Stanley: The robot that won the DARPA grand challenge. Journal of Field Robotics (JFR) , 23(9):661–692, 2006.[20] A. Bacha, C. Bauman, R. Faruque, M. Fleming, C. Terwelp, C. Reinholtz, D. Hong, A. Wicks,T. Alberi, D. Anderson, et al. Odin: Team victortango’s entry in the darpa urban challenge.Journal of Field Robotics (JFR) , 25(8), 2008.[21] J. J. Leonard, J. P. How, S. J. Teller, M. Berger, S. Campbell, G. A. Fiore, L. Fletcher, E. Frazzoli,A. S. Huang, S. Karaman, O. Koch, Y . Kuwata, D. Moore, E. Olson, S. Peters, J. Teo, R. Truax,M. R. Walter, D. Barrett, A. Epstein, K. Maheloni, K. Moyer, T. Jones, R. Buckley, M. E.Antone, R. Galejs, S. Krishnamurthy, and J. Williams. A perception-driven autonomous urbanvehicle. Journal of Field Robotics (JFR) , 25(10):727–774, 2008.[22] C. Urmson, J. Anhalt, D. Bagnell, C. Baker, R. Bittner, M. Clark, J. Dolan, D. Duggins,T. Galatali, C. Geyer, et al. Autonomous driving in urban environments: Boss and the urbanchallenge. Journal of Field Robotics (JFR) , 25(8), 2008.[23] C. Chen, A. Seff, A. L. Kornhauser, and J. Xiao. Deepdriving: Learning affordance for directperception in autonomous driving. In Proc. of the IEEE International Conf. on Computer Vision(ICCV) , pages 2722–2730, 2015.[24] A. Sauer, N. Savinov, and A. Geiger. Conditional affordance learning for driving in urbanenvironments. In Proc. Conf. on Robot Learning (CoRL) , 2018.[25] H. Fan, F. Zhu, C. Liu, L. Zhang, L. Zhuang, D. Li, W. Zhu, J. Hu, H. Li, and Q. Kong. Baiduapollo EM motion planner. arXiv.org , 1807.08048, 2018.[26] A. Sadat, M. Ren, A. Pokrovsky, Y . Lin, E. Yumer, and R. Urtasun. Jointly learnable behaviorand trajectory planning for self-driving vehicles. In Proc. IEEE International Conf. on IntelligentRobots and Systems (IROS) , 2019.[27] A. Kesting, M. Treiber, and D. Helbing. General lane-changing model mobil for car-followingmodels. Transportation Research Record , 1999(1), 2007.[28] R. Chekroun, T. Gilles, M. Toromanoff, S. Hornauer, and F. Moutarde. Mbappe: Mcts-built-around prediction for planning explicitly. arXiv.org , 2309.08452, 2023.10[29] A. Cui, A. Sadat, S. Casas, R. Liao, and R. Urtasun. Lookout: Diverse multi-future predictionand planning for self-driving. In Proc. of the IEEE International Conf. on Computer Vision(ICCV) , 2021.[30] S. Casas, A. Sadat, and R. Urtasun. Mp3: A unified model to map, perceive, predict and plan.InProc. IEEE Conf. 
on Computer Vision and Pattern Recognition (CVPR) , 2021.[31] A. Sadat, S. Casas, M. Ren, X. Wu, P. Dhawan, and R. Urtasun. Perceive, predict, and plan:Safe motion planning through interpretable semantic representations. In Proc. of the EuropeanConf. on Computer Vision (ECCV) , 2020.[32] Z. Huang, P. Karkus, B. Ivanovic, Y . Chen, M. Pavone, and C. Lv. Dtpp: Differentiable jointconditional prediction and cost evaluation for tree policy planning in autonomous driving.arXiv.org , 2310.05885, 2023.[33] T. Phan-Minh, F. Howington, T.-S. Chu, M. S. Tomov, R. E. Beaudoin, S. U. Lee, N. Li, C. Dicle,S. Findler, F. Suarez-Ruiz, et al. Driveirl: Drive in real life with inverse reinforcement learning.InProc. IEEE International Conf. on Robotics and Automation (ICRA) , 2023.[34] P. Karkus, B. Ivanovic, S. Mannor, and M. Pavone. Diffstack: A differentiable and modularcontrol stack for autonomous vehicles. In Proc. Conf. on Robot Learning (CoRL) , 2022.[35] M. H. Danesh, P. Cai, and D. Hsu. Leader: Learning attention over driving behaviors forplanning under uncertainty. In Proc. Conf. on Robot Learning (CoRL) , 2022.[36] Z. Huang, H. Liu, J. Wu, and C. Lv. Differentiable integrated motion prediction and planningwith learnable cost function for autonomous driving. arXiv.org , 2207.10422, 2022.[37] B. Wei, M. Ren, W. Zeng, M. Liang, B. Yang, and R. Urtasun. Perceive, attend, and drive:Learning spatial attention for safe self-driving. Proc. IEEE International Conf. on Robotics andAutomation (ICRA) , 2021.[38] S. Hu, L. Chen, P. Wu, H. Li, J. Yan, and D. Tao. St-p3: End-to-end vision-based autonomousdriving via spatial-temporal feature learning. In Proc. of the European Conf. on ComputerVision (ECCV) , 2022.[39] H. Song, W. Ding, Y . Chen, S. Shen, M. Y . Wang, and Q. Chen. Pip: Planning-informedtrajectory prediction for autonomous driving. In Proc. of the European Conf. on ComputerVision (ECCV) , 2020.[40] N. Rhinehart, J. He, C. Packer, M. A. Wright, R. McAllister, J. E. Gonzalez, and S. Levine.Contingencies from observations: Tractable contingency planning with learned behavior models.InProc. IEEE International Conf. on Robotics and Automation (ICRA) , 2021.[41] Y . Chen, P. Karkus, B. Ivanovic, X. Weng, and M. Pavone. Tree-structured policy planningwith learned behavior models. In Proc. IEEE International Conf. on Robotics and Automation(ICRA) , 2023.[42] Z. Huang, H. Liu, and C. Lv. Gameformer: Game-theoretic modeling and learning oftransformer-based interactive prediction and planning for autonomous driving. arXiv.org ,2303.05760, 2023.[43] L. Chen, P. Wu, K. Chitta, B. Jaeger, A. Geiger, and H. Li. End-to-end autonomous driving:Challenges and frontiers. arXiv.org , 2306.16927, 2023.[44] N. Rhinehart, R. McAllister, and S. Levine. Deep imitative models for flexible inference,planning, and control. In Proc. of the International Conf. on Learning Representations (ICLR) ,2020.11[45] A. Filos, P. Tigas, R. McAllister, N. Rhinehart, S. Levine, and Y . Gal. Can autonomous vehiclesidentify, recover from, and adapt to distribution shifts? In Proc. of the International Conf. onMachine learning (ICML) , 2020.[46] H. Xu, Y . Gao, F. Yu, and T. Darrell. End-to-end learning of driving models from large-scalevideo datasets. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) ,pages 3530–3538, 2017.[47] D. Chen, B. Zhou, V . Koltun, and P. Krähenbühl. Learning by cheating. In Proc. Conf. on RobotLearning (CoRL) , 2019.[48] E. Ohn-Bar, A. Prakash, A. Behl, K. Chitta, and A. Geiger. 
Learning situational driving. InProc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) , 2020.[49] A. Behl, K. Chitta, A. Prakash, E. Ohn-Bar, and A. Geiger. Label efficient visual abstractionsfor autonomous driving. In Proc. IEEE International Conf. on Intelligent Robots and Systems(IROS) , 2020.[50] P. Wu, X. Jia, L. Chen, J. Yan, H. Li, and Y . Qiao. Trajectory-guided control prediction for end-to-end autonomous driving: A simple yet strong baseline. In Advances in Neural InformationProcessing Systems (NeurIPS) , 2022.[51] D. Chen and P. Krähenbühl. Learning from all vehicles. In CVPR , 2022.[52] B. Jaeger, K. Chitta, and A. Geiger. Hidden biases of end-to-end driving models. Proc. of theIEEE International Conf. on Computer Vision (ICCV) , 2023.[53] N. Hanselmann, K. Renz, K. Chitta, A. Bhattacharyya, and A. Geiger. King: Generatingsafety-critical driving scenarios for robust imitation via kinematics gradients. In Proc. of theEuropean Conf. on Computer Vision (ECCV) , 2022.[54] M. Vitelli, Y . Chang, Y . Ye, A. Ferreira, M. Wołczyk, B. Osi ́nski, M. Niendorf, H. Grimmett,Q. Huang, A. Jain, et al. Safetynet: Safe planning for real-world self-driving vehicles usingmachine-learned policies. In Proc. IEEE International Conf. on Robotics and Automation(ICRA) , 2022.[55] S. Pini, C. S. Perone, A. Ahuja, A. S. R. Ferreira, M. Niendorf, and S. Zagoruyko. Safereal-world autonomous driving by learning to predict and plan with a mixture of experts. InProc. IEEE International Conf. on Robotics and Automation (ICRA) , 2023.[56] J. Cheng, Y . Chen, X. Mei, B. Yang, B. Li, and M. Liu. Rethinking imitation-based planner forautonomous driving. arXiv.org , 2309.10443, 2023.[57] H. Caesar, V . Bankiti, A. H. Lang, S. V ora, V . E. Liong, Q. Xu, A. Krishnan, Y . Pan, G. Baldan,and O. Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proc. IEEE Conf.on Computer Vision and Pattern Recognition (CVPR) , 2020.[58] J.-T. Zhai, Z. Feng, J. Du, Y . Mao, J.-J. Liu, Z. Tan, Y . Zhang, X. Ye, and J. Wang. Rethinkingthe open-loop evaluation of end-to-end autonomous driving in nuscenes. arXiv.org , 2305.10430,2023.[59] M. Althoff, M. Koschi, and S. Manzinger. Commonroad: Composable benchmarks for motionplanning on roads. In Proc. IEEE Intelligent Vehicles Symposium (IV) , 2017.[60] L. Bergamini, Y . Ye, O. Scheel, L. Chen, C. Hu, L. D. Pero, B. Osinski, H. Grimmett, andP. Ondruska. Simnet: Learning reactive self-driving simulations from real-world observations.InProc. IEEE International Conf. on Robotics and Automation (ICRA) , 2021.12[61] S. Suo, S. Regalado, S. Casas, and R. Urtasun. Trafficsim: Learning to simulate realisticmulti-agent behaviors. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition(CVPR) , 2021.[62] Z. Zhong, D. Rempe, D. Xu, Y . Chen, S. Veer, T. Che, B. Ray, and M. Pavone. Guidedconditional diffusion for controllable traffic simulation. In Proc. IEEE International Conf. onRobotics and Automation (ICRA) , 2023.[63] D. Xu, Y . Chen, B. Ivanovic, and M. Pavone. Bits: Bi-level imitation for traffic simulation. InProc. IEEE International Conf. on Robotics and Automation (ICRA) , 2023.[64] Z. Zhang, A. Liniger, D. Dai, F. Yu, and L. Van Gool. TrafficBots: Towards world models forautonomous driving simulation and motion prediction. In Proc. IEEE International Conf. onRobotics and Automation (ICRA) , 2023.[65] L. Feng, Q. Li, Z. Peng, S. Tan, and B. Zhou. Trafficgen: Learning to generate diverse andrealistic traffic scenarios. In Proc. IEEE International Conf. 
on Robotics and Automation(ICRA) , 2023.[66] S. Ross, G. J. Gordon, and D. Bagnell. A reduction of imitation learning and structuredprediction to no-regret online learning. In Conference on Artificial Intelligence and Statistics(AISTATS) , 2011.[67] A. Prakash, A. Behl, E. Ohn-Bar, K. Chitta, and A. Geiger. Exploring data aggregation in policylearning for vision-based urban autonomous driving. In Proc. IEEE Conf. on Computer Visionand Pattern Recognition (CVPR) , 2020.[68] S. Poddar, C. Mavrogiannis, and S. S. Srinivasa. From crowd motion prediction to robotnavigation in crowds. arXiv preprint arXiv:2303.01424 , 2023.[69] C. Schöller, V . Aravantinos, F. Lay, and A. Knoll. What the constant velocity model can teachus about pedestrian motion prediction. IEEE Robotics and Automation Letters , 5(2):1696–1703,2020.[70] H. Wu, T. Phong, C. Yu, P. Cai, S. Zheng, and D. Hsu. What truly matters in trajectory predictionfor autonomous driving? arXiv preprint arXiv:2306.15136 , 2023.[71] Y . Tassa, N. Mansard, and E. Todorov. Control-limited differential dynamic programming. InProc. IEEE International Conf. on Robotics and Automation (ICRA) , 2014.[72] R. Rajamani. Vehicle dynamics and control . Springer Science & Business Media, 2011.[73] P. Polack, F. Altché, B. d’Andréa Novel, and A. de La Fortelle. The kinematic bicycle model: Aconsistent model for planning feasible trajectories for autonomous vehicles? In Proc. IEEEIntelligent Vehicles Symposium (IV) , 2017.[74] N. Deo, E. M. Wolff, and O. Beijbom. Multimodal Trajectory Prediction Conditioned onLane-Graph Traversals. In Proc. Conf. on Robot Learning (CoRL) , 2021.[75] N. Nayakanti, R. Al-Rfou, A. Zhou, K. Goel, K. S. Refaat, and B. Sapp. Wayformer: Motionforecasting via simple and efficient attention networks. In Proc. IEEE International Conf. onRobotics and Automation (ICRA) , 2023.[76] A. Cui, S. Casas, K. Wong, S. Suo, and R. Urtasun. Gorela: Go relative for viewpoint-invariantmotion forecasting. In Proc. IEEE International Conf. on Robotics and Automation (ICRA) ,2023.[77] Y . Hu, K. Li, P. Liang, J. Qian, Z. Yang, H. Zhang, W. Shao, Z. Ding, W. Xu, and Q. Liu.Imitation with spatial-temporal heatmap: 2nd place solution for nuplan challenge. arXiv.org ,2306.15700, 2023.13[78] W. Xi, L. Shi, and G. Cao. An imitation learning method with data augmentation andpost processing for planning in autonomous driving. URL https://opendrivelab.com/e2ead/AD23Challenge/Track_4_pegasus_weitao.pdf.[79] Z. Huang, H. Liu, X. Mo, and C. Lyu. Gameformer planner: A learning-enabled interactiveprediction and planning framework for autonomous vehicles. URL https://opendrivelab.com/e2ead/AD23Challenge/Track_4_AID.pdf.[80] C. R. Qi, Y . Zhou, M. Najibi, P. Sun, K. V o, B. Deng, and D. Anguelov. Offboard 3d objectdetection from point cloud sequences. In Proc. IEEE Conf. on Computer Vision and PatternRecognition (CVPR) , 2021.[81] O. Scheel, L. Bergamini, M. Wolczyk, B. Osinski, and P. Ondruska. Urban driver: Learning todrive from real-world demonstrations using policy gradients. In Proc. Conf. on Robot Learning(CoRL) , 2021.14 |
E2vL12SwO1 | PreCo: Enhancing Generalization in Co-Design ofModular Soft Robots via Brain-Body Pre-TrainingYuxing Wang1, Shuang Wu2, Tiantian Zhang1, Yongzhe Chang1∗, Haobo Fu2,Qiang Fu2, Xueqian Wang1∗1Tsinghua University2Tencent AI LabAbstract: Brain-body co-design, which involves the collaborative design of con-trol strategies and morphologies, has emerged as a promising approach to en-hance a robot’s adaptability to its environment. However, the conventional co-design process often starts from scratch, lacking the utilization of prior knowl-edge. This can result in time-consuming and costly endeavors. In this paper,we present PreCo, a novel methodology that efficiently integrates brain-body pre-training into the co-design process of modular soft robots. PreCo is based on theinsight of embedding co-design principles into models, achieved by pre-training auniversal co-design policy on a diverse set of tasks. This pre-trained co-designer isutilized to generate initial designs and control policies, which are then fine-tunedfor specific co-design tasks. Through experiments on a modular soft robot sys-tem, our method demonstrates zero-shot generalization to unseen co-design tasks,facilitating few-shot adaptation while significantly reducing the number of policyiterations required. Our video is available here.Keywords: Co-design, Pre-training, Modular Soft Robots1 IntroductionNature does not treat the development of the brain and body as separate processes, indicating thatcognitive processes are intricately connected to the body and the external environment in whichorganisms operate [1, 2]. This theory holds significant implications for the robotics community.To enable effective interaction with the environment, it is essential to prioritize the co-design ofboth physical bodies and control systems of robots. In this work, we consider the co-design ofModular Soft Robots (MSRs), which are a promising category of flexible robotic systems that offerdesigners the ability to construct robot bodies by combining various types of deformable cubes, andthe control signals can be generated by adjusting cubes’ volume (Figure 1). Currently, the majorityof related co-design studies [3, 4, 5] for MSRs mainly focus on “one robot one task”, where theprimary objective is to discover the optimal robot morphology and controller for a specific task.However, this approach seems to diverge from biological morphologies, such as the human body,which inherently possesses the ability to perform multiple tasks.As a matter of fact, even improving a robot morphology for a single task can be highly challengingdue to: (1) the presence of a severe combinatorial explosion within the robot design space and (2)the existence of incompatible state-action spaces that necessitate training a separate control policyfor each morphology. Consequently, past studies [6, 7, 8, 9, 10] often address these challenges byconsidering the evolution of body and control as separate processes, directly conducted within thelarge, high-dimensional design space. In other words, these methods typically learn from scratch,neglecting prior co-design knowledge, resulting in costly and inefficient endeavors. 
But how can weleverage this knowledge to enhance the co-design process for new applications?∗Correspondence to: Yongzhe Chang and Xueqian Wang {changyongzhe, wang.xq }@sz.tsinghua.edu.cn7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Pre-trained Co-designerWalk on flat terrainClimb stairsTraverse terrainJump across stairsBrain -Body Fine-Tuning Universal Design -and-Control PolicyMore uneven terrainBrain -Body Pre-TrainingDescend stairsJump over gapsWalk on soft bridgeGeneralizable Prototype Unseen Co -design TasksDiverse Co -Design TasksEmpty voxel Rigid voxel Soft voxel Horizontal actuator Vertical actuator............Figure 1: Workflow of PreCo. At the heart of PreCo lies a universal co-design policy, which un-dergoes pre-training using end-to-end deep reinforcement learning on a diverse set of co-designtasks. The resulting pre-trained co-designer is utilized to generate initial designs and control poli-cies, which are further fine-tuned for unseen tasks.In this paper, we propose PreCo, a methodology that entails pre-training a universal co-design policyto grasp the interdependencies between the robot morphology, control and tasks. Following thisfoundational step, the policy is utilized to generate initial designs and control strategies, which arethen fine-tuned for unseen co-design tasks, thereby reducing the learning burden. Precisely, ourapproach implies that both the morphologies and control strategies of a robot stem from the sameset of parameters. Any mutation to the parameter simultaneously affects the two, and they are trainedusing deep reinforcement learning. In contrast to conventional co-design methods that segregate notonly the optimization processes of the brain and body but also the parameters that generate them,we draw inspiration from pleiotropy in biology, which refers to a single gene expressing multiplephenotypic traits [11, 12, 13]. Thus, our method eliminates the need for a robot population andprovides empirical evidence of its capacity to enhance the efficiency of the learning process.Our study offers the following key contributions: Firstly, we introduce brain-body pre-training.Subsequently, we present PreCo, a novel approach that learns a universal design-and-control policycapable of handling multiple challenging co-design tasks for modular soft robots. Secondly, usingthe pre-trained co-design policy, we showcase that properly integrating prior knowledge makes co-design on new tasks easier in several ways: enabling improved sample efficiency, zero-shot general-ization and effective few-shot adaptation, providing the benefit over training from scratch. Thirdly,through empirical analysis, we demonstrate that the shared policy structure of PreCo exhibits greaterrobustness in terms of mitigating premature convergence, resulting in improved exploration andflexibility. Furthermore, our work provides the first experimental comparison among meta-learning,curriculum learning and pre-training methods in addressing co-design problems.2 Related WorkRobot Co-Design The process of developing both the physical body and the cognitive capabilitiesin nature is intricately intertwined [14, 15]. To replicate this fundamental principle, robot designersare tasked with concurrently optimizing the morphology and control strategy of robots. In the fieldof Evolutionary Robotics (ER), researchers have extensively investigated the application of Evolu-tionary Algorithms (EAs) for co-designing robotic systems [6, 8, 10, 16]. 
A prominent focus in thisfield lies in the representation of robot morphologies. Various approaches, such as generative encod-ing schemes [17, 18], have been explored to facilitate the discovery of novel and efficient designs.Techniques like neural networks [19, 20], Neural Cellular Automata (NCA) [21, 22, 23] and Compo-sitional Pattern-Producing Networks (CPPNs) [4, 24] have been employed to generate diverse andcomplex robot morphologies, enabling the exploration of a broad design space. In addition, EAscan also be integrated with Reinforcement Learning (RL), allowing robots to evolve and improvetheir behaviors through interactions with the environment [3, 25]. These approaches, however, adopt2separate parameters for generating the robot morphology and control, which are optimized in a bi-level fashion. Consequently, they rely on a population of design prototypes to facilitate exploration,leading to challenges in terms of sample efficiency and computational requirements. In contrast tothese population-based approaches, we propose an alternative methodology that utilizes a universalpolicy representation, enabling the robot morphology and control strategy to be derived from thesame set of parameters and jointly optimized. Through empirical experiments, we demonstrate thatthis shared representation facilitates the exploration of the design space, leading to enhanced sampleefficiency and increased flexibility.In addition to EA methods, when we have access to certain aspects of the system’s physical dynam-ics, a model-based differentiable simulator can be employed to jointly optimize the design parame-ters and control using Back-Propagation Through Time (BPTT) [26, 27, 28, 29, 30, 31]. In contrast,our work specifically addresses the model-free setting, where system modeling is not required. Toachieve this, Policy Gradient (PG) methods can be used to approximate the gradient of design param-eters or evaluate the fitness of a robot morphology [5, 32, 33, 34, 35]. Building upon this technology,PreCo is introduced as a novel approach that distinguishes itself from previous works by tackling amore challenging objective of addressing multiple co-design tasks simultaneously.Many endeavors have been conducted to bring robot co-design to real-world settings, includingco-designing soft hands [36], voxel-based soft robots [37, 38], soft robotic fishes [39, 40] and softlegged robots [41]. To enable effective sim-to-real transfer, numerical mathematic techniques likeFinite Element Method (FEM) [42] or Material Point Method (MPM) [43] together with a high-quality simulator are required to model and simulate soft-body physics. Besides, factors like materialimperfections, air resistance, friction and many others come into play, iteratively refining the designand control algorithms based on real-world feedback is also needed.Multi-task Reinforcement Learning In this study, we approach the challenge of brain-body pre-training by adopting a Multi-task Reinforcement Learning (MTRL) framework, which has garneredconsiderable interest in the field of embodied intelligence [44, 45, 46, 47, 48]. MTRL involves thetraining of an agent to perform multiple tasks concurrently, aiming to leverage shared knowledgeacross tasks for improved learning efficiency and generalization. However, previous research basedon the transformer structure primarily focuses on training a universal controller for multiple robotbodies [9, 49, 50, 51, 52, 53]. 
PreCo takes a further step by exploring how the intrinsic brain-body connections can be utilized to improve efficiency and generalization when facing new applications. In essence, our method aims to avoid learning from scratch, sharing a similar spirit with curriculum learning [5, 19, 54] and meta-learning [25, 55, 56, 57] but differing in its framework.

3 Preliminaries

Reinforcement Learning. In our study, we approach the problem of brain-body pre-training for a set of K co-design tasks by formulating it as a MTRL problem. In the domain of RL, the problem is typically formulated as a Markov Decision Process (MDP), defined by a 5-tuple (S, A, P, r, γ). Here, S represents the state space and A represents the action space. The transition function P : S × S × A → [0, 1] determines the probability of transitioning from one state to another given a specific action. The reward function r(s, a) : S × A → R assigns a numeric value to the state-action pairs, indicating the desirability of taking a particular action in a given state, and the discount factor γ ∈ (0, 1] specifies the degree to which rewards are discounted over time. Our goal is to find policy parameters θ which can maximize the average expected reward across all co-design tasks: \frac{1}{K}\sum_{k=1}^{K}\sum_{t=0}^{\infty} \gamma^{t} r_{t}^{k}(s_t, a_t). Here the policy is represented by a deep neural network parameterized as π_θ(a_t|s_t), which maps from states to distributions over actions.

We employ Proximal Policy Optimization (PPO) [58], a popular RL algorithm that is widely used in a variety of robot tasks. The algorithm utilizes a surrogate objective function that approximates the policy gradient, and the objective function of PPO is:

J(\theta) = \mathbb{E}_t\left[ \frac{\pi_{\theta}(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)} \hat{A}_t - \beta D_{KL}\left( \pi_{\theta_{old}}(\cdot|s_t) \,\|\, \pi_{\theta}(\cdot|s_t) \right) \right], \quad (1)

where \hat{A}_t is the advantage estimation and \mathbb{E}_t represents the empirical average over a batch of generated samples. By iteratively collecting experiences and optimizing J(θ), the policy π_θ(a_t|s_t) is updated in the direction of maximizing the cumulative reward.
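As a rough sketch of how the objective in Eq. (1) can be evaluated in practice, assuming log-probabilities, advantages and per-sample KL values have already been gathered from rollouts (the function name and the β value are illustrative):

```python
import torch

def ppo_kl_objective(logp_new, logp_old, advantages, kl_div, beta=0.01):
    # logp_new / logp_old: log-probabilities of the taken actions under the
    # current and the old (behaviour) policy; advantages: estimates of A_hat_t;
    # kl_div: per-sample KL(pi_old || pi_theta); beta is an illustrative value.
    ratio = torch.exp(logp_new - logp_old)          # pi_theta / pi_theta_old
    surrogate = ratio * advantages - beta * kl_div  # term inside E_t[...] of Eq. (1)
    return surrogate.mean()

# The policy parameters are updated by gradient ascent on this objective
# (in practice, by minimising its negative with an optimiser such as Adam).
```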
Figure 2: Architecture of the co-design policy. Our policy is designed with a shared structure that influences both morphology and control. It receives unified design-and-control observations and generates corresponding actions. This policy operates under the framework of reinforcement learning, where the design and control processes are unified as a single MDP (gray box).

Transformer. Transformer [59] is a popular neural network architecture that has revolutionized the domain of natural language processing and computer vision [60, 61, 62, 63, 64], and has become a fundamental component in many cutting-edge models. At its core, the transformer employs a powerful self-attention mechanism to capture the dependencies and relationships among elements in a sequence. It allows the model to dynamically allocate attention to different parts of the input sequence based on their relevance. In each self-attention layer, attention weights are computed for each element by considering its interactions with all other elements in the sequence. These weights play a crucial role in aggregating information from the entire sequence, enabling the transformer to generate comprehensive and informative representations for each element.

The transformer architecture is well-suited for modular robot systems because it is agnostic to incompatible state-action spaces. In our work, we model the co-design of modular soft robots as a sequence-to-sequence task. Under this framework, the local observations from all voxels are organized in sequences. By leveraging the self-attention mechanism, the co-design policy can focus more on crucial parts of the state space and capture the internal dependencies between voxels, allowing the policy to dynamically adjust its focus depending on the input context, which caters to the need for dynamically accommodating changes in morphologies.

4 Brain-Body Pre-Training

Our motivation for employing brain-body pre-training is that, under the assumption of underlying structural similarity between the pre-training tasks and the target tasks, properly integrating prior co-design knowledge into a universal co-design policy makes robot co-design easier. For instance, if a target task requires the robot to master a complex skill, such as traversing across extremely uneven terrain, this skill can be broken down into foundational abilities like walking, ascending stairs and surmounting minor obstacles. During pre-training, the universal co-design policy aims to extract basic brain-body links from these tasks and merge them. When facing specific target tasks, it is anticipated to leverage the prior knowledge, thereby alleviating the co-design challenge. In the remaining section, we describe details about our universal co-design policy, which is optimized end-to-end through reinforcement learning within a unified state-action space.

Universal Co-Design Policy. We start by reviewing co-design methods [5, 35] that utilize RL to approximate the gradient of design and control parameters. At the start of each episode, a dedicated design policy takes a finite number of actions in order to develop a robot morphology, and no reward is assigned to the design policy during this period. Subsequently, the resulting robot is consumed by a control policy to collect the environmental rewards, which also provides learning signals for the design actions. Using the RL method, two policies are optimized jointly to maximize the performance for the given task.

However, a notable issue of this approach is the presence of an imbalanced sample distribution between design and control. While this imbalance might not be immediately evident during the initial stages of learning, where both design and control steps are short, it becomes more pronounced as training advances (the execution steps become much longer). When employing randomly sampled experiences for training, the design policy, given the separate policy representation, tends to receive fewer updates compared to the control policy. Consequently, it can quickly become optimized only for a limited region around the local morphological optimum, which hinders the effective exploration of the design space, as shown in Section 5.2.

To address this concern, our co-design policy (actor network) is designed to facilitate more information sharing (Figure 2).
While directly representing the intricate interplay between morphology and controller is challenging, we employ shared parameters to implicitly capture their relationships. This approach guarantees that both the "brain" and the "body" of a robot are derived from the same set of parameters and developed together.

Unified State-Action Space. Our study focuses on co-designing flexible MSRs comprising various types of blocks, also known as voxels. Each voxel in the design space is represented by a discrete value that corresponds to its material or type of actuator (e.g., empty voxel = 0, soft voxel = 1, rigid voxel = 2, horizontal actuator = 3 and vertical actuator = 4). In practice, we employ one-hot encoding to represent these values. The co-design policy is uniformly denoted as π_θ(a_t|s_t) and integrated into the aforementioned design-and-control MDP. Here, s_t = {s_t^d, s_t^c} represents the concatenation of the design observation s_t^d and control observation s_t^c at time step t in each episode. During the design stage, s_t^c of s_t will be set to zero, and during the control stage, s_t^d of s_t will be consistent with the state of the last design step and unchanged. Precisely, with N denoting the size of the design space (e.g., N = 25 for a 5×5 design space), we define s_t^d = {s_t^{d_1}, s_t^{d_2}, ..., s_t^{d_N}}, where s_t^{d_i} for voxel i is a vector comprising its type and the types of its Moore neighborhood, as shown in Appendix A. s_t^c is the control observation, which comprises local observations from all voxels, denoted as s_t^c = {s_t^v, s_t^g}. s_t^v = {s_t^{v_1}, s_t^{v_2}, ..., s_t^{v_N}} represents modular observations, where s_t^{v_i} includes the relative position of each voxel's four corners with respect to the center of mass of the robot. s_t^g represents task-related observations, such as the terrain information.

We utilize two feed-forward neural networks to decode shared information from the transformer-based encoder. The output layer dimension of the design action decoder matches the total number of material types (5 in our work), while the output layer dimension of the control action decoder is set to 1. As we model the co-design of modular soft robots as a sequence-to-sequence task, both the design and control actions have a length of N. During training, voxels are determined by sampling from a categorical distribution, which is formulated based on the output logits. During evaluation, the action corresponding to the highest logit value is selected. Additionally, the control action decoder generates the mean value μ. By combining it with a constant standard deviation Σ, control signals can be sampled from this Gaussian distribution and then clipped within the range of [0.6, 1.6], which corresponds to gradual contractions/expansions of the actuators. We use action masks to inform the policy whether an element in the output sequence is an actuator.
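To illustrate how the two decoders described above could sit on top of the shared encoder, here is a minimal PyTorch sketch. The class and parameter names, the feature dimension and the constant log-standard-deviation value are assumptions; the five material types, the per-voxel categorical design action, the clipped Gaussian control action in [0.6, 1.6] and the actuator mask follow the text.

```python
import math
import torch
import torch.nn as nn

class CoDesignHeads(nn.Module):
    # Per-voxel token features (B, N, d_model) from the shared transformer
    # encoder are mapped to (i) logits over the 5 material types for the design
    # action and (ii) a scalar Gaussian mean for the control action.
    def __init__(self, d_model=128, n_materials=5, log_std=-0.5):
        super().__init__()
        self.design_head = nn.Linear(d_model, n_materials)  # categorical over voxel types
        self.control_head = nn.Linear(d_model, 1)           # mean of the control Gaussian
        self.std = math.exp(log_std)                        # constant standard deviation

    def forward(self, tokens, actuator_mask, design_stage):
        if design_stage:
            # sample one material id per voxel during training
            return torch.distributions.Categorical(
                logits=self.design_head(tokens)).sample()
        mu = self.control_head(tokens).squeeze(-1)            # (B, N)
        act = torch.distributions.Normal(mu, self.std).sample()
        act = act.clamp(0.6, 1.6)        # gradual contraction/expansion of actuators
        return act * actuator_mask       # zero out entries that are not actuators
```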
The critic network shares the same architecture as the actor network (Figure 2) and computes the value function, i.e., an estimate of the expected return under the current policy. Its output is an N×1 continuous-valued vector in which each element corresponds to the estimated value of a voxel; we represent the overall morphology value by averaging the values across all voxels. With the policy gradient technique, the co-design policy is updated to optimize the predicted morphology value. After completing the pre-training phase, the resulting pre-trained co-designer is utilized to generate initial designs and control policies, which are subsequently fine-tuned with PPO on unseen tasks.

5 Experiments

In this section, we experimentally evaluate our proposed approach to answer the following questions: (1) Does our method, PreCo, effectively perform brain-body pre-training and discover robot morphologies capable of executing multiple tasks? (2) How well does our method demonstrate zero-shot generalization and brain-body fine-tuning capabilities when faced with unseen co-design tasks? (3) What is the impact of the unified policy representation on the performance of PreCo?

5.1 Environments and Implementation

Based on the Evolution Gym platform [3], we establish a modular robot state-action space² that supports brain-body pre-training as described in Section 4. Our focus lies on a fixed design space of size 5×5, which includes 9 locomotion co-design tasks with open-ended environments: Walker-v0 (easy), PlatformJumper-v0 (hard), UpStepper-v0 (medium), ObstacleTraverser-v0 (medium), BridgeWalker-v0 (easy), GapJumper-v0 (hard), DownStepper-v0 (easy), ObstacleTraverser-v1 (hard), and Hurdler-v0 (hard), as shown in Figure 1. The difficulty levels are determined based on the performance of evolution-based co-design algorithms from the platform. For more information regarding the environmental details, please refer to Appendix B.

We work under the assumption that there are structural similarities between the pre-training and target co-design tasks. Guided by this understanding, we select the first four tasks for pre-training. The target tasks in our study essentially encompass different adaptations of the pre-training tasks, including the following types: (1) More challenging scenarios, such as the transition from ObstacleTraverser-v0 to ObstacleTraverser-v1, where the terrain becomes increasingly uneven; (2) Transfers of comparable difficulty, as seen when moving from Walker-v0 to BridgeWalker-v0 (with a shift to softer terrain) or from PlatformJumper-v0 to GapJumper-v0, wherein the gap between steps expands but the height of these steps reduces; (3) "Reverse" scenarios, exemplified by the transition from UpStepper-v0 to DownStepper-v0, where the direction of the steps is inverted.

We compare PreCo against the following baselines that are also suitable for multiple co-design tasks: (1) PreCo-Sep, which is based on PreCo but utilizes separate transformer-based design and control policies; this baseline allows us to investigate the effectiveness of the unified policy representation. (2) CuCo [5], a curriculum-based co-design method that consists of a separate NCA-based design policy and a transformer-based control policy; we set the curriculum of CuCo to 3×3 → 5×5. (3) MeCo, which adopts the same network architecture as PreCo but is trained using Reptile [65], a popular meta-learning method. We use a 3-layer transformer encoder and run all experiments with the same number of policy iterations.
More implementation details can be found in Appendix C.

²https://github.com/Yuxing-Wang-THU/ModularEvoGym

5.2 Results

Brain-Body Pre-Training We show the learning curves and converged robot morphologies of all methods in Figure 3. For each method, the learning curve is reported over 7 different runs. The figure clearly demonstrates that our proposed method, PreCo, outperforms the baselines in terms of learning speed and final performance. A comparison of the robot morphologies designed by each method reveals intriguing distinctions. Both PreCo and PreCo-Sep are capable of discovering robot bodies with rigid legs, which are essential for maintaining balance in various locomotion tasks. PreCo, in particular, exhibits the ability to utilize empty voxels, resulting in robots with serrated and hollow structures that potentially contribute to enhanced performance. On the other hand, CuCo produces robots with several repeated body segments; although it benefits from curriculum learning, as evident from its learning curve, it converges to a local morphological optimum with lower performance. As for MeCo, the "meta-bodies" it discovered seem to prefer horizontal actuators, which may not be efficient in certain jumping tasks. However, our primary interest lies in evaluating its performance on the unseen co-design tasks.

Figure 3: Learning curves and converged morphologies of brain-body pre-training. In the left figure, we show the mean and standard deviation of average task performance against the number of policy iterations for all methods (PreCo (ours), PreCo-Sep, MeCo, CuCo). The right figure displays the individual learning curves of PreCo on the training tasks (Walker-v0, UpStepper-v0, ObstacleTraverser-v0, PlatformJumper-v0). The bottom figure shows two representative converged morphologies from each method. [Plots not reproduced here; axes are policy iterations (×10³) vs. average performance.]

Zero-Shot Generalization One of our objectives in this work is to train a co-design policy capable of generating a single modular soft robot that can generalize to unseen co-design tasks. Figure 4 illustrates that PreCo consistently achieves higher zero-shot performance across new environments when compared to the baseline methods.

Figure 4: Evaluation of zero-shot generalization on the unseen co-design tasks (BridgeWalker-v0, DownStepper-v0, ObstacleTraverser-v1, GapJumper-v0, Hurdler-v0). The height of each bar represents the average zero-shot performance of a method, and error bars indicate the corresponding standard deviation. [Bar chart not reproduced here.]
Figure 7 in Appendix D shows that the robot designed by PreCo demonstrates the ability to employ the skill of somersaulting for traversing challenging terrains, relying on its environmental comprehension. Furthermore, it exhibits an understanding of the necessity to lean back to preserve stability while descending stairs.

Table 1: Final performance across environments and baselines; each result is reported over 7 different runs. Methods on the left side of the divider are fine-tuned from their corresponding pre-trained models, while those on the right side are trained from scratch on target tasks.

Environment            PreCo-FT     PreCo-Sep-FT   MeCo-FT      CuCo-FT     | PreCo-Scratch   GA
BridgeWalker-v0        3.69±0.63    3.11±0.91      4.75±0.62    4.37±0.61   | 4.81±0.21       5.31±0.25
Hurdler-v0             4.10±0.52    1.73±2.19      1.88±1.30    1.66±1.24   | 2.76±1.39       1.51±0.25
DownStepper-v0         9.01±0.02    8.90±0.01      4.58±0.74    8.65±0.33   | 8.98±0.01       8.91±0.01
GapJumper-v0           5.78±1.25    2.58±0.19      2.83±0.50    3.58±0.87   | 3.48±0.66       3.35±0.15
ObstacleTraverser-v1   4.88±0.12    3.88±0.69      3.03±0.50    2.00±0.77   | 3.38±0.69       2.98±0.63

Brain-Body Fine-Tuning Besides the evaluation of zero-shot generalization, we also consider a more general setting that allows the co-design policy to fine-tune its parameters to adapt to target tasks. We aim to investigate whether brain-body fine-tuning is better than training from scratch. Keeping this in mind, we introduce PreCo-Scratch and GA [3] as additional baselines, both of which are trained from scratch on target tasks. Here, we limit the number of brain-body fine-tuning iterations to 300 and the number of policy iterations when learning from scratch to 2000. Table 1 presents all results across unseen environments. It is evident that PreCo outperforms the baseline algorithms in most environments. For morphological results, Figure 8 in Appendix D demonstrates that PreCo exhibits intelligent behavior by retaining the beneficial serrated structure for effective stair climbing and obstacle traversal, while also making adaptive modifications to suit the new environment.

The Shared Policy Representation To go a step further, in Appendix E we provide a performance comparison between PreCo and PreCo-Sep when trained from scratch across 10 co-design tasks (Table 3). Figure 9 and Figure 10 illustrate their learning processes in a complex task, Climber-v0, which requires the policy to have good exploration ability to grow irregular structures. Clearly, PreCo exhibits the ability to explore beyond the local morphological optimum, allowing it to develop thin limbs that aid in climbing. In summary, the shared policy representation creates additional opportunities for exploring the design space: as the parameters of the "control policy" undergo adjustments, the "design policy" is concurrently updated.

6 Limitations and Conclusion

We have introduced PreCo, a co-design method that utilizes brain-body pre-training to generate modular soft robots capable of performing multiple tasks. Through the adoption of a shared policy representation, which captures the inherent brain-body connections across various co-design tasks, we have observed favorable zero-shot generalization and few-shot adaptation capabilities on previously unseen co-design tasks.

There are a number of areas for improvement. As shown in Table 1, our method does not perform very well on BridgeWalker-v0. This might be because the selection of training tasks does not cover the variation of soft terrain, potentially leading to ambiguities in the co-design policy. Exploring the selection of pre-training co-design tasks could be interesting for future research.
Additionally, although our policy representation appears to facilitate the learning process, it is worth noting that destructive mutations of the network parameters can still occur; further investigation into the genotype-phenotype-fitness mapping of this policy would be valuable. In this paper, we tested our method in a simulator with relatively fundamental modules as a proof of concept to demonstrate its effectiveness; developing a general pipeline for translating learned models to physical robots will definitely be our next step. We also provide a detailed discussion of this sim-to-real issue in Appendix F, and we envision our method serving as a foundation for subsequent research that aims to make the co-design of modular robots more practical, both in simulation and in the real world.

Acknowledgments

We sincerely thank the anonymous reviewers for their helpful comments in revising the paper. This work was supported by the National Key R&D Program of China (2022YFB4701400/4701402).

References

[1] H. Lipson, V. SunSpiral, J. C. Bongard, and N. Cheney. On the difficulty of co-optimizing morphology and control in evolved virtual creatures. In IEEE Symposium on Artificial Life, 2016.
[2] R. Pfeifer, F. Iida, and M. Lungarella. Cognition from the bottom up: on biological inspiration, body morphology, and soft materials. Trends in Cognitive Sciences, 18:404–413, 2014.
[3] J. Bhatia, H. Jackson, Y. Tian, J. Xu, and W. Matusik. Evolution gym: A large-scale benchmark for evolving soft robots. In NeurIPS, 2021.
[4] N. Cheney, J. C. Bongard, V. SunSpiral, and H. Lipson. Scalable co-optimization of morphology and control in embodied machines. Journal of The Royal Society Interface, 15, 2018.
[5] Y. Wang, S. Wu, H. Fu, Q. Fu, T. Zhang, Y. Chang, and X. Wang. Curriculum-based co-design of morphology and control of voxel-based soft robots. In The Eleventh International Conference on Learning Representations, 2023.
[6] K. Sims. Evolving 3d morphology and behavior by competition. Artificial Life, 1:353–372, 1994.
[7] N. Cheney, R. MacCurdy, J. Clune, and H. Lipson. Unshackling evolution: evolving soft robots with multiple materials and a powerful generative encoding. In GECCO '13, 2013.
[8] E. Medvet, A. Bartoli, F. Pigozzi, and M. Rochelli. Biodiversity in evolved voxel-based soft robots. Proceedings of the Genetic and Evolutionary Computation Conference, 2021.
[9] T. Wang, Y. Zhou, S. Fidler, and J. Ba. Neural graph evolution: Towards efficient automatic robot design. ArXiv, abs/1906.05370, 2019.
[10] T. F. Nygaard, D. Howard, and K. Glette. Real world morphological evolution is feasible. Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, 2020.
[11] G. Williams. Pleiotropy, natural selection, and the evolution of senescence. Evolution, 11, 1957.
[12] N. Solovieff, C. Cotsapas, P. H. Lee, S. M. Purcell, and J. W. Smoller. Pleiotropy in complex traits: challenges and strategies. Nature Reviews Genetics, 14:483–495, 2013.
[13] D. Marzougui, M. Biondina, and F. Wyffels. A comparative analysis on genome pleiotropy for evolved soft robots. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 2022.
[14] M. Farina. Embodied cognition: dimensions, domains and applications. Adaptive Behavior, 29(1):73–88, 2021.
[15] A. Cangelosi and M. Asada. Cognitive robotics. MIT Press, 2022.
[16] J. Nordmoen, F. Veenstra, K. O. Ellefsen, and K. Glette. Quality and diversity in evolutionary modular robotics.
2020 IEEE Symposium Series on Computational Intelligence (SSCI) , pages2109–2116, 2020.[17] F. Veenstra, A. Fa ́ı ̃na, S. Risi, and K. Støy. Evolution and morphogenesis of simulated modularrobots: A comparison between a direct and generative encoding. In EvoApplications , 2017.9[18] E. Samuelsen, K. Glette, and J. Tørresen. A hox gene inspired generative approach to evolvingrobot morphology. In Annual Conference on Genetic and Evolutionary Computation , 2013.[19] R. Wang, J. Lehman, J. Clune, and K. O. Stanley. Poet: open-ended coevolution of environ-ments and their optimized solutions. Proceedings of the Genetic and Evolutionary Computa-tion Conference , 2019.[20] K. Walker and H. Hauser. Evolution of morphology through sculpting in a voxel based robot.InALIFE , 2021.[21] A. Mordvintsev, E. Randazzo, E. Niklasson, and M. Levin. Growing neural cellular automata.Distill , 2020.[22] S. Sudhakaran, E. Najarro, and S. Risi. Goal-guided neural cellular automata: Learning tocontrol self-organising systems. ArXiv , abs/2205.06806, 2022.[23] R. B. Palm, M. G. Duque, S. Sudhakaran, and S. Risi. Variational neural cellular automata.ArXiv , abs/2201.12360, 2022.[24] F. H. K. dos Santos Tanaka and C. C. Aranha. Co-evolving morphology and control of softrobots using a single genome. 2022 IEEE Symposium Series on Computational Intelligence(SSCI) , pages 1235–1242, 2022.[25] ́A. Belmonte-Baeza, J. Lee, G. Valsecchi, and M. Hutter. Meta reinforcement learning foroptimal design of legged robots. IEEE Robotics and Automation Letters , 7:12134–12141,2022.[26] T.-H. Wang, P. Ma, A. E. Spielberg, Z. Xian, H. Zhang, J. B. Tenenbaum, D. Rus, and C. Gan.Softzoo: A soft robot co-design benchmark for locomotion in diverse environments. arXivpreprint arXiv:2303.09555 , 2023.[27] P. Ma, T. Du, J. Z. Zhang, K. Wu, A. Spielberg, R. K. Katzschmann, and W. Matusik. Diffaqua:A differentiable computational design pipeline for soft underwater swimmers with shape inter-polation. ACM Trans. Graph. , 40:132:1–132:14, 2021.[28] M. B ̈acher, E. Knoop, and C. Schumacher. Design and control of soft robots using differen-tiable simulation. Current Robotics Reports , 2(2):211–221, 2021.[29] J. Z. Zhang, Y . Zhang, P. Ma, E. Nava, T. Du, P. Arm, W. Matusik, and R. K. Katzschmann.Sim2real for soft robotic fish via differentiable simulation. arXiv preprint arXiv:2109.14855 ,2021.[30] Y . Hu, J. Liu, A. Spielberg, J. B. Tenenbaum, W. T. Freeman, J. Wu, D. Rus, and W. Ma-tusik. Chainqueen: A real-time differentiable physical simulator for soft robotics. In 2019International conference on robotics and automation (ICRA) , pages 6265–6271. IEEE, 2019.[31] T. Du, J. Hughes, S. Wah, W. Matusik, and D. Rus. Underwater soft robot modeling and controlwith differentiable simulation. IEEE Robotics and Automation Letters , 6(3):4994–5001, 2021.[32] K. S. Luck, H. B. Amor, and R. Calandra. Data-efficient co-adaptation of morphology andbehaviour with deep reinforcement learning. In CoRL , 2019.[33] C. B. Schaff, D. Yunis, A. Chakrabarti, and M. R. Walter. Jointly learning to construct andcontrol agents using deep reinforcement learning. 2019 International Conference on Roboticsand Automation (ICRA) , pages 9798–9805, 2019.[34] D. R. Ha. Reinforcement learning for improving agent design. Artificial Life , 25:352–365,2019.10[35] Y . Yuan, Y . Song, Z. Luo, W. Sun, and K. M. Kitani. Transform2act: Learning a transform-and-control policy for efficient agent design. ArXiv , abs/2110.03659, 2022.[36] R. Deimel, P. Irmisch, V . Wall, and O. Brock. 
Automated co-design of soft hand mor-phology and control strategy for grasping. 2017 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 1213–1218, 2017. URL https://api.semanticscholar.org/CorpusID:3037063 .[37] S. Kriegman, S. J. Walker, D. S. Shah, M. Levin, R. Kramer-Bottiglio, and J. C. Bongard.Automated shapeshifting for function recovery in damaged robots. Robotics: Science andSystems XV , 2019. doi:10.15607/rss.2019.xv.028.[38] S. Kriegman, A. M. Nasab, D. S. Shah, H. Steele, G. Branin, M. Levin, J. C. Bongard, andR. Kramer-Bottiglio. Scalable sim-to-real transfer of soft robot designs. 2020 3rd IEEE Inter-national Conference on Soft Robotics (RoboSoft) , pages 359–366, 2020.[39] S.-D. Gravert, M. Y . Michelis, S. Rogler, D. Tscholl, T. Buchner, and R. K. Katzschmann.Planar modeling and sim-to-real of a tethered multimaterial soft swimmer driven by peano-hasels. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) ,pages 9417–9423. IEEE, 2022.[40] E. Nava, J. Z. Zhang, M. Y . Michelis, T. Du, P. Ma, B. F. Grewe, W. Matusik, and R. K.Katzschmann. Fast aquatic swimmer optimization with differentiable projective dynamics andneural network hydrodynamic models. In International Conference on Machine Learning ,pages 16413–16427. PMLR, 2022.[41] C. Schaff, A. Sedal, and M. J. Walter. Soft robots learn to crawl: Jointly optimizing design andcontrol with sim-to-real transfer. Robotics: Science and Systems XVIII , 2022. doi:10.15607/rss.2022.xviii.062.[42] M. Dubied, M. Y . Michelis, A. Spielberg, and R. K. Katzschmann. Sim-to-real for soft robotsusing differentiable fem: Recipes for meshing, damping, and actuation. IEEE Robotics andAutomation Letters , 7(2):5015–5022, 2022.[43] A. Spielberg, A. Amini, L. Chin, W. Matusik, and D. Rus. Co-learning of task and sensorplacement for soft robotics. IEEE Robotics and Automation Letters , 6(2):1208–1215, 2021.[44] Y . W. Teh, V . Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick, R. Hadsell, N. M. O. Heess, andR. Pascanu. Distral: Robust multitask reinforcement learning. In NIPS , 2017.[45] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V . Mnih, T. Ward, Y . Doron, V . Firoiu,T. Harley, I. Dunning, S. Legg, and K. Kavukcuoglu. Impala: Scalable distributed deep-rlwith importance weighted actor-learner architectures. ArXiv , abs/1802.01561, 2018.[46] M. Hessel, H. Soyer, L. Espeholt, W. M. Czarnecki, S. Schmitt, and H. V . Hasselt. Multi-taskdeep reinforcement learning with popart. In AAAI Conference on Artificial Intelligence , 2018.[47] T. Yu, S. Kumar, A. Gupta, S. Levine, K. Hausman, and C. Finn. Gradient surgery for multi-task learning. ArXiv , abs/2001.06782, 2020.[48] Z. Xu, K. Wu, Z. Che, J. Tang, and J. Ye. Knowledge transfer in multi-task deep reinforcementlearning for continuous control. ArXiv , abs/2010.07494, 2020.[49] A. Sanchez-Gonzalez, N. M. O. Heess, J. T. Springenberg, J. Merel, M. A. Riedmiller, R. Had-sell, and P. W. Battaglia. Graph networks as learnable physics engines for inference and control.ArXiv , abs/1806.01242, 2018.[50] W. Huang, I. Mordatch, and D. Pathak. One policy to control them all: Shared modular policiesfor agent-agnostic control. In ICML , 2020.11[51] A. Gupta, L. J. Fan, S. Ganguli, and L. Fei-Fei. Metamorph: Learning universal controllerswith transformers. ArXiv , abs/2203.11931, 2022.[52] B. Trabucco, M. Phielipp, and G. Berseth. Anymorph: Learning transferable polices by infer-ring agent morphology. In ICML , 2022.[53] G. Cheng, L. Dong, W. 
Cai, and C. Sun. Multi-task reinforcement learning with attention-based mixture of experts. IEEE Robotics and Automation Letters , 8:3811–3818, 2023.[54] R. Wang, J. Lehman, A. Rawal, J. Zhi, Y . Li, J. Clune, and K. O. Stanley. Enhanced poet:Open-ended reinforcement learning through unbounded invention of learning challenges andtheir solutions. In ICML , 2020.[55] T. Anne, J. Wilkinson, and Z. Li. Meta-reinforcement learning for adaptive motor control inchanging robot dynamics and environments. ArXiv , abs/2101.07599, 2021.[56] T. Yu, D. Quillen, Z. He, R. C. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: Abenchmark and evaluation for multi-task and meta reinforcement learning. In Conference onRobot Learning , 2019.[57] K. Rakelly, A. Zhou, D. Quillen, C. Finn, and S. Levine. Efficient off-policy meta-reinforcement learning via probabilistic context variables. ArXiv , abs/1903.08254, 2019.[58] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. ArXiv , abs/1707.06347, 2017.[59] A. Vaswani, N. M. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, andI. Polosukhin. Attention is all you need. ArXiv , abs/1706.03762, 2017.[60] K. He, X. Chen, S. Xie, Y . Li, P. Doll’ar, and R. B. Girshick. Masked autoencoders are scalablevision learners. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR) , pages 15979–15988, 2021.[61] Z. Liu, Y . Lin, Y . Cao, H. Hu, Y . Wei, Z. Zhang, S. Lin, and B. Guo. Swin transformer: Hier-archical vision transformer using shifted windows. 2021 IEEE/CVF International Conferenceon Computer Vision (ICCV) , pages 9992–10002, 2021.[62] C. Raffel, N. M. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y . Zhou, W. Li, and P. J.Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv ,abs/1910.10683, 2019.[63] Z. Dai, Z. Yang, Y . Yang, J. G. Carbonell, Q. V . Le, and R. Salakhutdinov. Transformer-xl:Attentive language models beyond a fixed-length context. ArXiv , abs/1901.02860, 2019.[64] I. Beltagy, M. E. Peters, and A. Cohan. Longformer: The long-document transformer. ArXiv ,abs/2004.05150, 2020.[65] A. Nichol and J. Schulman. Reptile: a scalable metalearning algorithm. arXiv preprintarXiv:1803.02999 , 2(3):4, 2018.[66] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,N. Gimelshein, L. Antiga, A. Desmaison, A. K ̈opf, E. Yang, Z. DeVito, M. Raison, A. Te-jani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperativestyle, high-performance deep learning library. In NeurIPS , 2019.[67] A. Antoniou, H. Edwards, and A. Storkey. How to train your maml. In Seventh InternationalConference on Learning Representations , 2019.[68] C. Zhang, P. Zhu, Y . Lin, Z. Jiao, and J. Zou. Modular soft robotics: Modular units, connectionmechanisms, and applications. Advanced Intelligent Systems , 2(6):1900166, 2020.12[69] M. Tebyani, A. Spaeth, N. Cramer, and M. Teodorescu. A geometric kinematic model forflexible voxel-based robots. Soft Robotics , 10(3):517–526, 2023.[70] J. Legrand, S. Terryn, E. Roels, and B. Vanderborght. Reconfigurable, multi-material, voxel-based soft robots. IEEE Robotics and Automation Letters , 8(3):1255–1262, 2023.[71] N. Kellaris, V . Gopaluni Venkata, G. M. Smith, S. K. Mitchell, and C. Keplinger. Peano-haselactuators: Muscle-mimetic, electrohydraulic transducers that linearly contract on activation.Science Robotics , 3(14):eaar3276, 2018.[72] S. Kriegman, S. 
Walker, D. S. Shah, M. Levin, R. Kramer-Bottiglio, and J. C. Bongard. Automated shapeshifting for function recovery in damaged robots. ArXiv, abs/1905.09264, 2019.

A Parameterization of the Design Space

Figure 5: Parameterization of the design space. Initially, the design space is surrounded by empty voxels. Each voxel is denoted by a discrete value reflecting its material characteristic (empty voxel = 0, soft voxel = 1, rigid voxel = 2, horizontal actuator = 3, vertical actuator = 4). Then, a sliding window is used to get each voxel's local state, which is composed of its type and the types of its Moore neighbors. Finally, the design state is formulated as an ordered sequence. [Diagram not reproduced here; it shows the 5×5 design space padded to 7×7, a sliding window with cropping, and the resulting 25×3×3 neighborhoods flattened into a 25×9 design observation.]
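As a concrete illustration of this windowing step, the sketch below converts a 5×5 grid of material codes into the 25×9 sequence of local design observations. The traversal order and the exact ordering of the nine entries per voxel are our assumptions; the helper is illustrative, not the platform's code.

```python
import numpy as np

EMPTY, SOFT, RIGID, H_ACT, V_ACT = 0, 1, 2, 3, 4  # voxel material codes

def design_observation(grid):
    """Turn an (H, W) grid of material codes into an (H*W, 9) sequence, where
    each row holds a voxel's own type followed by its Moore neighborhood,
    read row by row. Padding with EMPTY mimics surrounding the design space
    with empty voxels."""
    h, w = grid.shape
    padded = np.pad(grid, 1, constant_values=EMPTY)    # (H+2, W+2)
    rows = []
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]          # 3x3 Moore neighborhood
            neighbors = np.delete(window.flatten(), 4) # drop the center voxel
            rows.append(np.concatenate(([grid[i, j]], neighbors)))
    return np.stack(rows)                              # (H*W, 9)

# Example: a random 5x5 robot design -> a 25x9 design observation, which can
# then be one-hot encoded entry-wise before being fed to the policy.
rng = np.random.default_rng(0)
grid = rng.integers(0, 5, size=(5, 5))
obs = design_observation(grid)
print(obs.shape)  # (25, 9)
```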
B Environment Details

Our experiments are based on the simulation platform from [3, 5]. In this section, we provide additional details of the environments used (Figure 6).

Position. p^o is a 2-dim vector that represents the position of the center of mass of an object o in the simulation at time t. p^o_x and p^o_y are the x and y components of this vector, respectively. p^o is calculated by averaging the positions of all the point-masses that make up object o at time t.

Velocity. v^o is a 2-dim vector that represents the velocity of the center of mass of an object o in the simulation at time t. v^o_x and v^o_y are the x and y components of this vector, respectively. v^o is calculated by averaging the velocities of all the point-masses that make up object o at time t.

Orientation. θ^o is a 1-dim vector that represents the orientation of an object o in the simulation at time t. Let p_i be the position of point mass i of object o. θ^o is computed by averaging, over all i, the angle between the vector p_i − p^o at time t and at time 0. This average is weighted by ||p_i − p^o|| at time 0.

Other observations. h^o_b(d) is a vector of length (2d+1) that describes elevation information around the robot below its center of mass. More specifically, for some integer x ≤ d, the corresponding entry in the vector h^o_b(d) is the highest point of the terrain that is less than p^o_y, within a range of [x, x+1] voxels from p^o_x in the x-direction.

Figure 6: Visualization of all environments used in our work. [Screenshots not reproduced here.]

B.1 Walker-v0
In this task, the robot is rewarded for walking as far as possible on flat terrain. The task-specific observation is v^robot, and the reward R is
R = Δp^robot_x,   (2)
which rewards the robot for moving in the positive x-direction. The robot receives a reward of 1 for reaching the end of the terrain. The episode lasts 500 time steps.

B.2 BridgeWalker-v0
In this task, the robot is rewarded for walking as far as possible on a soft rope-bridge. The task-specific observation is {v^robot, θ^robot}, and the reward R is
R = Δp^robot_x,   (3)
which rewards the robot for moving in the positive x-direction. The robot receives a reward of 1 for reaching the end of the terrain. The episode lasts 500 time steps.

B.3 UpStepper-v0
In this task, the robot climbs up stairs of varying lengths. The task-specific observation is formed by concatenating the vectors {v^robot, θ^robot, h^robot_b(5)}, and the reward R is
R = Δp^robot_x,   (4)
which rewards the robot for moving in the positive x-direction. The robot also receives a one-time reward of 2 for reaching the end of the terrain. The episode lasts 600 time steps.

B.4 DownStepper-v0
In this task, the robot climbs down stairs of varying lengths. The task-specific observation is formed by concatenating the vectors {v^robot, θ^robot, h^robot_b(5)}, and the reward R is
R = Δp^robot_x,   (5)
which rewards the robot for moving in the positive x-direction. The robot also receives a one-time reward of 2 for reaching the end of the terrain, and a one-time penalty of −3 for rotating more than 90 degrees from its original orientation in either direction. The episode lasts 500 time steps.

B.5 ObstacleTraverser-v0
In this task, the robot walks across terrain that gets increasingly bumpy. The task-specific observation is formed by concatenating the vectors {v^robot, θ^robot, h^robot_b(5)}, and the reward R is
R = Δp^robot_x,   (6)
which rewards the robot for moving in the positive x-direction. The robot also receives a one-time reward of 2 for reaching the end of the terrain, and a one-time penalty of −3 for rotating more than 90 degrees from its original orientation in either direction. The episode lasts 1000 time steps.

B.6 ObstacleTraverser-v1
In this task, the robot walks through very bumpy terrain. The task-specific observation is formed by concatenating the vectors {v^robot, θ^robot, h^robot_b(5)}, and the reward R is
R = Δp^robot_x,   (7)
which rewards the robot for moving in the positive x-direction. The robot also receives a one-time reward of 2 for reaching the end of the terrain. The episode lasts 1000 time steps.

B.7 Hurdler-v0
In this task, the robot walks across terrain with tall obstacles. The task-specific observation is formed by concatenating the vectors {v^robot, θ^robot, h^robot_b(5)}, and the reward R is
R = Δp^robot_x,   (8)
which rewards the robot for moving in the positive x-direction. The robot also receives a one-time penalty of −3 for rotating more than 90 degrees from its original orientation in either direction. The episode lasts 1000 time steps.

B.8 GapJumper-v0
In this task, the robot traverses a series of spaced-out floating platforms, all at the same height. The task-specific observation is formed by concatenating the vectors {v^robot, θ^robot, h^robot_b(5)}, and the reward R is
R = Δp^robot_x,   (9)
which rewards the robot for moving in the positive x-direction. The robot also receives a one-time penalty of −3 for falling off the platforms. The episode lasts 1000 time steps.

B.9 PlatformJumper-v0
In this task, the robot traverses a series of floating platforms at different heights. The target design space is 5×5. The task-specific observation is formed by concatenating the vectors {v^robot, θ^robot, h^robot_b(5)}, and the reward R is
R = Δp^robot_x,   (10)
which rewards the robot for moving in the positive x-direction. The robot also receives a one-time penalty of −3 for rotating more than 90 degrees from its original orientation in either direction or for falling off the platforms (after which the environment resets). The episode lasts 1000 time steps.
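All of the locomotion tasks above share the same per-step reward, the x-displacement of the robot's center of mass, and differ only in their one-time bonuses and penalties. The following small function paraphrases that structure for illustration; it is not the Evolution Gym implementation, and the flag and argument names are ours.

```python
def locomotion_reward(prev_x, cur_x, reached_end=False, tipped_over=False,
                      fell_off=False, completion_bonus=2.0, penalty=-3.0):
    """Per-step reward R = delta p_x^robot plus the one-time terms used by the
    tasks above. Which flags apply (and the size of the completion bonus)
    depends on the specific task."""
    reward = cur_x - prev_x                 # progress in the positive x-direction
    if reached_end:
        reward += completion_bonus          # one-time reward for finishing the terrain
    if tipped_over or fell_off:
        reward += penalty                   # one-time penalty (e.g., rotating > 90 degrees)
    return reward

# Example: the robot advanced 0.05 units and neither finished nor fell.
r = locomotion_reward(prev_x=1.20, cur_x=1.25)
```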
C Implementation Details

C.1 Hyperparameters and Training Procedure

We use PyTorch [66] to implement all the models used in our work. We take the official transformer implementation from PyTorch, which uses the TransformerEncoderLayer module, and add a learnable position embedding. All hyperparameters of PreCo are listed in Table 2.

Our co-design policy can be trained in an end-to-end RL manner because we unify the design and control processes as a single MDP. That is, at the start of each RL episode, the policy first takes a finite number of design actions to develop a robot morphology, and no reward is assigned to the policy during this period. Subsequently, the resulting robot is controlled by this policy to collect the environmental rewards, which also provide learning signals for the design actions. Once the desired number of trajectories is collected using distributed trajectory sampling (described in Section 4), the policy is updated using PPO. We also ensure that the baselines and our method use the same number of policy iterations (simulation steps) for optimization.

Table 2: Hyperparameters of PreCo.

PPO
  GAE                          True
  GAE λ                        0.95
  Learning rate                2.5·10⁻⁴
  Linear learning rate decay   True
  Clip parameter               0.1
  Value loss coefficient       0.5
  Entropy coefficient          0.01
  Time steps per rollout       5120
  Optimizer                    Adam
  Evaluation interval          10
  Discount factor γ            0.99
  Clipped value function       True
  Observation normalization    True
  Observation clipping         [−10, 10]
  Reward normalization         True
  Reward clipping              [−10, 10]
  Policy epochs                8
  Neighborhood                 Moore
  Design steps                 1

Transformer
  Number of layers             3
  Number of attention heads    1
  Embedding dimension          128
  Feedforward dimension        256
  Nonlinearity function        ReLU
  Dropout                      0.0

C.2 Details of the Baseline Algorithms

For the baseline algorithms, we use the official implementation of GA from Evolution Gym [3] and employ a population of 12 agents. It is worth noting that its inner loop of control optimization is also driven by PPO, while the outer loop of morphology optimization is implemented with an evolutionary algorithm. Additionally, we use the official implementations of CuCo from [5] and Reptile from [65]. In the remainder of this section, we describe these baselines in detail.

GA GA directly encodes the robot's morphology as a vector in which each element corresponds to a voxel's material property, in order. It uses elitism selection and a simple mutation strategy to evolve the population of robot designs. The selection keeps the top x% of robots from the current population as survivors and discards the rest, and the mutation randomly changes each voxel of the robot with a certain probability (the mutation rate). In our study, the survivor rate starts at 60% and decreases linearly to 0%, and the mutation rate is set to 10%.

CuCo CuCo is a curriculum-based co-design method that consists of a separate NCA-based design policy and a transformer-based control policy. This curriculum-based method expands the design space from a small size to the target size using reinforcement learning with a predefined curriculum. In our study, we set the curriculum of CuCo to 3×3 → 5×5 and adhere to the original hyperparameter settings of CuCo as presented in [5].

MeCo MeCo utilizes the same network architecture as PreCo but is trained with Reptile [65], a popular meta-learning method. Reptile is designed to identify model parameters that serve as an optimal starting point for adaptation across various tasks: when encountering a novel task, the model is expected to need fewer updates or episodes to achieve proficient performance. In contrast to another meta-learning method, MAML [67], which necessitates second-order gradients (gradients of gradients) during its meta-update step, Reptile simply averages the updates. This characteristic makes Reptile more computationally efficient and easier to implement. In our study, we set the meta-learning rate to 0.25, and the number of update iterations for each training task is configured to be 20.
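For reference, the Reptile outer update that MeCo relies on simply moves the meta-parameters a fraction of the way (the meta-learning rate) toward the parameters obtained after adapting to a sampled task. The sketch below illustrates this update; the inner adaptation function is a stand-in for task-specific RL training and is purely hypothetical, not the MeCo training code.

```python
import copy
import torch
import torch.nn as nn

def reptile_outer_update(meta_model, adapted_model, meta_lr=0.25):
    """theta <- theta + meta_lr * (theta_task - theta), applied parameter-wise."""
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), adapted_model.parameters()):
            p_meta.add_(meta_lr * (p_task - p_meta))

# Illustrative usage with a tiny network standing in for the co-design policy.
meta_policy = nn.Linear(8, 2)

def adapt_on_task(policy, num_updates=20):
    """Stub for the task-specific inner loop (in MeCo this would be RL training
    on one co-design task); here we merely perturb a copy of the parameters."""
    adapted = copy.deepcopy(policy)
    with torch.no_grad():
        for p in adapted.parameters():
            p.add_(0.01 * num_updates * torch.randn_like(p))
    return adapted

for _ in range(3):                  # a few meta-iterations over sampled tasks
    adapted = adapt_on_task(meta_policy)
    reptile_outer_update(meta_policy, adapted, meta_lr=0.25)
```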
C.3 Computational Cost

We use distributed trajectory sampling with multiple CPU threads to collect training data (described in Section 4). For the pre-training experiments in the paper, it takes around 2 days to train our model on a standard server with 40 CPU cores and an NVIDIA RTX 3090 GPU.

D Visualization Results

In this section, we provide some visualization results of brain-body pre-training and brain-body fine-tuning, as shown in Figure 7 and Figure 8, respectively.

Figure 7: Visualization of PreCo's zero-shot behavior. The figure shows screenshots at consecutive time intervals of the designed robot's behavior. Compared with the training tasks, we find that PreCo shows favorable generalization properties and intriguing behavior when facing new environments (beginning from 00:28 in this video). [Screenshots not reproduced here; panel labels: Training Tasks (climb stairs, cross uneven terrain) and Zero-Shot Generalization (more uneven terrain, descend stairs).]

Figure 8: Visualization of brain-body fine-tuning. The pre-trained co-design policy shows the ability to swiftly adjust both the morphology and the control strategy to adapt to new co-design tasks (beginning from 00:42 in this video). [Screenshots not reproduced here; panel labels: Brain-Body Pre-Training and Brain-Body Fine-Tuning.]

E Ablation of the Shared Policy Representation

The performance results of PreCo and PreCo-Sep in Figure 3 and Table 1 suggest that a shared policy representation facilitates zero-shot generalization and few-shot adaptation, surpassing methods that employ separate representations. Furthermore, we present a comparison of the performance of these two methods when trained from scratch across 10 co-design tasks; the results are shown in Table 3.

Table 3: Performance results across 10 co-design tasks. All methods are trained from scratch.

Environment              PreCo-Scratch    PreCo-Sep-Scratch
Walker-v0                10.47±0.01       10.46±0.01
Climber-v0               2.23±0.87        0.43±0.02
Hurdler-v0               2.76±1.39        2.07±1.33
UpStepper-v0             7.23±1.46        4.06±0.28
DownStepper-v0           8.98±0.01        7.46±0.71
GapJumper-v0             3.48±0.66        3.51±0.53
BridgeWalker-v0          4.81±0.21        5.46±1.01
PlatformJumper-v0        6.12±0.82        3.97±0.33
ObstacleTraverser-v0     6.03±2.34        5.08±0.19
ObstacleTraverser-v1     3.38±0.69        2.78±1.06

Figure 9: Learning curves in Climber-v0 (performance vs. policy iterations (×10³) for PreCo-Scratch and PreCo-Sep-Scratch). [Plot not reproduced here.]

Figure 10: Morphological results of the two methods (PreCo-Scratch and PreCo-Sep-Scratch) obtained during their learning processes, shown at iterations 25, 50, 100, 250, 500, and 1000. [Images not reproduced here.]
Figure 9 and Figure 10 illustrate their learning processes in a complex task, Climber-v0.

F Discussion of the Sim-to-Real Issue

In our paper, we tested our method using a simulator with relatively fundamental modules as a proof of concept to show its effectiveness. In this section, we discuss the sim-to-real issue of our work.

From the perspective of "Sim", the Evolution Gym platform [3] used in our work employs several simplifications to reach a trade-off between simulation quality and speed. Thus, for more realizable sim-to-real transfer, an improved version of its physics engine is needed to model soft-body physics in 3D space. We believe that using Finite Element Method (FEM) numerical simulation would be one of the feasible ways to narrow this sim-to-real gap, because the voxel-based design provides a naturally organized mesh whose resolution and element type could be determined relatively easily.
Moreover, FEM has shown promise for real-world soft robotic models when combined with highly stable implicit Euler integration [42].

Figure 11: Additional experiments. Left: visualization of the fine-tuning process, tracking performance on the four pre-training tasks (Walker-v0, UpStepper-v0, ObstacleTraverser-v0, PlatformJumper-v0) while fine-tuning on ObstacleTraverser-v1. Right: learning curve of brain-body pre-training on more diverse co-design tasks. [Plots not reproduced here; axes are policy iterations (×10³) vs. performance.]

In our study, we focus on the model-free co-design problem, where system modeling is not required, and we use reinforcement learning to approximate the gradient of design and control. This endeavor requires a large amount of training data. However, when considering the real-world problem, it is possible to transfer to the model-based setting: by leveraging the differentiable properties of certain FEM models [40], the universal parameterized co-design policy could be updated using Back-Propagation Through Time (BPTT), resulting in more efficient brain-body pre-training. When moving to a 3D modular robot design space, we would like to adapt the parameterization method depicted in Figure 5 to its 3D version; by using a "3D convolutional kernel", we can easily create input design observation sequences for the transformer-based policy. Although the transformer has sufficient capacity to handle long sequences, a more expansive design space (e.g., thousands of voxels) would necessitate a more complex transformer model, which, in turn, could demand significantly higher computational power.

From the perspective of "Real", one of the foremost considerations is material selection, which should be predicated upon the required flexibility, durability, and functionality [68, 69]. With this criterion, the Diels-Alder (DA) polymer [70] or silicone voxels [38], in conjunction with multi-material cubic blocks produced through 3D printing, may serve as ideal components for constructing the body of an MSR. To establish the local observation space, each voxel could be equipped with an array of sensors, such as touch, pressure, and velocity sensors. Alternatively, soft sensors, crafted from conductive elastomers that alter resistance upon deformation, could offer valuable feedback to the control system. Furthermore, Peano-HASEL actuators [71] or pneumatic actuators [72] might be suitable for volumetric actuation (probably limited to expansion for efficient simulation), and closed-loop control could be achieved by utilizing neural networks (NNs).
In the real-world setting, factors like material imperfections, air resistance, friction, and many others come into play, so we would also need to iteratively refine the design and control algorithms based on real-world feedback.

We acknowledge that each point discussed above presents its own challenges but is well worth in-depth investigation, and we aspire for our work to serve as a catalyst for future research into the co-design of modular soft robots.

G Additional Experiments

G.1 How Does Fine-Tuning on the Target Task Affect Performance on the Training Tasks?

It is worth noting that when a pre-trained co-design policy undergoes fine-tuning for a new target task, it can experience what is known as "catastrophic forgetting": it might forget certain information or patterns it learned during the pre-training phase, potentially leading to a decrease in performance on the original training tasks.

We track the performance changes on the 4 pre-training tasks when the co-design policy is fine-tuned on ObstacleTraverser-v1. The left side of Figure 11 illustrates a consistent decrease in performance on Walker-v0 and PlatformJumper-v0 due to the significant disparity between the target task and these original tasks. In contrast, if the target task is similar to the pre-training tasks, the co-design policy might retain more of its initial knowledge; for instance, performance on tasks like UpStepper-v0 and ObstacleTraverser-v0 remains less affected throughout the fine-tuning.

G.2 Pre-Training on More Diverse Co-design Tasks

In our paper, we select 4 locomotion tasks for pre-training and 5 tasks for testing. As we focus on co-designing modular soft robots to perform multiple tasks, the wealth of brain-body links embedded within the enormous combined search space offers sufficient diversity for effective policy learning. To further explore the potential of PreCo, we also conduct an additional experiment that encompasses more diverse co-design tasks for pre-training. In this experiment, in addition to the original 4 pre-training tasks, we add Pusher-v0 (the robot is encouraged to push a box initialized in front of it as far as possible) and Carrier-v0 (the robot is encouraged to carry a box initialized above it as far as possible) to the co-design policy's learning procedure. Moreover, we add Thrower-v0 (the robot is encouraged to throw a box initialized above it as far as possible) to the test tasks. The right side of Figure 11 shows the learning curves, with results averaged over 5 different runs. The results in Figure 12, Figure 13, and Figure 14 show that PreCo still performs well on zero-shot generalization and few-shot adaptation.

Figure 12: Morphological results of brain-body pre-training.

Figure 13: Morphological results of zero-shot generalization.

Figure 14: Morphological results of brain-body fine-tuning.
ApxLUk8U-l | Self-Improving Robots: End-to-End AutonomousVisuomotor Reinforcement LearningArchit Sharma∗Ahmed M. Ahmed∗Rehaan Ahmad Chelsea FinnStanford UniversityAbstract: In imitation and reinforcement learning (RL), the cost of human super-vision limits the amount of data that the robots can be trained on. While RL offersa framework for building self-improving robots that can learn via trial-and-errorautonomously, practical realizations end up requiring extensive human supervi-sion for reward function design andrepeated resetting of the environment betweenepisodes of interactions. In this work, we propose MEDAL++, a novel design forself-improving robotic systems: given a small set of expert demonstrations at thestart, the robot autonomously practices the task by learning to both doandundothe task, simultaneously inferring the reward function from the demonstrations.The policy and reward function are learned end-to-end from high-dimensionalvisual inputs, bypassing the need for explicit state estimation or task-specific pre-training for visual encoders used in prior work. We first evaluate our proposedsystem on a simulated non-episodic benchmark, EARL, finding that MEDAL++is both more data efficient and gets up to 30% better final performance comparedto state-of-the-art vision-based methods. Our real-robot experiments show thatMEDAL++ can be applied to manipulation problems in larger environments thanthose considered in prior work, and autonomous self-improvement can improvethe success rate by 30% to 70% over behavioral cloning on just the expert data.Code, training and evaluation videos along with a brief overview is available at:https://architsharma97.github.io/self-improving-robots/Keywords: reinforcement learning, autonomous, reset-free, manipulation1 IntroductionFigure 1: A robot resets the environmentfrom the goal state to the initial state ( top),in contrast to a human resetting the environ-ment for the robot ( bottom ). While latter isthe norm in robotic reinforcement learning,a robot that can reset the environment andpractice the task autonomously can train onmore data, and thus, be more competent.While imitation learning methods have shown promis-ing evidence for generalization via large-scale teleoper-ated data collection efforts [1, 2], human supervision isexpensive and collected datasets are still incommensu-rate for learning robust and broadly performant control.In this context, the aspirational notion of self-improvingrobots becomes relevant: robots that can learn and im-prove from their own interactions with the environmentautonomously. Reinforcement learning (RL) is a naturalframework for such self-improvement, where the robotscan learn from trial-and-error autonomously in principle.However, RL deployment requires domain expertise andextensive supervision in practice for state estimation, de-signing reward functions, and repeated resetting of the en-vironments after every episode of interaction.In particular, the human supervision for repeatedly reset-ting the environment through training is an impediment to∗Authors contributed substantially to real-robot results. Correspondence to architsh@stanford.edu .7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.building autonomous robots, visualized in Figure 1. Several prior works [3, 4, 5] show that standardRL algorithms can fail catastrophically when the reset frequency is decreased. 
While recent worksaddress the lack of supervision for repeated resetting [4, 6, 7, 8, 9], learning efficiently in the absenceof frequent environment resets remains a challenge in the real-world. Task-relevant states constituteonly a small subset of all possible states, especially in larger real-world environments, and the robotcan drift far from these task-relevant states when learning without repeated resets. In view of this,using a small set of demonstrations can be an effective choice to construct self-improving systems.Expert demonstrations can alleviate challenges related to exploration [10] and enable efficient au-tonomous RL by encouraging agent to stay close to task-relevant states in the demonstrations [11].And since the human supervision required for collecting the demonstrations is front-loaded , i.e.,before the training begins, the robot can collect data autonomously and self-improve thereon. Im-portantly, we make the observation that the terminal states in expert trajectories effectively representthe goal distribution. This allows us to learn a goal-reaching reward function without any additionaldata collection or reward engineering.In this work, our main contribution is MEDAL++, a carefully designed system that can train au-tonomously in the real-world with minimal task-specific engineering. MEDAL++ builds upon [11]with several crucial components that enable efficient end-to-end autonomous training in the real-world: First, we learn an encoder for high-dimensional visual inputs end-to-end using DrQ-v2 [12],bypassing the need for state estimation or task-specific pre-training of visual encoders. Second, wejudiciously use expert demonstrations by learning a goal-reaching reward function [13, 14], elimi-nating the need for engineered reward functions. Finally, we improve the sample efficiency by usingan ensemble of Q-value functions and increasing the update steps per sample collected [15], usingBC regularization on expert data to regularize policy learning towards demonstration data [16], andoversampling transitions from demonstration data when training the Q-value function [10]. We eval-uate MEDAL++ on a pixel-based control version of EARL [5], a non-episodic learning benchmarkand observe that MEDAL++ is more data efficient and gets up to 30% better performance comparedto competitive methods [4, 11]. Most importantly, we conduct real-robot evaluations using a FrankaPanda robot arm on four manipulation tasks, such as hanging a cloth on a hook, covering a bowlwith a cloth and peg insertion, all from RGB image observations. We observe that MEDAL++ canimprove the success rate of the policy by 30% to70% over a behavioral cloning policy learned onlyon the expert data, suggesting that MEDAL++ is a step towards self-improving robotic systems.2 Related WorkSeveral works have demonstrated the emergence of complex skills on a variety of problems usingreinforcement learning on real robots [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27], However, theseprior works require the environment to be reset to a (narrow) set of initial states for every episode ofinteraction with the environment. Such resetting of the environment either requires repeated humaninterventions and constant monitoring [28, 29, 30, 31, 32] or scripting behaviors [19, 23, 33, 34, 24,35] which can be time-intensive while resulting in brittle behaviors. 
Some prior works have also designed the task and environment to bypass the need for resetting the environment [36, 37, 21], but this applies to a restricted set of tasks.

More recent works have identified the need for algorithms that can work autonomously with minimal supervision required for resetting the environments [38, 4, 5]. Several recent works propose to learn a backward policy to undo the task, in addition to learning a forward policy that does the task [38, 6, 7, 8, 39, 40]. In this work, we build upon MEDAL [11], where the backward policy learns to match the distribution of states visited by an expert to solve the task. While the results from these prior papers are restricted to simulated settings, some recent papers have demonstrated autonomous training on real robots [4, 41, 42, 43]. However, the results on real robots have either relied on state estimation [41, 42], pre-specified reward functions [43], or task-specific decomposition into subgoals [44]. R3L [4] also considers the setting of learning from image observations without repeated resets and specified reward functions, similar to our work. It uses a backward policy that optimizes for state-novelty while learning the reward function from a set of goal images collected prior to training [14]. However, R3L relies on frozen visual encoders trained independently on data collected in the same environment, and optimizing for state-novelty does not scale to larger environments, restricting their robot evaluations to smaller, easier-to-explore environments. Our simulation results indicate that MEDAL++ learns more efficiently than R3L, and real-robot evaluations indicate that MEDAL++ can be used in larger environments. Overall, our work proposes a system that can learn end-to-end from visual inputs without repeated environment resets, with real-robot evaluations on four manipulation tasks.

3 Preliminaries

Problem Setting. We consider the autonomous RL problem setting [5]. We assume that the agent is in a Markov Decision Process represented by $(\mathcal{S}, \mathcal{A}, \mathcal{T}, r, \rho_0, \gamma)$, where $\mathcal{S}$ is the state space, potentially corresponding to high-dimensional observations such as RGB images, $\mathcal{A}$ denotes the robot's action space, $\mathcal{T}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_{\geq 0}$ denotes the transition dynamics of the environment, $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the (unknown) reward function, $\rho_0$ denotes the initial state distribution, and $\gamma$ denotes the discount factor. The objective is to learn a policy that maximizes $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$ when deployed from $\rho_0$ during evaluation. There are two key differences from the standard episodic RL setting: First, the training environment is non-episodic, i.e., the environment does not periodically reset to the initial state distribution after a fixed number of steps. Second, the reward function is not available during training. Instead, we assume access to a set of demonstrations collected by an expert prior to robot training. Specifically, the expert collects a small set of forward trajectories $\mathcal{D}^*_f = \{(s_i, a_i), \ldots\}$ demonstrating the task and, similarly, a set of backward demonstrations $\mathcal{D}^*_b$ undoing the task back to the initial state distribution $\rho_0$.

Autonomous Reinforcement Learning via MEDAL. To enable a robot to practice the task autonomously, MEDAL [11] trains a forward policy $\pi_f$ to solve the task, and a backward policy $\pi_b$ to undo the task. The forward policy $\pi_f$ executes for a fixed number of steps before control is switched over to the backward policy $\pi_b$ for a fixed number of steps. Chaining the forward and backward policies reduces the number of interventions required to reset the environment; a minimal sketch of this chaining loop is shown below.
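The alternation between the two policies is the core of this reset-minimizing scheme. The following is a minimal sketch of the chaining loop, assuming hypothetical `env`, `pi_f`, and `pi_b` interfaces (an environment that can be stepped without resets, and the two learned policies); it is an illustration, not the authors' implementation.

```python
# Minimal sketch of forward/backward chaining for autonomous practicing.
# `env`, `pi_f`, and `pi_b` are hypothetical interfaces, not the authors' code.

def chained_rollout(env, pi_f, pi_b, steps_per_policy=200, total_steps=25_000):
    """Alternate between the forward policy (does the task) and the backward
    policy (undoes it) without resetting the environment in between."""
    s = env.reset()                          # single reset at the start
    active, other = pi_f, pi_b
    transitions = []
    for t in range(total_steps):
        a = active.act(s)
        s_next = env.step(a)                 # no reward is observed during training
        transitions.append((s, a, s_next, active is pi_f))
        s = s_next
        if (t + 1) % steps_per_policy == 0:  # hand control to the other policy
            active, other = other, active
    return transitions
```

The 200-step switching interval used here mirrors the interval reported later in the paper; the only environment reset happens once at the start, with occasional human interventions handled separately.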
The forward policy is trained to maximize $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$, which can be done via any RL algorithm. The backward policy $\pi_b$ is trained to minimize the Jensen-Shannon divergence $\mathrm{JS}(\rho_b(s) \,\|\, \rho^*(s))$ between the stationary state distribution of the backward policy $\rho_b$ and the state distribution of the expert policy $\rho^*$. By training a classifier $C_b: \mathcal{S} \mapsto (0, 1)$ to discriminate between states visited by the expert (i.e., $s \sim \rho^*$) and states visited by $\pi_b$ (i.e., $s \sim \rho_b$), the divergence minimization problem can be rewritten as $\max_{\pi_b} -\mathbb{E}[\sum_{t=0}^{\infty} \gamma^t \log(1 - C_b(s_t))]$ [11]. The classifier used in the reward function for $\pi_b$ is trained using the cross-entropy loss, where the states $s \in \mathcal{D}^*_f$ are labeled 1 and states visited by $\pi_b$ online are labeled 0, leading to a minimax optimization between $\pi_b$ and $C_b$.

Learning Reward Functions with VICE. Engineering rewards can be tedious, especially when only image observations are available. Since the transitions from the training environment are not labeled with rewards, the robot needs to learn a reward function for the forward policy $\pi_f$. In this work, we consider VICE [45], particularly the simplified version presented by Singh et al. [14] that is compatible with off-policy RL. VICE requires a small set of states representing the desired outcome (i.e., goal images) prior to training. Given a set of goal states $\mathcal{G}$, VICE trains a classifier $C_f: \mathcal{S} \mapsto (0, 1)$, where $C_f$ is trained using the cross-entropy loss on states $s \in \mathcal{G}$ labeled as 1 and states visited by $\pi_f$ labeled as 0. The policy $\pi_f$ is trained with a reward function of $\log C_f(s) - \log(1 - C_f(s))$, which can be viewed as minimizing the KL-divergence between the stationary state distribution of $\pi_f$ and the goal distribution [46, 47, 48]. VICE has two benefits over pre-trained frozen classifier-based rewards: first, the negative states do not need to be collected by a person, and second, the VICE classifier is harder to exploit as the online states are iteratively added to the label-0 set, continually improving the goal-reaching reward function implicitly.

4 MEDAL++: Practical and Efficient Autonomous Reinforcement Learning

The goal of this section is to develop an RL method that can learn from autonomous online interaction in the real world, given just a (small) set of forward demonstrations $\mathcal{D}^*_f$ and backward demonstrations $\mathcal{D}^*_b$ without reward labels. Particularly, we focus on design choices that make MEDAL++ viable in the real world in contrast to MEDAL: First, we describe how to learn from visual inputs without explicit state estimation. Second, we describe how to learn a reward function from the expert demonstrations to eliminate the need for ground-truth rewards when training the forward policy $\pi_f$. Third, we describe the algorithmic modifications for training the Q-value function and the policy $\pi$ more efficiently. Finally, we describe how to construct MEDAL++ using all the components described here, training a forward policy $\pi_f$ and a backward policy $\pi_b$ to learn autonomously.

Encoding Visual Inputs. We embed the high-dimensional RGB images into a low-dimensional feature space using a convolutional encoder $\mathcal{E}$. The RGB images are augmented using random crops and shifts (up to 4 pixels) to regularize Q-value learning [12]. While some prior works incorporate explicit representation learning losses for visual encoders [49, 50], Yarats et al. [12] suggest that regularizing Q-value learning using random crop and shift augmentations is both simpler and more efficient, allowing end-to-end learning without any explicit representation learning objectives; a minimal sketch of this augmentation is shown below.
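The random crop-and-shift augmentation can be sketched in a few lines. The version below (not the authors' code; DrQ-v2 implements the shift slightly differently, via a bilinear grid sample) pads each image by replicating border pixels and crops it back at a random offset, which amounts to a translation of up to `pad` pixels.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Randomly shift a batch of images by up to `pad` pixels.

    imgs: (B, C, H, W) float tensor. Each image is padded by replicating its
    border pixels and then cropped back to (H, W) at a random offset, which
    amounts to a small random translation of the image content.
    """
    b, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(b):
        top = int(torch.randint(0, 2 * pad + 1, (1,)))
        left = int(torch.randint(0, 2 * pad + 1, (1,)))
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out
```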
Specifically, the training loss for the Q-value function on an environment transition $(s, a, s', r)$ can be written as:

$\ell(Q, \mathcal{E}) = \big( Q(\mathcal{E}(\mathrm{aug}(s)), a) - r - \gamma \bar{V}(\mathcal{E}(\mathrm{aug}(s'))) \big)^2 \qquad (1)$

where $\mathrm{aug}(\cdot)$ denotes the augmented image, and $r + \gamma \bar{V}(\cdot)$ is the TD-target. Equation 2 describes the exact computation of $\bar{V}$ using slow-moving target networks $\bar{Q}$ and the current policy $\pi$.

Learning the Reward Function. To train a VICE classifier $C_f$, we need to specify a set of goal states that can be used as positive samples. Instead of collecting a separate set of goals, we observe that the last $K$ states of every trajectory in $\mathcal{D}^*_f$ approximate the goal distribution, and thus, can be used as the goal set $\mathcal{G}$. The trajectories collected by the robot's policy $\pi_f$ will be used to generate negative states for training $C_f$. The policy is trained to maximize $-\log(1 - C_f(\cdot))$ as the reward function, encouraging the policy to reach states that have a high probability of being labeled 1 under $C_f$, and thus, similar to the states in the set $\mathcal{G}$. The reward signal from the classifier can be sparse if the classifier has high accuracy on distinguishing between the goal states and states visited by the policy. Since the classification problem for $C_f$ is easier than the goal-matching problem for $\pi_f$, especially early in the training when the policy is not as successful, it becomes critical to regularize the discriminator $C_f$ [51]. We use spectral normalization [52] and mixup [53] to regularize $C_f$, and apply random crop and shift augmentations to input images to create a broader training distribution.

As we have access to expert demonstrations $\mathcal{D}^*_f$, why do we match the policy's state distribution to the goal distribution, instead of GAIL [13, 54], which matches the policy's state-action distribution to that of the expert? In a practical robotic setup, actions demonstrated by an expert during teleoperation and optimal actions for a learned neural network policy will be different. The forward pass through a policy network introduces a delay, especially as the visual encoder $\mathcal{E}$ becomes larger. Matching both the states and actions to those of the expert, as is the case with GAIL, can lead to suboptimal policies and be infeasible in general. In contrast, VICE allows the robotic policies to choose actions that are different from the expert's as long as they lead to a set of states similar to those in $\mathcal{G}$. The exploratory benefits of matching the actions can be recovered, as described in the next section.

Improving the Learning Efficiency. To improve the learning efficiency over MEDAL, we incorporate several changes in how we train the Q-value function and the policy $\pi$. First, we train an ensemble of Q-value networks $\{Q_n\}_{n=1}^{N}$ and a corresponding set of target networks $\{\bar{Q}_n\}_{n=1}^{N}$. When training an ensemble member $Q_n$, the target is computed by sampling a subset of target networks, and taking the minimum over the subset. The target value $\bar{V}(s')$ in Eq. 1 can be computed as

$\bar{V}(s') = \mathbb{E}_{a' \sim \pi}\big[ \min_{j \in \mathcal{M}} \bar{Q}_j(s', a') \big], \qquad (2)$

where $\mathcal{M}$ is a random subset of the index set $\{1, \ldots, N\}$ of size $M$. Randomizing the subset of the ensemble when computing the target allows more gradient steps to be taken to update $Q_n$ on $\ell(Q_n, \mathcal{E})$ [15] without overfitting to a specific target value, increasing the overall sample efficiency of learning. The target networks $\bar{Q}_n$ are updated as an exponential moving average of $Q_n$ in weight space over the course of training; a minimal sketch of this critic update is given below.
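The critic update described above can be sketched as follows, assuming hypothetical `q_ensemble`, `q_targets`, `encoder`, and `policy` modules and a batch of transitions; `aug` can be the random-shift function sketched earlier. The loss instantiates Eq. 1 with the randomized-subset target of Eq. 2, and `update_targets` implements the exponential-moving-average update of the target networks.

```python
import random
import torch

def critic_loss(q_ensemble, q_targets, encoder, policy, batch, aug,
                gamma=0.99, subset_size=2):
    """TD loss for the Q-ensemble (Eqs. 1 and 2), averaged over members.

    q_ensemble / q_targets: lists of N Q-networks and their target copies.
    encoder: convolutional encoder E (trained only through this loss).
    policy:  current policy; `policy(z)` is assumed to return an action sample.
    batch:   dict of tensors with keys 's', 'a', 'r', 's_next'.
    aug:     image augmentation, e.g. the `random_shift` sketched above.
    """
    z = encoder(aug(batch["s"]))
    with torch.no_grad():
        z_next = encoder(aug(batch["s_next"]))
        a_next = policy(z_next)
        # Eq. 2: min over a random subset M of the target ensemble.
        subset = random.sample(range(len(q_targets)), subset_size)
        target_q = torch.min(
            torch.stack([q_targets[j](z_next, a_next) for j in subset]), dim=0
        ).values
        td_target = batch["r"] + gamma * target_q
    # Eq. 1: squared TD error on augmented observations.
    return sum(((q_n(z, batch["a"]) - td_target) ** 2).mean()
               for q_n in q_ensemble) / len(q_ensemble)

@torch.no_grad()
def update_targets(q_ensemble, q_targets, tau=0.01):
    """Exponential moving average of the online weights into the targets."""
    for q_n, q_bar in zip(q_ensemble, q_targets):
        for p, p_bar in zip(q_n.parameters(), q_bar.parameters()):
            p_bar.mul_(1 - tau).add_(tau * p)
```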
Concretely, at iteration $t$, $\bar{Q}^{(t)}_n \leftarrow \tau Q^{(t)}_n + (1 - \tau) \bar{Q}^{(t-1)}_n$, where $\tau \in (0, 1]$ determines how closely $\bar{Q}_n$ tracks $Q_n$.

Importantly, we want to leverage the expert demonstrations to counteract the exploration challenge, especially because the training signal from the VICE reward can be sparse. Q-value networks are typically updated on minibatches sampled uniformly from a replay buffer $\mathcal{D}$. However, the transitions in the demonstrations are generated by an expert, and thus can be more informative about the actions for reaching successful states [10]. To bias the data towards the expert distribution, we oversample transitions from the expert data such that for a batch of size $B$, $\rho B$ transitions are sampled from the expert data uniformly and $(1 - \rho) B$ transitions are sampled from the replay buffer uniformly, for $\rho \in [0, 1)$. Finally, we regularize the policy learning towards expert actions by introducing a behavior cloning loss in addition to maximizing the Q-values [16, 10]:

$\mathcal{L}(\pi) = \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi(\cdot \mid s)}\Big[ \frac{1}{N} \sum_{n=1}^{N} Q_n(\mathcal{E}(\mathrm{aug}(s)), a) \Big] + \lambda\, \mathbb{E}_{(s^*, a^*) \sim \rho^*}\Big[ \log \pi\big(a^* \mid \mathcal{E}(\mathrm{aug}(s^*))\big) \Big],$

where $\lambda \geq 0$ denotes the hyperparameter controlling the effect of BC regularization. Note that the parameters of the encoder are frozen with respect to $\mathcal{L}(\pi)$, and are only trained through $\ell(Q_n, \mathcal{E})$.

Putting it Together: MEDAL++. MEDAL++ trains a forward policy that learns to solve the task and a backward policy that learns to undo the task towards the expert state distribution. The parameters and data buffers for the forward policy are represented by the tuple $\mathcal{F} \equiv \langle \pi_f, \mathcal{E}_f, \{Q^f_n\}_{n=1}^{N}, \{\bar{Q}^f_n\}_{n=1}^{N}, C_f, \mathcal{D}^*_f, \mathcal{D}_f, \mathcal{G}_f \rangle$, where the symbols retain their meaning from the previous sections. Similarly, the parameters and data buffers for the backward policy are represented by the tuple $\mathcal{B} \equiv \langle \pi_b, \mathcal{E}_b, \{Q^b_n\}_{n=1}^{N}, \{\bar{Q}^b_n\}_{n=1}^{N}, C_b, \mathcal{D}^*_b, \mathcal{D}_b, \mathcal{G}_b \rangle$. Noticeably, $\mathcal{F}$ and $\mathcal{B}$ have a similar structure: both $\pi_f$ and $\pi_b$ are trained using $-\log(1 - C(\cdot))$ as the reward function (using their respective classifiers $C_f$ and $C_b$), with both classifiers trained to discriminate between the states visited by the policy and their target states. The primary difference is the set of positive target states $\mathcal{G}_f$ and $\mathcal{G}_b$ used to train $C_f$ and $C_b$ respectively, visualized in Figure 8. The VICE classifier $C_f$ is trained to predict the last $K$ states of every trajectory from $\mathcal{D}^*_f$ as positive, whereas we train the MEDAL classifier $C_b$ to predict all the states of the forward demonstrations except the last $K$ states as positive. Optionally, we can also include the last $K$ states of backward demonstrations from $\mathcal{D}^*_b$ as positives for training $C_b$. The pseudocode for training is given in Algorithm 1, and a more detailed description is available in Appendix A.1. Some key details: the control switches between the forward policy $\pi_f$ and the backward policy $\pi_b$ after a fixed number of steps. When executing in the real world, humans are allowed to intervene and reset the environment intermittently, switching the control over to $\pi_f$ after the intervention to restart the forward-backward cycle. A sketch combining the batch construction, critic update, and BC-regularized policy update into a single training step is shown below.
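Putting the pieces of this section together, a single training step for one policy (forward or backward) can be sketched as below, reusing `critic_loss` and `update_targets` from the earlier sketch. All interfaces (`replay`, `demos`, `policy.log_prob`, the reparameterized action sample returned by `policy(z)`) are hypothetical stand-ins; the defaults `rho=0.25` and `batch_size=256` correspond to the 64 expert transitions per 256-sample batch reported in the appendix, and `bc_weight` plays the role of $\lambda$.

```python
import torch

def medal_update_step(policy, encoder, q_ensemble, q_targets, critic_opt,
                      policy_opt, replay, demos, aug,
                      batch_size=256, rho=0.25, bc_weight=0.1):
    """One sketched MEDAL++-style update for a single (forward or backward)
    policy; `replay.sample(n)` and `demos.sample(n)` are assumed to return
    dicts of tensors with keys 's', 'a', 'r', 's_next', where 'r' comes from
    the learned classifier reward rather than the environment."""
    # Oversample expert transitions: rho*B from demos, (1 - rho)*B from replay.
    n_demo = int(rho * batch_size)
    demo_batch = demos.sample(n_demo)
    online_batch = replay.sample(batch_size - n_demo)
    batch = {k: torch.cat([demo_batch[k], online_batch[k]])
             for k in ("s", "a", "r", "s_next")}

    # Critic update (Eqs. 1 and 2), reusing the helpers sketched earlier.
    critic_opt.zero_grad()
    critic_loss(q_ensemble, q_targets, encoder, policy, batch, aug).backward()
    critic_opt.step()
    update_targets(q_ensemble, q_targets)

    # Actor update: mean ensemble Q plus a BC term on expert (s*, a*) pairs.
    # The encoder is frozen with respect to the policy loss, hence no_grad.
    with torch.no_grad():
        z = encoder(aug(batch["s"]))
        z_demo = encoder(aug(demo_batch["s"]))
    a_pi = policy(z)  # assumed to be a reparameterized (differentiable) sample
    q_mean = sum(q_n(z, a_pi) for q_n in q_ensemble) / len(q_ensemble)
    bc_term = policy.log_prob(z_demo, demo_batch["a"]).mean()
    policy_opt.zero_grad()
    (-(q_mean.mean() + bc_weight * bc_term)).backward()
    policy_opt.step()
```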
5 Experiments

The goal of our experiments is to determine whether MEDAL++ can be a practical method for self-improving robotic systems. We benchmark MEDAL++ against competitive methods [11, 4] on the EARL benchmark [5] for non-episodic RL to evaluate the learning efficiency from high-dimensional observations, in Section 5.1. Our primary experiments in Section 5.2 evaluate MEDAL++ on four real-robot manipulation tasks, primarily tasks with soft-body objects such as hanging a cloth on a hook and covering a bowl with a cloth. The real-robot evaluation considers the question of whether self-improvement is feasible via MEDAL++, and if so, how much self-improvement MEDAL++ can obtain. Finally, we run ablations to evaluate the contributions of different components to MEDAL++ in Section 5.3.

5.1 Benchmarking MEDAL++ on EARL

First, we benchmark MEDAL++ on continuous-control environments from EARL against state-of-the-art non-episodic autonomous RL methods. To be consistent with the benchmark, we use the ground-truth reward functions for all the environments.

Environments. We consider three sparse-reward continuous-control environments from the EARL benchmark [5], shown in Appendix, Fig 6. Tabletop organization is a simplified manipulation environment where a gripper is tasked to move the mug to one of the four coasters from a wide set of initial states, the sawyer door closing task requires a Sawyer robot arm to learn how to close a door starting from various positions, and finally the sawyer peg insertion task requires the Sawyer robot arm to grasp the peg and insert it into a goal. Not only does the robot have to learn how to do the task (i.e., close the door or insert the peg), but it has to learn how to undo the task (i.e., open the door or remove the peg) to try the task repeatedly in a non-episodic training environment. All tasks are set up to return 84×84 RGB observations with sparse goal-reaching reward functions. The training environment is reset to $s_0 \sim \rho_0$ every 25,000 steps of interaction with the environment. This is extremely infrequent compared to episodic settings where the environment is reset to the initial state distribution every 200-1000 steps. EARL comes with 5-15 forward and backward demonstrations for every environment to help with exploration in these sparse-reward environments. We report the average success of the forward policy every 10,000 training steps over 10 trials. More details can be found in Appendix A.3.

Comparisons. We compare MEDAL++ to four methods: (1) MEDAL [11] uses a backward policy that matches the expert state distribution by minimizing $\mathrm{JS}(\rho_b(s) \,\|\, \rho^*(s))$, similar to ours. However, the method is designed for low-dimensional states and policy/Q-value networks and cannot be naïvely extended to RGB observations. For a better comparison, we improve the method to use a visual encoder with random crop and shift augmentations during training, similar to MEDAL++. (2) R3L [4] uses a perturbation controller as the backward policy, which optimizes for state-novelty computed using random network distillation [55]. Unlike our method, R3L also requires a separately collected dataset of environment observations to pre-train a VAE [56] based visual encoder, which is frozen throughout the training thereafter. (3) We consider an oracle RL method that trains just a forward policy and gets a privileged training environment that resets every 200 steps (i.e., the same episode length as during evaluation), and finally, (4) we consider a control method, naïve RL, that, similar to the oracle, trains just a forward policy but resets every 25,000 steps, similar to the non-episodic methods. We additionally report the performance of a behavior cloning policy, trained on the forward demonstrations used in the EARL environments. The implementation details and hyperparameters can be found in Appendix A.2.

Results.
Figure 2 plots the evaluation performance of the forward policy versus the training sam-ples collected in the environment. MEDAL++ outperforms all other methods on both the sawyerenvironments, and is comparable to MEDAL on tabletop organization , the best performing method.While R3L does recover a non-trivial performance eventually on door closing andtabletop organi-zation , the novelty-seeking perturbation controller can cause the robot to drift to states farther awayfrom the goal in larger environments, leading to slower improvement in evaluation performance onstates starting from s0∼ρ0. While MEDAL and MEDAL++ have the same objective for thebackward policy, optimization related improvements enable MEDAL++ to learn faster. Note, BCperforms worse on tabletop organization environment with a 45% success rate, compared to thesawyer environments with a 70% and 80% success rate on peg insertion anddoor closing respec-tively. So, while BC regularization helps speed up efficiency and can lead to better policies, it canhurt the final performance of MEDAL++ if the BC policy itself has a worse success rate (at least,when true rewards are available for training, see ablations in Section 5.3). While we use the samehyperparameters for all environments, reducing the weight on BC regularization when BC policieshave poor success can reduce the bias in policy learning and improve the final performance.65.2 Real Robot EvaluationsIn line with the main goal of this paper, our experiments aim to evaluate whether self-improvementthrough MEDAL++ can enable real-robots to learn more competent policies autonomously. Onfour manipulation tasks, we provide quantitative and qualitative comparison of the policy learnedby behavior cloning on the expert data to the one learned after self-improvement by MEDAL++.We recommend viewing the results on our anonymized website in the supplementary material fortraining and evaluation videos, which provides a more comprehensive overview.Figure 3: The training setup forMEDAL++. The image observationsinclude a fixed third person view and a firstperson view from a wrist camera mountedabove the gripper. The evaluation tasksgoing clockwise : cube grasping, covering abowl with a cloth, hanging a cloth on thehook and, peg insertion.Robot Setup and Tasks . We use Franka Emika Pandaarm with a Robotiq 2F-85 gripper for all our experiments.We use a RGB camera mounted on the wrist and a thirdperson-camera, as shown in Figure 3. The final obser-vation space includes two 100×100 RGB images, 3-dimensional end-effector position, orientation along thez-axis, and the width of the gripper. The action space isset up as either a 4 DoF end-effector control, or 5 DoFend-effector control with orientation along the z-axis de-pending on the task (including one degree of freedom forthe gripper). Our evaluation suite consists of four ma-nipuation tasks: grasping a cube, hanging a cloth on ahook, covering a bowl with a piece of cloth, and a (soft)peg insertion. Real world data and training is more perti-nent for soft-body manipulation as they are harder to sim-ulate, and thus, we emphasize those tasks in our evaluation suite. The tasks are shown in Figure 3.Training and Evaluation . For every task, we first collect a set of 50forward demonstrations and 50backward demonstrations using a Xbox controller. We chain the forward and backward demonstra-tions to speed up collection and better approximate autonomous training thereafter. 
After collecting the demonstrations, the robot is trained for 30 hours using MEDAL++, collecting about 300,000 environment transitions in the process. For the first 30 minutes of training, we reset the environment to create enough (object) diversity in the initial data collected in the replay buffer. After the initial collection, the environment is reset intermittently, approximately every hour of real-world training on average, though it is left unattended for several hours. More details related to hyperparameters, network architecture and training can be found in Appendix A.2. For evaluation, we roll out the policy from varying initial states, and measure the success rate over 50 evaluations. To isolate the role of self-improvement, we compare the performance to a behavior cloning policy trained on the forward demonstrations using the same network architecture for the policy as MEDAL++. For both MEDAL++ and BC, we evaluate multiple intermediate checkpoints and report the success rate of the best performing checkpoint.

Figure 4: Evaluation performance of the best checkpoint learned by behavior cloning and MEDAL++. The table shows the final success rates over 50 trials from randomized initial states, normalized to [0, 1]. MEDAL++ substantially improves the final performance compared to behavior cloning, validating the self-improvement.

Task                  Behavioral Cloning   MEDAL++
Cube Grasping (ID)    0.85                 1.00
Cube Grasping (OOD)   0.08                 0.82
Cloth Hanging         0.26                 0.62
Bowl Cloth Cover      0.12                 0.46
Peg Insertion         0.04                 0.52

Results. The success rate of the best performing BC policy and MEDAL++ policy is reported in Table 4. MEDAL++ substantially increases the success rate of the learned policies, with approximately 30-70% improvements. We provide an abridged version of the analysis here, and defer a more detailed analysis to Appendix A.4: First, we consider a cube-grasping experiment. To isolate a potential source of improvement from autonomous reinforcement learning, forward demonstrations are collected from a narrow initial state distribution, but the robot is evaluated starting from both in-distribution (ID) states and out-of-distribution (OOD) states, visualized in Appendix, Figure 7 (only for this experiment). MEDAL++ improves the ID performance by 15% over the BC policy, but we see a large improvement of 74% on OOD performance. Autonomous training allows the robot to practice the task from a diverse set of states, including states that were OOD relative to the demonstration data. This suggests that we expect improvement in success rate to result partly from being robust to the initial state distribution, as a small set of demonstrations is unlikely to cover all possible initial states a robot can be evaluated from. Next, we evaluate MEDAL++ on grasping a cloth and putting it through a fixed hook. Here, MEDAL++ improves the success rate over BC by 36%, improving the grasp success, reducing drift and collision with the hook and, importantly, reducing memorization, as MEDAL++ learns a policy that re-tries grasping the cloth if it fails the first time, rather than following a memorized trajectory observed in the forward demonstrations. We observe similar trends on bowl-covering-with-cloth and peg-insertion (5DoF), where we observe that MEDAL++ improves 34% and 48% in success rate over BC, with similar sources of improvement as cloth-on-the-hook.
Overall, we observe that not only is MEDAL++ feasible to run in thereal world with minimal task engineering, but it can also substantially improve the policy over BC.5.3 AblationsFigure 5: Ablation identifying con-tributions from different components ofMEDAL++. Improvements from BC reg-ularization and oversampled expert transi-tions are important for learning efficiency.We benchmark four variants on the tabletop organizationandpeg insertion tasks in Figure 5: (1) MEDAL++, (2)MEDAL++ with the true reward function instead of thelearned VICE reward, (3) MEDAL++ without the en-semble of Q-value functions, but, using SAC [57], and(4) MEDAL++ with neither BC-regularization nor over-sampling expert transitions when training Q-value func-tions. The learned reward in MEDAL++ can recoveror exceed the performance with true rewards. Both en-semble of Q-values and BC-regularization + oversampledexpert transitions improve the performance, though thelatter makes a larger contribution to the improvement inperformance. Note, when using the true rewards, BC-regularization/oversampling expert transitions can hurtthe final performance (as discussed in Section 5.1). How-ever, when using learned rewards, they both become moreimportant for better performance. We hypothesize that asthe learned reward function becomes noisier, other com-ponents become more important for efficient learning and better final performance.6 DiscussionWe proposed MEDAL++, a method for learning autonomously from high-dimensional image ob-servation without engineered reward functions or human oversight for repeated resetting of the en-vironment. MEDAL++ takes a small set of forward and backward demonstrations as input, andautonomously practices the task to improve the learned policy, as evidenced by comparison withbehavior cloning policies trained on just the demonstrations.Limitations and Future Work : Real robot data collection is slow, even when autonomous. While thecontrol frequency is 10 Hz, training data is collected at approximately 3.5 Hz because policy updatesand collection steps are done sequentially. Asynchronous and parallel data collection and trainingcan substantially increase the amount of data collected. Several algorithmic extensions can improvethe learning efficiency: sharing the visual encoder and the environment transitions between forwardand backward policies, using better network architectures and exploration specifically designed forlearning autonomously can improve the sample efficiency. Additionally, reducing the number ofdemonstrations required per task while learning an effective reward function and minimizing theexploration challenge would lead to greater autonomy. Our work assumes the environments tobe reversible. Extending MEDAL++ to environments with irreversible states, for example, usingPAINT [40], is an exciting opportunity. Intermittent human interventions to reset the environmentcan still be important to learn successfully. The robotic system can get stuck in a specific state whencollecting data autonomously due to poor exploration. 
Developing and using better methods forexploration or pretraining on more offline data can further reduce human interventions in training.87 AcknowledgementsWe would like to acknowledge Tony Zhao, Sasha Khazatsky and Suraj Nair for help with setting uprobot tasks and control stack, Eric Mitchell, Joey Hejna, Suraj Nair for feedback on an early draft,Abhishek Gupta for valuable conceptual discussion, and members of IRIS and SAIL for listeningto AS drone about this project on several occasions, personal and group meetings. This project wasfunded by ONR grants N00014-20-1-2675 and N00014-21-1-2685 and, Schmidt Futures.References[1] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learn-ing, pages 991–1002. PMLR, 2022.[2] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. arXivpreprint arXiv:2212.06817 , 2022.[3] J. D. Co-Reyes, S. Sanjeev, G. Berseth, A. Gupta, and S. Levine. Ecological reinforcementlearning. arXiv preprint arXiv:2006.12478 , 2020.[4] H. Zhu, J. Yu, A. Gupta, D. Shah, K. Hartikainen, A. Singh, V . Kumar, and S. Levine. Theingredients of real-world robotic reinforcement learning, 2020. URL https://arxiv.org/abs/2004.12570 .[5] A. Sharma, K. Xu, N. Sardana, A. Gupta, K. Hausman, S. Levine, and C. Finn. Autonomousreinforcement learning: Formalism and benchmarking. International Conference on LearningRepresentations (ICLR) , 2021. URL https://arxiv.org/abs/2112.09605 .[6] B. Eysenbach, S. Gu, J. Ibarz, and S. Levine. Leave no trace: Learning to reset for safe andautonomous reinforcement learning. International Conference on Learning Representations(ICLR) , 2018. URL https://arxiv.org/abs/1711.06782 .[7] A. Sharma, A. Gupta, S. Levine, K. Hausman, and C. Finn. Autonomous reinforcement learn-ing via subgoal curricula. Advances in Neural Information Processing Systems , 34:18474–18486, 2021.[8] K. Xu, S. Verma, C. Finn, and S. Levine. Continual learning of control primitives: Skilldiscovery via reset-games. Neural Information Processing Symposium (NeurIPS) , 2020. URLhttps://arxiv.org/abs/2011.05286 .[9] J. Kim, J. hyeon Park, D. Cho, and H. J. Kim. Automating reinforcement learning withexample-based resets. IEEE Robotics and Automation Letters , 7(3):6606–6613, 2022.[10] A. Nair, B. McGrew, M. Andrychowicz, W. Zaremba, and P. Abbeel. Overcoming explorationin reinforcement learning with demonstrations. In 2018 IEEE international conference onrobotics and automation (ICRA) , pages 6292–6299. IEEE, 2018.[11] A. Sharma, R. Ahmad, and C. Finn. A state-distribution matching approach to non-episodic re-inforcement learning. In International Conference on Machine Learning , pages 19645–19657.PMLR, 2022. URL https://arxiv.org/abs/2205.05212 .[12] D. Yarats, R. Fergus, A. Lazaric, and L. Pinto. Mastering visual continuous control: Improveddata-augmented reinforcement learning, 2021. URL https://arxiv.org/abs/2107.09645 .[13] J. Ho and S. Ermon. Generative adversarial imitation learning, 2016. URL https://arxiv.org/abs/1606.03476 .9[14] A. Singh, L. Yang, K. Hartikainen, C. Finn, and S. Levine. End-to-end robotic reinforcementlearning without reward engineering. ArXiv , abs/1904.07854, 2019.[15] X. Chen, C. Wang, Z. Zhou, and K. Ross. Randomized ensembled double q-learning: Learningfast without a model, 2021. 
URL https://arxiv.org/abs/2101.05982 .[16] A. Rajeswaran, V . Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine.Learning complex dexterous manipulation with deep reinforcement learning and demonstra-tions. arXiv preprint arXiv:1709.10087 , 2017.[17] S. Lange, M. Riedmiller, and A. V oigtl ̈ander. Autonomous reinforcement learning on rawvisual input data in a real world application. In The 2012 international joint conference onneural networks (IJCNN) , pages 1–8. IEEE, 2012.[18] J. Kober, J. A. Bagnell, and J. Peters. Reinforcement learning in robotics: A survey. TheInternational Journal of Robotics Research , 32(11):1238–1274, 2013.[19] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies.The Journal of Machine Learning Research , 17(1):1334–1373, 2016.[20] F. Ebert, S. Dasari, A. X. Lee, S. Levine, and C. Finn. Robustness via retrying: Closed-looprobotic manipulation with self-supervised learning. In A. Billard, A. Dragan, J. Peters, andJ. Morimoto, editors, Proceedings of The 2nd Conference on Robot Learning , volume 87 ofProceedings of Machine Learning Research , pages 983–993. PMLR, 29–31 Oct 2018. URLhttps://proceedings.mlr.press/v87/ebert18a.html .[21] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly,M. Kalakrishnan, V . Vanhoucke, and S. Levine. Qt-opt: Scalable deep reinforcement learningfor vision-based robotic manipulation, 2018. URL https://arxiv.org/abs/1806.10293 .[22] H. Zhu, A. Gupta, A. Rajeswaran, S. Levine, and V . Kumar. Dexterous manipulation with deepreinforcement learning: Efficient, general, and low-cost, 2018. URL https://arxiv.org/abs/1810.06045 .[23] A. Nagabandi, K. Konoglie, S. Levine, and V . Kumar. Deep dynamics models for learningdexterous manipulation, 2019. URL https://arxiv.org/abs/1909.11652 .[24] A. Zeng, S. Song, J. Lee, A. Rodriguez, and T. Funkhouser. Tossingbot: Learning to throwarbitrary objects with residual physics, 2019. URL https://arxiv.org/abs/1903.11239 .[25] D. Kalashnikov, J. Varley, Y . Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine, andK. Hausman. Mt-opt: Continuous multi-task robotic reinforcement learning at scale, 2021.URL https://arxiv.org/abs/2104.08212 .[26] L. Smith, J. C. Kew, X. B. Peng, S. Ha, J. Tan, and S. Levine. Legged robots that keep onlearning: Fine-tuning locomotion policies in the real world. In 2022 International Conferenceon Robotics and Automation (ICRA) , pages 1593–1599. IEEE, 2022.[27] M. Bloesch, J. Humplik, V . Patraucean, R. Hafner, T. Haarnoja, A. Byravan, N. Y . Siegel,S. Tunyasuvunakool, F. Casarini, N. Batchelor, et al. Towards real robot learning in the wild:A case study in bipedal locomotion. In Conference on Robot Learning , pages 1502–1511.PMLR, 2022.[28] C. Finn, X. Y . Tan, Y . Duan, T. Darrell, S. Levine, and P. Abbeel. Deep spatial autoencodersfor visuomotor learning. In 2016 IEEE International Conference on Robotics and Automation(ICRA) , pages 512–519. IEEE, 2016.10[29] S. Gu, E. Holly, T. Lillicrap, and S. Levine. Deep reinforcement learning for robotic manipula-tion with asynchronous off-policy updates. In 2017 IEEE international conference on roboticsand automation (ICRA) , pages 3389–3396. IEEE, 2017.[30] A. Ghadirzadeh, A. Maki, D. Kragic, and M. Bj ̈orkman. Deep predictive policy training usingreinforcement learning. In 2017 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 2351–2358. IEEE, 2017.[31] Y . Chebotar, K. Hausman, M. Zhang, G. Sukhatme, S. 
Schaal, and S. Levine. Combiningmodel-based and model-free updates for trajectory-centric reinforcement learning. In Interna-tional conference on machine learning , pages 703–711. PMLR, 2017.[32] T. Haarnoja, S. Ha, A. Zhou, J. Tan, G. Tucker, and S. Levine. Learning to walk via deepreinforcement learning. arXiv preprint arXiv:1812.11103 , 2018.[33] H. Zhu, A. Gupta, A. Rajeswaran, S. Levine, and V . Kumar. Dexterous manipulation with deepreinforcement learning: Efficient, general, and low-cost. In 2019 International Conference onRobotics and Automation (ICRA) , pages 3651–3657. IEEE, 2019.[34] A. Sharma, M. Ahn, S. Levine, V . Kumar, K. Hausman, and S. Gu. Emergent real-world roboticskills via unsupervised off-policy reinforcement learning. arXiv preprint arXiv:2004.12974 ,2020.[35] P. Agrawal, A. V . Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking:Experiential learning of intuitive physics. Advances in neural information processing systems ,29, 2016.[36] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700robot hours. In 2016 IEEE international conference on robotics and automation (ICRA) , pages3406–3413. IEEE, 2016.[37] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In 2017 IEEE Inter-national Conference on Robotics and Automation (ICRA) , pages 2786–2793. IEEE, 2017.[38] W. Han, S. Levine, and P. Abbeel. Learning compound multi-step controllers under unknowndynamics. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems,IROS 2015, Hamburg, Germany, September 28 - October 2, 2015 , pages 6435–6442. IEEE,2015. doi:10.1109/IROS.2015.7354297. URL https://doi.org/10.1109/IROS.2015.7354297 .[39] K. Lu, A. Grover, P. Abbeel, and I. Mordatch. Reset-free lifelong learning with skill-spaceplanning. arXiv preprint arXiv:2012.03548 , 2020.[40] A. Xie, F. Tajwar, A. Sharma, and C. Finn. When to ask for help: Proactive interventions inautonomous reinforcement learning. Neural Information Processing Systems (NeurIPS) , 2022.URL https://arxiv.org/abs/2210.10765 .[41] A. Gupta, J. Yu, T. Z. Zhao, V . Kumar, A. Rovinsky, K. Xu, T. Devlin, and S. Levine. Reset-freereinforcement learning via multi-task learning: Learning dexterous manipulation behaviorswithout human intervention, 2021. URL https://arxiv.org/abs/2104.11203 .[42] A. Gupta, C. Lynch, B. Kinman, G. Peake, S. Levine, and K. Hausman. Bootstrapped au-tonomous practicing via multi-task reinforcement learning. arXiv preprint arXiv:2203.15755 ,2022.[43] H. Walke, J. Yang, A. Yu, A. Kumar, J. Orbik, A. Singh, and S. Levine. Don’t start fromscratch: Leveraging prior data to automate robotic reinforcement learning. Conference onRobot Learning (CoRL) , 2022. URL https://arxiv.org/abs/2207.04703 .11[44] K. Xu, Z. Hu, R. Doshi, A. Rovinsky, V . Kumar, A. Gupta, and S. Levine. Dexterous ma-nipulation from images: Autonomous real-world rl via substep guidance. arXiv preprintarXiv:2212.09902 , 2022.[45] J. Fu, A. Singh, D. Ghosh, L. Yang, and S. Levine. Variational inverse control with events: Ageneral framework for data-driven reward definition, 2018. URL https://arxiv.org/abs/1805.11686 .[46] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adversarial inverse reinforcementlearning. arXiv preprint arXiv:1710.11248 , 2017.[47] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers usingvariational divergence minimization. Advances in neural information processing systems , 29,2016.[48] S. K. S. Ghasemipour, R. 
Zemel, and S. Gu. A divergence minimization perspective on imita-tion learning methods. In Conference on Robot Learning , pages 1259–1277. PMLR, 2020.[49] A. X. Lee, A. Nagabandi, P. Abbeel, and S. Levine. Stochastic latent actor-critic: Deep rein-forcement learning with a latent variable model. Advances in Neural Information ProcessingSystems , 33:741–752, 2020.[50] M. Laskin, A. Srinivas, and P. Abbeel. CURL: Contrastive unsupervised representations forreinforcement learning. In International Conference on Machine Learning , pages 5639–5650.PMLR, 2020.[51] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,and Y . Bengio. Generative adversarial networks, 2014. URL https://arxiv.org/abs/1406.2661 .[52] T. Miyato, T. Kataoka, M. Koyama, and Y . Yoshida. Spectral normalization for generativeadversarial networks. arXiv preprint arXiv:1802.05957 , 2018.[53] H. Zhang, M. Cisse, Y . N. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk mini-mization, 2017. URL https://arxiv.org/abs/1710.09412 .[54] I. Kostrikov, K. K. Agrawal, D. Dwibedi, S. Levine, and J. Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning, 2018.URL https://arxiv.org/abs/1809.02925 .[55] Y . Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation.arXiv preprint arXiv:1810.12894 , 2018.[56] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.[57] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropydeep reinforcement learning with a stochastic actor. In International conference on machinelearning , pages 1861–1870. PMLR, 2018.[58] K. Hsu, M. J. Kim, R. Rafailov, J. Wu, and C. Finn. Vision-based manipulators need to alsosee from their hands. arXiv preprint arXiv:2203.12677 , 2022.12Figure 6: Environments from the EARL benchmark [5] used for simulated experiments. From left to right,the environments are: Peg insertion, Door closing and Tabletop organization.Figure 7: (left) Randomized position of the cube in the grasping task. The position marked by violet boundaryare within the distribution of expert demonstrations, and the rest are outside the distribution. ( right ) Architectureoverview for MEDAL++.A AppendixA.1 Algorithm OverviewThe pseudocode for training is given in Algorithm 1. First, the parameters and data buffers in FandBare initialized and the forward and backward demonstrations are loaded into D∗fandD∗brespec-tively. Next, we update the forward and backward goal sets, as described above. After initializingthe environment, the forward policy πfinteracts with the environment and collects data, updatingthe networks and buffers in F. The control switches over to the backward policy πbafter a fixednumber of steps, and the networks and buffers in Bare updated. The backward policy interacts fora fixed number of steps, after which the control is switched over to the forward policy and this cycleis repeated thereon. When executing in the real world, humans are allowed to intervene and resetthe environment intermittently, switching the control over to πfafter the intervention to restart theforward-backward cycle.We now expand on how the networks are updated for πfduring training (also visualized in Figure 9);the updates for πbare analogous. First, the new transition in the environment is added to Df. 
Next, we sample a batch of states from $\mathcal{D}_f$ and label them 0, and sample a batch of equal size from $\mathcal{D}^*_f$ and label them 1. The classifier $C_f$ is updated using gradient descent on the combined batch to minimize the cross-entropy loss. Note, the classifier is not updated for every step collected in the environment. As stated earlier, the classification problem is easier than learning the policy, and therefore, it helps to train the classifier slower than the policy. Finally, the policy $\pi_f$, the Q-value networks $\{Q^f_n, \bar{Q}^f_n\}_{n=1}^{N}$, and the encoder $\mathcal{E}$ are updated on a batch of transitions constructed by sampling $(1 - \rho) B$ transitions from $\mathcal{D}_f$ and $\rho B$ transitions from $\mathcal{D}^*_f$. The Q-value networks and the encoder are updated by minimizing $\frac{1}{N} \sum_{n=1}^{N} \ell(Q_n, \mathcal{E})$ (Eq. 1), and the target Q-networks are updated as an exponential moving average of the Q-value networks. The policy $\pi_f$ is updated by maximizing $\mathcal{L}(\pi)$. We update the Q-value networks multiple times for every step collected in the environment, whereas the policy network is updated once for every step collected in the environment [15].

Algorithm 1: MEDAL++
  initialize F, B;                                  // forward, backward parameters
  F.D*_f, B.D*_b ← load_demonstrations()
  F.G_f ← get_states(F.D*_f, -K:)                   // last K states
  // exclude last K states from D*_f, use only the last K states from D*_b
  B.G_b ← get_states(F.D*_f, :-K) ∪ get_states(B.D*_b, -K:)
  s ∼ ρ0; A ← F;                                    // initialize environment
  while not done do
    a ∼ A.act(s); s' ∼ T(· | s, a);
    A.update_buffer({s, a, s'});
    A.update_classifier();
    A.update_parameters();
    // switch policy after a fixed interval
    if switch then
      switch(A, (F, B));
    // allow intermittent human interventions
    if interrupt then
      s ∼ ρ0; A ← F;
    else
      s ← s';

Figure 8: Visualizing the positive target states for the forward classifier $C_f$ and backward classifier $C_b$ from the expert demonstrations. For forward demonstrations, the last $K$ states are used for $C_f$ (orange) and the rest are used for $C_b$ (pink). For backward demonstrations, the last $K$ states are used for $C_b$.

Figure 9: An overview of MEDAL++ training. The classifier is trained to discriminate states visited by an expert from the states visited online. The robot performs reinforcement learning on a combination of self-collected and expert transitions, and the policy learning is regularized using the behavior cloning loss.

A.2 Implementation Details and Practical Tips

An overview of the architecture used by the forward and backward networks is shown in Figure 7.

Visual Encoder: For the encoder, we use the same architecture as DrQ-v2 [12]: 4 convolutional layers with 32 filters of size (3, 3), stride 1, followed by ReLU non-linearities. The high-dimensional output from the CNN is embedded into a 50-dimensional feature using a fully-connected layer, followed by LayerNorm and a tanh non-linearity (to output the features normalized to [−1, 1]). For real-robot experiments, the first-person and third-person views are concatenated channel-wise before being passed into the encoder. The output of the encoder is fused with proprioceptive information, in this case the end-effector position, before being passed to the actor and critic networks.

Actor and Critic Networks: Both actor and critic networks are parameterized as 4-layer fully-connected networks with 1024 ReLU hidden units for every layer. The actor parameterizes a Gaussian distribution over the actions, where a tanh non-linearity on the output restricts the actions to [−1, 1]. We use an ensemble size of 10 critics.

Discriminators: The discriminator for the forward and backward policies uses a similar visual encoder but with 2 layers instead of 4.
The visual embedding is passed to a fully connected networkwith 2 hidden layers with 256 ReLU units. When training the network, we use mixup and spectralnorm regularization [53, 52] for the entire network.Training Hyperparameters : For all our experiments, K= 20 , i.e. the number of frames used as goalframes. The forward policy interacts with the environment for 200 steps, then the backward policyinteracts for 200 steps. In real world experiments, we also reset the arm every 1000 steps to avoidhitting singular positions. Note, this reset does not require any human intervention as the controllerjust resets the arm to a fixed joint position. We use a batch size of 256to train the policy and criticnetworks, out of which 64transitions are sampled from the demonstrations ( oversampling ). We usea batch size of 512to train the discriminators, 256of the states come from expert data and the other256comes from the online data. Further, the discriminators are updated every 1000 steps collectedin the environment. The update-to-data ratio, that is the number of gradient updates per transitioncollected in the environment is 3for simulated environments and 1for the real-robot experiments.We use a linearly decaying schedule for behavior cloning regularization from 1 to 0.1 over the first50000 steps which remains fixed at 0.1 onwards throughout training.For real world experiments, we use a wrist camera to improve the overall performance [58], andprovide only the wrist-camera view to both discriminators. We find that this further regularizesthe discriminator. Finally, we provide no proprioceptive information for the VICE discriminator,but we give MEDAL discriminator the proprioceptive information, as it needs a stronger notionof the robot’s localization to adequately reset to a varied number of initial positions for improvedrobustness.Teleoperation : To collect our demonstrations on the real robot, we use an Xbox controller thatmanipulates the end-effector position, orientation and the gripper state. Two salient notes: (1) Theforward and backward demonstrations are collected together, one after the other and (2) the initialposition for demonstrations is randomized to cover as large a state-space as feasible. The increasedcoverage helps with exploration during autonomous training.A.3 EARL Environments, Training and EvaluationEnvironments . We consider three sparse-reward continuous-control environments from EARLbenchmark [5], shown in Appendix, Fig 6). Tabletop organization is a simplified manipulationenvironment where a gripper is tasked to move the mug to one of the four coasters from a wide setof initial states, sawyer door closing task requires a sawyer robot arm to learn how to close a doorstarting from various positions, and finally the sawyer peg insertion task requires the sawyer robotarm to grasp the peg and insert it into a goal. Not only does the robot have to learn how to do the task(i.e. close the door or insert the peg), but it has to learn how to undo the task (i.e. open the door orremove the peg) to try task repeatedly in the non-episodic training environment. The sparse rewardfunction is given by r(s, a) = 1(∥s−g∥ ≤ε), where gdenotes the goal, and εis the tolerance forthe task to be considered completed.Training and Evaluation . 
The environments are setup to return 84×84RGB images as obser-vations with a 3-dimensional action space for the tabletop organization (2D end-effector deltas inthe XY plane and 1D for gripper) and a 4-dimensional action space for sawyer environments ( 3Dend-effector delta control + 1D gripper). The training environment is reset to s0∼ρ0every 25,000steps of interaction with the environment. This is extremely infrequent compared to episodic set-tings where the environment is reset to the initial state distribution every 200-1000 steps. EARLcomes with 5-15forward and backward demonstrations for every environment to help with explo-ration in these sparse reward environments. We evaluate the forward policy every 10,000trainingsteps, where the evaluation approximates Es0∼ρ0[P∞t=0γtr(st, at)]by averaging the return of the15Figure 10: An overview of MEDAL++ on the task of inserting the peg into the goal location. ( top) Startingwith a set of expert trajectories, MEDAL++ learns a forward policy to insert the peg by matching the goal statesand a backward policy to remove and randomize the peg position by matching the rest of the states visited byan expert. ( bottom ) Chaining the rollouts of forward and backward policies allows the robot to practice the taskautonomously. The rewards indicate the similarity to their respective target states, output by a discriminatortrained to classify online states from expert states.policy over 10episodes starting from s0∼ρ0. These roll-outs are used only for evaluation, and notfor training.A.4 Real-world Experiment AnalysisWe discuss the four manipulation tasks in detail. We recommend viewing the supplemental websitefor training and evaluation videos:(1)Cube Grasping : The goal in this task is to grasp the cube from varying initial positions and con-figurations and raise it. For this task, we consider a controlled setting to isolate one potential sourceof improvement from autonomous reinforcement learning: robustness to the initial state distribution.Specifically, all the forward demonstrations are collected starting from a narrow set of initial states(ID), but, the robot is evaluated starting from both ID states and out-of-distribution ( OOD ) states,visualized in Appendix, Figure 7. BC policy is competent on ID states, but it performs poorly onstates that are OOD. However, after autonomous self-improvement using MEDAL++, we see animprovement of 15% on ID performance, and a large improvement of 74% on OOD performance.Autonomous training allows the robot to practice the task from a diverse set of states, includingstates that were OOD relative to the demonstration data. This suggests that improvement in successrate results partly from being robust to the initial state distribution, as a small set of demonstrationsis unlikely to cover all possible initial states a robot can be evaluated from.(2)Cloth on the Hook : In this task, the robot is tasked with grasping the cloth and putting it througha fixed hook. To practice the task repeatedly, the backward policy has to remove the cloth fromthe hook and drop it on platform. Here, MEDAL++ improves the success rate over BC by 36%.The BC policy has several failure modes: (1) it fails to grasp the cloth, (2) it follows through withhooking because of memorization, or (3) it hits into into the hook because it drifts from the righttrajectory and could not recover. 
Autonomous self-improvement improves upon all these issues, but in particular, it learns to re-try grasping the cloth if it fails the first time, rather than following a memorized trajectory observed in the forward demonstrations.

(3) Bowl Covering with Cloth: The goal of this task is to cover a bowl entirely using the cloth. The cloth can be in a wide variety of initial states, ranging from 'laid out flat' to 'scrunched up' in varying locations. The task is challenging as the robot has to grasp the cloth at the correct location to successfully cover the entire bowl (partial coverage is counted as a failure). Here, MEDAL++ improves the performance over BC by 34%. The failure modes of BC are similar to the previous task, including failure to grasp, memorization and failure to re-try, and incomplete coverage due to a wrong initial grasp. Autonomous self-improvement substantially helps with the grasping (including re-trying) and issues related to memorization. While it plans the grasps better than BC, there is room for improvement in reducing failures resulting from partially covering the bowl.

(4) Peg Insertion: Finally, we consider the task of inserting a peg into a goal location. The location and orientation of the peg are randomized, in service of which we use 5-DoF control for this task. A successful insertion requires the toy to be perpendicular to the goal before insertion, and the error margin for a successful insertion is small given the size of the peg and the goal. Additionally, since the peg here is a soft toy, it can be grasped while being in the wrong orientation. Here, MEDAL++ improves the performance by 48% over BC. In addition to the failures described in the previous tasks, a common cause of failure is the insertion itself, where the agent takes an imprecise trajectory and is unable to insert the peg. After autonomous self-improvement, the robot employs an interesting strategy where it keeps retrying the insertion until it succeeds. The policy is also better at grasping, though the failures of insertion often result from orienting the gripper incorrectly before the grasp, which makes insertion infeasible.
4uFVn6WHyzo | Generating Transferable Adversarial SimulationScenarios for Self-Driving via Neural RenderingYasasa Abeysirigoonawardena†1, Kevin Xie1, 2, Chuhan Chen3, Salar Hosseini1,Ruiting Chen1, Ruiqi Wang4, Florian Shkurti1,2,51University of Toronto,2Vector Institute,3Carnegie Mellon Universty,4Stanford University,5UofT Robotics InstituteAbstract: Self-driving software pipelines include components that are learnedfrom a significant number of training examples, yet it remains challenging to eval-uate the overall system’s safety and generalization performance. Together withscaling up the real-world deployment of autonomous vehicles, it is of critical im-portance to automatically find simulation scenarios where the driving policies willfail. We propose a method that efficiently generates adversarial simulation scenar-ios for autonomous driving by solving an optimal control problem that aims tomaximally perturb the policy from its nominal trajectory. Given an image-baseddriving policy, we show that we can inject new objects in a neural rendering repre-sentation of the deployment scene, and optimize their texture in order to generateadversarial sensor inputs to the policy. We demonstrate that adversarial scenar-ios discovered purely in the neural renderer (surrogate scene) can often be suc-cessfully transferred to the deployment scene, without further optimization. Wedemonstrate this transfer occurs both in simulated and real environments, providedthe learned surrogate scene is sufficiently close to the deployment scene.1 IntroductionSafety certification of a self-driving stack would require driving hundreds of millions of miles onreal roads, according to [1], to be able to estimate miles per intervention with statistical significance.This could correspond to decades of driving and data collection. Procedural generation of drivingsimulation scenarios has emerged as a complementary approach for designing unseen test environ-ments for autonomous vehicles in a cost-effective way. Currently, generation of simulation scenariosrequires significant human involvement, for example to specify the number of cars and pedestriansin the scene, their initial locations and approximate trajectories [2], as well as selection of assets tobe added to the simulator. In addition to being challenging to scale, having a human in the loop canresult in missing critical testing configurations.In this paper, we cast adversarial scenario generation as a high-dimensional optimal control prob-lem. Given a known image-based driving policy that we want to attack, as well as the dynamicsof the autonomous vehicle, we aim to optimize a photorealistic simulation environment such that itproduces sensor observations that are 3D-viewpoint-consistent, but adversarial with respect to thepolicy, causing it to deviate from its nominal trajectory. The objective of the optimal control prob-lem is to maximize this deviation through plausible perturbations of objects in the photorealisticenvironment.Our optimal control formulation requires differentiation through the sensor model in order to com-pute the derivative of the sensor output with respect to the underlying state perturbation. However,most existing photorealistic simulators for autonomous vehicles are not differentiable; they can onlybe treated as black boxes that allow forward evaluation, but not backpropagation. 
Instead of usingan off-the-shelf photorealistic simulator and adding assets to match the scene, we train an editable†Correspondence to yasasa@cs.toronto.edu7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.UnperturbedRandomAttackAdversarial(deployment)Adversarial (NeRF )Figure 1: First-person-view (FPV) of our adversarial attack transfer to an RC car with overheadtrajectory view on the right. Row 1: Unperturbed policy execution; Row 2: Random search textureattack; Row 3: Our adversarial attack directly transferred to the real deployment scene, withoutadditional optimization; Row 4: Our adversarial attack discovered in the surrogate NeRF simulator.neural rendering model that imitates the deployment scene, allowing us to insert new objects in thesimulator and to optimize their texture through gradient-based optimization. This editable neuralrendering model acts as a surrogate physics and rendering simulator, enabling us to differentiatethrough it in an efficient way in order to attack the driving policy’s input sensor observations.Unlike many existing types of adversarial attacks in the literature [3, 4, 5], our work aims to discoverenvironment perturbations/attacks that satisfy the following properties: (a) They are temporally-consistent . The influence of the attack is not instantaneous, it is amortized through time via theoptimal control loss function. (b) They are transferable . An attack discovered in the surrogatescene should ideally be adversarial in the actual deployment scene. (c) They are object-centric .The attack introduces and edits objects as opposed to unstructured high-frequency perturbationsacross all pixels. Specifically, we make the following contributions:1. We formulate adversarial scenario generation across time as an optimal control problemthat relies on a learned, surrogate NeRF simulator. The solution to this problem yields 3D-view-consistent, object-centric, adversarial attacks that often transfer to the deploymentenvironment. We show how to solve this problem efficiently using implicit differentiation.2. Differentiable rendering of our surrogate NeRF model enables gradient-based adversarialobject insertion and scales to high dimensional parameterizations of multiple objects.3. We show that our adversarial attacks discovered in the surrogate NeRF simulator can berealized in the real-world and retain their ability to disrupt the policy.We experimentally validate our framework by reconstructing scenes using only pose-annotated im-ages and generate adversarial object insertion attacks with multiple trajectories.2 Related WorkAdversarial scenarios for autonomous driving . Perceptual adversarial attacks make modificationsto prerecorded sensor data from real driving sessions to fool the perception system. Since this sensordata is fixed, they lack the ability to resimulate and typically only operate on the individual framelevel. Previous works, [4, 6] attempt to attack a LiDAR object detection module by artificiallyinserting an adversarial mesh on top of car rooftops or objects in a prerecorded LiDAR sequence.They extend the scope of their attack further by incorporating textures to be able to attack image-based object detectors as well [3]. 
In both these works, the inserted object has a very low resolution2Pose-Annotated Images of Deployment Scene Synthetic Scene in Neural Renderer Discovered Adversarial Attack Attacks Transferred to Deployment Scene Overhead View Overhead View Deployment Scene CARLA Differentiable simulator Neural Renderer Transfer Attack Gradient- based adversarial attack Learn NeRF Figure 2: Our method can be summarized in the four steps shown. (a) In the top left, we obtain posedimages from the deployment scene which can be a simulator or the real world. (b) In the bottom left,we reconstruct a surrogate scene by fitting a NeRF to the posed images as a differentiable simulatorand observe only minor perceptual gap. (c) Having the surrogate scene, we can insert objects, whichare also represented as NeRFs, and attack their color fields to generate textural attacks. (d) Thediscovered adversarial objects are introduced back into the deployment scene.and nondescript geometry. Recent self-driving simulators, such as DriveGAN [7], GeoSim [8] andUniSim [9] address these issues, with the latter enabling manipulable sensor-based simulators basedon prerecorded datasets. These works, however, have not dealt with discovering attacks.Another prominent line of works produce dynamic state-level adversarial attacks. These gener-ally target the control/planning system only by perturbing trajectories of other agents in the scene.Without considering the perception system, these methods use simplified traffic and state-based sim-ulators that do not incorporate 3D rendering [10, 11, 12].Closest to our work, a few methods have proposed to attack end-to-end policies by adding pertur-bations to existing self-driving simulators. In [13], the trajectories of other agents in a CARLAscene are modified to generate a collision event. Due to the non-differentiability of the simulator,a black-box Bayesian optimization is used. Gradient-based attacks on top of simulators have alsobeen investigated. However, the requirement of differentiability has so far limited their scope to verysimplified geometries that are composited post-hoc onto renderings from CARLA. In [5], flatly col-ored rectangles are composited on top of frames from the CARLA simulator and optimized to causemaximal deviation of an end-to-end image-based neural network steering controller. Similarly, workin [14] attempts to play a video sequence of adversarial images on a billboard in the scene using im-age composition. To our knowledge, no works in this setting have been able to demonstrate transferof adversarial attacks to the real world, as these attacks rely on a pre-existing simulator that theyaugment. Compared to these, our attacks are entirely performed on a surrogate neural simulator thatis reconstructed from only posed images captured from any deployment scene. Furthermore, oursurrogate neural simulator allows for inserting arbitrary objects reconstructed from posed images.The driving simulator VISTA [15] generates high fidelity sensor data using a large collection ofreal world data. In our case, we are able to train a NeRF using the data, allowing us to generalizeto a wider range of novel views. [16] samples adversarial masking of existing LiDAR data usingreinforcement learning. Work on perception error models [17] avoids using a simulator altogetherand instead focuses on learning a surrogate policy that uses lower dimensional salient features, whichare attacked. 
However, it would be very difficult to infer the real-world perceptual disturbance that would cause the attack, so these attacks are very challenging to transfer to the real world.

Robust adversarial examples. Adversarial attacks for classification have commonly used minimal perturbations on the input images [18] that may not always transfer to the physical world or another domain. To enhance robustness to domain transfer, [19] proposes a class of adversarial transformations that optimize for adversarial attacks under a distribution of image transforms [20].

3 Background

3.1 Neural Rendering

Neural 3D representations, such as neural radiance fields (NeRF), have seen significant activity in recent years due to their ability to reconstruct real-world objects and scenes to very high detail using only posed images. A survey of recent progress in neural rendering can be found in [21]. In [22], physics simulations of known objects are combined with their learned NeRF to create high-fidelity dynamic environments for training deep reinforcement learning agents that can transfer to the real world. In our work, we use composition of NeRFs to insert and optimize adversarial objects. This is shown in Fig. 3, and the details are in Sections 4.1 and 4.2. We render the scene using the volume rendering equation:

I(x, \omega) = \int_0^T \sigma(x + t\omega) \exp\left( -\int_0^t \sigma(x + \hat{t}\omega) \, d\hat{t} \right) L(x + t\omega, -\omega) \, dt    (1)

where I(x, \omega) is the intensity at a location x given in world space in the direction \omega, and L and \sigma are the learned color and density fields in NeRF. For the sake of performance, we choose to use grid-based volume representations. Structured grid NeRFs reduce computation cost by storing direct density and color variables [23] or latent features [24, 25] on explicit 3D grids. In essence, they trade extra memory utilization for large performance improvements. Instant Neural Graphics Primitives (iNGP) [26] uses multi-scale dense grids and sparse hash grids of features that are decoded to color and density by an MLP. We chose to use iNGP because it balances our performance and memory tradeoffs well.

4 Method

Our framework generates successful adversarial attacks on end-to-end image-based self-driving policies with only access to posed images from the deployment scene. An overview of the high-level steps in our framework is shown in Figure 2. We now briefly describe the setting and our adversarial attack method. More details are included in Appendix B. Let x_t denote the state of the car at time t, x^* denote a reference trajectory to track, and CTE the cross-track error. Starting from Eqn. (6), we set the cost function C(x_t) of our problem as the car’s proximity to the reference x^*:

C(x_t) = -CTE(x_t, x^*)    (2)

In other words, we want to maximize deviation from the desired trajectory. We set the constraint function G(x_t, x_{t+1}, \theta) = 0 to be the following set of constraints:

u_t = \pi_\phi(o_t)    (3)
o_t = h_{\gamma,\theta}(x_t)    (4)
x_{t+1} = f_c(x_t, u_t)    (5)

where \pi is the fixed driving policy*, and h is the neural rendering sensor model that outputs image observations o_t given the state of the car. The renderer depends on \theta, the parameters of the adversarial NeRF objects, and \gamma, the fixed rendering parameters of the background scene NeRF. Finally, f_c denotes the dynamics of the ego vehicle that must be considered, since we want to find adversarial trajectories that are consistent across multiple frames.

4.1 Differentiable Renderer

Traditional simulators like CARLA do not admit computation of gradients; a minimal sketch of the closed-loop objective that must be differentiated is given below.
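This is a minimal autograd-based illustration of Eqns. (2)-(6), not the paper's implementation: the callables render, policy, dynamics, and cross_track_error, as well as all other names, are placeholders we introduce, and the paper computes these gradients with the constant-memory adjoint method of Section 4.3 rather than plain backpropagation.

import torch

def attack_objective(theta, x0, x_ref, horizon,
                     render, policy, dynamics, cross_track_error):
    """J(theta) = sum_t C(x_t) with C = -CTE, rolled out through the surrogate (Eqns. 2-6).

    theta:    adversarial NeRF object parameters (requires_grad=True)
    render:   (state, theta) -> image observation, the surrogate sensor h_{gamma,theta}
    policy:   image -> control, the frozen driving policy pi_phi
    dynamics: (state, control) -> next state, the differentiable car model f_c
    """
    x, J = x0, 0.0
    for _ in range(horizon):
        o = render(x.detach(), theta)   # camera pose detached, as in Eqn. (B.9)
        u = policy(o)                   # gradients flow to theta, not to the policy
        x = dynamics(x, u)
        J = J - cross_track_error(x, x_ref)
    return J

# One attack iteration: minimize J (i.e., maximize deviation) over theta, e.g. with Adam.
# opt = torch.optim.Adam([theta], lr=0.1)
# loss = attack_objective(theta, x0, x_ref, T, render, policy, dynamics, cross_track_error)
# opt.zero_grad(); loss.backward(); opt.step()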
Thus, prior works rely on artificially compositing simplistic textured geometries on top of rendered images from CARLA and obtaining gradients with respect to the composited alteration [14]. We use NeRFs to learn surrogate models of the scene and sensor model instead. This surrogate model not only gives us an automated method to reconstruct scenes from pose-annotated images, but also provides efficient gradient computation, giving us a differentiable form for the sensor h. For the purposes of optimization, we found traditional NeRF representations to be intractable in terms of compute and memory requirements (during gradient computation). Thus, we opt to use the multi-resolution hash grid representation, Instant-NGP [26]. Note that, similar to existing work, we detach the gradients of the image observation with respect to the camera coordinates (which are attached to the ego vehicle) [27]. We include more details regarding this in Appendix B.3.

*We train our own policy and provide details in Appendix C.2.

Figure 3: A computation diagram of our algorithm for generating adversarial attacks. The inner driving loop consists of three components: the neural rendering model, the differentiable driving policy, and the differentiable kinematic car model. We inject the adversarial perturbation into the surrogate scene by composing the outputs of one or more neural object renderers (the single-object case is shown above for simplicity) with the output of the neural scene renderer. The parameters of the object renderer(s) are optimized to maximize the deviation of the realized trajectory from the reference trajectory, while keeping the parameters of the driving policy and scene renderer frozen.

4.2 Adversarial Object Insertion

We use insertion and texturing of multiple objects as our adversarial perturbations to the background scene. To do this, we first reconstruct regular objects, such as cars, as individual NeRFs from pose-annotated images. For our object NeRFs we simply store color values directly on the voxel grids of Instant-NGP, which are tri-linearly interpolated within each voxel. By choosing these color voxel grids as our adversarial parameters \theta, we can perform independent adversarial texture attacks over multiple objects. The object NeRFs can be easily composed with our background scene NeRF. This is done via alpha compositing, which leverages opacity and depth values that can be easily computed.

Figure 4: Base car on the left; random texture in the middle; adversarial texture on the right.

4.3 Gradient computation via implicit differentiation

We use implicit differentiation for gradient computation [28], also known as the adjoint method, which enables constant-memory gradient computation with respect to trajectory length. In discrete time, the adjoint method amounts to propagating gradients through an implicit relationship G for problems of the form:

\min_\theta J(\theta) = \sum_{t=0}^{T} C(x_t) \quad \text{such that} \quad G(x_{t-1}, x_t, \theta) = 0    (6)

Explicitly, the method performs a forward simulation to compute the variables x_t and then subsequently a backward pass to compute adjoint variables \lambda_t by solving the equations:

\left( \frac{\partial G(x_{t-1}, x_t)}{\partial x_t} \right)^{\top} \lambda_t = -\left( \frac{\partial C(x_t)}{\partial x_t} \right)^{\top} - \left( \frac{\partial G(x_t, x_{t+1})}{\partial x_t} \right)^{\top} \lambda_{t+1}    (7)

with the boundary condition:

\left( \frac{\partial G(x_{T-1}, x_T)}{\partial x_T} \right)^{\top} \lambda_T = -\left( \frac{\partial C(x_T)}{\partial x_T} \right)^{\top}    (8)

Finally, the gradient of the loss can be calculated as:

\nabla_\theta J = \lambda_1^{\top} \frac{\partial G(x_0, x_1, \theta)}{\partial x_0} \frac{\partial x_0}{\partial \theta} + \sum_{t=1}^{T} \lambda_t^{\top} \frac{\partial G(x_{t-1}, x_t, \theta)}{\partial \theta}    (9)

Throughout both passes we do not need to store large intermediate variables and only need to accumulate the gradient at each step.

4.4 Gradient-based Adversarial Attack

Obtaining gradients for the problem in Eqn.
(6) should be possible with an autodifferentiationframework such as PyTorch [29]. We find that naively computing the gradient via backpropagationresults in memory issues as we scale up trajectory lengths due to all the intermediary computevariables used to compute the integral in Eqn. 1 being stored until the end of the trajectory. Weachieve drastic memory savings by using the adjoint method [30] which only keeps track of theadjoint variables λalong the trajectory. In our case, the adjoint variables are three-dimensional,allowing us to only use as much memory as it takes to compute a single jacobian vector product ofthe composition of models given by (5), (3), (4) in the optimization problem in Eqn. (6).To summarize, the computation of our gradient-based adversarial attack proceeds as follows:1. We rollout our policy in our surrogate simulator to compute the loss and the trajectory x1:Tin Eqn. (6).2. We perform a backward pass to compute adjoint variables for gradient computation.3. Using the adjoint variables, we compute the gradient ∇θJand update parameters θ.5 ExperimentsTo demonstrate the effectiveness of our framework, we aim to reconstruct a driving scenario fromposed images, generate adversarial attacks and validate that those attacks transfer to the deploymentscene. Through our experiments, we would like to answer the following key questions:(Q1) Can gradient based optimization find better adversarial examples than random search?(Q2) Are NeRF models suitable surrogate renderers for gradient based adversarial optimization?(Q3) Are adversarial attacks transferable from NeRF back to the deployment domain?5.1 Experimental DetailsCARLA Deployment Scenes. We first validate our method in simulation, treating CARLA asa proxy for a real deployment scene. We perform experiments on a 3-way intersection of theCARLA [31] simulator. For the 3-way intersection, we consider 3 different trajectories to be fol-lowed by the ego vehicle. For the object models , we train surrogate NeRF models for two sampleobjects, a fire hydrant and a small car using only posed images (any other object could be used). We6manually insert 2 small cars and 3 fire hydrants into the driving scene in an initial placement. Ouradversarial attacks jointly optimize the NeRF color parameters and object rigid transforms.Real World Deployment Scenes. Our real-world experiments are performed on an autonomous RCcar driving around a square race track in an indoor room. It is difficult to manufacture adversarialattacked objects with complex shapes in the real world. Hence, for practicality, we insert a NeRFobject representing a flat square texture pattern that can be projected by a display monitor in the realworld and optimize its color parameters. In order to create a first version of the attack we choose todirectly compose the adversarial texture on to the robot camera feed. We then move on to a moredifficult task of physically realizing this attack, for this we opted to display the texture on a monitorto simplify lighting conditions. Additional details of our real world experimental setup are given inC.4.15.2 Evaluation MetricsFigure 5: Selected overhead views and snap-shots from adversarial deployment trajec-tories in the real world (top row: moni-tor displays adversarial texture discovered inNeRF), and in CARLA (bottom row: adver-sarial objects inserted in the simulator).We measure the effectiveness of an attack with ouradversarial objective, the cross track error of the ve-hicle. 
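As a reference for how this metric is accumulated over a rollout, the sketch below computes the total cross-track error of a trajectory against a piecewise-linear reference path. It is our own generic illustration under that assumption, not the authors' evaluation code.

import numpy as np

def total_cross_track_error(trajectory, reference):
    """Sum of distances from each (x, y) state to the closest reference segment.

    trajectory: (T, 2) array of ego positions
    reference:  (M, 2) array of waypoints along the reference path
    """
    total = 0.0
    a, b = reference[:-1], reference[1:]                 # segment endpoints
    ab = b - a
    ab_len2 = (ab ** 2).sum(axis=1) + 1e-12
    for p in trajectory:
        s = np.clip(((p - a) * ab).sum(axis=1) / ab_len2, 0.0, 1.0)
        closest = a + s[:, None] * ab                    # closest point on each segment
        dists = np.linalg.norm(closest - p, axis=1)
        total += dists.min()
    return total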
We use the road center as the reference, and so even an unperturbed driving policy has some non-zero deviation, which we report under “Unperturbed” in Table 1. To characterize the insensitivity of our method to random seeds, we run 5 separate attacks per scenario for both the gradient-based and random attacks with different random initializations of the adversarial parameters. We report the mean and standard deviation of our metric. Our proposed method of attack is via gradient-based optimization using the method outlined in Section 4.4. The gradient-based attack uses 50 iterations of optimization using Adam, with a learning rate of 0.1. Due to the high-dimensional parameterization, detailed in B.3.1, Bayesian optimization becomes computationally intractable. Therefore, as a baseline for our method, we perform a random search parameter attack on the NeRF surrogate model that samples parameters from a Gaussian distribution with mean zero and a standard deviation of 5. We chose this standard deviation to match the distribution over parameters we found in our gradient attacks. We use the same number of function evaluations, selecting the best achieved attack among the 50 random samples for the CARLA experiment. For real-world experiments, we did not find much variation between random attacks in the surrogate simulator, showing the difficulty of random search in high-dimensional parameter spaces.

5.3 Experimental Results

Example gradient attack trajectories are shown in Figure 5. We include more visualizations of results for deployments of adversarial attacks, both in CARLA simulation and in the real world, as well as preliminary results of retraining the CARLA policy using new data, in Appendix D. In Table 1 we compare the total cross-track errors caused by our adversarial attack against the expert lane-following controller. We observe in all 3 CARLA scenarios (averaged over 5 seeds each) that our adversarial attacks using gradient optimization consistently produce significant deviation from the lane center. When transferring these attacks back into the deployment scene, we see that although the magnitude of the deviation is reduced, we still retain a significant increase over the unperturbed or random search setting. The difference is likely due to visual imperfections in our surrogate NeRF simulator compared to the deployment scene. The random search perturbations are far less effective, remaining near the baseline unperturbed trajectory for 2 out of the 3 cases.

CARLA deployment:
Scenario     Unperturbed   Surrogate Scene Random   Surrogate Scene Gradient   CARLA Deployment Random   CARLA Deployment Gradient
Straight     1166          1132±7                   2347±49                    1193±19                   1702±160
Right        1315          2084±10                  4105±847                   1476±12                   2101±75
Left         1448          1460±8                   4125±124                   1158±163                  2240±574

Physical deployment:
Setup          Unperturbed   Surrogate Scene Random   Surrogate Scene Gradient   Physical Deployment Random   Physical Deployment Gradient
Green Screen   48            34±4                     157±1                      46±3                         248±72
Monitor        -             -                        -                          47±3                         76±48

Table 1: Comparison of the total cross-track error for all the scenarios tested. Results are shown for the following cases: (1) no attack in the deployment scene (unperturbed), (2) an adversarial attack (random or gradient) in the surrogate NeRF scene, (3) an attack in the deployment scene. We separate results from the CARLA and physical deployments, and show that gradients in our surrogate simulator are useful for finding adversarial attacks and that these attacks remain effective when transferred to the deployment environment.

For the real world experiment, we observed a similar result.
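For completeness, the random-search baseline described in Section 5.2 amounts to the following sketch, where evaluate_attack is a placeholder we introduce for a full rollout in the surrogate scene that returns the total cross-track error; all names are ours.

import numpy as np

def random_search_attack(theta_shape, evaluate_attack, n_samples=50, std=5.0, seed=0):
    """Baseline: sample Gaussian texture parameters and keep the strongest attack."""
    rng = np.random.default_rng(seed)
    best_theta, best_cte = None, -np.inf
    for _ in range(n_samples):
        theta = rng.normal(loc=0.0, scale=std, size=theta_shape)
        cte = evaluate_attack(theta)        # total cross-track error in the surrogate
        if cte > best_cte:
            best_theta, best_cte = theta, cte
    return best_theta, best_cte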
Random attacks consistently fail toelicit deviation from the driving policy both in the surrogate and deployment scenes. Over 5 randomseeds, not a single random attack was able to cause the vehicle to exit the track. Gradient attackson the other hand are reliably able to find strong attacks with little variance in the surrogate scene.When transferring our attacks to the real world, we find the attacks to retain their strength in thegreen screen setup. The strength of the attack is relatively diminished when using the monitor toproject the attack but is nonetheless consistently higher than the random attack and causes the vehicleto understeer and exit the track on occasion. We suspect this is due to the display properties of themonitor which can alter the appearance of the adversarial perturbation.6 LimitationsDespite showing the ability to generate 3D-consistent adversarial scene perturbations, there are afew avenues for improvement. First, we assume that the vision-based driving policy is differen-tiable. Recent works have shown high potential for modular end-to-end learned policies that couldleverage synergies between vision and planning, such as neural motion planning [32], transfuser [33]and many others [34]. We discuss three potential methods to handle non-differentiable policies inAppendix E.1. Second, while we do optimize for both adversarial textures and object poses, ourexperiments in section D.3 of the Appendix show that the latter produces significantly non-smoothloss landscapes that necessitated multi-start gradient optimization methods to handle local minima.7 ConclusionWe presented a method for generating 3D-consistent object-based adversarial perturbations in au-tonomous driving scenarios. Unlike previous approaches that rely on making edits on top of fixedpre-recorded data or black-box simulators, we develop a differentiable simulator directly with a neu-ral radiance field representation of geometry and texture of a scene that admits gradients through therendering of camera and depth observations. Through alpha-compositing, we can introduce newobjects also represented as neural radiance fields into the scene and optimize color perturbations ofthe objects. We validate our framework both in simulation and on a real-world RC car race trackdriving scenario showing successful sim-to-real transfer of discovered attacks. While our particularimplementation is only a first step towards demonstrating NeRF based adversarial attack genera-tion, we believe that our framework shows a promising new direction for automatic evaluation ofautonomous vehicles. We expect our method to benefit greatly from continued improvements beingmade to neural rendering and their wider adoption for A V/robotic simulation.8References[1] K. Nidhi and S. M. Paddock. Driving to Safety: How Many Miles of Driving Would ItTake to Demonstrate Autonomous Vehicle Reliability? https://www.rand.org/pubs/research_reports/RR1478.html , 2016. [Online; accessed 19-July-2018].[2] A. C. Madrigal. Inside waymo’s secret world for training self-driving cars. The Atlantic , Aug2017. URL https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/ .[3] J. Tu, H. Li, X. Yan, M. Ren, Y . Chen, M. Liang, E. Bitar, E. Yumer, and R. Urtasun. Exploringadversarial robustness of multi-sensor perception systems in self driving, Jan. 2022. URLhttps://arxiv.org/abs/2101.06784 .[4] J. Tu, M. Ren, S. Manivasagam, M. Liang, B. Yang, R. Du, F. Cheng, and R. 
Urtasun.Physically Realizable Adversarial Examples for LiDAR Object Detection, Apr. 2020. URLhttps://arxiv.org/abs/2004.00543 .[5] J. Yang, A. Boloor, A. Chakrabarti, X. Zhang, and Y . V orobeychik. Finding Physical Adversar-ial Examples for Autonomous Driving with Fast and Differentiable Image Compositing, June2021. URL https://arxiv.org/abs/2010.08844 .[6] Y . Cao, C. Xiao, D. Yang, J. Fang, R. Yang, M. Liu, and B. Li. Adversarial objects against lidar-based autonomous driving systems. CoRR , abs/1907.05418, 2019. URL http://arxiv.org/abs/1907.05418 .[7] S. W. Kim, J. Philion, A. Torralba, and S. Fidler. Drivegan: Towards a controllable high-quality neural simulation. In Proceedings of the IEEE/CVF Conference on Computer Visionand Pattern Recognition (CVPR) , pages 5820–5829, June 2021.[8] Y . Chen, F. Rong, S. Duggal, S. Wang, X. Yan, S. Manivasagam, S. Xue, E. Yumer, and R. Ur-tasun. Geosim: Realistic video simulation via geometry-aware composition for self-driving.InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR) , pages 7230–7240, June 2021.[9] Z. Yang, Y . Chen, J. Wang, S. Manivasagam, W.-C. Ma, A. J. Yang, and R. Urtasun. Unisim: Aneural closed-loop sensor simulator. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition (CVPR) , pages 1389–1399, June 2023.[10] J. Wang, A. Pun, J. Tu, S. Manivasagam, A. Sadat, S. Casas, M. Ren, and R. Urtasun. AdvSim:Generating Safety-Critical Scenarios for Self-Driving Vehicles, Jan. 2022. URL https://arxiv.org/abs/2101.06549 .[11] D. Rempe, J. Philion, L. J. Guibas, S. Fidler, and O. Litany. Generating useful accident-pronedriving scenarios via a learned traffic prior. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 17305–17315, 2022.[12] M. Igl, D. Kim, A. Kuefler, P. Mougin, P. Shah, K. Shiarlis, D. Anguelov, M. Palatucci,B. White, and S. Whiteson. Symphony: Learning realistic and diverse agents for autonomousdriving simulation, 2022. URL https://arxiv.org/abs/2205.03195 .[13] Y . Abeysirigoonawardena, F. Shkurti, and G. Dudek. Generating adversarial driving scenariosin high-fidelity simulators. In 2019 International Conference on Robotics and Automation(ICRA) , pages 8271–8277. IEEE, 2019.[14] N. Patel, P. Krishnamurthy, S. Garg, and F. Khorrami. Overriding Autonomous Driving Sys-tems Using Adaptive Adversarial Billboards. IEEE Transactions on Intelligent TransportationSystems , 23(8):11386–11396, Aug. 2022.9[15] A. Amini, T.-H. Wang, I. Gilitschenski, W. Schwarting, Z. Liu, S. Han, S. Karaman, andD. Rus. Vista 2.0: An open, data-driven simulator for multimodal sensing and policy learn-ing for autonomous vehicles. In 2022 International Conference on Robotics and Automation(ICRA) , pages 2419–2426, 2022.[16] M. Koren, S. Alsaif, R. Lee, and M. J. Kochenderfer. Adaptive stress testing for autonomousvehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV) , pages 1–7. IEEE, 2018.[17] C. Innes and S. Ramamoorthy. Testing rare downstream safety violations via upstream adaptivesampling of perception error models. In 2023 IEEE International Conference on Robotics andAutomation (ICRA) , pages 12744–12750. IEEE, 2023.[18] N. Akhtar, A. Mian, N. Kardan, and M. Shah. Advances in adversarial attacks and defenses incomputer vision: A survey, Sept. 2021. URL https://arxiv.org/abs/2108.00401 .[19] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok. 
Synthesizing robust adversarial examples.InProceedings of the 35th International Conference on Machine Learning , volume 80, pages284–293, 2018.[20] P. Buddareddygari, T. Zhang, Y . Yang, and Y . Ren. Targeted attack on deep rl-based au-tonomous driving with learned visual patterns. In 2022 International Conference on Roboticsand Automation (ICRA) , pages 10571–10577, 2022.[21] A. Tewari, O. Fried, J. Thies, V . Sitzmann, S. Lombardi, Z. Xu, T. Simon, M. Nießner,E. Tretschk, L. Liu, B. Mildenhall, P. Srinivasan, R. Pandey, S. Orts-Escolano, S. Fanello,M. Guo, G. Wetzstein, J.-Y . Zhu, C. Theobalt, M. Agrawala, D. B. Goldman, and M. Zollh ̈ofer.Advances in neural rendering. In ACM SIGGRAPH 2021 Courses , SIGGRAPH ’21, 2021.[22] A. Byravan, J. Humplik, L. Hasenclever, A. Brussee, F. Nori, T. Haarnoja, B. Moran, S. Bohez,F. Sadeghi, B. Vujatovic, and N. Heess. Nerf2real: Sim2real transfer of vision-guided bipedalmotion skills using neural radiance fields, 2022. URL https://arxiv.org/abs/2210.04932 .[23] S. Fridovich-Keil, A. Yu, M. Tancik, Q. Chen, B. Recht, and A. Kanazawa. Plenoxels: Radi-ance fields without neural networks. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 5501–5510, 2022.[24] L. Liu, J. Gu, K. Zaw Lin, T.-S. Chua, and C. Theobalt. Neural sparse voxel fields. Advancesin Neural Information Processing Systems , 33:15651–15663, 2020.[25] A. Chen, Z. Xu, A. Geiger, J. Yu, and H. Su. Tensorf: Tensorial radiance fields. EuropeanConference on Computer Vision (ECCV) , 2022.[26] T. M ̈uller, A. Evans, C. Schied, and A. Keller. Instant neural graphics primitives with a mul-tiresolution hash encoding. ACM Transactions on Graphics , 41(4):1–15, July 2022.[27] N. Hanselmann, K. Renz, K. Chitta, A. Bhattacharyya, and A. Geiger. King: Generatingsafety-critical driving scenarios for robust imitation via kinematics gradients, 2022. URLhttps://arxiv.org/abs/2204.13683 .[28] L. S. Pontryagin. Mathematical theory of optimal processes . CRC press, 1987.[29] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani,S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An Imperative Style,High-Performance Deep Learning Library. In H. Wallach, H. Larochelle, A. Beygelzimer,F. d’Alch ́e Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information ProcessingSystems 32 , pages 8024–8035. Curran Associates, Inc., 2019.10[30] K. M. Jatavallabhula, M. Macklin, F. Golemo, V . V oleti, L. Petrini, M. Weiss, B. Con-sidine, J. Parent-Levesque, K. Xie, K. Erleben, L. Paull, F. Shkurti, D. Nowrouzezahrai,and S. Fidler. gradsim: Differentiable simulation for system identification and visuomo-tor control. International Conference on Learning Representations (ICLR) , 2021. URLhttps://openreview.net/forum?id=c_E8kFWfhp0 .[31] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V . Koltun. CARLA: An open urban drivingsimulator. In Proceedings of the 1st Annual Conference on Robot Learning , pages 1–16, 2017.[32] W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, and R. Urtasun. End-to-end interpretableneural motion planner. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , pages 8660–8669, 2019.[33] K. Chitta, A. Prakash, B. Jaeger, Z. Yu, K. Renz, and A. Geiger. Transfuser: Imitation withtransformer-based sensor fusion for autonomous driving. 
Pattern Analysis and Machine Intel-ligence (PAMI) , 2022.[34] A. Tampuu, T. Matiisen, M. Semikin, D. Fishman, and N. Muhammad. A survey of end-to-end driving: Architectures and training methods. IEEE Transactions on Neural Networks andLearning Systems , 33(4):1364–1384, 2020.[35] M. Tancik, V . Casser, X. Yan, S. Pradhan, B. Mildenhall, P. P. Srinivasan, J. T. Barron, andH. Kretzschmar. Block-nerf: Scalable large scene neural view synthesis. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 8248–8258, 2022.[36] Z. Xie, J. Zhang, W. Li, F. Zhang, and L. Zhang. S-neRF: Neural radiance fields for streetviews. In International Conference on Learning Representations , 2023.[37] A. R. Kosiorek, H. Strathmann, D. Zoran, P. Moreno, R. Schneider, S. Mokr ́a, and D. J.Rezende. Nerf-vae: A geometry aware 3d scene generative model, 2021. URL https://arxiv.org/abs/2104.00587 .[38] M. Niemeyer and A. Geiger. Giraffe: Representing scenes as compositional generative neuralfeature fields. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) , 2021.[39] B. Yang, Y . Zhang, Y . Xu, Y . Li, H. Zhou, H. Bao, G. Zhang, and Z. Cui. Learningobject-compositional neural radiance field for editable scene rendering. In Proceedings ofthe IEEE/CVF International Conference on Computer Vision , pages 13779–13788, 2021.[40] S. Benaim, F. Warburg, P. E. Christensen, and S. Belongie. V olumetric disentanglement for 3dscene manipulation, 2022. URL https://arxiv.org/abs/2206.02776 .[41] A. Mirzaei, T. Aumentado-Armstrong, K. G. Derpanis, J. Kelly, M. A. Brubaker, I. Gilitschen-ski, and A. Levinshtein. Spin-nerf: Multiview segmentation and perceptual inpainting withneural radiance fields, 2022. URL https://arxiv.org/abs/2211.12254 .[42] P. P. Srinivasan, B. Deng, X. Zhang, M. Tancik, B. Mildenhall, and J. T. Barron. Nerv: Neu-ral reflectance and visibility fields for relighting and view synthesis. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 7495–7504, 2021.[43] W. Ye, S. Chen, C. Bao, H. Bao, M. Pollefeys, Z. Cui, and G. Zhang. Intrinsicnerf: Learningintrinsic neural radiance fields for editable novel view synthesis, 2022. URL https://arxiv.org/abs/2210.00647 .[44] Y . Xu, M. Chai, Z. Shi, S. Peng, S. Ivan, S. Aliaksandr, C. Yang, Y . Shen, H.-Y . Lee, B. Zhou,and T. Sergy. Discoscene: Spatially disentangled generative radiance field for controllable3d-aware scene synthesis, 2022. URL https://arxiv.org/abs/2212.11984 .11[45] A. Kundu, K. Genova, X. Yin, A. Fathi, C. Pantofaru, L. J. Guibas, A. Tagliasacchi, F. Del-laert, and T. Funkhouser. Panoptic neural fields: A semantic object-aware neural scene rep-resentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 12871–12881, 2022.[46] L. Yen-Chen, P. Florence, J. T. Barron, A. Rodriguez, P. Isola, and T.-Y . Lin. Inerf: Invertingneural radiance fields for pose estimation. In 2021 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , page 1323–1330, 2021.[47] M. Adamkiewicz, T. Chen, A. Caccavale, R. Gardner, P. Culbertson, J. Bohg, and M. Schwa-ger. Vision-only robot navigation in a neural radiance world. IEEE Robotics and AutomationLetters , 7(2):4606–4613, 2022.[48] S. L. Cleac’h, H. Yu, M. Guo, T. A. Howell, R. Gao, J. Wu, Z. Manchester, and M. Schwager.Differentiable physics simulation of dynamics-augmented neural objects, 2022. URL https://arxiv.org/abs/2210.09420 .[49] D. Driess, Z. Huang, Y . Li, R. 
Tedrake, and M. Toussaint. Learning multi-object dynamicswith compositional neural radiance fields, 2022. URL https://arxiv.org/abs/2202.11855 .[50] M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt,K. Ehsani, A. Kembhavi, and A. Farhadi. Objaverse: A universe of annotated 3d objects,2022. URL https://arxiv.org/abs/2212.08051 .[51] B. O. Community. Blender - a 3D modelling and rendering package . Blender Foundation,Stichting Blender Foundation, Amsterdam, 2018. URL http://www.blender.org .[52] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel,M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars, 2016. URLhttps://arxiv.org/abs/1604.07316 .[53] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predic-tion to no-regret online learning. In Proceedings of the fourteenth international conferenceon artificial intelligence and statistics , pages 627–635. JMLR Workshop and Conference Pro-ceedings, 2011.[54] F. Codevilla, M. M ̈uller, A. Dosovitskiy, A. M. L ́opez, and V . Koltun. End-to-end driving viaconditional imitation learning. CoRR , abs/1710.02410, 2017. URL http://arxiv.org/abs/1710.02410 .[55] J. L. Sch ̈onberger and J.-M. Frahm. Structure-from-motion revisited. In Conference on Com-puter Vision and Pattern Recognition (CVPR) , 2016.[56] J. L. Sch ̈onberger, E. Zheng, M. Pollefeys, and J.-M. Frahm. Pixelwise view selection forunstructured multi-view stereo. In European Conference on Computer Vision (ECCV) , 2016.[57] N. Ravi, J. Reizenstein, D. Novotny, T. Gordon, W.-Y . Lo, J. Johnson, and G. Gkioxari. Ac-celerating 3d deep learning with pytorch3d. arXiv:2007.08501 , 2020.[58] S. Shirobokov, V . Belavin, M. Kagan, A. Ustyuzhanin, and A. G. Baydin. Black-box optimiza-tion with local generative surrogates. Advances in Neural Information Processing Systems , 33:14650–14662, 2020.[59] Z. Wang, M. Zoghi, F. Hutter, D. Matheson, N. De Freitas, et al. Bayesian optimization in highdimensions via random embeddings. In IJCAI , pages 1778–1784, 2013.[60] J. Wu, M. Poloczek, A. G. Wilson, and P. I. Frazier. Bayesian optimization with gradients,2018.12[61] H. J. T. Suh, M. Simchowitz, K. Zhang, and R. Tedrake. Do differentiable simulators givebetter policy gradients?, 2022.[62] P. Vicol, L. Metz, and J. Sohl-Dickstein. Unbiased gradient estimation in unrolled computationgraphs with persistent evolution strategies, 2021.[63] J. Schulman, N. Heess, T. Weber, and P. Abbeel. Gradient estimation using stochastic compu-tation graphs. Advances in neural information processing systems , 28, 2015.[64] R. Astudillo and P. I. Frazier. Thinking inside the box: A tutorial on grey-box bayesian opti-mization. In 2021 Winter Simulation Conference (WSC) , pages 1–15. IEEE, 2021.[65] M. Vlastelica, A. Paulus, V . Musil, G. Martius, and M. Rol ́ınek. Differentiation of blackboxcombinatorial solvers. arXiv preprint arXiv:1912.02175 , 2019.[66] A. Agrawal, B. Amos, S. Barratt, S. Boyd, S. Diamond, and J. Z. Kolter. Differentiable convexoptimization layers. Advances in neural information processing systems , 32, 2019.13A Appendix: Additional BackgroundA.1 Neural RenderingDifferentiable rendering . NeRFs represent scenes as emissive volumetric media [26]. Unlikesurface rendering, volumetric rendering does not suffer from explicit hard discontinuities, whichare difficult to handle for traditional surface rendering methods[21]. 
We exploit the differentiablevolume rendering of NeRFs, to robustly compute efficient gradients for arbitrary geometries.Complex scene reconstruction . Works such as BlockNeRF [35] and S-NeRF [36] show great po-tential for automatically capturing street-level scenes relevant to autonomous vehicle simulation.Unlike traditional simulators, these neural representations are directly trained on raw sensor cap-tures, thereby obtaining high-fidelity visual reconstruction without laborious asset creation.Composition and editing . Recent works have extended the static single scene setting of NeRF tocomposition of NeRFs, scene disentanglement, as well as editing and relighting. Specifically, [37]encodes scenes with latent codes from which new scenes can be generated. [38], [39] and [40] intro-duce compositional radiance fields to represent different objects and realize scene decomposition.[41] utilizes 2D segmentation information to perform 3D scene inpainting. [42] and [43] decomposecolor into different illumination components. [44] [45] learn priors from big datasets of images todisentangle existing scenes.Control and neural rendering models . Neural rendering has seen utility in a few different opti-mization tasks for robotics such as pose estimation [46]. NeRFs have also seen direct applicationto trajectory optimization, including utilizing NeRF’s density as approximate occupancy [47], col-lision, and friction constraints [48], effectively allowing NeRF to act as a differentiable physicssimulator. On the other hand, [49] learns an additional latent dynamics of NeRF objects. In [22],physics simulations of known objects are combined with their learned NeRF to create high fidelitydynamic environments for training deep reinforcement learning agents, that can transfer to the realworld. We use composition of NeRFs in our work to insert and optimize adversarial objects. This isshown in Fig. 3, and the details are in Sections 4.1 and 4.2.B Appendix: Method DetailsB.1 NeRFVolume Rendering A neural radiance field consists of two fields, σφ(x), Lψ(x, ω)that encodethe density σat every location xand the outgoing radiance Lat that location in the direction ω.In NeRFs, both of these functions are represented by parameterized differentiable functions, suchas neural networks. Given a radiance field, we are able to march rays through an image planeand reconstruct a camera image from a given camera pose and intrinsic matrix using the renderingfunction reiterated here for clarity:I(x, ω) =ZT0σ(t) expZt0σ(ˆt)dˆtL(t,−ω)dt (B.1)Where L(t,·)andσ(t)are shorthands for L(tω+x,·)andσ(tω+x), and I(x, ω)is the intensityat a location xgiven in world space in the direction ω.Compositing For our adversarial attacks to contain 3D semantics, it is crucial to insert the pertur-bation in a 3D aware manner. For this we utilize another feature of neural radiance fields, whichis to output opacity values. Specifically, in Eqn. (1) we can extract the transmittance component,which acts as a measure of the pixel transparency α:α(x, ω) = expZt0σ(ˆt)dˆt(B.2)14Furthermore, we can replace the radiance term with distance in (1) to extract the expected termina-tion depth of a ray z:z(x, ω) =ZT0tσ(t)α(t)dt (B.3)We consider the case of two radiance fields, the object radiance field σo, Loand the backgroundradiance field σs, Ls. 
We use a transformation matrix to correspond ray coordinates between the scene and the object radiance field. By applying equations (1), (B.2), (B.3) to a single ray that corresponds to both the base scene and the object radiance field, we obtain the values c_o, \alpha_o, z_o, c_s, \alpha_s, z_s respectively, where \alpha_* is the opacity and z_* is the depth along the ray. We denote the foreground and background values at a pixel as

f = \arg\min_{o,s}(z_s, z_o)    (B.4)
b = \arg\max_{o,s}(z_s, z_o)    (B.5)

The final blended color is then given by:

c = \frac{\alpha_f c_f + (1 - \alpha_f)\,\alpha_b c_b}{\alpha_f + \alpha_b (1 - \alpha_f)}    (B.6)

In the case of multiple object NeRFs, we simply repeat the alpha blending for each object to composite them all into the same scene.

B.2 Vehicle Dynamics

The dynamics in equation (5) can take multiple forms; for the CARLA experiments, we choose the simplest kinematic model of a car, a Dubins vehicle:

\dot{x} = \begin{bmatrix} v\cos\theta \\ v\sin\theta \\ u \end{bmatrix}    (B.7)

For the purposes of the CARLA deployment environment, we find that it is sufficient to consider the kinematic model with fixed velocity and only angular control. Thus, our imitation learning policy in Eqn. (3) only outputs steering commands. We note that our approach is applicable to any dynamics model, as long as it is differentiable. For the real-world experiments, we opted for a fixed-velocity Ackermann steering model:

\dot{x} = \begin{bmatrix} v\cos\theta \\ v\sin\theta \\ \frac{v}{l}\tan(\theta) \end{bmatrix}    (B.8)

where l is the robot wheelbase.

B.3 Optimization Details

As described in Section 4.1, following prior work, we do not propagate gradients of camera parameters through the sensor model function. Specifically, we set

o_t = h_{\gamma,\theta}(\text{stop\_gradient}(x_t))    (B.9)

Thus, gradients of the observation will only be taken with respect to the adversarial object parameters \theta and not the state of the car. The gradient with respect to x_t corresponds to exploiting higher-order effects of how the observation would change if the car was looking in a slightly different direction due to previous steps of the attacks, and leads to a very non-smooth loss objective that is not useful for finding practical attacks. For experiments in the real world, we found the attacks were sometimes very sensitive to the robot's pose. To alleviate this issue, we chose to optimize multiple randomly sampled initial poses simultaneously. The samples were normally distributed around the nominal car starting location, with a standard deviation of 0.1.

B.3.1 Optimization parameters

In all our experiments, our optimization parameters \theta correspond to values on the NGP voxel grid. Since we have removed the decoder, the grid values directly correspond to the color for a given position in the volume. Due to this, the parameterization even for small models can get quite large, on the order of 5 million parameters for the hydrant.

C Appendix: Experimental Details

C.1 NeRF Models

When training the surrogate NeRF models of the background scene and objects, we use the default Instant-NGP hyperparameters and optimize over 50 epochs using the Adam optimizer. The source 3D assets for our objects were obtained from the Objaverse dataset [50] and posed images produced by rendering with Blender [51]. For our object models, we choose to use Instant-NGP without a decoder, instead directly encoding the colour values in the feature grid. Furthermore, we remove view dependence for better multi-view consistency. Finally, we use lower resolutions for the object feature grids as compared to the scene feature grids. The object feature grids contain resolutions up to 128^3 and 64^3 features for the car and hydrant, respectively.
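As a concrete illustration of the compositing rule in Eqns. (B.4)-(B.6) above, the per-ray outputs of the scene and object fields can be blended as sketched below. The function and variable names are ours, the sketch handles a single pixel, and it is not the authors' implementation.

import numpy as np

def composite_pixel(c_s, alpha_s, z_s, c_o, alpha_o, z_o):
    """Blend one object ray over one scene ray, following Eqns. (B.4)-(B.6).

    c_*: (3,) ray colors; alpha_*: scalar opacities; z_*: scalar expected depths.
    """
    # Eqns. (B.4)-(B.5): the ray that terminates closer is the foreground.
    if z_o < z_s:
        c_f, a_f, c_b, a_b = c_o, alpha_o, c_s, alpha_s
    else:
        c_f, a_f, c_b, a_b = c_s, alpha_s, c_o, alpha_o
    # Eqn. (B.6): alpha blending of foreground over background.
    num = a_f * c_f + (1.0 - a_f) * a_b * c_b
    den = a_f + a_b * (1.0 - a_f) + 1e-10
    return np.asarray(num) / den

# Multiple inserted objects are composited by repeating this blend object by object.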
Since our adversarialobjective does not have any smoothness constraint, we found it critical to use lower resolution gridsand remove the positionally encoded feature decoders to avoid aliasing effects.C.2 Driving Policy.We train our own policy on which the attack will be performed. Our policy is an end-to-end RGBimage neural network policy and the architecture is taken from [52]. We make a slight addition togoal condition the policy by adding a goal input to the linear component and increasing the capacityof the linear layers. The policy is trained via imitation learning, specifically DAgger [53], [54].Expert actions are given by a lane following controller tuned for the simulator that gets access tothe ground truth state, unlike the policy. The expert queried from various states random distancesfrom the center of the road to recover from. Furthermore, random noise augmentation is used on theimages during training to make the policy more robust to noisy observations.C.3 CARLAWe fit the background scene model using a dataset of 1800 images and their corresponding cameraposes, which provide a dense covering of the CARLA scene.When transferring our attacks back to the deployment scene, opacity values are usually not available.In order to evaluate our attacks, we assume that objects are opaque ( α= 1), and thus our method ofblending in Equation B.3 can be calculated using just the depth and color values. We observe fromexperiments on the CARLA simulator that this type of composition is sufficient for the evaluationin the deployment environment.Driving Policy. For our driving policy the initial training dataset of images is collected from theintersection in CARLA. We further fine-tuned the policy with some additional data collected fromour surrogate simulator to ensure that our policy is not trivially failing due to slight visual differ-ences. We use a total dataset of 120000 images in CARLA and 60000 images in the surrogatesimulator in order to train the policy. We validated our policy on a hold out validation set consistingof 12000 images captured purely from the surrogate simulator. All data were collected by runningthe expert on the 3 reference trajectories. The policy was trained using behaviour cloning, where wegave examples of recovery from deviation by collecting data from random start locations around thenominal trajectory.16Figure C.1: Picture of driving area for the real world scenario experiments.C.4 Real WorldWe fit the background to a room in the real world using a dataset of 2161 images captured from aniPhone camera at 4K resolution. We collect data covering the room by walking around, then attachthe iPhone to the robot to collect further data from the driving view points. The captured videos areprocessed using COLMAP [55, 56] for both camera intrinsic and the poses.Driving Policy. We train a driving policy to track a square track in the room marked by greentape, this policy was trained using an expert PID controller with global positioning supplied by theVICON system providing 9584 images. We further augment this again with 12000 images fromdriving data in the NeRF scene. An overview of our working area is given in Figure C.1.For all real world attacks we optimize the color of a cube in the surrogate NeRF scene, placed at oneof the corners such that the camera will encounter this cube as the car takes the turn.C.4.1 RobotWe carry out experiments using the RACECAR/J†platform. 
The robot is equipped with a ZED stereo camera, of which we only utilize the RGB data from the left sensor, which has been configured to a resolution of 366x188 at 10 frames per second. We operate the robot inside a VICON system that positions the robot at a rate of 50 Hz, streaming through a remotely connected computer that runs the policy as well as the image processing for some of the attacks.

C.4.2 Green Screen Attack

For the green screen attack, we utilized a VICON system to accurately position both our robot and the green screen target. Using the green screen target position, as well as the camera parameters, we project one face of the cube onto the input image to the policy. We opt to overlay the cube in this manner to keep the policy driving in real time and to ensure that there is no penalty on control frequency. The image composition is done at the remote computer where the controls are computed, which are then sent wirelessly to the robot to execute.

C.4.3 Monitor Attack

To replace the green screen with a physical object, we place a monitor and display the same attack as above on the monitor. We place the monitor in a location such that it is visually consistent with the NeRF and green screen attacks. For the monitor attack, we utilize a 27-inch monitor with a 16:9 aspect ratio. Since the adversarial objects optimized in earlier examples are cubes, we only use the center of the monitor to display the attack.

†https://racecarj.com/

Figure D.1: The performance of the driving policy before (left) and after (right) retraining on the discovered adversarial scenarios.

Scenario   Unperturbed (CARLA)   Attack Transfer in CARLA, Random   Attack Transfer in CARLA, Gradient   After Retraining, Gradient
Straight   1166                  1193±19                            1702±160                             1250
Right      1315                  1476±12                            2101±75                              1307
Left       1448                  1158±163                           2240±574                             1419

Table 2: Comparison of the total cross-track error for the retraining experiment over the 3 different trajectories. Results extend those of Table 1 in the main paper and are shown for the following cases: (1) no attack in CARLA (unperturbed), (2) an attack in the CARLA scene, (3) an attack in the CARLA scene after the driving policy is retrained using adversarial data.

D Appendix: Additional Experimental Results

D.1 Incorporating Discovered Adversarial Scenarios in the Training Set

Our primary focus in this paper was to discover adversarial attacks for the evaluation of a pretrained self-driving policy. Here we perform some preliminary investigations on fine-tuning our self-driving policies on both the old data and the adversarial attacks we found. Specifically, we take the attacks discovered by the gradient-based optimization and use them to collect additional imitation learning data. The collection is performed in the CARLA simulator using the depth compositing approach to insert the adversarial objects, as was done for the evaluation in the main paper. Apart from the object compositing, the data is collected in the same way as the original CARLA data used to train the base policy. We collect 24000 total frames over three trajectories with two different starting points. After fine-tuning our policy on the combination of the original dataset and the new adversarially augmented dataset, we evaluate the fine-tuned agent in the same scenario. We visualize the trajectories of the fine-tuned policy in Figure D.1 and report the total deviation compared to before fine-tuning in Table 2. A minimal sketch of this fine-tuning step is given below.
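The sketch below illustrates that fine-tuning step as behavior cloning of the steering policy on the union of the original and adversarially augmented datasets. The dataset objects, the policy network, and the hyperparameters are placeholders we introduce for illustration, not the exact training code.

import torch
from torch.utils.data import ConcatDataset, DataLoader

def finetune(policy, original_ds, adversarial_ds, epochs=10, lr=1e-4):
    """Behavior-cloning fine-tuning on original + adversarially augmented data.

    Each dataset is assumed to yield (image, expert_steering) pairs.
    """
    loader = DataLoader(ConcatDataset([original_ds, adversarial_ds]),
                        batch_size=64, shuffle=True)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for images, steering in loader:
            pred = policy(images)                       # predicted steering command
            loss = torch.nn.functional.mse_loss(pred.squeeze(-1), steering)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy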
We find that the policy is no longer susceptible to the adversarialattacks, even though the initial starting position for evaluation was unseen during training.18Figure D.2: Sample renderings of the left turn trajectory with the adversarial perturbations inCARLA from the ego vehicle’s point of view. Four different snapshots from the evolution of thetrajectory are shown.D.2 CARLA VisualizationsWe show first person visualizations of our discoverered adversarial attacks inserted back into theCARLA deployment simulator in Figure D.2. We note the smoothness of the texture discoveredby our method. Purely perceptual single-frame attacks typically exhibit a much higher frequencytexture.We show additional overhead trajectory views of adversarially attacked trajectories from oneCARLA scene in Figure D.3.D.3 Object Translation AttacksWe find in practice that the loss landscape with respect to the poses of inserted objects are extremelynon-smooth, as seen in Fig. D.4 We therefore investigated a mixed attack in NeRF where we usemulti-start (multi-seed) gradient optimization for the object poses and single-seed gradient optimiza-tion for the adversarial textures. For this experiment, we used a more robust policy trained with thedata from our color attacks, as well as additional examples of the adversarial objects in randomposes. We report the results in Table 3. During the course of optimization we randomly sample aset of 5 poses within the constraints, then we perform 10 gradient descent iterations to refine thecandidate solutions. We keep the best solution across 50 total evaluations. We see that even with amore robust policy, by combining gradients and multi-seed sampling we are able to discover somesignificant and transferable adversarial scenarios that incorporate both textures and object poses. Wenote that the straight trajectory is particular robust after the retraining, which causes the new gradientattacks to be less effective. We observe that modifying the poses of adversarial objects in additionto their textures allows the attacks to transfer even better to the deployment CARLA scene.CARLA NeRF Attack Transfer in CARLAScenario Unperturbed Multistart Gradient Random Only Multistart Gradient Random OnlyStraight 1248 1377±87 1265±21 1194±10 1164±23Right 1648 2682±175 2529±169 2707.±342 1981±170Left 1353 1523±27 1476±27 1808±401 1792±447Table 3: Total cross track error for attacking CARLA policy with random and gradient combined.19(a) Unperturbed (b) Attacks in NERF (c) TransferredFigure D.3: Overhead views of three distinct trajectories driven by the policy. (a) shows the policydriving behavior in CARLA when no adversarial perturbation is introduced. (b) shows the policydriving behavior in the surrogate simulator with the discovered adversarial perturbation. (c) showsthe same perturbation transferred to the deployment scene.Figure D.4: Loss landscape as a function of a car position for the straight scenario20CARLAScenario Differentiable RasterizationStraight 1647±38Right 1727±41Left 3397±180Table 4: Results of optimizing the texture of an adversarial mesh directly using PyTorch3D differ-entiable rasterizationFigure D.5: Overhead views of the car driving with adversarial PyTorch3D objects.D.4 Differentiable Rasterization for Adversarial TexturesIn this section, we perform an investigation on our method using different differentiable renderingmethods for the inserted objects. 
Instead of carrying out volume rendering, we use PyTorch3D[57]to carry out differentiable rasterization for our adversarial objects. We optimize the textures of carand hydrant models as in our experiments on Instant-NGP, however, we now render the objects us-ing PyTorch3d directly on top of CARLA background. We report the mean and standard deviationfrom 5 random seeds in table 4. Figure D.5 shows selected overhead views of the adversarial tra-jectories. We observe that using differentiable rasterization we are able to obtain results similar tovolume rendering. However, there still needs to be work done to determine how to use differentiablerasterization to generate adversarial attacks that are transferable to the real world.D.5 Real-world VisualizationsWe show aligned visualizations of the same adversarial real-world monitor attack in Figure D.6.E LimitationsE.1 Non-differentiable policiesIn our work, we indeed assume that the visual driving policy is end-to-end differentiable. Recentwork has shown the potential of modular end-to-end learned policies that can optimize the interac-tion between perception and planning [32, 34].(a) Surrogate Simulator (b) First person view (c) Third person viewFigure D.6: Real-world adversarial monitor attack visualizations.21However, our current method does have a clear limitation when it comes to non-differentiable poli-cies. Here we outline a few approaches to address this issue. If the policy is entirely black-box,then just as we have learned a differentiable surrogate scene to approximate the true simulator, itmay be possible to also learn a differentiable surrogate policy to approximate the behavior of thenon-differentiable policy [58]. Otherwise, we may need to resort to zeroth order optimization. Thiswould prove computationally challenging for high-dimensional parameter spaces, but methods suchas sparse Bayesian optimization could be applicable [59]. We can also leverage gradient informationfor Bayesian Optimization [60], or recent work on combined zeroth-order and first-order gradientestimators, such as [61] or [62].If we consider the structure of the policy, most modern driving pipelines will still contain largechunks of end-to-end differentiable components, such as perception modules, together with non-differentiable components. In these cases, the policy would be a mixed computation graph wheresome parts are differentiable and others are not. Gradient estimation in mixed stochastic compu-tation [63] graphs has been explored by many works in the contexts of probabilistic programming,variational inference and RL, and could be adapted for stochastic optimization methods. Gray-boxbayesian optimization [64] also considers splitting up functions into constituent parts, and tech-niques for leveraging gradient information for BO exist as well.Although parts of the driving policy may not be trivially differentiable via standard backpropaga-tion, there are also many techniques (such as implicit differentiation and informed perturbations)proposed for obtaining gradients of algorithms including combinatorial [65] and convex optimiza-tion [66] that may be present in the self-driving policy. These could account for many of the “non-differentiable” components of modern driving policies, and it is worth exploring the applications ofsuch techniques to them.22 |
pw-OTIYrGa | On the Utility of Koopman Operator Theory inLearning Dexterous Manipulation SkillsYunhai Han1, Madie Xie1, Ye Zhao1, Harish Ravichandar11Georgia Institute of Technologyfyhan389, manxie, yezhao, harish.ravichandar g@gatech.eduAbstract: Despite impressive dexterous manipulation capabilities enabled bylearning-based approaches, we are yet to witness widespread adoption beyondwell-resourced laboratories. This is likely due to practical limitations, such assignificant computational burden, inscrutable learned behaviors, sensitivity to ini-tialization, and the considerable technical expertise required for implementation.In this work, we investigate the utility of Koopman operator theory in alleviatingthese limitations. Koopman operators are simple yet powerful control-theoreticstructures to represent complex nonlinear dynamics as linear systems in higherdimensions. Motivated by the fact that complex nonlinear dynamics underlie dex-terous manipulation, we develop a Koopman operator-based imitation learningframework to learn the desired motions of both the robotic hand and the objectsimultaneously. We show that Koopman operators are surprisingly effective fordexterous manipulation and offer a number of unique benefits. Notably, policiescan be learned analytically , drastically reducing computation burden and elim-inating sensitivity to initialization and the need for painstaking hyperparameteroptimization. Our experiments reveal that a Koopman operator-based approachcan perform comparably to state-of-the-art imitation learning algorithms in termsof success rate and sample efficiency, while being an order of magnitude faster.Policy videos can be viewed at https://sites.google.com/view/kodex-corl.Keywords: Koopman Operator, Dexterous Manipulation1 IntroductionAutonomous dexterous manipulation skills are necessary for robots to successfully operate in a phys-ical world built by and for humans. However, achieving reliable robotic dexterous manipulation hasbeen a long-standing challenge [1] due to numerous factors, such as complex nonlinear dynamics,high-dimensional action spaces, and the expertise required to design bespoke controllers.Over the past decade, learning-based solutions have emerged as promising solutions that can addressthe challenges in acquiring dexterous manipulation skills. Indeed, these methods have been shown tobe capable of impressive feats, such as solving Rubik’s cubes [2], manipulating Baoding balls [3], re-trieving tool trays [4], and reorienting complex objects [5, 6]. However, existing learning approachessuffer from practical limitations that hinder their widespread adoption. First, implementing exist-ing algorithms requires significant technical expertise and familiarity with modern machine learninginfrastructure (e.g., knowledge of complex learning algorithms and well-established deep learn-ing software frameworks). Second, training policies consume significant computational resources.Third, while existing approaches have achieved impressive SOTA performance, these results tend torequire painstaking efforts to tune hyperparameters and architectures. Fourth, performance tends tobe highly sensitive to parameter initialization.In this work, we investigate the utility of Koopman operator theory in alleviating the limitationsof existing learning-based approaches as identified above. The Koopman operator theory helpsrepresent arbitrary nonlinear dynamics in finite dimensional spaces as linear dynamics in an infinite-dimensional Hilbert space [7]. 
While this equivalence is exact and fascinating from a theoretical standpoint, it is not tractable. However, recent advances have enabled the approximation of this equivalence in higher but finite-dimensional spaces by learning the operator directly from data [8].

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

[Figure 1 schematic: demonstrations feed a lifted Koopman reference dynamics block $K$ (Koopman operator) that maps $(x_r(t), x_o(t))$ to $(x_r(t+1), x_o(t+1))$; during execution, an inverse dynamics controller maps $(x_r(t), \hat{x}_r(t+1))$ to the command $u(t)$.]
Figure 1: KODex simultaneously encodes complex nonlinear dynamics of the desired motion of both the robot state ($x_r$) and the object state ($x_o$) as a linear dynamical system in a higher-dimensional space by learning a Koopman operator $K$ directly from demonstrations. Further, KODex learns an inverse dynamics controller to track the robot reference trajectory ($\{\hat{x}_r(t)\}_{t=1}^{T}$) generated by the lifted linear system.

We develop a novel imitation learning framework, dubbed Koopman Operator-based Dexterous Manipulation (KODex), to evaluate the utility of Koopman operator theory for dexterous manipulation (see Fig. 1). Specifically, we model desired behaviors as solutions to nonlinear dynamical systems and learn Koopman operators that define approximately-equivalent linear dynamics in higher-dimensional spaces. Note that it is insufficient to exclusively focus on the robot's motion, as the objective of dexterous manipulation is centered on the object's motion [1]. As such, KODex simultaneously learns the desired motions of both the robot and the object from demonstrations. To eliminate the need for an expertly-tuned controller, KODex utilizes a learned inverse dynamics controller to track the reference trajectory generated by the learned dynamical system.

A significant benefit of learning Koopman operators from data is that it lends itself to an analytical solution. As such, KODex is simple to implement and does not require expertise and familiarity with state-of-the-art (SOTA) machine learning infrastructure. Instead of painstaking hyperparameter optimization, we show that generic and task-agnostic polynomial lifting functions are sufficient for KODex to learn diverse dexterous manipulation skills. Further, KODex offers consistent and predictable performance since the learning process is analytical and thus not sensitive to parameter initialization. Finally, given that KODex learns a linear dynamical system, one can readily inspect the learned behaviors using a wide array of control-theoretic tools.

We carry out extensive evaluations of KODex within the context of four dexterous manipulation skills on the simulated Adroit hand, an established experimental platform for dexterous manipulation [9]. Further, we compare KODex against SOTA imitation learning approaches in terms of general efficacy, computational efficiency, and sample efficiency. Our results demonstrate that KODex is at least an order of magnitude faster to train than SOTA imitation learning algorithms, while achieving comparable sample efficiency and task success rates. These results suggest that Koopman operators can be effective, efficient, and reliable tools to learn dexterous manipulation skills and to reduce the barriers to widespread adoption.

2 Related Work

In this section, we contextualize our contributions within relevant sub-fields.

Learning Manipulation Skills as Dynamical Systems: Our work falls into the category of dynamical-system-based imitation learning methods for manipulation [10].
Made popular by theDynamics Movement Primitives (DMPs) [11], these methods model robot motions as solutions toa learnable dynamical system. The past decade witnessed a plethora of approaches built upon thesame principle (e.g., [12–17]), creating increasingly-capable LfD tools for manipulation. Robust-ness to perturbations, high sample efficiency, and provable convergence are all but a few examples ofthe many advantages of dynamical-system-based approaches. These approaches tend to be highlystructured and leverage control-theoretic and topological tools to learn complex desired motionswith unparalleled sample efficiency. Recent work also embeded the dynamical systems structureinto deep neural networks to enable end-to-end learning [18]. These approaches were primarilydesigned to capture low-dimensional end-effector skills for serial-link manipulators. In contrast,our work investigates the utility of Koopman operators in learning dexterous manipulation skills onhigh-DOF platforms.2Learning Dexterous Manipulation Skills : Deep Reinforcement Learning (RL) has been dominat-ing the field of dexterous manipulation recently, enabling an impressive array of skills [3, 19, 20]. Apopular approach demonstrated that a multi-finger hand can learn to solve the Rubik’s cube OpenAIet al. [2]. Recently, Chen et al. [5] developed a model-free RL framework capable of reorientingover 2000 differently-shaped objects, and Khandate et al. [6] combined sampling-based planningand model-free RL methods on the same task. Despite impressive successes, RL-based algorithmssuffer from poor sample efficiency and notoriously-difficult training procedures. In contrast, Imita-tion learning (IL) aims to improve sample efficiency by leveraging expert demonstrations [10, 21].However, most existing IL-based methods (including those discussed in Section 2) focus on lower-DOF manipulators and do not scale well to high-dimensional systems. Indeed, there are at leasttwo recent notable exceptions to this limitation: Xie et al. [4] developed a highly structured ILmethod to learn dexterous manipulation skills from demonstrations in the form of a dynamical sys-tem; Arunachalam et al. [22] introduced a mixed-reality framework to collect high-quality demon-strations and learn dexterous manipulation skills by leveraging visual representations and motionretargeting. Researchers have also combined IL with RL to get the best of both approaches andhas been able to achieve impressive performance (e.g., [9, 23]). A common attribute of all exist-ing learning approaches to dexterous manipulation is that they rely on significant computationalresources and user expertise for implementation and hyperparameter tuning. Further, the effec-tiveness of these approaches is highly sensitive to parameter initialization [24]. In stark contrast,KODex encodes complex skills as dynamical systems which are analytically extracted from demon-strations. As such, KODex incurs a significantly smaller computational burden and eliminates thedependence on painstaking hyperparameter tuning and unreliable numerical optimization. Further,unlike opaque deep neural networks, KODex learns linear dynamical systems that can be inspectedvia control-theoretic tools.Koopman Operators in Robotics : Recently, Koopman operator theory has proven beneficial invarious robotic systems, such as differential drive robots [25], spherical and serial-link manipula-tors [26], autonomous excavators [27, 28], and soft robotic manipulators [29, 30]. 
However, the systems investigated in these works are low-dimensional. In contrast, our work is focused on evaluating the effectiveness of Koopman operators in learning skills for a high-dimensional system with complex dynamics (i.e., a multi-fingered hand). Further, prior works have not sufficiently investigated the relative benefits of leveraging Koopman operators over SOTA neural network-based approaches, and the circumstances under which these benefits hold. In our work, we thoroughly evaluate KODex against SOTA imitation learning methods within the context of multiple dexterous manipulation tasks.

3 Preliminary: Koopman Operator Theory

We begin by providing a brief introduction to Koopman operator theory [7].

Koopman Representation: Consider a discrete-time autonomous nonlinear dynamical system
$$x(t+1) = F(x(t)), \qquad (1)$$
where $x(t) \in \mathcal{X} \subseteq \mathbb{R}^n$ is the state at time $t$, and $F(\cdot): \mathbb{R}^n \to \mathbb{R}^n$ is a nonlinear function.

To represent the nonlinear dynamical system in (1) as a linear system, we begin by introducing a set of observables using the so-called lifting function $g: \mathcal{X} \to \mathcal{O}$, where $\mathcal{O}$ is the space of observables. We can now define the Koopman operator $\mathcal{K}$, an infinite-dimensional operator on the lifting function $g(\cdot)$ for the discrete-time system defined in (1), as follows:
$$[\mathcal{K}g](x(t)) = g(F(x(t))) = g(x(t+1)) \qquad (2)$$
If the observables belong to a vector space, the operator $\mathcal{K}$ can be seen as an infinite-dimensional linear map that describes the evolution of the observables as follows:
$$g(x(t+1)) = \mathcal{K}\, g(x(t)) \qquad (3)$$
Therefore, the Koopman operator $\mathcal{K}$ linearly propagates forward the infinite-dimensional lifted states (i.e., observables). In practice, we do not benefit from this representation since it is infinite-dimensional. However, we can approximate $\mathcal{K}$ using a matrix $K \in \mathbb{R}^{p \times p}$ and define a finite set of observables $\phi(x(t)) \in \mathbb{R}^p$. Thus, we can rewrite the relationship in (3) as
$$\phi(x(t+1)) = K\,\phi(x(t)) + r(x(t)), \qquad (4)$$
where $r(x(t)) \in \mathbb{R}^p$ is the residual error caused by the finite-dimensional approximation, which can be arbitrarily reduced based on the choice of $p$.

Learning the Koopman Operator from Data: The matrix operator $K$ can be inferred from a dataset $\mathcal{D} = [x(1), x(2), \ldots, x(T)]$, which contains the solution to the system in (1) for $T$ time steps. Given the choice of observables $\phi(\cdot)$, the finite-dimensional Koopman matrix $K$ is computed by minimizing the approximation error defined in (4). Specifically, we can obtain $K$ from $\mathcal{D}$ by minimizing the cost function $J(K)$ given below:
$$J(K) = \frac{1}{2}\sum_{t=1}^{T-1} \lVert r(x(t)) \rVert^2 = \frac{1}{2}\sum_{t=1}^{T-1} \lVert \phi(x(t+1)) - K\,\phi(x(t)) \rVert^2 \qquad (5)$$
Note that minimizing $J(K)$ amounts to solving a least-squares problem, whose solution is given by [8]
$$K = A\,G^{\dagger}, \quad A = \frac{1}{T-1}\sum_{t=1}^{T-1} \phi(x(t+1)) \otimes \phi(x(t)), \quad G = \frac{1}{T-1}\sum_{t=1}^{T-1} \phi(x(t)) \otimes \phi(x(t)) \qquad (6)$$
where $G^{\dagger}$ denotes the Moore–Penrose inverse of $G$ (it can be efficiently computed using the scipy.linalg.pinv(G) function from the SciPy library), and $\otimes$ denotes the outer product.
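To make the least-squares construction in (5)-(6) concrete, the following is a minimal sketch of computing $K$ from a single state trajectory, given some lifting function `lift` that maps a state to its observable vector. The function names are illustrative, not taken from the authors' code.

```python
import numpy as np
from scipy.linalg import pinv

def fit_koopman_matrix(states, lift):
    """Estimate K from one trajectory via the least-squares solution K = A G^+.

    states: array of shape (T, n), the state trajectory x(1), ..., x(T).
    lift:   callable mapping a state x -> observable vector phi(x) of shape (p,).
    """
    Phi = np.stack([lift(x) for x in states])   # shape (T, p)
    Phi_t, Phi_next = Phi[:-1], Phi[1:]         # phi(x(t)) and phi(x(t+1)) pairs
    T = len(states)
    # A = (1/(T-1)) sum_t phi(x(t+1)) phi(x(t))^T ;  G = (1/(T-1)) sum_t phi(x(t)) phi(x(t))^T
    A = Phi_next.T @ Phi_t / (T - 1)
    G = Phi_t.T @ Phi_t / (T - 1)
    return A @ pinv(G)                          # Koopman matrix K, shape (p, p)
```

Extending this to multiple demonstrations, as in (9) below, only changes how the consecutive state pairs are pooled before averaging.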
4 Learning Koopman Operators for Dexterous Manipulation

We begin by introducing our framework to model dexterous manipulation skills as nonlinear dynamics and discuss the importance of incorporating object states into the system (Section 4.1). Next, we describe how KODex learns the reference dynamics for a given skill from demonstrations (Section 4.2). (We use the term "reference dynamics" to describe a fictitious dynamical system that encodes task-specific ideal trajectories in the configuration space, not the physical robot dynamics.) Then, we discuss how to learn a low-level controller, also from demonstrations, in order to faithfully track the reference trajectories generated by KODex (Section 4.3). Finally, we discuss policy execution (Section 4.4). An overall pseudo-code for KODex can be found in Appendix A.

4.1 Modeling Dexterous Manipulation Skills

A central principle behind KODex is that the desired behavior of a robot can be represented using a dynamical system. Note that, unlike other kinds of manipulation skills (e.g., end-effector skills of multi-link manipulators), dexterous manipulation is explicitly concerned with how an object moves as a result of the robot's motion [1]. As such, KODex captures the desired motion of the robot along with that of the object. To this end, we define the state at time $t$ as $x(t) = [x_r(t)^\top, x_o(t)^\top]^\top$, where $x_r(t) \in \mathcal{X}_r \subseteq \mathbb{R}^n$ and $x_o(t) \in \mathcal{X}_o \subseteq \mathbb{R}^m$ represent the state of the robot and the object, respectively, at time $t$. As such, the dynamical system we wish to capture is
$$x(t+1) = F(x(t)) \qquad (7)$$
where $F(\cdot): \mathcal{X}_r \times \mathcal{X}_o \to \mathcal{X}_r \times \mathcal{X}_o$ denotes the true dynamics that govern the interdependent motions of the robot and the object. Note that this system is time-invariant. Indeed, time-invariant dynamical systems provide a natural way to capture manipulation skills that are more robust to intermittent perturbations than those that explicitly depend on time [10].

A key challenge in learning the dynamical system in (7) is that it can be arbitrarily complex and highly nonlinear, depending on the particular skill of interest. KODex leverages Koopman operator theory to learn a linear dynamical system that can effectively approximate such complex nonlinear dynamics. To this end, we first define a set of observables as follows:
$$\phi(x(t)) = [x_r(t)^\top,\ \psi_r(x_r(t))^\top,\ x_o(t)^\top,\ \psi_o(x_o(t))^\top]^\top, \quad \forall t \qquad (8)$$
where $\psi_r: \mathbb{R}^n \to \mathbb{R}^{n'}$ and $\psi_o: \mathbb{R}^m \to \mathbb{R}^{m'}$ are vector-valued lifting functions that transform the robot and object state, respectively. While there are no coupling terms between the robot and the object states in (8), note that the robot and object states still mix after the lifting operation.

In our implementation, we use polynomial functions up to a finite degree in our lifting function, since polynomial functions allow for flexible definition of complex functions. However, it is important to note that KODex is agnostic to the specific choice of observables. Further, we do not assume that we know the ideal set of observables for any given skill. Instead, as we demonstrate in Section 5, KODex can learn different dexterous manipulation skills on the same space of observables.
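As an illustration of (8) with the polynomial lifting used in our experiments (pairwise products plus cubes for the robot state, and pairwise products plus squared-times-linear terms for the object state; see Appendix E), the sketch below builds the observable vector from the raw robot and object states. It is a simplified rendering of the idea, not the exact feature ordering of the released implementation.

```python
import numpy as np

def lift_state(x_r, x_o):
    """Observable vector phi(x) = [x_r, psi_r(x_r), x_o, psi_o(x_o)] from (8).

    psi_r(x_r): all pairwise products x_r[i]*x_r[j] (i <= j) plus cubes x_r[i]**3.
    psi_o(x_o): all pairwise products x_o[i]*x_o[j] (i <= j) plus terms x_o[i]**2 * x_o[j].
    """
    def pairwise(v):
        i, j = np.triu_indices(len(v))   # i <= j, so each product appears once
        return v[i] * v[j]

    psi_r = np.concatenate([pairwise(x_r), x_r ** 3])
    psi_o = np.concatenate([pairwise(x_o), np.outer(x_o ** 2, x_o).ravel()])
    return np.concatenate([x_r, psi_r, x_o, psi_o])
```

For the Tool Use task (n = 26 robot states, m = 15 object states), this construction yields a 763-dimensional observable, consistent with the Set 3 counts reported in Table 12.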
4.2 Learning Reference Dynamics

We now turn to the challenge of learning the Koopman operator from demonstrations. Let $\mathcal{D} = [\{x^{(1)}(t), \tau^{(1)}(t)\}_{t=1}^{T^{(1)}}, \ldots, \{x^{(N)}(t), \tau^{(N)}(t)\}_{t=1}^{T^{(N)}}]$ denote a set of $N$ demonstrations containing trajectories of state-torque pairs. Now, we can compute the Koopman matrix as $K = A\,G^{\dagger}$, where $A$ and $G$ can be computed by modifying the expressions in (6) as follows:
$$A = \sum_{n=1}^{N}\sum_{t=1}^{T^{(n)}-1} \frac{\phi(x^{n}(t+1)) \otimes \phi(x^{n}(t))}{N(T^{(n)}-1)}, \quad G = \sum_{n=1}^{N}\sum_{t=1}^{T^{(n)}-1} \frac{\phi(x^{n}(t)) \otimes \phi(x^{n}(t))}{N(T^{(n)}-1)} \qquad (9)$$
It is worth noting that KODex can also leverage partial trajectories that do not complete the task, as long as all the state pairs $(x^{n}(t), x^{n}(t+1))$ are temporally consecutive. Additionally, we also record the actuated torque $\tau(t)$ at each time step for the controller design discussed in Section 4.3.

We use the learned reference dynamics to generate rollouts in the observable space. However, we need to obtain the rollouts in the original robot states to command the robot. Since we designed $\phi(x(t))$ such that the robot state $x_r(t)$ is a part of the observables in (8), we can retrieve the desired robot trajectory $\{\hat{x}_r(t)\}$ by selecting the corresponding elements of $\phi(x(t))$.

Indeed, the data distribution in $\mathcal{D}$ has a considerable effect on the generalizability of the computed Koopman matrix $K$. Therefore, there is an inevitable trade-off between the number of demonstrations and the cost of data collection, a challenge shared by most imitation learning algorithms.

4.3 Learning a Tracking Controller

To track the desired trajectories generated from the learned reference dynamics, we learn an inverse dynamics controller $C$ [31, 18]. Indeed, a PD controller could be designed instead of learning a tracking controller; however, one would have to painstakingly tune the control gains and frequency, and do so for each task independently.

We use a multi-layer perceptron (MLP) as the tracking controller and train it using the recorded state-torque pairs $(x_r^{n}(t), x_r^{n}(t+1), \tau^{n}(t))$ by minimizing
$$\mathcal{L}_{control} = \sum_{n=1}^{N}\sum_{t=1}^{T^{(n)}-1} \frac{\lvert C(x_r^{n}(t), x_r^{n}(t+1)) - \tau^{n}(t) \rvert^2}{N(T^{(n)}-1)} \qquad (10)$$
The learned controller takes as input the current robot state $x_r(t)$ and the desired next state from the reference trajectory $\hat{x}_r(t+1)$, and generates the torque $\tau(t)$ required for actuation.

4.4 Execution

With the reference dynamics and the tracking controller learned, we now specify how the policy is executed. Given the initial state $x(1) = (x_r(1), x_o(1))$, we generate the reference trajectory $\{\hat{x}_r(t)\}_{t=1}^{T}$ by propagating the learned reference dynamics $\hat{x}(t+1) = K\hat{x}(t)$ in the lifted space. Further, at time step $t$, we pass the current robot state $x_r(t)$ and the desired next robot state $\hat{x}_r(t+1)$ to the learned controller $C$ to compute the required torque $\tau(t)$.
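Putting Sections 4.2-4.4 together, the following is a minimal sketch of the execution loop: lift the initial state, roll the linear reference dynamics forward, and query a tracking controller for torques. `lift_state`, `extract_robot_state`, `controller`, `read_robot_state`, and `apply_torque` are illustrative placeholders (the controller standing in for the learned inverse dynamics MLP), not names from the authors' implementation.

```python
import numpy as np

def execute_kodex(K, x_r0, x_o0, lift_state, extract_robot_state, controller,
                  read_robot_state, apply_torque, horizon):
    """Roll out the lifted linear reference dynamics and track it with a learned controller.

    K: (p, p) Koopman matrix.  lift_state: (x_r, x_o) -> phi of shape (p,).
    extract_robot_state: phi -> reference robot state x_r.
    controller: (x_r, x_r_desired) -> torque.  read_robot_state / apply_torque: robot I/O.
    """
    phi = lift_state(x_r0, x_o0)
    reference = [extract_robot_state(phi)]
    for _ in range(horizon - 1):            # generate the full reference trajectory offline
        phi = K @ phi
        reference.append(extract_robot_state(phi))
    for t in range(horizon - 1):            # track it with the inverse dynamics controller
        x_r = read_robot_state()
        tau = controller(x_r, reference[t + 1])
        apply_torque(tau)
    return np.stack(reference)
```

In practice the reference is queried at 100 Hz while the controller runs at 500 Hz (Section 5.1); the sketch ignores this timing detail.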
5 Experimental Evaluation

We evaluated KODex along with existing approaches in terms of their general efficacy, computational efficiency, sample efficiency, and scalability when learning dexterous manipulation skills.

Figure 2: We evaluate KODex on four tasks from [9]: Tool Use, Door Opening, Relocation, and Reorientation.

5.1 Experimental Design

Evaluation Platform: We conducted all our experiments on the widely-used ADROIT Hand [9], a 30-DoF simulated system (24-DoF hand + 6-DoF floating wrist base) built with MuJoCo [32].

Baselines: We compared KODex against the following baselines:
• NN: Fully-connected neural network policy
• LSTM: Recurrent neural network policy with Long Short-Term Memory (LSTM) units
• NDP: Neural Dynamic Policy [18]
• NGF: Neural Geometric Fabrics policy [4]
Note that NN and LSTM are unstructured baselines which help assess the need for structured policies. NDP and NGF are highly-structured SOTA imitation learning methods for manipulation. We undertook several precautions to ensure a fair comparison. First, we designed the robot and object state space for all baselines and KODex to be identical. Second, we carefully designed the baseline policies and tuned their hyperparameters for each baseline method (Appendices E and F). Third, we trained each baseline policy over five random seeds to control for initialization effects. For all tasks, we saved the baseline policies that performed the best on a validation set of 50 held-out demonstrations. Note that KODex utilizes an analytical solution and thus does not require parameter initialization or hyperparameter optimization.

Tasks: We evaluated all algorithms on a set of four tasks originally proposed in [9] (see Fig. 2).
• Tool use: Pick up the hammer to drive the nail into the board placed at a randomized height.
• Door opening: Given a randomized door position, undo the latch and drag the door open.
• Object relocation: Move the blue ball to a randomized target location (green sphere).
• In-hand reorientation: Reorient the blue pen to a randomized goal orientation (green pen).
See Appendix B for the state space design of all tasks. For all tasks, the reference dynamics was queried at 100 Hz, and the controller ran at 500 Hz.

Metrics: We quantify performance in terms of i) Training time: time taken to train a policy, ii) Imitation error: the L1 distance between generated joint trajectories and the demonstrations, and iii) Task success rate: percentage of successful trials (see Appendix C for success criteria).

Expert Policy: For each task, we trained an expert RL agent using DAPG [9] to generate 250 expert demonstrations (200 for training and 50 for validation). See Appendix D for further details.

Inverse Dynamics Controller: To standardize controller performance across methods, we trained a common inverse dynamics controller for each task using 250 demonstrations (see Appendix G).

5.2 General Efficacy

In Fig. 3, we report the training time, imitation error, and task success rate for each method on each task, when trained on 200 demonstrations and tested on 10,000 testing instances.

Training time: As can be seen, KODex is an order of magnitude faster than both unstructured baselines (NN, LSTM) and SOTA IL methods (NDP, NGF). Further, this trend holds across all the tasks. This is to be expected since KODex analytically computes the Koopman operator, unlike all the baselines, which rely on gradient descent and numerical optimization.

Imitation error: In general, all methods (except NN) achieve low imitation error for the Tool Use task with negligible differences across methods. In the three remaining tasks, we see that all structured methods (NDP, NGF, KODex) considerably outperform the unstructured baseline (LSTM). We have excluded the significantly larger imitation errors generated by NN from the plot to preserve the resolution necessary to distinguish between the other methods. These results reinforce the effectiveness of structured methods in imitating demonstrations. Importantly, KODex is able to achieve imitation performance comparable to the SOTA IL methods despite its simplicity, while remaining an order of magnitude more computationally efficient.

Figure 3: We report training time (left), imitation error (middle), and success rate (right) for methods on each task when trained on 200 demonstrations and evaluated on an independent set of 10,000 samples. Error bars for baseline methods show the standard deviation over five random seeds.
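For reference, the two trajectory-level metrics defined in Section 5.1 are simple to compute; the sketch below shows one way to do so for a batch of rollouts, assuming each rollout provides the generated and demonstrated joint trajectories plus a task-specific success flag. It is an illustrative helper, not the exact evaluation script behind the reported numbers.

```python
import numpy as np

def imitation_error(generated, demonstrated):
    """Mean L1 distance (rad) between generated and demonstrated joint trajectories.

    Both arguments have shape (T, num_joints)."""
    return np.abs(np.asarray(generated) - np.asarray(demonstrated)).mean()

def task_success_rate(success_flags):
    """Percentage of successful trials, given an iterable of booleans."""
    flags = np.asarray(list(success_flags), dtype=float)
    return 100.0 * flags.mean()
```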
Task success rate: As expected, the NN policy performs significantly worse than all other methods. On the other hand, LSTM achieves impressive task success rates, even outperforming NDP in two of the four tasks. This is in stark contrast to its high imitation error. While counter-intuitive, this observation follows the recent finding that imitation error and task success rate might not necessarily be correlated [33]. We observe that KODex and NGF perform comparably, with one achieving a higher task success rate than the other in two of the four tasks. Importantly, KODex results in the most consistent and predictable performance due to its lack of sensitivity to initialization.

5.3 Scalability and Sample Efficiency

To investigate scalability and sample efficiency, we trained policies on a varying number of demonstrations ([10, 25, 50, 100, 150, 200]).
In Fig. 4, we report the training time, imitation error, and task success rate for each method as a function of the number of demonstrations when tested on the same 10,000 instances used to evaluate general efficacy.

Figure 4 (panels: (a) Tool Use (Hammer), (b) Door Opening, (c) Object Relocation, (d) In-hand Reorientation): The effects of the number of demonstrations on training time (top row), imitation error (middle row), and success rate (bottom row) for all methods on each task. Solid lines indicate mean trends and shaded areas show standard deviation over five random seeds.

Training time: We observe that KODex scales with the number of demonstrations significantly better than the baselines, as evidenced by its training time growing at considerably lower rates.

Imitation error and success rate: We find that the unstructured models (NN and LSTM) fail to demonstrate a consistent monotonic decrease (increase) in imitation error (task success rate) as the number of demonstrations increases. In stark contrast, the structured methods (NDP, NGF, and KODex) are able to consistently drive down imitation error and improve task success rate. KODex almost consistently achieves the lowest imitation error and the highest task success rate with the fewest number of demonstrations, closely followed by NGF. These observations suggest that KODex tends to be comparably, if not more, sample efficient than the baselines, thanks to the rich structure induced by the Koopman operator and the resulting effectiveness in capturing nonlinear dynamics. The only exception to this trend is the Object Relocation task, in which KODex requires 150 demonstrations to perform comparably to NGF. We speculate this is because the demonstrations for this task exhibit high variance as the hand base moves across a large space, and KODex requires more demonstrations to capture the reference dynamics.

5.4 Additional Experiments

Additional experiments reported in the appendix suggest that KODex learns policies that i) have inference time on par with SOTA baselines (Appendix H), ii) have zero-shot out-of-distribution generalization comparable to SOTA IL methods (Appendix I), iii) are robust to changes in physical properties (Appendix J), iv) are not overly sensitive to the choice of basis functions (Appendix K), v) are nearly-stable linear dynamical systems that generate safe and smooth robot trajectories (Appendix L), and vi) are significantly more efficient and scalable than a baseline BC method that directly learns state-action mappings (Appendix M).

6 Conclusions

We investigated the utility of Koopman operator theory in learning dexterous manipulation skills by encoding complex nonlinear reference dynamics as linear dynamical systems in higher-dimensional spaces. Our investigations conclusively show that a Koopman-based framework can i) analytically learn dexterous manipulation skills, eliminating the sensitivity to initialization and reducing the need for user expertise, and ii) match or outperform SOTA imitation learning approaches on various dexterous manipulation tasks, while being an order of magnitude faster.

7 Limitations and Future Work

While our work offers promise for the utility of Koopman operators in dexterous manipulation, it reveals numerous avenues for further improvement. First, we did not deploy KODex on physical robots. Although our results on robustness to changes in physical properties show promise, we plan to deploy KODex on physical platforms to translate our findings to hardware. Second, we only considered polynomial basis functions; other non-smooth functions (e.g., ReLU [34]) could be beneficial to manipulation tasks involving friction and contact. Third, KODex has limited out-of-distribution generalization, much like most existing imitation learning approaches. Future work can investigate whether additional data collection [35] and learned lifting functions [36–38] alleviate this concern.
Fourth, KODex relies on demonstrated action trajectories to learn the tracking controllerand reduce human effort. It might be possible to instead use reinforcement learning [39], therebyenabling the ability to learn from state-only observations [40]. Fifth, KODex could be evaluated onother domains and robotics tasks (e.g., the benchmark tasks in [41]) to further understand the trade-offs between KODex and other imitation learning approaches. Finally, Koopman operators can beused to learn system dynamics via self play to enable model-based reinforcement learning.8References[1] A. M. Okamura, N. Smaby, and M. R. Cutkosky. An overview of dexterous manipulation. InProceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Roboticsand Automation. Symposia Proceedings (Cat. No. 00CH37065) , volume 1, pages 255–262,2000.[2] OpenAI, M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki,A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder,L. Weng, and W. Zaremba. Learning Dexterous In-Hand Manipulation. International Journalof Robotics Research (IJRR) , 2020.[3] A. Nagabandi, K. Konolige, S. Levine, and V . Kumar. Deep dynamics models for learningdexterous manipulation. In Conference on Robot Learning , pages 1101–1112. PMLR, 2020.[4] M. Xie, A. Handa, S. Tyree, D. Fox, H. Ravichandar, N. D. Ratliff, and K. Van Wyk. Neu-ral geometric fabrics: Efficiently learning high-dimensional policies from demonstration. InConference on Robot Learning , pages 1355–1367. PMLR, 2023.[5] T. Chen, J. Xu, and P. Agrawal. A System for General In-Hand Object Re-Orientation. In 5thAnnual Conference on Robot Learning , 2021.[6] G. Khandate, S. Shang, E. T. Chang, T. L. Saidi, J. Adams, and M. Ciocarlie. Sampling-based Exploration for Reinforcement Learning of Dexterous Manipulation. In Proceedingsof Robotics: Science and Systems , Daegu, Republic of Korea, July 2023. doi:10.15607/RSS.2023.XIX.020.[7] B. O. Koopman. Hamiltonian Systems and Transformation in Hilbert Space. Proceedings ofthe National Academy of Sciences , 17(5):315–318, 1931. doi:10.1073/pnas.17.5.315.[8] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A data–driven approximation of thekoopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science ,25(6):1307–1346, 2015.[9] A. Rajeswaran, V . Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine.Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demon-strations. In Proceedings of Robotics: Science and Systems (RSS) , 2018.[10] H. Ravichandar, A. S. Polydoros, S. Chernova, and A. Billard. Recent advances in robotlearning from demonstration. Annual Review of Control, Robotics, and Autonomous Systems ,3(1):297–330, 2020.[11] A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor, and S. Schaal. Dynamical movementprimitives: learning attractor models for motor behaviors. Neural computation , 25(2):328–373, 2013.[12] S. M. Khansari-Zadeh and A. Billard. Learning stable nonlinear dynamical systems with gaus-sian mixture models. IEEE Transactions on Robotics , 27(5):943–957, 2011.[13] K. Neumann and J. J. Steil. Learning robot motions with stable dynamical systems underdiffeomorphic transformations. Robotics and Autonomous Systems , 70:1–15, 2015.[14] H. Ravichandar, I. Salehi, and A. Dani. Learning Partially Contracting Dynamical Systemsfrom Demonstrations. In Proceedings of the 1st Annual Conference on Robot Learning , vol-ume 78, pages 369–378, 13–15 Nov 2017.[15] M. A. 
Rana, A. Li, H. Ravichandar, M. Mukadam, S. Chernova, D. Fox, B. Boots, andN. Ratliff. Learning reactive motion policies in multiple task spaces from human demon-strations. In Conference on Robot Learning , pages 1457–1468. PMLR, 2020.[16] M. A. Rana, A. Li, D. Fox, B. Boots, F. Ramos, and N. Ratliff. Euclideanizing flows: Dif-feomorphic reduction for learning stable dynamical systems. In Learning for Dynamics andControl , pages 630–639. PMLR, 2020.9[17] N. Figueroa and A. Billard. Locally active globally stable dynamical systems: Theory, learn-ing, and experiments. The International Journal of Robotics Research , 41(3):312–347, 2022.[18] S. Bahl, M. Mukadam, A. a. Gupta, and D. Pathak. Neural dynamic policies for end-to-endsensorimotor learning. Advances in Neural Information Processing Systems , 33:5058–5069,2020.[19] H. Qi, A. Kumar, R. Calandra, Y . Ma, and J. Malik. In-hand object rotation via rapid motoradaptation. arXiv preprint arXiv:2210.04887 , 2022.[20] H. Zhu, A. Gupta, A. Rajeswaran, S. Levine, and V . Kumar. Dexterous manipulation with deepreinforcement learning: Efficient, general, and low-cost. In 2019 International Conference onRobotics and Automation (ICRA) , pages 3651–3657. IEEE, 2019.[21] T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel, J. Peters, et al. An algorithmicperspective on imitation learning. Foundations and Trends® in Robotics , 7(1-2):1–179, 2018.[22] S. P. Arunachalam, I. G ̈uzey, S. Chintala, and L. Pinto. Holo-dex: Teaching dexterity withimmersive mixed reality. arXiv preprint arXiv:2210.06463 , 2022.[23] V . Kumar, A. Gupta, E. Todorov, and S. Levine. Learning dexterous manipulation policiesfrom experience and imitation. arXiv preprint arXiv:1611.05095 , 2016.[24] P. Henderson, R. Islam, P. Bachman, J. Pineau, D. Precup, and D. Meger. Deep reinforcementlearning that matters. In Proceedings of the AAAI conference on artificial intelligence , 2018.[25] I. Abraham, G. De La Torre, and T. D. Murphey. Model-based control using Koopman opera-tors. arXiv preprint arXiv:1709.01568 , 2017.[26] I. Abraham and T. D. Murphey. Active Learning of Dynamics for Data-Driven Control UsingKoopman Operators. IEEE Transactions on Robotics , 35(5):1071–1083, 2019. doi:10.1109/TRO.2019.2923880.[27] F. E. Sotiropoulos and H. H. Asada. Dynamic Modeling of Bucket-Soil Interactions UsingKoopman-DFL Lifting Linearization for Model Predictive Contouring Control of AutonomousExcavators. IEEE Robotics and Automation Letters , 7(1):151–158, 2022. doi:10.1109/LRA.2021.3121136.[28] N. S. Selby and H. H. Asada. Learning of causal observable functions for koopman-dfl lift-ing linearization of nonlinear controlled systems and its application to excavation automa-tion. IEEE Robotics and Automation Letters , 6(4):6297–6304, 2021. doi:10.1109/LRA.2021.3092256.[29] D. Bruder, B. Gillespie, C. D. Remy, and R. Vasudevan. Modeling and control of soft robotsusing the koopman operator and model predictive control. arXiv preprint arXiv:1902.02827 ,2019.[30] D. Bruder, X. Fu, R. B. Gillespie, C. D. Remy, and R. Vasudevan. Koopman-Based Controlof a Soft Continuum Manipulator Under Variable Loading Conditions. IEEE Robotics andAutomation Letters , 6(4):6852–6859, 2021. doi:10.1109/LRA.2021.3095268.[31] J. P. Hanna and P. Stone. Grounded action transformation for robot learning in simulation. InThirty-first AAAI conference on artificial intelligence , 2017.[32] Todorov, Emanuel and Erez, Tom and Tassa, Yuval. Mujoco: A physics engine for model-based control. 
In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems ,pages 5026–5033, 2012.[33] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrationsfor robot manipulation. arXiv preprint arXiv:2108.03298 , 2021.[34] A. F. Agarap. Deep learning using rectified linear units (relu). arXiv preprintarXiv:1803.08375 , 2018.10[35] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predic-tion to no-regret online learning. In Proceedings of the fourteenth international conferenceon artificial intelligence and statistics , pages 627–635. JMLR Workshop and Conference Pro-ceedings, 2011.[36] M. Weissenbacher, S. Sinha, A. Garg, and K. Yoshinobu. Koopman q-learning: Offline re-inforcement learning via symmetries of dynamics. In International Conference on MachineLearning , pages 23645–23667. PMLR, 2022.[37] B. Lusch, J. N. Kutz, and S. L. Brunton. Deep learning for universal linear embeddings ofnonlinear dynamics. Nature communications , 9(1):1–10, 2018.[38] Y . Li, H. He, J. Wu, D. Katabi, and A. Torralba. Learning compositional koopman operators formodel-based control. In International Conference on Learning Representations , 2020. URLhttps://openreview.net/forum?id=H1ldzA4tPr .[39] X. B. Peng, E. Coumans, T. Zhang, T.-W. Lee, J. Tan, and S. Levine. Learning agile roboticlocomotion skills by imitating animals. arXiv preprint arXiv:2004.00784 , 2020.[40] F. Torabi, G. Warnell, and P. Stone. Recent advances in imitation learning from observation.arXiv preprint arXiv:1905.13566 , 2019.[41] A. Majumdar, K. Yadav, S. Arnaud, Y . J. Ma, C. Chen, S. Silwal, A. Jain, V .-P. Berges,P. Abbeel, J. Malik, et al. Where are we in the search for an artificial visual cortex for embodiedintelligence? arXiv preprint arXiv:2303.18240 , 2023.[42] K. Greff, R. K. Srivastava, J. Koutn ́ık, B. R. Steunebrink, and J. Schmidhuber. Lstm: Asearch space odyssey. IEEE transactions on neural networks and learning systems , 28(10):2222–2232, 2016.[43] K. Van Wyk, M. Xie, A. Li, M. A. Rana, B. Babich, B. Peele, Q. Wan, I. Akinola, B. Sun-daralingam, D. Fox, et al. Geometric fabrics: Generalizing classical mechanics to capture thephysics of behavior. IEEE Robotics and Automation Letters , 2022.[44] C.-A. Cheng, M. Mukadam, J. Issac, S. Birchfield, D. Fox, B. Boots, and N. Ratliff. Rmpflow:A geometric framework for generation of multitask motion policies. IEEE Transactions onAutomation Science and Engineering , 18(3):968–987, 2021.[45] J. Ho and S. Ermon. Generative adversarial imitation learning. Advances in neural informationprocessing systems , 29, 2016.11AppendicesA KODex Pseudo-codeThe overall pseudo-code for KODex is shown below.Algorithm 1: KODexDemonstration Data CollectionInitializeD=?;forn2f1;:::;NgdoGenerate aT(n)-horizon trajectory of states and torques f[xn(t);n(t)]gt=T(n)t=1 ;Addf[xn(t);n(t)]gt=T(n)t=1 toD;endKoopman Operator ApproximationDetermine lifting function (x(t));Compute KonD(6, 9);Controller DesignBuild a controller Cas a neural network with inputs as (xr(t);xr(t+ 1)) and output as (t);TrainCusing state-torque pairs (xnr(t);xnr(t+ 1);n(t))inD(10);ExecutionSpecify the initial states x(1);fort2f1;:::;T1gdoPredict the next robot states ^ xr(t+ 1) usingK(3 8);Read the current robot states xr(t);Generate the torque (t)usingCon(xr(t);^ xr(t+ 1)) and execute it;endB State DesignIn this section, we show the state design for each task in detail. 
It should be noted that the motioncapability of the hand for each task were suggested from the work [9] that originally introducedthese tasks. For a decent implementation, we employed the same setting.Tool use For this task, the floating wrist base can only rotate along the xandyaxis, so we havexr(t)2 X rR26. Regarding the object states, unlike the other tasks, where the objects ofinterest are directly manipulated by the hand, this task requires to modify the environment it-self. As a result, except for the hammer positions, orientations and their corresponding veloci-tiesptoolt;otoolt;_ ptoolt;_ otoolt(R3), we also define the nail goal position pnail(R3). Finally, we havexo(t) = [ptoolt;otoolt;_ ptoolt;_ otoolt;pnail]2XoR15. As a result, x(t)includes 41 states in total and weuseT= 100 .Door opening For this task, the floating wrist base can only move along the direction that is perpen-dicular to the door plane but rotate freely, so we have xr(t)2XrR28. Regarding the object states,we define the fixed door position pdoor, which can provide with case-specific information (similar topnailin Tool Use), and the handle positions phandlet (bothR3). In order to take into consideration thestatus of door being opened, we include the angular velocity of the opening angle vt(R1). Finally,we have xo(t) = [phandlet;vt;pdoor]2XoR7. As a result, x(t)includes 35 states in total and weuseT= 70 .Object relocation For this task, the ADROIT hand is fully actuated, so we have xr(t)2XrR30(24-DoF hand + 6-DoF floating wrist base). Regarding the object states, we define ptargetandpballtas the target and current positions. Then, we compute pballt= pballtptarget, which is the componentofpballtin a new coordinate frame that is constructed by ptargetbeing the origin. We additional in-clude the ball orientation oballtand their corresponding velocities _ pballt,_ oballt(allR3). Finally, we havexo(t) = [ pballt;oballt;_ pballt;_ oballt]2XoR12. As a result, x(t)includes 42 states in total and we useT= 100 .In-hand reorientation For this task, the floating wrist base is fixed, so we only consider the 24-DoFhand joints. Therefore, we have xr(t)2X rR24. Regarding the object states, we define ogoal12andopentas the goal and current pen orientations, which are both unit direction vectors. Then, wetransform opentto a new rotated coordinate frame that is constructed by ogoalbeingxaxis ([1,0,0]).Note that the vector opentafter transformation is also a unit vector and it converges to x axis ifthe pen is perfectly manipulated to goal orientation ogoal. In addition, we also include the cen-ter of mass position ppentand their corresponding velocities _ ppent,_ opent(allR3). Finally, we havexo(t) = [ppent; opent;_ ppent;_ opent]2XoR12. As a result, x(t)includes 36 states in total and we useT= 100 .In this work, we only included the joint positions as the robot states (with the only exception ofNGF’s second-order policy) for the following reasons: 1) Given that these tasks are not repetitive,we found that joint position information was sufficient to disambiguate the robot’s next action, 2)even when ambiguity arises for a given joint position, object state information can help with disam-biguation. Further, the impressive performance achieved by KODex in our experiments support thisdesign choice. Indeed, KODex is agnostic to this specific state design. 
One can incorporate velocityinformation into the robot state space without the need of any changes to the training procedure.C Task Success CriteriaThe task success criteria are listed below. The settings were the same as proposed in [9].Tool Use: The task is considered successful if at last time step T, the Euclidean distance betweenthe final nail position and the goal nail position is smaller than 0.01.Door Opening: The task is considered successful if at last time step T, the door opening angle islarger than 1.35 rad.Object Relocation: At each time step t, ifpjptargetpballtj2<0:10, then we have (t) = 1 . Thetask is considered successful ifPTt=1(t)>10.In-hand Reorientation: At each time step t, ifogoalopent>0:90(ogoalopentmeasures orientationsimilarity), then we have (t) = 1 . The task is considered successful ifPTt=1(t)>10.D Sampling ProcedureWe describe the sampling procedure in this section. The sample distributions used for RL trainingand demo collection were identical, as suggested in [9]. The out-of-distribution data were generatedto evaluate the zero-shot out-of-distribution generalizability of each policy.Tool Use: We randomly sampled the nail heights ( h) from a uniform distributions. Within distri-bution: we used h2HU (0:1;0:25); Out of distribution: we used h2HU (0:05;0:1)[U(0:25;0:3).Door Opening: We randomly sampled the door positions ( xyz) from uniform distributions. Withindistribution: we used x2XU (0:3;0),y2YU (0:2;0:35), andz2ZU (0:252;0:402) ;Out of distribution: we used y2YU (0:15;0:2)[U(0:35;0:4)(x;zremained unchanged).Object Relocation: We randomly sampled the target positions ( xyz) from uniform distributions.Within distribution: we used x2 X U (0:25;0:25),y2 Y U (0:25;0:25), andz2Z U (0:15;0:35); Out of distribution: we used z2Z U (0:35;0:40)(x;y remainedunchanged).In-hand Reorientation: We randomly sampled the pitch ( ) and yaw ( ) angles of the goalorientation from uniform distributions. Within distribution: we used 2 A U (1;1)and2BU (1;1); Out of distribution: we used f(;)2(A;B)(U(1;1:2));U(1;1:2))[(U(1;1:2));U(1:2;1))[(U(1:2;1));U(1:2;1))[(U(1:2;1));U(1;1:2))g:E Policy DesignWe show the detailed policy design in this section. All the baseline policies were trained to minimizethe trajectory reproduction error.KODex: The representation of the system is given as: xr= [x1r;x2r;;xnr]andxo=[x1o;x2o;;xmo]and superscript is used to index states. The details of the state design for eachtask is provided in Appendix B. In experiments, the vector-valued lifting functions rand oin (8)13were polynomial basis function defined as r=fxirxjrg[f (xir)3gfori;j= 1;;n o=fxioxjog[f (xio)2(xjo)gfori;j= 1;;m(11)Note thatxirxjr/xjrxironly appears once in lifting functions (similar to xioxjo/xjoxio), and we ignore tas the lifting functions are the same across the time horizon.The choice of lifting functions can be viewed as the hyper-parameter of KODex. We make thischoice as inspired from [25] and experimental results also indicate its effectiveness. Through allthe experiments, we sticked with the same set of lifting functions, which helped to relieve us fromextensive efforts of tuning the hyper-parameters, e.g. network layer size, that were necessary forbaseline policies as shown in Appendix F.Full-connected Neural Network (NN): The first baseline is a feedforward network that ingests thestates x(1) and iteratively produces the predictions x(t);t= 2;;Tvia the rollout of a MultilayerPerceptron (MLP). 
The reference joint trajectories ( xr(t)) are then used to execute the robot withthe learned controller C. The significance of this baseline is to evaluate a policy that produces ahigh-dimensional motion without any additional structure.Long Short-Term Memory (LSTM): We create an LSTM-based policy under the same input-output flow as the NN policy. We also apply two fully-connected layers between the task in-put/output and the input/hidden state of the LSTM network. Similarly, the same controller Cisdeployed to track the reference joint trajectory. LSTM networks are known to be beneficial to imi-tation learning [33] and suitable for sequential processing [42], e.g, motion generation. Therefore,we expect to evaluate the performance of the recurrent structures in these tasks.Neural Dynamic Policy (NDP): The Neural Dynamic Policy [18] embeds desired dynamical struc-ture as a layer in neural networks. Specifically, the parameters of the second order Dynamics MotionPrimitives (DMP) are predicted as outputs of the preceding layers (MLP in [18]). As a result, it al-lows the overall policy easily reason in the space of trajectories and can be utilized for learning fromdemonstration. We train an NDP policy following the imitation learning pipeline described in [18].For each task, given x(1), the neural network components in NDP generate the parameters of DMPs(radial basis functions (RBFs) in [18]), which are forward integrated to produce the reference jointtrajectories for tracking.Neural Geometric Fabrics policy (NGF): The Neural Geometric Fabrics [4], a structured pol-icy class, that enables efficient skill learning for dexterous manipulation from demonstrations byleveraging structures induced by Geometric Fabrics [43]. Geometric Fabrics is a stable class ofthe Riemannian Motion Policy (RMP) [44]. It has been demonstrated that NGF outperforms RMPin policy learning for dexterous manipulation task in [4]. The NGF policy is defined in the con-figuration space of the robot, which is composed of a geometric policy, a potential policy and adamping term. More specifically, the NGF policy is constructed as follows: (1) define a geometricpolicy pair [M;]and a potential policy pair [Mf;f]in the configuration space q, (2) energizethe geometric policy (project orthogonal to the direction of motion with pe) to create a collection ofenergy-preserving paths (the Geometric Fabric), and (3) force the Geometric Fabric with a potentialdefined by [Mf;f]and damp via bapplied along _q, which ensures convergence to the potential’sminima. The potential policy fis the gradient of a function of position only. Note that we param-eterize the geometric policy pair [M;], the potential policy pair [Mf;f], and the damping scalarbwith MLP networks and learn them from demonstration data.F Optimizing baseline model sizeAs described in Appendix E, we sticked with the same set of lifting functions for KODex and reportthe task success rate when we trained KODex on training set and tested it on validation set in Table. 1.However, for baselines, the hyper-parameters were selected through a set of ablation experimentsfor each task using the training set over three choices of model size, including small size, mediansize and large size. We generated five random seeds for parameter initialization per model size, perbaseline, and per task, as all learning based baseline models are sensitive to parameter initialization[24]. 
For each baseline policy, we report the mean and standard deviation of the task success rate on the validation set over five random seeds in Tables 2-5.

Based on these results, we selected the model size that offers the best performance in terms of task success rate. In addition, these results indicate that, unlike for KODex, extensive hyper-parameter tuning and multiple trials of parameter initialization are necessary for the baseline models. Note that we use l to denote dim(x(t)).

Table 1: Task success rate on the validation set (KODex)
Tool | Door | Relocation | Reorientation
100.0% | 96.0% | 88.0% | 62.0%

Table 2: Hyper-parameters on NN network sizes
Model Size | Tool | Door | Relocation | Reorientation (success rate, %, mean (std))
MLP: (32, 64, 32) | 0.4 (0.8) | 0.0 (0.0) | 0.4 (0.8) | 6.8 (3.9)
MLP: (64, 128, 64) | 0.0 (0.0) | 0.4 (0.8) | 1.2 (2.4) | 10.4 (6.6)
MLP: (128, 256, 128) | 0.0 (0.0) | 0.0 (0.0) | 0.8 (1.6) | 6.0 (1.5)

Table 3: Hyper-parameters on LSTM network sizes
Model Size | Tool | Door | Relocation | Reorientation (success rate, %, mean (std))
LSTM: 200, fc: (l, 100), (200, l) | 28.8 (25.0) | 87.6 (10.3) | 7.6 (5.9) | 56.4 (7.4)
LSTM: 250, fc: (l, 175), (250, l) | 60.8 (36.6) | 80.8 (24.5) | 7.6 (7.5) | 48.0 (17.0)
LSTM: 300, fc: (l, 250), (300, l) | 44.8 (31.8) | 82.0 (13.9) | 16.4 (14.5) | 54.0 (11.0)

Table 4: Hyper-parameters on NDP network sizes
Model Size | Tool | Door | Relocation | Reorientation (success rate, %, mean (std))
MLP: (32, 64, 32), 10 RBFs | 0.0 (0.0) | 8.0 (2.5) | 30.0 (9.3) | 57.2 (8.6)
MLP: (64, 128, 64), 20 RBFs | 16.8 (29.8) | 40.8 (8.1) | 74.0 (4.9) | 59.2 (6.5)
MLP: (128, 256, 128), 30 RBFs | 18.4 (31.9) | 66.0 (5.2) | 79.2 (7.7) | 62.4 (7.8)

Table 5: Hyper-parameters on NGF network sizes
Model Size | Tool | Door | Relocation | Reorientation (success rate, %, mean (std))
MLP: (64, 32) | 99.2 (1.6) | 87.2 (12.0) | 87.6 (8.5) | 77.6 (2.3)
MLP: (128, 64) | 100.0 (0.0) | 90.0 (5.9) | 94.4 (3.2) | 72.4 (4.5)
MLP: (256, 128) | 83.6 (20.1) | 90.8 (4.3) | 95.2 (1.6) | 78.4 (3.4)

G Hyper-parameters for controller learning

The hyper-parameters we used to learn the inverse dynamics controller C for each task were the same, as listed in Table 6. Note that we use l_r to denote dim(x_r(t)).

Table 6: Hyper-parameters for controller learning
Hidden Layers | Activation | Learning Rate | Iterations
(4l_r, 4l_r, 2l_r) | ReLU | 0.0001 | 300

Table 7: One-step inference time (in milliseconds) with mean and standard deviation over the task horizon.
Policy | Tool | Door | Relocation | Reorientation
NN | 1.39 (0.26) | 1.26 (0.39) | 1.02 (0.09) | 1.15 (0.12)
LSTM | 1.71 (0.28) | 1.32 (0.34) | 1.59 (0.57) | 1.42 (0.13)
NDP | 1.88 (0.30) | 1.08 (0.22) | 1.05 (0.06) | 1.32 (0.21)
NGF | 1.37 (0.16) | 1.17 (0.26) | 1.72 (0.36) | 1.19 (0.12)
KODex | 1.71 (0.48) | 1.04 (0.27) | 1.12 (0.24) | 1.08 (0.60)

H Inference Time

We report the inference time for each method in Table 7. Our results indicate that KODex's inference time is on par with the SOTA baselines. As such, KODex can meet the necessary control frequency when translated to physical hardware.

I Zero-Shot Out-of-Distribution Generalization

We generated a new set of 10,000 out-of-distribution samples to evaluate how the policies that were trained on 200 demonstrations generalize to unseen samples (see Appendix D for details on the sampling procedure). In Fig. 5, we report the task success rates of each method trained on the 200 demonstrations and tested on the 10,000 out-of-distribution samples. In addition, we also report the task success rate of the expert policy on the same 10,000 out-of-distribution samples to establish a baseline. Perhaps unsurprisingly, none of the methods are able to consistently outperform the expert policy in most tasks. We observe that KODex is able to outperform the four baselines in the Tool Use task.
In the other tasks, the highly-structured NGF performs the best, and KODex’s performscomparably to NDP and LSTM.Tool Door Reloc ReorientTasks0255075100Task Success Rate (%)Task Success RateFigure 5: Zero-Shot Out-of-distribution task success ratesJ Robustness to changes in physical propertiesWe evaluate the robustness of the reference dynamics learned by each method to changes in handmass or object mass for each task. This experiment is motivated by the fact that sim-to-real transferoften involves changes in physical properties. Further, consistent use of robotic hardware couldresult in changes to physical properties. Specifically, we consider four variations per task:16• Tool Use: i) Heavy Object (Hammer) : 0.25 (default)!0.85 (new), ii) Light Object (Hammer) :0.25 (default)!0.10 (new), iii) Light Hand (Palm) : 4.0 (default)!1.0 (new), and iv) HeavyHand (Palm) : 4.0 (default)!8.0 (new)• Door: i) Heavy Object (Latch) : 3.54 (default)!12.54 (new), ii) Light Object (Latch) : 3.54(default)!0.54 (new), iii) Light Hand (Palm) : 4.0 (default)!1.5 (new), and iv) Heavy Hand(Palm) : 4.0 (default)!7.0 (new)• Relocation: i) Heavy Object (Ball) : 0.18 (default)!1.88 (new), ii) Light Object (Ball) : 0.18(default)!0.05 (new), iii) Light Hand (Palm) : 4.0 (default)!3.0 (new), and iv) Heavy Hand(Palm) : 4.0 (default)!5.0 (new);• Reorientation: i) Heavy Object (Pen) : 1.5 (default)!9.5 (new), ii) Light Object (Pen) : 1.5(default)!0.2 (new), iii) Light Hand (Finger Knuckles) : 0.008 (default)!0.0001 (new), andiv)Heavy Hand (Finger Knuckles) : 0.008 (default)!0.20 (new)It is important to note we held the reference dynamics learned by each method constant for thisexperiment, irrespective of the changes to the hand or the object. Instead, we relearned the trackingcontroller using 200 rollouts from the expert agent, following the procedure detailed in Section. 4.3.In Tables. 8-11, we report the task success rate of KODex, and other baseline policies (all trained on200 demonstrations) before and after relearning the controller. We also report the task success ratesof the expert agents to establish baselines.We find that the Light Hand variation results in the lowest drop in performance across all methodsand all tasks, thus consequently relearning controllers does not offer any considerable improve-ments. In contrast, all methods benefit from relearning the controller in the Heavy Hand variations,as evidenced by the increased task success rates. Overall, we find that KODex outperforms all base-lines, with the exception of NGF which performs better than KODex under a few variations andtasks. Surprisingly, KODex (and some baselines) when used with the original controller outperformthe expert policy under a few variations (e.g., Heavy Object in Relocation task, and Heavy Objectin Door task). We believe this is due to the fact that KODex and the baselines learn to generate andtrack desired trajectories separately, while the expert RL directly generates control inputs from stateinformation. In particular, the learned desired trajectories for a given tasks are likely invariant toslight changes in physical properties. On rare occasions where this is not the case, we indeed findthat fine-tuning the tracking controllers worsens the performance.These results demonstrate that changes to the robot/system dynamics can be handled by fine tuningthe tracking controller without the need for relearning the reference dynamics. 
Table 8: Robustness to variations in the physical properties (Tool Use). Task success rate (%) under each variation.
Controller | Heavy Object | Light Object | Light Hand | Heavy Hand
Expert agent | 93.5 | 66.2 | 65.4 | 71.2
KODex + Original controller | 46.0 | 64.0 | 99.5 | 46.5
NN + Original controller | 0.0 (0.0) | 0.0 (0.0) | 0.0 (0.0) | 0.7 (1.4)
LSTM + Original controller | 32.7 (18.7) | 35.0 (22.1) | 44.3 (23.1) | 52.7 (27.5)
NDP + Original controller | 0.0 (0.0) | 68.0 (20.8) | 45.4 (37.4) | 0.0 (0.0)
NGF + Original controller | 33.4 (11.5) | 62.9 (27.5) | 83.2 (26.3) | 40.3 (20.2)
KODex + Expert-tuned controller | 53.5 | 44.0 | 89.0 | 92.5
NN + Expert-tuned controller | 0.0 (0.0) | 0.0 (0.0) | 0.2 (0.4) | 0.0 (0.0)
LSTM + Expert-tuned controller | 42.4 (34.3) | 33.7 (14.9) | 52.2 (22.7) | 69.9 (19.4)
NDP + Expert-tuned controller | 33.3 (20.0) | 23.8 (24.4) | 29.4 (37.1) | 39.8 (24.5)
NGF + Expert-tuned controller | 48.2 (18.0) | 48.7 (12.2) | 94.6 (8.9) | 82.1 (7.5)

Table 9: Robustness to variations in the physical properties (Door). Task success rate (%) under each variation.
Controller | Heavy Object | Light Object | Light Hand | Heavy Hand
Expert agent | 45.2 | 91.7 | 82.0 | 74.9
KODex + Original controller | 57.0 | 97.0 | 56.5 | 33.5
NN + Original controller | 0.0 (0.0) | 0.2 (0.4) | 1.3 (2.1) | 0.0 (0.0)
LSTM + Original controller | 34.4 (8.7) | 75.8 (19.5) | 38.1 (10.4) | 33.5 (11.4)
NDP + Original controller | 22.1 (1.9) | 62.8 (5.2) | 51.1 (4.9) | 3.1 (2.3)
NGF + Original controller | 48.7 (6.7) | 95.0 (2.1) | 42.1 (11.0) | 33.8 (10.0)
KODex + Expert-tuned controller | 39.0 | 94.0 | 54.0 | 81.5
NN + Expert-tuned controller | 0.0 (0.0) | 0.0 (0.0) | 0.7 (0.9) | 0.0 (0.0)
LSTM + Expert-tuned controller | 21.2 (5.3) | 75.4 (18.0) | 49.2 (8.1) | 56.9 (18.7)
NDP + Expert-tuned controller | 15.5 (3.0) | 36.2 (10.6) | 25.5 (4.4) | 8.8 (3.0)
NGF + Expert-tuned controller | 36.6 (5.1) | 95.5 (1.8) | 57.7 (4.7) | 77.1 (6.7)

Table 10: Robustness to variations in the physical properties (Relocation). Task success rate (%) under each variation.
Controller | Heavy Object | Light Object | Light Hand | Heavy Hand
Expert agent | 77.0 | 100.0 | 100.0 | 100.0
KODex + Original controller | 19.5 | 89.5 | 82.5 | 21.5
NN + Original controller | 0.1 (0.2) | 1.6 (2.5) | 1.5 (2.1) | 1.7 (2.2)
LSTM + Original controller | 0.4 (0.4) | 15.4 (10.7) | 9.5 (8.1) | 7.7 (9.4)
NDP + Original controller | 13.5 (5.0) | 85.6 (8.1) | 72.1 (9.6) | 31.6 (10.0)
NGF + Original controller | 25.8 (4.9) | 96.4 (1.4) | 96.6 (0.97) | 19.3 (3.8)
KODex + Expert-tuned controller | 34.0 | 93.0 | 85.0 | 89.0
NN + Expert-tuned controller | 0.2 (0.4) | 0.6 (0.7) | 1.4 (1.8) | 1.5 (2.3)
LSTM + Expert-tuned controller | 5.8 (4.7) | 15.2 (12.5) | 15.5 (10.7) | 14.1 (9.3)
NDP + Expert-tuned controller | 19.9 (5.8) | 84.5 (8.9) | 63.2 (15.0) | 92.4 (1.2)
NGF + Expert-tuned controller | 52.6 (3.6) | 98.1 (1.2) | 95.6 (2.2) | 94.5 (0.9)

Table 11: Robustness to variations in the physical properties (Reorientation). Task success rate (%) under each variation.
Controller | Heavy Object | Light Object | Light Hand | Heavy Hand
Expert agent | 46.8 | 69.0 | 95.2 | 89.7
KODex + Original controller | 53.5 | 55.0 | 66.5 | 61.5
NN + Original controller | 4.7 (2.6) | 9.6 (8.1) | 9.5 (6.4) | 7.9 (6.5)
LSTM + Original controller | 34.5 (7.8) | 52.3 (10.6) | 60.3 (6.0) | 55.6 (7.8)
NDP + Original controller | 49.4 (3.6) | 58.4 (6.4) | 59.8 (7.6) | 55.7 (9.7)
NGF + Original controller | 39.9 (1.9) | 57.1 (2.2) | 81.6 (1.8) | 73.4 (3.8)
KODex + Expert-tuned controller | 52.0 | 63.0 | 71.5 | 65.5
NN + Expert-tuned controller | 1.5 (0.9) | 5.2 (4.2) | 3.8 (1.7) | 3.7 (2.6)
LSTM + Expert-tuned controller | 43.5 (7.9) | 47.7 (8.8) | 61.4 (4.2) | 54.4 (5.5)
NDP + Expert-tuned controller | 55.5 (5.9) | 59.0 (5.5) | 63.0 (6.5) | 57.0 (7.5)
NGF + Expert-tuned controller | 49.1 (2.6) | 59.7 (3.2) | 79.4 (1.9) | 72.6 (1.2)

K The impact of the choice of basis functions
We evaluate whether KODex's performance is impacted by different sets of polynomial functions used as the lifting function. We trained all policies on 200 demos and tested them on 10,000 unseen initial conditions.

Design: Specifically, we define four sets of observables (one of which was used in the original submission). Let x_r = [x_r^1; x_r^2; ...; x_r^n] and x_o = [x_o^1; x_o^2; ...; x_o^m] denote the robot and the object state, respectively, with superscripts indexing the states. We then define the four choices of vector-valued lifting functions for the robot and the object used in (8) as follows:
• Set 1: robot: {(x_r^i)^2} for i = 1, ..., n; object: {(x_o^i)^2} for i = 1, ..., m
• Set 2: robot: {x_r^i x_r^j} for i, j = 1, ..., n; object: {x_o^i x_o^j} for i, j = 1, ..., m
• Set 3 (used in this work): robot: {x_r^i x_r^j} ∪ {(x_r^i)^3} for i, j = 1, ..., n; object: {x_o^i x_o^j} ∪ {(x_o^i)^2 x_o^j} for i, j = 1, ..., m
• Set 4: robot: {x_r^i x_r^j} ∪ {(x_r^i)^2 x_r^j} for i, j = 1, ..., n; object: {x_o^i x_o^j} ∪ {(x_o^i)^2 x_o^j} for i, j = 1, ..., m
We report the number of observables for each set and task combination in Table 12.

Table 12: Number of observables
Set | Tool (n=26, m=15) | Door (n=28, m=7) | Relocation (n=30, m=12) | Reorientation (n=24, m=12)
Set 1 | 82 | 70 | 84 | 72
Set 2 | 512 | 469 | 585 | 414
Set 3 (ours) | 763 | 546 | 759 | 582
Set 4 | 1413 | 1302 | 1629 | 1134

Figure 6: The effects of the lifting function on training time (left), imitation error (center), and success rate (right).

Discussion: As shown in Fig. 6, it is clear that training time increases with the number of observables, since the Moore–Penrose inverse requires more computation for higher-dimensional matrices. Importantly, KODex's success rate across all tasks remained roughly the same for Sets 2, 3, and 4. In general, as one would expect, increasing the number of observables tends to decrease imitation error and increase task success rate. The only exception to this trend is observed for the Object Relocation task, in which KODex performs marginally better when trained on Set 2 (585 observables) than when trained on Set 3 (759 observables). Taken together, these results suggest that KODex's performance is not highly sensitive to the specific choice of lifting function, as long as sufficient expressivity is ensured.
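To make the lifting-function comparison concrete, the sketch below shows how the Set-3 observables and the Koopman matrix could be computed, assuming NumPy; the function names are illustrative and not from the KODex codebase. Including the raw robot and object states alongside the polynomial terms is an assumption that is consistent with the observable counts in Table 12, and the single least-squares fit uses the Moore–Penrose pseudo-inverse discussed above.

```python
# Minimal sketch (assumption: NumPy; names are illustrative, not the authors' code).
import numpy as np


def lift_set3(x_r: np.ndarray, x_o: np.ndarray) -> np.ndarray:
    # Robot observables: pairwise products x_r[i]*x_r[j] (i <= j) plus cubes (x_r[i])^3.
    r_pairs = [x_r[i] * x_r[j] for i in range(len(x_r)) for j in range(i, len(x_r))]
    r_cubes = [xi ** 3 for xi in x_r]
    # Object observables: pairwise products (i <= j) plus (x_o[i])^2 * x_o[j] over all i, j.
    o_pairs = [x_o[i] * x_o[j] for i in range(len(x_o)) for j in range(i, len(x_o))]
    o_mixed = [x_o[i] ** 2 * x_o[j] for i in range(len(x_o)) for j in range(len(x_o))]
    # Raw states are kept as well (assumption consistent with the counts in Table 12).
    return np.concatenate([x_r, x_o, r_pairs, r_cubes, o_pairs, o_mixed])


def fit_koopman(trajectories):
    # trajectories: list of [(x_r_t, x_o_t), ...]. Stack lifted states at t and t+1 and
    # solve K = G_next @ pinv(G), i.e., a single Moore-Penrose least-squares step.
    G, G_next = [], []
    for traj in trajectories:
        lifted = [lift_set3(x_r, x_o) for x_r, x_o in traj]
        G.extend(lifted[:-1])
        G_next.extend(lifted[1:])
    G, G_next = np.array(G).T, np.array(G_next).T   # (dim, T) snapshot matrices
    return G_next @ np.linalg.pinv(G)               # Koopman matrix K
```

Swapping the body of lift_set3 for the Set-1, Set-2, or Set-4 terms reproduces the other observable counts in Table 12 and changes only the dimension of the least-squares problem.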
L Stability Analysis
Another unique advantage of utilizing Koopman operators to model the reference dynamics for dexterous manipulation tasks is that the learned policy is a linear dynamical system which can be readily inspected and analyzed, in stark contrast to SOTA methods built upon deep neural networks. We analyzed the stability of the learned policy. For a linear dynamical system with complex conjugate eigenvalues λ_i = σ_i ± jω_i, i.e., KODex with Koopman matrix K, the system is asymptotically stable if all of the eigenvalues have magnitude |λ_i| = sqrt(σ_i^2 + ω_i^2) less than one. From the standpoint of control theory, it is beneficial to have an asymptotically stable system because of the guarantee that all system states will converge. However, from the standpoint of the dexterous manipulation tasks considered in this work, strict stability might not be preferable because the final desired hand poses and object poses are not identical for different initial conditions. This represents a natural trade-off between safety and expressivity. As such, understanding how KODex addresses this trade-off can be illuminating.

Figure 7: Occurrence of eigenvalue magnitudes (histograms of the magnitudes of the Koopman matrix eigenvalues for (a) Tool Use, (b) Door Opening, (c) Object Relocation, and (d) In-hand Reorientation).

Table 13: Maximum eigenvalue magnitude
Tool Use | Door Opening | Object Relocation | In-hand Reorientation
1.07888 | 1.00553 | 1.00859 | 1.00413

In Fig. 7, we report a histogram of the Koopman matrix's eigenvalue magnitudes for each task. In addition, we report the maximum eigenvalue magnitude in Table 13. Based on these results, we can see that i) most eigenvalues' magnitudes are less than one, suggesting that KODex tends to learn nearly-stable policies that generate safe trajectories during execution, and ii) a few eigenvalues have magnitudes larger than one, suggesting that KODex does not prioritize stability at the expense of the expressivity required to achieve the reported performance.
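The eigenvalue inspection above is straightforward to reproduce; a minimal sketch is given below, assuming NumPy and a learned Koopman matrix K (all names are illustrative).

```python
# Minimal sketch (assumption: NumPy; names are illustrative). Inspect the eigenvalue
# magnitudes of a learned Koopman matrix K, as in Fig. 7 and Table 13: magnitudes
# below one indicate asymptotically stable modes, while any magnitude above one flags
# a mode that trades strict stability for expressivity.
import numpy as np


def eigenvalue_report(K: np.ndarray, n_bins: int = 20):
    magnitudes = np.abs(np.linalg.eigvals(K))   # |lambda_i| = sqrt(sigma_i^2 + omega_i^2)
    counts, edges = np.histogram(
        magnitudes, bins=n_bins, range=(0.0, max(1.0, float(magnitudes.max())))
    )
    return {
        "max_magnitude": float(magnitudes.max()),             # compare against Table 13
        "fraction_stable": float(np.mean(magnitudes < 1.0)),  # share of stable modes
        "histogram": list(zip(edges[:-1].tolist(), counts.tolist())),  # coarse Fig. 7
    }
```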
M Comparisons against Behaviour Cloning with State-Action Mapping
We conducted an additional experiment involving a new neural-network-based baseline policy that learns to directly map states x(t) to actions, instead of learning the reference dynamics and the tracking controller. The new policy (State-action BC) was built upon the MLP architecture with three hidden layers ([64, 128, 64]), and was trained over three random seeds to minimize the state-action reproduction error. For a fair comparison, these policies were trained and tested on the same set of demonstrations and testing instances as in Section 5.3. In Fig. 8, we report the training time and the task success rate on each task for KODex and State-action BC.

Figure 8: The effects of the number of demonstrations on training time (top row) and success rate (bottom row) for KODex and State-action BC on each task: (a) Tool Use (Hammer), (b) Door Opening, (c) Object Relocation, and (d) In-hand Reorientation. Solid lines indicate mean trends and shaded areas show standard deviation over three random seeds.

Figure 9: The effects of the number of demonstrations on pen throw-out rate for KODex and State-action BC.

The results reveal a familiar trend: across all tasks, KODex is drastically more computationally efficient than State-action BC, while performing comparably (if not better) in terms of success rate.
Further, we would like to highlight two other advantages of KODex over State-action BC. First, KODex could potentially be applied to state-only demonstrations, with a manually-tuned PD controller replacing the learned tracking controller (one could also learn the controller via reinforcement learning [39]); in contrast, state-action imitation learning methods inevitably need action labels. Second, KODex is safer for online execution. Since KODex separates motion generation and tracking, it tends to take less risky actions, whereas state-action policies may take unsafe actions when they encounter unseen states due to covariate shift. In Fig. 9, we report the pen throw-out rate from the In-hand Reorientation task. It can be seen that the State-action BC policy is more likely to generate undesirable behaviours, resulting in complete task failures. This implies that KODex may be safer for hardware implementations, thanks to the separation of reference motion and tracking. Although there are a few other state-action policies that better address covariate shift (e.g., GAIL [45]), such comparisons are outside the scope of this work. |
6kSohKYYTn0 | Measuring Interpretability of Neural Policies ofRobots with Disentangled RepresentationTsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela RusMassachusetts Institute of Technology (MIT)Abstract: The advancement of robots, particularly those functioning in complexhuman-centric environments, relies on control solutions that are driven by ma-chine learning. Understanding how learning-based controllers make decisions iscrucial since robots are often safety-critical systems. This urges a formal andquantitative understanding of the explanatory factors in the interpretability ofrobot learning. In this paper, we aim to study interpretability of compact neu-ral policies through the lens of disentangled representation. We leverage decisiontrees to obtain factors of variation [1] for disentanglement in robot learning; theseencapsulate skills, behaviors, or strategies toward solving tasks. To assess howwell networks uncover the underlying task dynamics, we introduce interpretabilitymetrics that measure disentanglement of learned neural dynamics from a concen-tration of decisions, mutual information and modularity perspective. We showcasethe effectiveness of the connection between interpretability and disentanglementconsistently across extensive experimental analysis.Keywords: Interpretability, Disentangled Representation, Neural Policy1 IntroductionObservationd≥0.1∧μ≥0.10.0≤d≤0.2Neuron ResponseFactors of Variation by Logic ProgramCompact Neural Policy for Robot LearningStrategy 1 (Stabilize)Strategy 2 (Recover At The Right)Interpretability Measure by Disentanglement•Concentration•Mutual Information•ModularityLatent CodeBehavior #1Behavior #NBehavior #1Figure 1: Understand robot behaviors by extracting logic programs asfactors of variation to measure interpretability with disentanglement.Interpretability of learning-basedrobot control is important forsafety-critical applications as it af-fords human comprehension ofhow the system processes inputsand decides actions. In general,achieving interpretability is diffi-cult for learning-based robot con-trol. The robot learning modelsmake decisions without being ex-plicitly programmed to perform thetask and are often very large, thus itis impossible to synthesize and explain their reasoning processes. This lack of transparency, oftenreferred to as the ”black box” problem, makes it hard to interpret the workings of learning-basedrobot control systems. Understanding why a particular decision was made or predicting how thesystem will behave in future scenarios remains a challenge, yet critical for physical deployments.Through the lens of representation learning, we assume that neural networks capture a set of pro-cesses that exist in the data distribution; for robots, they manifest learned skills, behaviors, or strate-gies, which are critical to understand the decision-making of a policy. However, while these factorsof variation [1] (e.g., color or shape representations) are actively studied in unsupervised learningfor disentangled representation, in robot learning, they are less well-defined and pose unique chal-lenges due to the intertwined correspondence of neural activities with emergent behaviors unknowna priori. 
In the present study, we aim to (i) provide a useful definition of factors of variation forpolicy learning, and (ii) explore how to uncover dynamics and factors of variation quantitatively as ameasure of interpretability in compact neural networks for closed-loop end-to-end control applica-7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.tions. In this space, an entanglement corresponding to multiple neurons responsible for an emergentbehavior can obstruct the interpretation of neuron response even with a small number of neurons[2, 3, 4, 5]. To this end, the disentanglement of learned representations [6, 7, 8] in compact neuralnetworks is essential for deriving explanations and interpretations for neural policies.We posit that each neuron should learn an abstraction (factor of variation) related to a specific strat-egy required for solving a sub-component of a task. For example, in locomotion, one neuron maycapture periodic gait, where the numerical value of the neuron response may be aligned with dif-ferent phases of the gait cycle; another neuron may account for recovery from slipping. Retrievingthe abstraction learned by a neuron is, however, non-trivial. Directly observing the neuron responsealong with sensory information provided as input to the policy can be extremely inefficient andtedious for identifying behaviors and interpreting decision-making.In this work, our objective is to formulate an abstraction that represents the decision-making of aparametric policy to quantify the interpretability of learned behaviors, specifically from the perspec-tive of disentangled representations. To this end, we make the following key contributions:• Provide a practical definition of factor of variation for robot learning by programmatically extract-ing decision trees from neural policies, in the form of logic program grounded by world states.• Introduce a novel set of quantitative metrics of interpretability to assess how well policies uncovertask structures and their factors of variation by measuring the disentanglement of learned neuraldynamics from a concentration of decisions, mutual information, and modularity perspective.• Experiment in a series of end-to-end policy learning tasks that (a) showcase the effectiveness ofleveraging disentanglement to measure interpretability, (b) demonstrate policy behaviors extractedfrom neural responses, (c) unveil interpretable models through the lens of disentanglement.2 Related WorkCompact neural networks. Compact neural networks are ideal in resource constraints situationssuch as robotics and by nature easier to interpret due to a smaller number of neurons [2, 4]. Com-pact networks can be obtained by pruning [9] or compressing neural networks in end-to-end training[10]. Regularization has also been used to generate compact neural networks [11]. Compact repre-sentation of features can be learned using discriminative masking [12]. Neural Ordinary DifferentialEquations have also been used for learning compact network [13, 4]. In this work, we formally studythe interpretability of compact neural policies through the lens of disentangled representation.Interpretable neural networks. An interpretable neural network could be constructed from a phys-ically comprehensible perspective [14, 15]. Knowledge representation is used to obtain interpretableConvolutional Neural Network [16]. An active line of research focuses on dissecting and analyzingtrained neural networks in a generic yet post-hoc manner [17, 18, 19, 20]. 
Another active line ofresearch is to study disentangled explanatory factors in learned representation [8]. A better repre-sentation should contain information in a compact and interpretable structure [1, 21]. Unlike priorworks that study disentanglement based on factors of variations such as object types, there is no no-tion of ground-truth factors in robot learning and thus we propose to use decision trees to constructpseudo-ground-truth factors that capture emergent behaviors of robot for interpretability analysis.Interpretability in policy learning. Explainable AI has been recently extended to policy learn-ing like reinforcement learning [22] or for human-AI shared control settings [23]. One line ofresearch analyzes multi-step trajectories from the perspective of options or compositional skills[24, 25, 26, 27, 28, 29]. A more fine-grained single-step alternative is to extract policies via imitationlearning to interpretable models like decision tree [30]. Another line of work directly embeds thedecision tree framework into the learning-based model to strike a balance between expressivenessand interpretability [31, 32, 33]. Explanation of policy behaviors can also be obtained by searchingfor abstract state with value function [34] or feature importance [35]. In this work, we aim to offer anew perspective of disentangled representation to measure interpretability in robot policy learning.3 Method2Algorithm 1 Extract Abstraction via Decision TreeData Trajectories rollout from a compact neural policy Ddt={(o0, s0, a0, z0, . . . )j}Nj=1Result Interpreters of neuron response {fiS}i∈Ifori∈ IdoTrain a decision tree Tθifrom state {st}to neural response {zit}.Collect dataset Ddpwith neuron response {zit}and decision paths {Pist}.Train neuron response classifier qφi:R→ {P} withDdp.Obtain decision path parser ri:{P} → L by tracing out {Pik}Kik=1inTθi.Construct the mapping fiS=ri◦qφi.end forIn this section, we describe how to obtain factor of variation by predicting logic programs fromneuron responses that reflect the learned behavior of the policies (Section 3.1), followed by a set ofquantitative measures of interpretability in the lens of disentanglement (Section 3.2).3.1 Extracting Abstraction via Decision TreeOur goal is to formulate a logic program that represents the decision-making of a parametric policyto serve as an abstraction of learned behaviors, summarized in Algorithm 1. First, we describe adecision process as a tuple {O,S,A, Pa, h}, where at a time instance t,ot∈ O is the observation,st∈ S is the state, at∈ A is the action, Pa:S × A × S → [0,1]is the (Markovian) transitionprobability from current state stto next state st+1under action at, andh:S → O is the observationmodel. We define a neural policy as π:O → A and the response of neuron i∈ Ias{zit∈R}i∈I,whereIrefers to a set of neurons to be interpreted. For each neuron i, we aim to construct a mappingthat infers a logic program from neuron response, fiS:R→ L , where Lis a set of logic programsgrounded on environment states S. Note that fiSdoes not take the state as an input as underlyingstates may be inaccessible during robot deployment. In the following discussion, we heavily use thenotation Pi∗for the decision path associated with the i’th neuron, where the subscript ∗refers to thedependency on state if with parenthesis (like (st)) and otherwise indexing based on the context.From states to neuron responses. Decision trees are non-parametric supervised learning algo-rithms for classification and regression. 
Throughout training, they develop a set of decision rules based on thresholding one or a subset of input dimensions. The relation across rules is described by a tree structure with the root node as the starting point of the decision-making process and the leaf nodes as the predictions. The property of decision trees to convert data for decision making to a set of propositions is a natural fit for state-grounded logic programs. Given a trained neural policy π, we collect a set of rollout trajectories D_dt = {τ_j}_{j=1}^N, where τ_j = (o_0, s_0, a_0, z_0, o_1, ...). We first train a decision tree T_{θ^i} to predict the i-th neuron response from states,

\theta^{i*} = \arg\min_{\theta^i} \sum_{(s_t, z_t^i) \in \mathcal{D}_{dt}} \mathcal{L}_{dt}(\hat{z}_t^i, z_t^i), \quad \text{where } \hat{z}_t^i = T_{\theta^i}(s_t) \qquad (1)

where L_dt represents the underlying classification or regression criteria. The decision tree T_{θ^i} describes relations between the neuron responses and the relevant states as logical expressions. During inference, starting from the root node, relevant state dimensions will be checked by the decision rule in the current node and directed to the relevant lower layer, finally arriving at one of the leaf nodes and providing information to regress the neuron response. Each inference traces out a route from the root node to a leaf node. This route is called a decision path. A decision path consists of a sequence of decision rules defined by the nodes visited by the path, which combine to form a logic program,

\bigwedge_{n \in P^i(s_t),\, j = g(n)} (s_t^j \le c_n) \;\longleftrightarrow\; \text{Behavior extracted from } \hat{z}_t^i \text{ via } T_{\theta^i} \qquad (2)

where ∧ is the logical AND, P^i(s_t) is the decision path of the tree T_{θ^i} that takes s_t as input, g gives the state dimension used in the decision rule of node n (assume each node uses one feature for notation simplicity), and c_n is the threshold at node n.

From neuron responses to decision paths. So far, we recover a correspondence between the neuron response z_t and the state-grounded program based on decision paths P^i(s_t); however, this is not sufficient for deployment since the decision tree T_{θ^i} requires as input the ground-truth state and not the observable data to the policy (like o_t, z_t). To address this, we find an inverse of T_{θ^i} with neuron responses as inputs and pre-extracted decision paths as classification targets. Based on the inference process of T_{θ^i}, we can calculate the numerical range of neuron responses associated with a certain decision path P^i(s_t) from the predicted ẑ_t and then construct the pairs of z_t and P^i_{s_t}. We collect another dataset D_dp and train a classifier q_{φ^i} to predict decision paths from neuron responses,

\varphi^{i*} = \arg\min_{\varphi^i} \sum_{(z_t^i, P^i(s_t)) \in \mathcal{D}_{dp}} \mathcal{L}_{dp}(q_{\varphi^i}(z_t^i), P^i(s_t)) \qquad (3)

where L_dp is a classification criterion. While P^i(s_t) is state-dependent, there exists a finite set of decision paths {P^i_k}_{k=1}^{K_i} given the generating decision tree. We define the mapping from the decision tree to the logic program as r : {P} → L, which can be obtained by tracing out the path as described above. Overall, the desired mapping is readily constructed as f_S^i = r^i ∘ q_{φ^i}.
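A minimal sketch of this two-stage construction is given below, assuming scikit-learn; the tree depth, the choice of classifier, and all helper names are illustrative placeholders rather than the paper's exact settings, and leaf indices are used as a stand-in for decision paths (each leaf corresponds to exactly one path).

```python
# Minimal sketch (assumption: scikit-learn; hyper-parameters and names are illustrative).
# Stage 1 fits a decision tree T_{theta^i} from states to one neuron's response (Eq. 1);
# Stage 2 fits a classifier q_{phi^i} from the neuron response back to the tree's decision
# paths (Eq. 3), so f_S^i = r^i o q_{phi^i} can be evaluated without ground-truth states.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression


def extract_interpreter(states, neuron_responses, max_depth=3):
    # Stage 1: decision tree from states to the neuron response.
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(states, neuron_responses)
    leaf_ids = tree.apply(states)        # each leaf identifies one decision path P_k^i

    # Stage 2: classifier from the (scalar) neuron response to the decision path.
    clf = LogisticRegression(max_iter=1000).fit(
        np.asarray(neuron_responses).reshape(-1, 1), leaf_ids
    )

    t = tree.tree_

    def leaves_under(node):
        if t.children_left[node] == -1:                  # leaf node
            return {node}
        return leaves_under(t.children_left[node]) | leaves_under(t.children_right[node])

    def path_to_program(leaf_id):
        # r^i: trace (feature, comparison, threshold) rules from the root to the leaf;
        # their conjunction is the state-grounded logic program of Eq. 2.
        node, program = 0, []
        while t.children_left[node] != -1:
            go_left = leaf_id in leaves_under(t.children_left[node])
            program.append(
                (int(t.feature[node]), "<=" if go_left else ">", float(t.threshold[node]))
            )
            node = t.children_left[node] if go_left else t.children_right[node]
        return program

    return tree, clf, path_to_program
```

At deployment time only the classifier and the path-to-program parser are needed: a neuron response is mapped to a leaf id by clf and then rendered as a logic program, without access to the underlying environment state.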
3.2 Quantitative Measures of Interpretability
Programmatically extracting decision trees for constructing a mapping from the neuron response to a logic program offers a representation that facilitates the interpretability of compact neural policies. Furthermore, building on the computational aspect of our approach, we can quantify the interpretability of a policy with respect to several metrics through the lens of disentanglement.

A. Neuron-Response Variance. Given decision paths {P^i_k}_{k=1}^{K_i} associated with a tree T_{θ^i} at the i-th neuron, we compute the normalized variance of the neuron response averaged across decision paths,

\frac{1}{|\mathcal{I}|} \sum_{i \in \mathcal{I}} \frac{1}{K_i} \sum_{k=1}^{K_i} \operatorname{Var}_{(s_t, z_t^i) \in \mathcal{D}_{dt},\; t \in \{u \mid P^i(s_u) = P^i_k\}} \left[ \frac{z_t^i}{Z^i} \right] \qquad (4)

where Z^i is a normalization factor that depends on the range of response of the i-th neuron. The set {u | P^i(s_u) = P^i_k} contains all time steps that exhibit the same behavior as entailed by P^i_k. For example, suppose we have a trajectory consisting of behaviors including walking and running, and that walking is depicted as P^i_k; the set then refers to all time steps of walking. This metric captures the concentration of the neuron response that corresponds to the same strategy represented by the logic program defined by T_{θ^i}. In practice, we discretize all neuron responses into N bins, compute the index of the bin to which a value belongs, divide the index by N, and compute their variance.

B. Mutual Information Gap. Inspired by [21, 8], we integrate the notion of mutual information in our framework to extend disentanglement measures for unsupervised learning to policy learning. Specifically, while previous literature assumes known ground-truth factors for disentanglement such as object types, viewing angles, etc., there is no straightforward equivalence in neural policies since the emergent behaviors or strategies are unknown a priori. To this end, we propose to leverage the decision path sets to construct pseudo-ground-truth factors M_dp = ∪_{i∈I} {P^i_k}_{k=1}^{K_i} = {P_k}_{k=1}^{K}. Note that there may be correlation across decision paths, i.e., P(P^i, P^j) ≠ P(P^i)P(P^j) for i ≠ j. For example, one decision path corresponding to a logic program of the robot moving forward at high speed has a correlation to another decision path for moving forward at low speed. This may occur because a neuron of a policy can learn arbitrary behaviors. However, this leads to a non-orthogonal ground-truth factor set and can be undesirable, since high correlations of a neuron to multiple ground-truth factors (e.g., I[z^i; P^i] and I[z^i; P^j] are large) can result not only from entanglement of the neuron but also from the correlation between factors (e.g., I[P^i; P^j] is large). Hence, this urges the need to calibrate mutual information for computing disentanglement measures. We start by adapting the Mutual Information Gap (MIG) [21] to our framework:

\frac{1}{K} \sum_{k=1}^{K} \frac{1}{H[P_k]} \left( I[z^{i^*}; P_k] - \max_{j \neq i^*} \big( I[z^j; P_k] - I[z^j; P_k; P_{k_j}] \big) \right) \qquad (5)

where H is entropy, I is interaction information that can take an arbitrary number of variables (with 2 being mutual information), i^* = argmax_i I[z^i; P_k], and k_j = argmax_l I[z^j; P_l].
Intuitively,4Table 1: Quantitative results of classical control.NetworkArchitectureDisentanglement Explanation Size ↓ CognitiveChunks ↓ Variance ↓ MI-Gap ↑ Modularity ↑ Vertical HorizontalFCs 0.0242.0050.3008.0250.9412.0145.00.461.91.141.65.28GRU 0.0329.0040.2764.0620.9096.0224.90.801.96.171.65.25LSTM 0.0216.0030.2303.0240.9355.0084.75.392.02.121.90.14ODE-RNN 0.0287.0070.3062.0410.9376.0174.90.381.93.151.80.27CfC 0.0272.0040.2892.1110.9067.0394.70.651.82.331.50.47NCP 0.0240.0080.3653.0520.9551.0193.45.831.51.331.30.32Table 2: Alignment between disen-tanglement and explanation quality inclassical control.Re-signed RankCorrelation ↑Explanation Size CognitiveChunks Vertical HorizontalVariance -0.146 0.002 0.040MI-Gap 0.427 0.505 0.449Modularity -0.114 0.156 0.032Clockwise Angular Velocity ̇θ≥0Static At All PositionsUprightθ≈0Downward(Right)Downward(Left)123412341234(a) Phase Portrait & Neuron Response(b) Emergent Strategies fromLogic Programs(c) Decision Tree of a Neuroṅθ≤0.4̇θ≤−0.3θ≤−1.3θ≤1.412341TrueFalseFigure 2: In classical control (Pendulum): (a) Phase portrait with empirically measured closed-loop dynamicsand neuron response. Each arrow and colored dot are the results averaged around the binned state space. (b)Emergent strategies from logic programs. (c) Decision tree extracted for command neuron 3 in NCP.this measures the normalized difference between the highest and the second-highest mutual informa-tion of each decision path with individual neuron activation, i.e., how discriminative the correlationbetween the neuron response is with one decision path as opposed to the others. For example, neuronresponse correlated to multiple factors of variation will have lower MIG than those to one only. Thelast term I[zj;Pk;Pkj]is for calibration and captures the inherent correlation between zjandPkresulted from potentially nonzero I[Pk;Pkj]withPkjbeing a proxy random variable of zjin theground-truth factor set. We show how to compute I[zj;Pk]−I[zj;Pk;Pkj]in Appendix Section C.C. Modularity. We compute modularity scores from [36] with the same calibration term,1IXi∈I1−Pk̸=k∗(I[zi;Pk]−I[zi;Pk;Pk∗])2(K−1)I[zi;Pk∗]2, (6)where k∗= arg maxlI[zi;Pl]. For a ideally modular representation, each neuron will have highmutual information to a single factor of variation and low mutual information with all the others.Suppose for each neuron ihas the best ”match” with a decision path (ground-truth factor) k∗, non-modularity of that neuron is computed as the normalized variance of mutual information betweenits neuron response and all non-matched decision paths {Pk}k̸=k∗. In practice, we discretize neuronresponses into Nbins to compute discrete mutual information.4 ExperimentsWe conduct a series of experiments in various policy-learning tasks to answer the following: (i) Howeffective is disentanglement to measure the interpretability of policies? (ii)What can we extract fromneural responses? (iii)What architecture is more interpretable through the lens of disentanglement?4.1 SetupNetwork architecture. We construct compact neural networks for each end-to-end learning to con-trol task. 
For all tasks, our networks are constructed by the following priors: (i) Each baseline net-work is supplied with a perception backbone (e.g., a convolutional neural network) (ii) We constructpolicies based on different compact architectures that take in feature vectors from the perceptionbackbone and output control with comparable cell counts (instead of actual network size in memoryas we assess interpretability metrics down to cell-level). The perception backbone is followed bya neural controller designed by compact feed-forward and recurrent network architectures includ-ing fully-connected network ( FCs), gated recurrent units ( GRU ) [37], and long-short term memory(LSTM ) [38]. Additionally, we include advanced continuous-time baselines designed by ordinarydifferential equations such as ODE-RNN [39], closed-form continuous-time neural models ( CfCs )[40], and neural circuit policies ( NCPs ) [4]. We interpret the dynamics of the neurons in the lastlayer before the output in FCs, the command-neuron layer of NCPs, and the recurrent state of therest. We then extract logic programs and measure interpretability with the proposed metrics.5Table 3: Quantitative results of locomotion.NetworkArchitectureDisentanglement Explanation Size ↓ CognitiveChunks ↓ Variance ↓ MI-Gap ↑ Modularity ↑ Vertical HorizontalFCs 0.0187.0020.1823.0130.9622.0085.66.462.54.124.02.55GRU 0.0259.0020.1830.0220.9713.0095.78.392.52.083.94.35LSTM 0.0108.0020.1453.0250.9600.0025.62.312.52.103.92.23ODE-RNN 0.0210.0030.1880.0290.9701.0076.00.502.57.114.16.43CfC 0.0234.0040.1596.0190.9628.0095.94.152.58.044.20.32NCP 0.0107.0010.2164.0420.9791.0053.94.252.08.022.72.18Table 4: Alignment between disentan-glement and explanation quality in lo-comotion.Re-signed RankCorrelation ↑Explanation Size CognitiveChunks Vertical HorizontalVariance 0.512 0.456 0.443MI-Gap 0.422 0.504 0.481Modularity 0.170 0.180 0.173̇h!≥0.2θ",$≥0.2θ%,$≥−0.3̇θ%,"≥−7.01234566A6Bθ%,$≥−0.5̇θ&,"≥−8.4h!≤0.0θ","≤−0.1[CMD7] Periodic Gait Sequence[CMD3] Failure ModesorNotation of Robot Statesθ!,!h#̇θ$,!θ%,!θ!,&̇θ$,&Figure 3: Neural activations along a gait sequence on HalfCheetah [43]. We focus on neurons 7 and 3 forillustration. Neuron 7 exhibits a periodic activation pattern that reacts to different phases of the gait cycle (left).Neuron 3 displays peak activity in situations with the potential to destabilize gait, such as early touchdown (left)and forward flipping (right). Our approach aids in failure detection by monitoring key neurons’ responses.Evaluation. To evaluate the effectiveness of measuring interpretability through the lens of disen-tanglement, we adopt the metrics proposed in [41], which studies human interpretability of decisionsets [42] (a representation of explanation similar to that in this work). They show human responsetime and subjective satisfaction are highly correlated with explanation size andcognitive chunks .Explanation size consists of vertical size , the number of cases (the number of decision paths per neu-ron), and horizontal size, the complexity of each case (the length of each decision path). Cognitivechunks refer to the presentation of newly defined concepts, which we quantify as the introductionof new symbols in the logic program. Furthermore, we measure the alignment of disentanglementquantification and the above-mentioned explanation quality metrics. 
We compute re-signed rankcorrelation by re-signing Spearman’s rank correlation coefficient to make larger values always referto better alignment, e.g., given higher modularity being better while lower explanation size beingbetter, better alignment corresponds to negative correlation and we thus negate the coefficient.4.2 Classical ControlEnvironment and policy learning. We use the OpenAI Gym Classical Control Pendulum task [43].The environment has simple yet nonlinear dynamics and allows for straightforward visualization ofthe entire state space. The environment states include θ(joint angle) and ̇θ(joint angular velocity).θis in the range of ±πwithθ= 0 as the upright position. ̇θis along the clockwise direction. Thecontrol is u(joint torque). The goal is to stabilize at the upright position ( θ= ̇θ= 0) with limitedcontrol energy consumption ( u↓). We use Proximal Policy Optimization (PPO) [44] to train thepolicy with early stop by reaching episode reward -500 or a maximal number of training iterations.We run each model with 5 different random seeds and report average results.Quantitative analysis. Table 1 shows that, among all models, NCP achieves the best performancein disentanglement and explanation quality (i.e., explanation size and cognitive chunks), suggestingthat it is more interpretable from the perspective of both our work and [41]. Beyond alignment ofthe best performance, Table 2 indicates the consistency of overall ranking between disentanglementand explanation quality. We found that while variance and modularity are (partially) aligned in thebest performance in Table 1, only the mutual information gap is correlated to explanation qualityin the overall ranking. Another interesting finding is that CfCs have the lowest logic conflict. Byempirically checking the decision trees, they construct non-trivial but highly-overlapping decisionpaths, thus leading to considerably fewer conflicts in logic programs across neurons.Neuron responses and underlying behaviors. While all models learn reasonable strategies, asexemplified by focusing on the sign of θand ̇θ, we now dive deeper into understanding individualneural dynamics. To this end, we focus on NCPs as they provide a lower variance from the disentan-glement perspective in their logic programs. We found different neurons roughly subdivide the statespace into quadrants and focus on their respective subsets. In Figure 2, we show the interpretability6Table 5: Quantitative results of visual servoing.NetworkArchitectureDisentanglement Explanation Size ↓ CognitiveChunks ↓ Variance ↓MI-Gap ↑Modularity ↑Vertical HorizontalFCs 0.0124 0.1354 0.9704 4.88 2.36 2.88GRU 0.0158 0.1614 0.9801 3.88 1.94 1.75LSTM 0.0172 0.1950 0.9851 4.25 2.06 2.38ODE-RNN 0.0151 0.1588 0.9766 5.25 2.24 2.75CfC 0.0191 0.1391 0.9677 5.50 2.43 3.00NCP 0.0068 0.3902 0.9770 4.12 2.00 1.88Table 6: Alignment between disentan-glement and explanation quality in vi-sual servoing.Re-signed RankCorrelation ↑Explanation Size CognitiveChunks Vertical HorizontalVariance 0.371 0.314 0.314MI-Gap 0.657 0.771 0.771Modularity 0.257 0.314 0.314Phase Portrait1234(a) Interpretation of a single neuron(b) Front-view image (observation) at different neuron activationd≤0.2mμ≤0.1radd≥0.0m1234Decision Tree of a NeuronLeft/Right to CenterLeft/Right OrientedClose to CenterTrueFalse1234Neuron Activation at Different Control ModesStable at centerRight sideRight side & left orientedLeft sideFigure 4: Explanation of neural policies for end-to-end visual servoing (Image-based Driving). 
(a) Phaseportrait of local heading error μand lateral deviation dwith empirically measured mean neuron response andclosed-loop dynamics. (b) Front-view images retrieved based on neuron response.analysis of command neuron 3 as an example. This neuron developed fine-grained strategies fordifferent situations like swinging clockwise in the right or left downward positions, and stabilizingpositive angular velocity around the upright position, as shown in Figure 2 (b)(c). We further providephase portrait in Figure 2 (a). The arrows indicate empirically measured closed-loop dynamics (withcontrol from the policy) and the color coding indicates average neuron response at a specific statefrom evaluation. The color of the neuron response (corresponding to logic programs) and the arrows(which implicitly capture actions) highlight different fine-grained strategies in the phase portrait.Notably, this finding applies not just to NCPs but also to other networks with similar functions.4.3 LocomotionEnvironment and policy learning. We consider a planar locomotion task based on OpenAI Gym’sHalfCheetah environment [43]. The agent is rewarded for forward locomotion based on a simplebase velocity reward. We optimize our policies with PPO until a maximum number of episodes hasbeen reached. For each model, we run five trials with different random seeds and report averageresults. Here, our objective is to extend our interpretability framework to a higher-dimensionalcontrol task. Specifically, we investigate whether our approach is capable of extracting consistentsingle-neuron activation patterns that align with individual phases of a periodic gait cycle.Quantitative analysis. In Table 3, we observe consistent results with classical control that NCPachieves the best performance in disentanglement and explanation quality. We further observe thatthe LSTM achieves a desirable low disentanglement variance comparable to NCPs. LSTMs andmost networks compared to NCPs on the other hand show a lower Mutual Information Gap. Thissuggests that in these networks neuron responses are concentrated for different decision paths but notquite identifiable from a probabilistic perspective, as certain neuron activation cannot be uniquelymapped to a decision path. Besides, Table 4 shows consistent findings that the mutual informationgap has the best ranking correlation with explanation quality. As opposed to classical control, theranking correlation is overall much higher and we hypothesize that more complex tasks may yieldbetter alignment in overall ranking between disentanglement and explanation quality.Exploring Gait Pattern. As the state space is significantly larger than the Pendulum task, we com-plement our quantitative interpretability results with qualitative results that focus on two exemplaryneurons, namely command neurons (CMDs) 3 and 7. Figure 3 provides the extracted decision treesfor CMDs 7 (left) and 3 (right). We find that the former displays periodic activation patterns thatalign very well with individual phases of regular gait. In particular, it leverages position readings ofthe back thigh joint in conjunction with fore shin velocity to coarsely differentiate between stanceand flight phase. More fine-grained coordination of lift-off and touchdown is handled by the leftmostand rightmost branches, respectively. In addition to periodic neuron activations following regular7gait, we also observe more specialized decision trees that respond to potential safety-critical sit-uations. 
For example, the decision tree of CMD 3 includes two branching options that align withvariations of tripping due to premature touchdown during the flight phase corresponding to a forwardtrip (6A) and a forward flip (6B). More quantitative analysis is shown in Appendix Section I.4.4 End-to-end Visual ServoingEnvironment and policy learning. We consider vision-based end-to-end autonomous drivingwhere the neural policy learns steering commands for lane-following. The model takes front-viewRGB images of the vehicle as input, and outputs control commands for the steering wheel andspeed. We use the high-fidelity data-driven simulator VISTA [45] as our environment. We adopt atraining strategy called guided policy learning that leverages VISTA to augment a real-world datasetwith diverse synthetic data for robust policy learning. The training dataset contains roughly 200kimage-and-control pairs and mean squared error is used as the training objective. For evaluation, weinitialize the vehicle at a random position throughout the entire track and evaluate the policy for 100frames (roughly 10s) for 100 episodes. The performance is estimated as the ratio of the length of thepath traversed without a crash and the total path length. Notably, this task has two additional majordistinctions: (1) policy learning based on supervised learning as opposed to reinforcement learning(2) policies take in data (images) different from states on which logic program is grounded.Quantitative analysis. In Table 5, we have consistent findings that NCP achieves the best perfor-mance in both disentanglement and explanation quality (more precisely, comparable with the best inmodularity, horizontal explanation size, and cognitive chunks). In Table 6, the mutual informationgap achieves consistently the best alignment in the overall ranking. Also, the correlation are muchhigher than classical control and comparable with or higher than locomotion. This suggests againthat more complex tasks yield better alignment between disentanglement and explanation quality.Maneuver strategies from visual inputs. In Figure 4, we show extracted behaviors for a neuron inthe NCP driving policy. While the state space of driving is higher dimensional, we focus on localheading error μand lateral deviation from the lane center din the lane following task. We computethe statistics and plot neuron response and closed-loop dynamics in the d-μphase portrait. Thisspecific neuron develops more fine-grained control for situations when the vehicle is on the right ofthe lane center, as shown in Figure 4(a), with images retrieved from neuron response in Figure 4(b).5 Discussion and LimitationWe summarize all consistent findings to answer the questions asked at the beginning of Section 4.First, disentanglement is highly indicative of explanation quality in the best performance across alltasks, suggesting that, among all compact neural policies, the ones with more disentangled represen-tation are more interpretable for humans [41] (with robustness analysis across hyperparameters inAppendix Section F). In addition, compared to neuron response variance and modularity, the mutualinformation gap consistently has the best alignment in the overall ranking with explanation quality.Besides, there are certain network architectures (NCPs) that exhibit superior performance in disen-tanglement and explanation quality consistently across experiments. 
Another interesting finding isthat more complex tasks yield better alignment between disentanglement and explanation quality(by comparing between Table 2, 4, and 6). Finally, qualitative results showed that learned behaviorsof neural policies, e.g., gait patterns or maneuver strategies, can be extracted from neuron responses.Limitation. The proposed framework involves extracting factor of variation relevant to strategiesand task structures; however, the empirical implementation only considers abstraction (i.e., logicprogram) in a single time step. Extensions to temporal reasoning include temporal logic [46] orusing decision trees with temporal capability [33, 47]. Furthermore, the abstraction is grounded ona set of world states pre-determined by human; however, these states may not be sufficiently expres-sive to capture the learned behavior of the policy. This requires estimating the information carriedbetween observation and grounding symbols or methods to extract the latter from the former.Acknowledgments. This work is supported by Capgemini Engineering, the Toyota Research In-stitute (TRI). This research was also supported in part by the AI2050 program at Schmidt Futures(Grant G-22-905 63172). It reflects only the opinions of its authors and not TRI or Toyota entity.8References[1] Y . Bengio, A. Courville, and P. Vincent. Representation learning: A review and new per-spectives. IEEE transactions on pattern analysis and machine intelligence , 35(8):1798–1828,2013.[2] M. Lechner, R. Hasani, M. Zimmer, T. A. Henzinger, and R. Grosu. Designing worm-inspired neural networks for interpretable robotic control. In 2019 International Conferenceon Robotics and Automation (ICRA) , pages 87–94. IEEE, 2019.[3] R. Hasani, M. Lechner, A. Amini, D. Rus, and R. Grosu. Liquid time-constant networks. arXivpreprint arXiv:2006.04439 , 2020.[4] M. Lechner, R. Hasani, A. Amini, T. A. Henzinger, D. Rus, and R. Grosu. Neural circuitpolicies enabling auditable autonomy. Nature Machine Intelligence , 2(10):642–652, 2020.[5] C. V orbach, R. Hasani, A. Amini, M. Lechner, and D. Rus. Causal navigation by continuous-time neural networks. Advances in Neural Information Processing Systems , 34, 2021.[6] J. Schmidhuber. Learning factorial codes by predictability minimization. Neural computation ,4(6):863–879, 1992.[7] J. Peters, D. Janzing, and B. Sch ̈olkopf. Elements of causal inference: foundations and learn-ing algorithms . The MIT Press, 2017.[8] F. Locatello, S. Bauer, M. Lucic, G. Raetsch, S. Gelly, B. Sch ̈olkopf, and O. Bachem. Chal-lenging common assumptions in the unsupervised learning of disentangled representations. Ininternational conference on machine learning , pages 4114–4124. PMLR, 2019.[9] C. Baykal, L. Liebenwein, I. Gilitschenski, D. Feldman, and D. Rus. Sensitivity-informedprovable pruning of neural networks. SIAM Journal on Mathematics of Data Science , 4(1):26–45, 2022. doi:10.1137/20M1383239. URL https://doi.org/10.1137/20M1383239 .[10] C. Hawkins, X. Liu, and Z. Zhang. Towards compact neural networks via end-to-end training:A bayesian tensor approach with automatic rank determination. SIAM Journal on Mathematicsof Data Science , 4(1):46–71, 2022. doi:10.1137/21M1391444. URL https://doi.org/10.1137/21M1391444 .[11] S. Oymak. Learning compact neural networks with regularization. In J. Dy and A. Krause,editors, Proceedings of the 35th International Conference on Machine Learning , volume 80 ofProceedings of Machine Learning Research , pages 3966–3975. PMLR, 10–15 Jul 2018. 
URLhttps://proceedings.mlr.press/v80/oymak18a.html .[12] J. Bu, A. Daw, M. Maruf, and A. Karpatne. Learning compact representations of neural net-works using discriminative masking (dam). Advances in Neural Information Processing Sys-tems, 34, 2021.[13] M. Torkamani, P. Wallis, S. Shankar, and A. Rooshenas. Learning compact neural networksusing ordinary differential equations as activation functions. arXiv preprint arXiv:1905.07685 ,2019.[14] B. A. Toms, E. A. Barnes, and I. Ebert-Uphoff. Physically interpretable neural networks forthe geosciences: Applications to earth system variability. Journal of Advances in ModelingEarth Systems , 12(9):e2019MS002002, 2020.[15] R. Hasani, A. Amini, M. Lechner, F. Naser, R. Grosu, and D. Rus. Response characterizationfor auditing cell dynamics in long short-term memory networks. In 2019 International JointConference on Neural Networks (IJCNN) , pages 1–8. IEEE, 2019.9[16] Q. Zhang, Y . N. Wu, and S.-C. Zhu. Interpretable convolutional neural networks. In Proceed-ings of the IEEE conference on computer vision and pattern recognition , pages 8827–8836,2018.[17] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network dissection: Quantifying inter-pretability of deep visual representations. In Proceedings of the IEEE conference on computervision and pattern recognition , pages 6541–6549, 2017.[18] B. Zhou, D. Bau, A. Oliva, and A. Torralba. Interpreting deep visual representations via net-work dissection. IEEE transactions on pattern analysis and machine intelligence , 41(9):2131–2145, 2018.[19] D. Bau, J.-Y . Zhu, H. Strobelt, B. Zhou, J. B. Tenenbaum, W. T. Freeman, and A. Torralba.Gan dissection: Visualizing and understanding generative adversarial networks. arXiv preprintarXiv:1811.10597 , 2018.[20] D. Bau, J.-Y . Zhu, H. Strobelt, A. Lapedriza, B. Zhou, and A. Torralba. Understanding the roleof individual units in a deep neural network. Proceedings of the National Academy of Sciences ,117(48):30071–30078, 2020.[21] R. T. Chen, X. Li, R. B. Grosse, and D. K. Duvenaud. Isolating sources of disentanglement invariational autoencoders. Advances in neural information processing systems , 31, 2018.[22] A. Heuillet, F. Couthouis, and N. D ́ıaz-Rodr ́ıguez. Explainability in deep reinforcement learn-ing. Knowledge-Based Systems , 214:106685, 2021.[23] Q. Li, Z. Peng, H. Wu, L. Feng, and B. Zhou. Human-ai shared control via policy dissection.Advances in Neural Information Processing Systems , 35:8853–8867, 2022.[24] M. Wulfmeier, A. Abdolmaleki, R. Hafner, J. T. Springenberg, M. Neunert, T. Hertweck,T. Lampe, N. Siegel, N. Heess, and M. Riedmiller. Compositional transfer in hierarchicalreinforcement learning. arXiv preprint arXiv:1906.11228 , 2019.[25] A. Sharma, M. Ahn, S. Levine, V . Kumar, K. Hausman, and S. Gu. Emergent real-world roboticskills via unsupervised off-policy reinforcement learning. arXiv preprint arXiv:2004.12974 ,2020.[26] V . Campos, A. Trott, C. Xiong, R. Socher, X. Gir ́o-i Nieto, and J. Torres. Explore, discoverand learn: Unsupervised discovery of state-covering skills. In International Conference onMachine Learning , pages 1317–1327. PMLR, 2020.[27] A. Bagaria, J. K. Senthil, and G. Konidaris. Skill discovery for exploration and planning usingdeep skill graphs. In International Conference on Machine Learning , pages 521–531. PMLR,2021.[28] D. Tanneberg, K. Ploeger, E. Rueckert, and J. Peters. Skid raw: Skill discovery from rawtrajectories. IEEE robotics and automation letters , 6(3):4696–4703, 2021.[29] T. Seyde, W. Schwarting, I. 
Gilitschenski, M. Wulfmeier, and D. Rus. Strength through diver-sity: Robust behavior learning via mixture policies. In Conference on Robot Learning , pages1144–1155. PMLR, 2022.[30] O. Bastani, Y . Pu, and A. Solar-Lezama. Verifiable reinforcement learning via policy extrac-tion. Advances in neural information processing systems , 31, 2018.[31] A. Silva, M. Gombolay, T. Killian, I. Jimenez, and S.-H. Son. Optimization methods forinterpretable differentiable decision trees applied to reinforcement learning. In Internationalconference on artificial intelligence and statistics , pages 1855–1865. PMLR, 2020.10[32] Z. Ding, P. Hernandez-Leal, G. W. Ding, C. Li, and R. Huang. Cdt: Cascading decision treesfor explainable reinforcement learning. arXiv preprint arXiv:2011.07553 , 2020.[33] A. Pace, A. J. Chan, and M. van der Schaar. Poetree: Interpretable policy learning with adaptivedecision trees. arXiv preprint arXiv:2203.08057 , 2022.[34] D. Amir and O. Amir. Highlights: Summarizing agent behavior to people. In Proceedingsof the 17th International Conference on Autonomous Agents and MultiAgent Systems , pages1168–1176, 2018.[35] N. Topin and M. Veloso. Generation of policy-level explanations for reinforcement learning. InProceedings of the AAAI Conference on Artificial Intelligence , volume 33, pages 2514–2521,2019.[36] K. Ridgeway and M. C. Mozer. Learning deep disentangled embeddings with the f-statisticloss. Advances in neural information processing systems , 31, 2018.[37] K. Cho, B. Van Merri ̈enboer, D. Bahdanau, and Y . Bengio. On the properties of neural machinetranslation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 , 2014.[38] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation , 9(8):1735–1780, 1997.[39] Y . Rubanova, R. T. Chen, and D. K. Duvenaud. Latent ordinary differential equations forirregularly-sampled time series. Advances in neural information processing systems , 32, 2019.[40] R. Hasani, M. Lechner, A. Amini, L. Liebenwein, M. Tschaikowski, G. Teschl, and D. Rus.Closed-form continuous-depth models. arXiv preprint arXiv:2106.13898 , 2021.[41] I. Lage, E. Chen, J. He, M. Narayanan, B. Kim, S. Gershman, and F. Doshi-Velez. An evalua-tion of the human-interpretability of explanation. arXiv preprint arXiv:1902.00006 , 2019.[42] H. Lakkaraju, S. H. Bach, and J. Leskovec. Interpretable decision sets: A joint framework fordescription and prediction. In Proceedings of the 22nd ACM SIGKDD international conferenceon knowledge discovery and data mining , pages 1675–1684, 2016.[43] G. Brockman, V . Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba.Openai gym. arXiv preprint arXiv:1606.01540 , 2016.[44] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[45] A. Amini, T.-H. Wang, I. Gilitschenski, W. Schwarting, Z. Liu, S. Han, S. Karaman, andD. Rus. Vista 2.0: An open, data-driven simulator for multimodal sensing and policy learn-ing for autonomous vehicles. In 2022 International Conference on Robotics and Automation(ICRA) . IEEE, 2022.[46] A. Camacho and S. A. McIlraith. Learning interpretable models expressed in linear temporallogic. In Proceedings of the International Conference on Automated Planning and Scheduling ,volume 29, pages 621–630, 2019.[47] L. Console, C. Picardi, and D. T. Dupr `e. Temporal decision trees: Model-based diagnosis ofdynamic systems on-board. 
Journal of artificial intelligence research , 19:469–512, 2003.[48] R. T. Chen, Y . Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differentialequations. Advances in neural information processing systems , 31, 2018.[49] J. Frankle and M. Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural net-works. In International Conference on Learning Representations , 2018.11[50] L. Liebenwein, R. Hasani, A. Amini, and D. Rus. Sparse flows: Pruning continuous-depthmodels. Advances in Neural Information Processing Systems , 34, 2021.[51] L. Liebenwein, C. Baykal, B. Carter, D. Gifford, and D. Rus. Lost in pruning: The effects ofpruning neural networks beyond test accuracy. Proceedings of Machine Learning and Systems ,3:93–138, 2021.[52] L. Li, T. J. Walsh, and M. L. Littman. Towards a unified theory of state abstraction for mdps.InAI&M , 2006.[53] T. D. Kulkarni, K. Narasimhan, A. Saeedi, and J. Tenenbaum. Hierarchical deep reinforce-ment learning: Integrating temporal abstraction and intrinsic motivation. Advances in neuralinformation processing systems , 29, 2016.[54] E. S. Spelke and K. D. Kinzler. Core knowledge. Developmental science , 10(1):89–96, 2007.[55] T. Dean and R. Givan. Model minimization in markov decision processes. In AAAI/IAAI , pages106–111, 1997.[56] E. van der Pol, T. Kipf, F. A. Oliehoek, and M. Welling. Plannable approximations to mdphomomorphisms: Equivariance under actions. arXiv preprint arXiv:2002.11963 , 2020.[57] I. Higgins, D. Amos, D. Pfau, S. Racaniere, L. Matthey, D. Rezende, and A. Lerchner. Towardsa definition of disentangled representations. arXiv preprint arXiv:1812.02230 , 2018.[58] H. Caselles-Dupr ́e, M. Garcia Ortiz, and D. Filliat. Symmetry-based disentangled represen-tation learning requires interaction with environments. Advances in Neural Information Pro-cessing Systems , 32, 2019.[59] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez,Y . Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv preprintarXiv:2205.06175 , 2022.[60] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, andI. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advancesin neural information processing systems , 34:15084–15097, 2021.[61] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In International conference on machine learning , pages 8748–8763. PMLR, 2021.[62] J. Li, D. Li, S. Savarese, and S. Hoi. Blip-2: Bootstrapping language-image pre-training withfrozen image encoders and large language models. arXiv preprint arXiv:2301.12597 , 2023.12A Compact Networks for Neural PoliciesTo obtain compact neural representations, there are three common approaches: 1) simply choosean RNN with small number of units densely wired to each other (e.g., a long short-term memory,LSTM, network [38], or a continuous-time network such as an ordinary differential equation, ODE, -based network [48, 39]). 
2) sparsify a large network into a smaller system (e.g., lottery-ticket winners [49] or sparse flows [50]); and 3) use neural circuit policies, which are sparse architectures with added complexity in their neural and synaptic representations but a lightweight overall network architecture [3, 4, 40].

In the first approach, the number of model parameters works against interpretability: the wider and/or deeper a densely wired RNN becomes, the harder the system is to interpret. Sparsification has been shown to yield networks with 95% fewer parameters than the initial model. However, recent studies show that such levels of sparsity degrade the robustness of the model, making it more susceptible to perturbations [51]. Neural circuit policies (NCPs) [4], on the other hand, have shown great promise in achieving attractive degrees of generalizability while maintaining robustness to environmental perturbations. This representation-learning capability is rooted in their ability to capture the true cause and effect of a given task [5]. NCPs are sparse network architectures whose nodes and edges are determined by the liquid time-constant (LTC) concept [3]. The state of a liquid network is described by the following set of ODEs [3]:

dx(t)/dt = -[1/τ + f(x(t), I(t), t, θ)] ⊙ x(t) + f(x(t), I(t), t, θ) ⊙ A.

Here, x_(D×1)(t) is the hidden state of size D, I_(m×1)(t) is an input signal, τ_(D×1) is the fixed internal time-constant vector, A_(D×1) is a bias parameter, and ⊙ is the Hadamard product (a minimal discretized sketch of this update is given at the end of this section). In tasks involving spatiotemporal dynamics, these networks showed significant benefits over their counterparts, both in their ODE form and in their closed-form representation, termed closed-form continuous-time (CfC) models [4, 5, 40].

Interpretation of Neuron Responses. Compact neural representations promise to enable interpretability of decision-making by focusing post-hoc analysis on a limited number of neural responses. However, merely having a lower-dimensional space for visualization is not sufficient to identify consistent behaviors or strategies acquired by a learning agent. Emergent behaviors may distribute responses across numerous neurons with a high degree of entanglement. Even for models with a small number of neurons, it can be challenging to identify and interpret the behavior correlated with observed response patterns. In this paper, we hypothesize that abstraction with respect to a type of learned strategy within a single neuron is necessary for better interpretability of neural policies. We further desire semantic grounding of the neuron response, that is, associating a neuron response with a human-readable representation. The representation space should be abstract enough to be human-understandable and expressive enough to capture arbitrary types of emergent behaviors or strategies. We adopt the framework of logic programs due to its simple yet effective representation of decision-making processes.
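To make the LTC dynamics above concrete, the following is a minimal sketch of one explicit-Euler integration step of the equation in this section. The small two-layer network used for f, the sigmoid nonlinearity, the step size, and all shapes are illustrative assumptions, not the authors' implementation (which relies on the NCP/CfC models cited above).

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def ltc_euler_step(x, I, tau, A, W1, b1, W2, b2, dt=0.05):
    """One explicit-Euler step of dx/dt = -[1/tau + f(x, I)] * x + f(x, I) * A,
    where f is a small MLP over the concatenated state and input and
    * denotes the elementwise (Hadamard) product."""
    z = np.concatenate([x, I])          # (D + m,)
    h = np.tanh(W1 @ z + b1)            # hidden layer of f
    f = sigmoid(W2 @ h + b2)            # positive gate, shape (D,)
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Toy rollout with D = 4 hidden neurons and m = 3 inputs.
D, m, H = 4, 3, 10
rng = np.random.default_rng(0)
x, tau, A = np.zeros(D), np.ones(D), rng.normal(size=D)
W1, b1 = 0.1 * rng.normal(size=(H, D + m)), np.zeros(H)
W2, b2 = 0.1 * rng.normal(size=(D, H)), np.zeros(D)
for _ in range(100):
    x = ltc_euler_step(x, rng.normal(size=m), tau, A, W1, b1, W2, b2)
```

The CfC models referenced above replace this kind of numerical integration with a closed-form expression of the same dynamics.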
B A Motivating Perspective Of Disentangled Representation

The underlying behaviors of neural policies involve descriptions at multiple levels of abstraction, from detailed states at every time instance to high-level strategies toward solving a task, spanning a continuum along which details can be summarized and reduced to gradually construct more concise counterparts. Among these descriptions of behaviors, the right amount of abstraction should be concise enough for human interpretability yet sufficiently informative of how neural policies act locally toward solving the overall task. Relevant concepts about abstraction have been explored in the context of state abstraction in Markov decision processes (MDPs) [52], hierarchical reinforcement learning [53], and developmental psychology [54]. In the following, we aim to define such abstraction for the interpretability of neural policies more formally and to draw a connection to disentangled representations.

First, we define an MDP as a tuple {S, A, P_a, R}, where at time instance t, s_t ∈ S is the state, a_t ∈ A is the action, P_a : S × A × S → [0, 1] is the transition function, and R : S × A → R is the reward function. The goal of policy learning in an MDP is to find a policy π : S × A → [0, 1] that maximizes the expected future return (accumulated reward). The closed-loop dynamics (in the deterministic setting) can then be written as

s_{t+1} = P_a(s_t, a_t), where a_t = π(s_t).

Then, we construct an abstract MDP, whose state is intended to be the abstraction we seek for interpretability, as a tuple {Ŝ, Â, P̂_a, R̂} that follows a definition similar to the regular MDP above. It satisfies the deterministic MDP homomorphism [55, 56]:

∀ s_t, s_{t+1} ∈ S, a_t ∈ A:  P_a(s_t, a_t) = s_{t+1} ⇒ P̂_a(Q(s_t), Ā(a_t)) = Q(s_{t+1}),
∀ s_t ∈ S, a_t ∈ A:  R(s_t, a_t) = R̂(Q(s_t), Ā(a_t)),

where Q : S → Ŝ is the state embedding function and Ā : A → Â is the action embedding function. The state embedding function can also be seen as an action-equivariant map that precisely satisfies the MDP homomorphism [56].

Next, we draw a connection to disentangled representations through one of their formalisms based on symmetries and group theory [57]. Informally, disentanglement refers to the level of decomposition in a representation that reflects the factors of variation; for example, one dimension of a vector representation corresponds to color and another corresponds to shape. In [57], these factors of variation are formally defined as symmetries of the world state (S in our case). Given a group G, a binary operator ◦ : G × G → G, a group decomposition into a direct product of subgroups G = G_1 × G_2 × ..., and a group action ·_X : G × X → X with X as the set acted upon, the idea is to "commute" symmetries from one set X to another set X′. Suppose there is a group G of symmetries acting on the world state S via the action ·_S : G × S → S; we would like to find a corresponding action on the representation, ·_Z : G × Z → Z, that reflects the symmetric structure of S in Z (in our case, the neuron response z_t ∈ Z). This entails the equivariance condition

g ·_Z E_{S→Z}(s_t) = E_{S→Z}(g ·_S s_t),

where E_{S→Z} commutes the action across S and Z and can be called a G-morphism or equivariant map; equivalently, the diagram with horizontal maps ·_S : G × S → S and ·_Z : G × Z → Z and vertical maps id_G × E_{S→Z} and E_{S→Z} commutes.

A more concrete connection of the group action to the MDP can be seen in the analogy of agent-environment interaction [58],

g ·_S s_t = s_{t+1} = P_a(s_t, a_t).

It is worth emphasizing the distinction between the group action ·_S and the regular action a_t: not all regular actions a_t exhibit symmetry, as pointed out in [58]. The group action on the neural state, ·_Z, can in turn be viewed as the transition dynamics of the neural policy,

g ·_Z z_t = z_{t+1} = π_z(z_t) = π_s(P_a(s_t, π_a(z_t))),

where π = π_a ◦ π_s, with π_a : Z → A and π_s : S → Z simply being the decomposition of the neural policy that explicitly extracts the neuron responses z_t, and π_z : Z → Z is the transition function of neural states (note that this does not necessarily require a recurrent structure in the neural policy; it is mostly a convenient notation here). Following the definition of [57], an agent's representation Z is disentangled with respect to the decomposition G = G_1 × G_2 × ... if

1. There is a group action ·_Z : G × Z → Z.
2. The map E_{S→Z} : S → Z is equivariant between the group actions on S and Z.
3. There is a decomposition Z = Z_1 × Z_2 × ... such that each Z_i is fixed by the actions of all G_j, j ≠ i, and affected only by G_i.
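As a toy numerical illustration of this definition (with hypothetical factors of variation chosen purely for illustration, not taken from our tasks), consider a world state with a position factor and a hue factor, acted on by a product group of translations and hue shifts; a representation that routes each factor to its own coordinate satisfies the equivariance and decomposition conditions:

```python
import numpy as np

def act_on_state(g, s):
    """Group action on the world state S; g = (dx, dhue)."""
    dx, dhue = g
    return np.array([s[0] + dx, (s[1] + dhue) % 1.0])

def E(s):
    """State-to-representation map E_{S->Z}: each factor gets its own coordinate."""
    return np.array([2.0 * s[0], s[1]])

def act_on_repr(g, z):
    """Corresponding group action on Z that makes E equivariant."""
    dx, dhue = g
    return np.array([z[0] + 2.0 * dx, (z[1] + dhue) % 1.0])

s, g = np.array([0.3, 0.5]), (1.5, 0.25)

# Condition 2 (equivariance): acting then embedding equals embedding then acting.
assert np.allclose(act_on_repr(g, E(s)), E(act_on_state(g, s)))

# Condition 3 (decomposition): translations only move z[0]; hue shifts only move z[1].
assert np.isclose(act_on_repr((1.5, 0.0), E(s))[1], E(s)[1])
assert np.isclose(act_on_repr((0.0, 0.25), E(s))[0], E(s)[0])
```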
For the first condition, we already defined ·_Z above. For the second condition, we show that the equivariant map can be defined as E_{S→Z} = π_z ◦ π_s, i.e., z_{t+1} = E_{S→Z}(s_t). The proof follows as

g ·_Z E_{S→Z}(s_t) = g ·_Z z_{t+1} = π_z(z_{t+1}) = π_z(π_s(s_{t+1})) = (π_z ◦ π_s)(s_{t+1}) = E_{S→Z}(s_{t+1}) = E_{S→Z}(g ·_S s_t).

Next, extending the formalism of disentangled representation in [57] with the above-mentioned MDP homomorphism [55], we define the equivariance condition between the regular MDP {S, A, P_a, R} and the abstract MDP {Ŝ, Â, P̂_a, R̂},

g ·_S E_{Ŝ→S}(ŝ_t) = E_{Ŝ→S}(g ·_Ŝ ŝ_t),

where E_{Ŝ→S} commutes the action across Ŝ and S and can be defined via the MDP homomorphism:

ŝ_{t+1} = Q(s_{t+1}),
P̂_a(ŝ_t, â_t) = Q(P_a(s_t, a_t)),
g ·_Ŝ ŝ_t = Q(g ·_S s_t),
Q^{-1}(g ·_Ŝ ŝ_t) = g ·_S Q^{-1}(ŝ_t),
E_{Ŝ→S} = Q^{-1}.

Note that, theoretically, the state embedding function Q may not have an inverse mapping, since going from S to Ŝ is supposed to be more abstract (and thus more concise, with equal or less information). However, this does not matter, since we do not necessarily require this recipe to tell us exactly how group actions in Ŝ commute to S. Overall, we establish a group homomorphism across the sets Ŝ, S, and Z: the three rows G × Ŝ → Ŝ, G × S → S, and G × Z → Z (with horizontal maps ·_Ŝ, ·_S, and ·_Z) are connected by the vertical maps id_G × E_{Ŝ→S}, E_{Ŝ→S} and id_G × E_{S→Z}, E_{S→Z}, and the resulting diagram commutes.

This connects the right amount of abstraction for interpretability discussed at the beginning, here associated with the MDP homomorphism, to the factors of variation in disentangled representations, which are formalized by symmetries and group theory. Disentanglement in Z can then be lifted to symmetries in the abstract state space Ŝ. In [57], disentanglement of the representation is lifted to symmetries in the world state space S, e.g., a factor of the group decomposition G_i can be the color of an object. However, this is not sufficient to describe the behavior of policies, since S lacks task structure. Hence, we further go from S to Ŝ with the MDP homomorphism to capture the essence of solving a task. A factor of the group decomposition G_i can then be task-related, e.g., the relative pose to a target object (which may be of high interest for tasks like object tracking, and less so for tasks like locomotion). Overall, this motivates casting the problem of searching for a proper description of the behavior of neural policies (for interpretability) as searching for disentanglement in neuron responses. In this paper, we therefore study how to measure the interpretability of compact neural policies with disentangled representations.

C Calibration Of Mutual Information

Lemma C.1. The calibration term I[z_j; P_k] − I[z_j; P_k; P_{k_j}] in both the MIG (5) and Modularity (6) metrics, for j ≠ i*, without loss of generality has the following lower bound:

I[z_j; P_k] − I[z_j; P_k; P_{k_j}] ≥ max(0, I[z_j; P_k] − I[P_k; P_{k_j}])   (7)

Lemma C.1 is necessary because computing the calibration term requires access to the conditional distribution of the random variable (P_{k_j} | z_j), which is normally inaccessible. Hence, we derive a lower bound for the calibrated mutual information.

Proof. In the main paper, we adapt the Mutual Information Gap (MIG) [21] to our framework as

(1/K) Σ_{k=1}^{K} (1/H[P_k]) ( I[z_{i*}; P_k] − max_{j≠i*} ( I[z_j; P_k] − I[z_j; P_k; P_{k_j}] ) )

and the Modularity score [36] as

(1/|I|) Σ_{i∈I} ( 1 − Σ_{k≠k*} ( I[z_i; P_k] − I[z_i; P_k; P_{k*}] )^2 / ( (K − 1) I[z_i; P_{k*}]^2 ) ).

Both involve the computation of I[z_j; P_k; P_{k_j}]. Without loss of generality for both cases (and with the notation of MIG), we simplify the calibration term for j ≠ i* as follows:

I[z_j; P_k] − I[z_j; P_k; P_{k_j}]
= I[z_j; P_k] − ( I[z_j; P_k] − I[z_j; P_k | P_{k_j}] )
= I[z_j; P_k | P_{k_j}]
= I[z_j; P_k] + H[P_{k_j} | z_j] + H[P_{k_j} | P_k] − H[P_{k_j} | z_j, P_k] − H[P_{k_j}]
= I[z_j; P_k] − ( H[P_{k_j}] − H[P_{k_j} | P_k] ) + ( H[P_{k_j} | z_j] − H[P_{k_j} | z_j, P_k] )
= I[z_j; P_k] − I[P_k; P_{k_j}] + I[P_{k_j}; P_k | z_j]
≥ max(0, I[z_j; P_k] − I[P_k; P_{k_j}]).

Most steps simply follow identities of mutual information and entropy; the last step drops the non-negative term I[P_{k_j}; P_k | z_j] and uses the non-negativity of I[z_j; P_k | P_{k_j}]. Evaluating the dropped term exactly would require access to the conditional distribution of the random variable (P_{k_j} | z_j), which is normally inaccessible. Hence, we introduce this approximation, which serves as a lower bound for the calibrated mutual information, in our implementation.
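To illustrate how the lower bound of Eq. (7) can be evaluated from data, the following is a minimal sketch that discretizes a neuron response and estimates the mutual-information terms with scikit-learn. The quantile binning, sample sizes, and synthetic factor variables are assumptions for illustration only, not the paper's exact estimator.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(x, n_bins=20):
    """Bin a continuous 1-D signal so that discrete MI estimators apply."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def calibrated_mi_lower_bound(z_j, p_k, p_kj, n_bins=20):
    """Lower bound of the calibration term (Eq. 7):
    I[z_j; P_k] - I[z_j; P_k; P_{k_j}] >= max(0, I[z_j; P_k] - I[P_k; P_{k_j}])."""
    zb = discretize(z_j, n_bins)
    i_z_pk = mutual_info_score(zb, p_k)       # I[z_j; P_k]
    i_pk_pkj = mutual_info_score(p_k, p_kj)   # I[P_k; P_{k_j}]
    return max(0.0, i_z_pk - i_pk_pkj)

# Toy usage with synthetic data: P_k and P_{k_j} are discrete factor labels,
# and z_j is a continuous neuron response loosely driven by P_k.
rng = np.random.default_rng(0)
p_k = rng.integers(0, 4, size=5000)
p_kj = rng.integers(0, 3, size=5000)
z_j = p_k + 0.5 * rng.normal(size=5000)
print(calibrated_mi_lower_bound(z_j, p_k, p_kj))
```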
D Other Quantitative Measures

Decision Path Accuracy. During deployment, we use an inverse proxy q_{φ_i} for the decision tree T_{θ_i}, and hence we compute the approximation error by measuring the accuracy of the state-grounded decision path inferred from the neuron response with q_{φ_i} against the true states,

(1/|I|) Σ_{i∈I} (1/|D_dt|) Σ_{(s_t, z_t^i) ∈ D_dt} (1/|q_{φ_i}(z_t^i)|) Σ_{n ∈ q_{φ_i}(z_t^i), j = g(n)} 1[s_t^j ≤ c_n],   (8)

where 1[·] is an indicator function, q_{φ_i}(z_t^i) is the inferred decision path, whose cardinality |q_{φ_i}(z_t^i)| is the number of decision rules, and j = g(n) indexes the state dimension used by decision rule n. The condition s_t^j ≤ c_n checks whether the current state s_t^j complies with the inferred rule defined by the threshold c_n (which comes from T_{θ_i}). Since the discrepancy is computed at the level of individual decision rules, it captures not only the error of the classifier model q_{φ_i} but also how accurately f_S^i parses z_t^i.

Cross-neuron Logic Conflict. When interpreting a neural policy as a whole instead of inspecting individual neuron responses, it is straightforward to take the intersection of the logic programs extracted from different neurons, l_t = reduce(∧_{i∈I} l_t^i), where reduce summarizes and reduces logic programs to a more compact one. Intuitively, the neuron-wise logic program should summarize the operational domain of the strategy currently executed by the neuron, and the intersection describes the domain of a joint strategy across neurons. However, the reduction of the intersection can be invalid if there is a conflict among the logical formulae across neurons, e.g., a ≤ 3 from the first neuron and a ≥ 4 from the second neuron. Such a conflict may imply, under the same configuration of f_S, that (1) the policy fails to learn compatible strategies across neurons or (2) there is an error induced by the interpreter due to an insufficient or ambiguous connection between the logic program and the neuron response, which implicitly indicates a lack of interpretability.

Experimental Results. For classical control, we verify in Table 7 that all models achieve comparable performance when trained toward the target episode reward of -500.
For locomotion, in Table 8, mostmodels achieve comparable task performance except for GRU and ODE-RNN being slightly worse.For end-to-end visual servoing, in Table 9, all models achieve good performance ( >0.9) except forODE-RNN, which fails to learn a good policy within maximal training iterations.16Table 7: Other quantitative results of classical control.NetworkArchitectureDecision PathAccuracy ↑LogicConflict ↓Performance ↑FCs 0.3015.0690.2104.065-488.55010.99GRU 0.2504.1040.2832.080-559.82114.07LSTM 0.2392.0310.5072.103-467.95024.53ODE-RNN 0.2980.0650.2506.101-533.93122.47CfC 0.2509.1380.1556.099-489.28007.66NCP 0.4726.1140.2026.088-556.64116.75Table 8: Other quantitative results of locomotion.NetworkArchitectureDecision PathAccuracy ↑LogicConflict ↓Performance ↑FCs 0.5285.0540.1035.0115186.502458.84GRU 0.4924.0540.1500.0323857.211448.57LSTM 0.5283.0730.2155.0424122.741751.04ODE-RNN 0.4959.0570.1474.0243472.691734.91CfC 0.4841.0450.1581.0315195.462292.67NCP 0.5859.0190.1105.0265822.730512.73E Implementation DetailsNCPs are designed by a four-layer structure consisting of sensory neurons (input layer), interneu-rons, command neurons (with recurrent connections), and motor neurons (output layer). To make afair comparison, we augment all non-NCP models by a feed-forward layer, which is of equivalentsize to the inter-neuron layer in NCPs.E.1 Classical Control (Pendulum)Network Architecture. With 3-dimensional observation space and 1-dimensional action space, weuse the following network architecture for compact neural policies.•FCs: a3→10→4→1fully-connected network with tanh activation.•GRU : a3→10fully-connected network with tanh activation followed by GRU with cellsize of 4, outputting a 1-dimensional action.•LSTM : a3→10fully-connected network with tanh activation followed by LSTM withhidden size of 4, outputting a 1-dimensional action. Note that this effectively gives 8cellsby considering hidden and cell states.•ODE-RNN : a3→10fully-connected network with tanh activation followed by a neuralODE with recurrent component both of size 4, outputting a 1-dimensional action.•CfC: with backbone layer = 1, backbone unit = 10 , backbone activation silu, hidden size= 4without gate and mixed memory, outputting a 1-dimensional action.•NCP : with 3sensory neurons, 10interneuron, 4command neurons, 1motor neuron, 4output sensory synapses, 3output inter-synapses, 2recurrent command synapse, 3motorsynapses.For all policies, we use a 3→64→64→1fully-connected networks with tanh activation as valuefunction. We interpret the layer of size 4for each policy.Training details. We use PPO with the following parameters for all models. Learning rate is 0.0003 .Train batch size (of an epoch) is 512. Mini-batch size is 64. Number of iteration within a batch is 6.Value function clip parameter is 10.0. Discount factor of the MDP is 0.95. Generalized advantage17Table 9: Other quantitative results of visual servoing.NetworkArchitectureDecision PathAccuracy ↑LogicConflict ↓Performance ↑FCs 0.5379 0.1354 1.0000GRU 0.6160 0.1884 0.9210LSTM 0.5174 0.4504 1.0000ODE-RNN 0.5483 0.3786 0.4239CfC 0.5549 0.2274 0.9922NCP 0.5960 0.1067 1.0000estimation parameter is 0.95. Initial coefficient of KL divergence is 0.2. Clip parameter is 0.3.Training halts if reaching target average episode reward 150. Maximal training steps is 1M.Interpreter details. 
For the decision tree Tθi, we set minimum number of samples required to beat a leaf node as 10% of the training data, criterion of a split as mean squared error with Friedman’simprovement score, the maximum depth of the tree as 3, complexity parameter used for minimalcost-complexity pruning as 0.003; we use scikit-learn implementation of CART (Classification andRegression Trees). For simplicity, we use another decision tree as decision path classifier qφiwithmaximal depth of tree as 3, minimum number of samples in a leaf node as 1%of data, complexityparameter for pruning as 0.01, criterion as Gini impurity. The state grounding Sof the interpreterfiSis{θ, ̇θ}, where θis joint angle and ̇θis joint angular velocity. We use the offline data collectedduring the closed-loop policy evaluation for the training dataset, which consists of 100 trajectorieswith each having maximally 100 time steps (default in the environment).E.2 Locomotion (HalfCheetah)Network Architecture. With 17-dimensional observation space and 6-dimensional action space,we first use feature extractors of a shared architecture as a 17→256fully-connected network,which then output features to compact neural policies with the following architectures,•FCs: a256→20→10→6fully-connected network with tanh activation.•GRU : a256→20fully-connected network with tanh activation followed by GRU withcell size of 10, outputting a 6-dimensional action.•LSTM : a256→20fully-connected network with tanh activation followed by LSTM withhidden size of 10, outputting a 6-dimensional action. Note that this effectively gives 20cells by considering hidden and cell states.•ODE-RNN : a256→20fully-connected network with tanh activation followed by a neuralODE with recurrent component both of size 10, outputting a 6-dimensional action.•CfC: with backbone layer = 1, backbone unit = 20 , backbone activation silu, hidden size= 10 without gate and mixed memory.•NCP : with 256sensory neurons, 20interneuron, 10command neurons, 6motor neuron,4output sensory synapses, 5output inter-synapses, 6recurrent command synapse, 4inputmotor synapses.For all policies, we use a 17→256→256→1fully-connected networks with tanh activation asvalue function. We interpret the layer of size 10for each policy.Training details. We use PPO with the following parameters for all models. Learning rate is 0.0003 .Train batch size (of an epoch) is 65536 . Mini-batch size is 4096 . Number of iteration within a batchis32. Value function coefficient is 10.0. Discount factor of the MDP is 0.99. Generalized advantageestimation parameter is 0.95. Initial coefficient of KL divergence is 1.0. Clip parameter is 0.2.Gradient norm clip is 0.5. Training halts if reaching target average episode reward −500. Maximaltraining steps is 12M.18Interpreter details. For the decision tree Tθi, we set minimum number of samples required to beat a leaf node as 10% of the training data, criterion of a split as mean squared error with Friedman’simprovement score, the maximum depth of the tree as 3, complexity parameter used for minimalcost-complexity pruning as 0.001; we use scikit-learn implementation of CART (Classification andRegression Trees). For simplicity, we use another decision tree as decision path classifier qφiwithmaximal depth of tree as 3, minimum number of samples in a leaf node as 1%of data, complexityparameter for pruning as 0.01, criterion as Gini impurity. 
The state grounding Sof the interpreterfiSis{hR, θR, θT,B, θS,B, θF,B, θT,F, θS,F, θF,F, ̇xR, ̇hR, ̇θR, ̇θT,B, ̇θS,B, ̇θF,B, ̇θT,F, ̇θS,F, ̇θF,F},where hR, ̇hRare position and velocity of z-coordinate of the front tip, θR, ̇θRare angle and an-gular velocity of the front tip, θT,B, ̇θT,Bare angle and angular velocity of the thigh in the back,θS,B, ̇θS,Bare angle and angular velocity of the shin in the back, θF,B, ̇θF,Bare angle and angularvelocity of the foot in the back, θT,T, ̇θT,Tare angle and angular velocity of the thigh in the front,θS,T, ̇θS,Tare angle and angular velocity of the shin in the front, θF,T, ̇θF,Tare angle and angularvelocity of the foot in the front, ̇xRis the velocity of x-coordinate of the front tip. We use the offlinedata collected during the closed-loop policy evaluation for the training dataset, which consists of100 trajectories with each having maximally 1000 time steps (default in the environment).E.3 End-to-end visual servoing (Image-based Driving)Network Architecture. With image observation space of size (200,320,3)and 2-dimensional ac-tion space, we first use feature extractors of a shared architecture as a convolutional neural network(CNN) in Table 10, which then output features to compact neural policies with the following archi-tectures,•FCs: a1280→20→8→2fully-connected network with tanh activation.•GRU : a1280→20fully-connected network with tanh activation followed by GRU withcell size of 8, outputting a 2-dimensional action.•LSTM : a1280→20fully-connected network with tanh activation followed by LSTM withhidden size of 8, outputting a 2-dimensional action. Note that this effectively gives 20cellsby considering hidden and cell states.•ODE-RNN : a1280→20fully-connected network with tanh activation followed by aneural ODE with recurrent component both of size 8, outputting a 2-dimensional action.•CfC: with backbone layer = 1, backbone unit = 20 , backbone activation silu, hidden size= 8without gate and mixed memory.•NCP : with 1280 sensory neurons, 20interneuron, 8command neurons, 2motor neuron,4output sensory synapses, 5output inter-synapses, 6recurrent command synapse, 4inputmotor synapses.Training details. Batch size is 64. Sequence size is 10. Learning rate is 0.001. Number of epochsis10. We perform data augmentation on RGB images with randomized gamma of range [0.5,1.5],brightness of range [0.5,1.5], contrast of range [0.7,1.3], saturation of range [0.5,1.5].Interpreter details. For the decision tree Tθi, we set minimum number of samples required to beat a leaf node as 10% of the training data, criterion of a split as mean squared error with Friedman’simprovement score, the maximum depth of the tree as 3, complexity parameter used for minimalcost-complexity pruning as 0.003; we use scikit-learn implementation of CART (Classification andRegression Trees). For simplicity, we use another decision tree as decision path classifier qφiwithmaximal depth of tree as 3, minimum number of samples in a leaf node as 1%of data, complexityparameter for pruning as 0.01, criterion as Gini impurity. The state grounding Sof the interpreterfiSis{v, δ, d, ∆l, μ, κ}, where vis vehicle speed, δis heading, dis lateral deviation from the lanecenter, ∆lis longtitudinal deviation from the lane center, μis local heading error with respect tothe lane center, κis road curvature. 
We use the offline data collected during the closed-loop policyevaluation for the training dataset, which consists of 100 trajectories with each having maximally100 time steps.19Layer HyperparametersConv2d (3, 24, 5, 2, 2)GroupNorm2d (16, 1e-5)ELU -Dropout 0.3Conv2d (24, 36, 5, 2, 2)GroupNorm2d (16, 1e-5)ELU -Dropout 0.3Conv2d (36, 48, 3, 2, 1)GroupNorm2d (16, 1e-5)ELU -Dropout 0.3Conv2d (48, 64, 3, 1, 1)GroupNorm2d (16, 1e-5)ELU -Dropout 0.3Conv2d (64, 64, 3, 1, 1)AdaptiveAvgPool2d reduce height dimensionTable 10: Network architecture of CNN feature extractor for end-to-end visual servoing. Hyperparameters forConv2d are input channel, output channel, kernel size, stride, and padding; for GroupNorm2d , they are groupsize and epsilon; for Dropout , it is drop probability.F Robustness AnalysisWe propose to study the interpretability of neural policies through decision trees and present severalquantitative measures of interpretability by analyzing various properties on top of neuron responsesand corresponding decision trees, including Neural-Response Variance ,Mutual Information Gap ,Modularity ,Decision Path Accuracy , and Logic Conflict . However, the extracted decision trees maydiffer across different configurations. Hence, to validate the robustness of the proposed metrics tohyperparameters, we compute all metrics with different decision tree parameters in classical controlenvironment (Pendulum). We report the averaged results with 5 random seeds in Table 11 ( Neural-Response Variance ), Table 12 ( Mutal Information Gap ), Table 13 ( Modularity ), Table 14 ( DecisionPath Accuracy ), Table 15 ( Logic Conflict ). Most metrics (variance, MI-gap, decision path accuracy,logic conflict) yield consistent top-1 results and agree with similar rankings among network archi-tectures, except for modularity that is slightly less robust against hyperparameters yet still consistentin the top-3 set of models. This results demonstrate the reliability of the proposed interpretabilityanalysis for neural policies.Table 11: Robustness to hyperparameters for Neural-Response Variance . The results are averaged across 5random seeds in classical control (Pendulum).[Variance ↓] Network Architecture FCs GRU LSTM ODE-RNN CfC NCPCost Complexity Pruning0.001 0.0232 0.0304 0.0209 0.0266 0.0254 0.02070.003 0.0242 0.0329 0.0216 0.0287 0.0272 0.02400.01 0.0261 0.0371 0.0221 0.0315 0.0267 0.0305Minimal Leaf Sample Ratio0.01 0.0154 0.0261 0.0138 0.0193 0.0189 0.01860.1 0.0242 0.0329 0.0216 0.0287 0.0272 0.02400.2 0.0334 0.0387 0.0284 0.0354 0.0295 0.028520Table 12: Robustness to hyperparameters for Mutual Information Gap . The results are averaged across 5random seeds in classical control (Pendulum).[MI-Gap ↑] Network Architecture FCs GRU LSTM ODE-RNN CfC NCPCost Complexity Pruning0.001 0.0284 0.2686 0.2026 0.2891 0.2544 0.34030.003 0.3008 0.2764 0.2303 0.3062 0.2892 0.36530.01 0.3482 0.3065 0.2547 0.3142 0.3567 0.3664Minimal Leaf Sample Ratio0.01 0.2824 0.2632 0.2040 0.2819 0.2433 0.34560.1 0.3008 0.2764 0.2303 0.3062 0.2892 0.36530.2 0.3798 0.3387 0.2528 0.3168 0.3342 0.3429Table 13: Robustness to hyperparameters for Modularity . 
The results are averaged across 5 random seeds inclassical control (Pendulum).[Modularity ↑]Network ArchitectureFCs GRU LSTM ODE-RNN CfC NCPCost Complexity Pruning0.001 0.9519 0.9558 0.9327 0.9485 0.9228 0.94380.003 0.9471 0.9550 0.9402 0.9486 0.9116 0.95510.01 0.9532 0.9598 0.9445 0.9487 0.8970 0.9593Minimal Leaf Sample Ratio0.01 0.9638 0.9702 0.9547 0.9630 0.9333 0.96510.1 0.9471 0.9550 0.9402 0.9486 0.9116 0.95510.2 0.9475 0.9372 0.9197 0.9404 0.8755 0.9301G Counterfactual Analysis via Removal of NeuronsThere exist some neurons with logic programs that are sensible but may have little effect on taskperformance. For example, in NCPs (not confined to this specific architecture but just focus onit for discussion), we find a neuron that aligns its response purely with vehicle speed. Given thetask objective is lane following without crashing, such neuron pays attention to useful (for temporalreasoning across frames) but relatively unnecessary (to the task) information. Furthermore, thereare neurons that don’t exhibit sufficient correlation with any of the environment state and fail toinduce decision branching. In light of these observation, we try to remove neurons that we suspectto have little influence on the performance by inspecting their logic program. We show the resultsin Table 16. Removing neurons 3, 4, 7 has a marginal impact on task performance. Among them,neuron 3 and 4 mostly depends on vehicle speed vwith a small tendency to the lateral deviation d.Neuron 7 fails to split a tree.H Interpretation Of Driving ManeuverIn Figure 4, we describe interpretations similar to classical control (for a neuron in NCP). Whilethe state space of driving is higher dimensional (5 with bicycle model for lane following), statesof interest only include local heading error μand lateral deviation from the lane center din lanefollowing task. We compute the statistics and plot neuron response and closed-loop dynamics inthed-μphase portrait. This specific neuron develops more fine-grained control for situations whenthe vehicle is on the right of the lane center, as shown in Figure 4(a). We further show front-viewimages retrieved based on neuron response in Figure 4(b).I Additional Quantitative Analysis on Locomotion BehaviorsIn Figure 3 (right), we demonstrate interesting qualitative examples on discovered gaits criticalfor failure modes like early touchdown or forward flipping. In Table 17, we conduct quantitativeanalysis to further justify our findings. Since Gym Half-Cheetah does not have early termination,we did analysis on the stepwise reward, specifically the run reward (horizontal distance incrementedacross the consecutive time steps). Recall in Section 3.1 paragraph “From neuron responses todecision paths”, we can infer the decision path (or branch) by the range of neuron responses. We use21Table 14: Robustness to hyperparameters for Decision Path Accuracy . The results are averaged across 5 randomseeds in classical control (Pendulum).[Decision Path Accuracy ↑]Network ArchitectureFCs GRU LSTM ODE-RNN CfC NCPCost Complexity Pruning0.001 0.2815 0.2415 0.2195 0.2904 0.2250 0.42940.003 0.3015 0.2504 0.2392 0.2980 0.2509 0.47260.01 0.3074 0.3330 0.3161 0.3707 0.2864 0.4390Minimal Leaf Sample Ratio0.01 0.2950 0.2637 0.2270 0.2574 0.2452 0.42870.1 0.3015 0.2504 0.2392 0.2980 0.2509 0.47260.2 0.3572 0.3587 0.2794 0.3322 0.2784 0.4684Table 15: Robustness to hyperparameters for Logic Conflict . 
The results are averaged across 5 random seeds inclassical control (Pendulum).[Logic Conflict ↓]Network ArchitectureFCs GRU LSTM ODE-RNN CfC NCPCost Complexity Pruning0.001 0.2451 0.3348 0.5240 0.2641 0.2048 0.31590.003 0.2104 0.2832 0.5072 0.2506 0.1556 0.20260.01 0.1766 0.1877 0.4325 0.1401 0.1121 0.2924Minimal Leaf Sample Ratio0.01 0.2672 0.4298 0.6791 0.3575 0.2654 0.26070.1 0.2104 0.2832 0.5072 0.2506 0.1556 0.20260.2 0.1796 0.1664 0.3842 0.2001 0.1089 0.1111Table 16: Removing a single neuron based on explanation.Remove Neuron 0 1 2 3 4 5 6 7Performance ↑ 0.24 0.07 0.09 1.00 0.969 0.29 0.03 1.000 25 50 75 100 125 150 175 200Progress s/m0.20.00.20.40.6Lateral error d/mAll neuronsRemove neuron 30 25 50 75 100 125 150 175 200Progress s/m0.20.00.20.40.6Lateral error d/mAll neuronsRemove neuron 7Remove neurons 3 and 7Figure 5: Driving profile when removing neurons according to decision tree interpretation.this technique (e.g., 6B corresponds to CMD03 ¡= -0.914) to retrieve all time steps that fall into thebranch 6A (early touchdown) or 6B (forward flipping) and compute average reward of the next fewtime steps. We also compute the non-branch results to serve as a reference, i.e., comparison betweenbranch X and not branch X. For early touchdown (6A), we can observe from the quantitative resultsthat the reward drops immediately when such a branch is “activated”; this makes sense as earlytouchdown brakes the robot right away, leading to smaller distance increments. For forward flipping(6B), we observe a higher reward at the closer time steps, which then quickly falls off to relativelymuch lower value; this is also reasonable as flipping motion carries the robot body forward a lot inthe beginning stage yet leaves the robot body in a very bad pose to move forward afterward.22Table 17: Quantitative analysis on failure mode of locomotion.Step Run Reward At Time t+1 t+2 t+3 t+4Branch 6A (early touchdown) 5.405 5.414 6.048 6.730Not Branch 6A 6.235 6.233 6.031 5.813Branch 6B (forward flipping) 7.145 6.431 5.392 4.998Not Branch 6B 5.360 5.794 6.425 6.664Table 18: Extension to larger models with Decision Transformer [60] as an example.Method Variance MI-Gap ModularityDecision Transformer 0.007.0030.167.0880.981.017J Potential Extension to Larger ModelsWhile our work focuses only on compact neural networks, our method can be extended to larger-scale models by selecting a “bottleneck” layer and extract interpretation from neurons in that layer.Take transformers as examples. To start with, at a high level, transformer-based policies are con-structed with an encoder-decoder architecture [59, 60] with inputs/outputs either being tokenized orkept to be directly mapped from continuous values to embeddings. The natural selection of the bot-tleneck is then the last hidden state of the encoder, which is a common way to do feature extractionfrom large transformer-based models [61, 62]. Overall, we believe there are promising extensionsof our work toward larger-scale models.Furthermore, in Table 18, we conduct an experiment using Decision Transformer [60] to demonstratethe future potential of our work. We apply our method to a pre-trained checkpoint that can achieve∼10,000 reward in Gym Half-Cheetah. 
Given Decision Transformer adopts an encoder-decodertransformer-based architecture (similar to most language models), we extract the latest time stepof the “last hidden state” of the encoder (following the terminology of Huggingface-Transformershere); specifically, last hidden state refers to the last layer of the stacked attention blocks and thelatest time step refers to the last time step of the input sequence, e.g., in natural language, the “robot”in the sentence “I like robot”. As discussed earlier, such technique is commonly adopted to extractfeatures from large-scale transformer-based models. So far, in Decision Transformer, we get a (3,128) feature, where 3 corresponds to return, action, and state prediction respectively. We take thedimension that is used to predict action, eventually leading to a 128-dimensional feature vector. Weapply our method on this 128-dimensional feature vector and report the metrics.Interestingly, Decision Transformer gives very good performance in neuron response variance andmodularity, and slightly below average performance in MIG. Besides, we also show some samplesof extracted logic programs. (Please refer to Section E.2 paragraph “Interpreter details” for detaileddescription of each symbol)• ̇hR<=-1.959 ∧θS,B<=-0.256• ̇θS,B<=-2.139 ∧θT,T>-0.995• ̇θS,B<=-0.945 ∧ ̇hR<=0.109Furthermore, as there are 128 neurons interpreted, which leads to a much larger set of possibledecision paths and hence larger-sized logic programs, it drastically increases the cognitive load ofhumans interpreting the programs. To remedy this issue, we try to extract the decision path setfrom a smaller number of neurons by performing dimension reduction and only considering theprincipal components. Such an approximation can be surprisingly effective in practice as from theperspective of representation learning, the feature may exhibit a highly-structured distribution in the128-dimensional space. We can empirically verify this by checking the explained variance (E.V .) ofthe principal components (P.C.) as shown in Table 19. We can see that the 10 principal componentscan achieve over 80% of explained variance. Note that the dimension reduction is only performed23Table 19: Explained variance of the dimension-reduced space of Decision Transformer’s features.PrincipleComponents1 2 3 4 5 6 7 8 9 10ExplainedVariance (E.V .)0.247 0.179 0.133 0.109 0.044 0.028 0.025 0.022 0.017 0.013AccumulatedE.V .0.247 0.426 0.559 0.669 0.713 0.742 0.767 0.789 0.807 0.821Table 20: Results for the human study.Method Accuracy Subjective SatisfactionNon-NCP 0.603.0530.814.021NCP 0.648.0650.971.078at constructing the decision path set for factors of variation and we are still interpreting all 128neurons. We will then get much smaller-sized logic programs after performing the logic reductionstep (as discussed in the Section D paragraph “Cross-neuron Logic Conflict”). Immediate researchquestions then arise here like does this dimension reduction step still produce the factors of variationwith similar amounts of information to the original ones? , orhow sensitive is it to the methods andhyperparameters that extract the factors of variation? , etc. 
These studies are extremely interestingyet go beyond this work and we leave these to future exploration.While this additional study only provides a minimal experiment and analysis, we believe it demon-strates the potential of extending the proposed concept to larger-scale models and we will keep onexploring along this research direction in the future.K Minimal Human Study as ValidationWe design a questionnaire adapted from [41] to measure accuracy, response time, and subjectivesatisfaction. An example is shown in Figure 6. We show the human subject the observation ofthe policy (the image below what the robot sees) and the logic programs extracted from neuronresponses (the text below In the mind of the robot), and ask the subject to guess what the robot willdo next (the two different angles of the steering wheels). One of the options corresponds to the actualcontrol predicted by the policy and the other is randomly sampled with the opposite sign from theactual control to avoid ambiguity. The user can choose between the two angles or non-selected (i.e.,I don’t know or I am not sure). Also, we put a checkbox below if the user thinks the logic programis not helping to measure subjective satisfaction. We record the answers of the subjects along withtheir response time.The questionnaire consists of 144 questions, which takes roughly 20 minutes to finish. We sample62 questions based on NCP’s results and 62 questions based on non-NCP’s since this specific archi-tecture has the most distinction from the others across all tasks, thus potentially easier for minimalhuman experiment. We collected the results from 10 subjects, shown in the below table with super-script as standard deviation. Both accuracy and subjective satisfaction are between 0 and 1, where 0is the worst and 1 is the best. The subjective satisfaction is the rate of the subject not checking thecheckbox that indicates the logic program is not helping. The results are shown in Table 20. First,we can see that all accuracy are larger than 0.5 (random guess), which indicates that the logic pro-grams are indeed useful for human users to understand the decision making of robots. Besides, wesee positive correlation between subjective satisfaction and accuracy, which means when the humanusers think the logic programs are useful, they are indeed useful.Note that, however, these results should be only viewed as a minimal experiment that augmentsthe evaluation and analysis from [41, 42], which performs detailed user study on the human inter-pretability of the decision set (conceptually the same as logic programs used in our work). The24Figure 6: The demo of the user study questionnaire (adapted from [41]). We show the human subject theobservation of the policy (the image below what the robot sees ) and the logic programs extracted from neuronresponses (the text below In the mind of the robot ), and ask the subject to guess what the robot will do next (thetwo different angles of the steering wheels). One of the option corresponds to the actual control predicted bythe policy and the other is randomly sampled with the opposite sign from the actual control to avoid ambiguity.The user can choose between the two angles or unselected (i.e., I don’t know or I am not sure). Also, we puta checkbox below if the user thinks the logic program is not helping to measure subjective satisfaction. 
Werecord the answers of the subjects along with their response time.more thorough and rigorous study with human subjects on interpretability in robot learning shouldbe further explored in the future research.L Logic Program from Decision TreesHere we show the corresponding logic program of the finite set of decision path {r(Pik)}Kik=1forevery interpreted neuron in all network architectures. The symbols used in the logic program followthe state grounding definition in Section E. We also briefly summarize the size of associated decisiontrees by computing the number of decision rules for each model (before logic program reduction andconflict checking).In classical control (Pendulum), the extracted logic program are shown in Table 21 (FC; of size 39),Table 22 (GRU; of size 54), Table 23 (LSTM; of size 43), Table 24 (ODE-RNN; of size 40), Table 25(CfC; of size 20), Table 26 (NCP; of size 26).In locomotion (HalfCheetah), the extracted logic program are shown in Table 27 (FC; of size 171),Table 28 (GRU; of size 158), Table 29 (LSTM; of size 148), Table 30 (ODE-RNN; of size 156),Table 31 (CfC; of size 149), Table 32 (NCP; of size 81).In end-to-end visual servoing (Image-based Driving), the extracted logic program are shown inTable 33 (FC; of size 92), Table 34 (GRU; of size 60), Table 35 (LSTM; of size 70), Table 36(ODE-RNN; of size 94), Table 37 (CfC; of size 107), Table 38 (NCP; of size 66).In a logic program, ”conflict” indicates there are conflict between predicates within the logic pro-gram as elaborated in Section 3.2.25Model Neuron Logic ProgramFC00:( ̇θ <= 0.69)∧(θ <=−2.18)1:( ̇θ >0.69)∧(θ <=−2.18)2:(conflict )3:(θ <= 2.41)∧(θ >−2.18)4:(θ >2.41)10:( ̇θ <=−1.16)∧(θ <=−0.34)1:( ̇θ <= 1.73)∧( ̇θ >−1.16)∧(θ <=−0.34)2:( ̇θ >1.73)∧(θ <=−0.34)3:(θ <= 2.03)∧(θ >−0.34)4:(θ <= 2.62)∧(θ >2.03)5:(θ >2.62)20:( ̇θ <=−1.54)∧(θ <=−1.68)1:( ̇θ <= 1.47)∧( ̇θ >−1.54)∧(θ <=−1.68)2:( ̇θ >1.47)∧(θ <=−1.68)3:(conflict )4:(θ <= 2.48)∧(θ >−1.68)5:(θ >2.48)30:(θ <=−2.76)1:(conflict )2:(θ <= 0.05)∧(θ >−2.76)3:(θ >0.05)Table 21: Logic program of FC in classical control (Pendulum).Model Neuron Logic ProgramGRU00:(θ <=−0.06)1:(conflict )2:( ̇θ <=−0.30)∧(θ >−0.06)3:( ̇θ <= 1.75)∧( ̇θ >−0.30)∧(θ >−0.06)4:( ̇θ >1.75)∧(θ >−0.06)10:( ̇θ <=−2.30)∧(θ <=−1.27)1:( ̇θ <= 1.83)∧( ̇θ >−2.30)∧(θ <=−1.27)2:( ̇θ <=−0.37)∧(θ >−1.27)3:( ̇θ <= 1.83)∧( ̇θ >−0.37)∧(θ >−1.27)4:( ̇θ <= 3.10)∧( ̇θ >1.83)5:( ̇θ >3.10)20:( ̇θ <=−0.11)∧(θ <=−0.05)1:( ̇θ <=−0.11)∧(θ >−0.05)2:(conflict )∧(θ >−0.05)3:( ̇θ >−0.11)∧(θ <=−2.09)4:( ̇θ >−0.11)∧(θ <= 0.41)∧(θ >−2.09)5:( ̇θ <= 1.61)∧( ̇θ >−0.11)∧(θ >0.41)6:( ̇θ >1.61)∧(θ >0.41)30:(θ <=−2.61)1:( ̇θ <= 2.44)∧(θ <= 0.21)∧(θ >−2.61)2:( ̇θ >2.44)∧(θ <= 0.21)∧(θ >−2.61)3:( ̇θ <=−1.76)∧(θ >0.21)4:( ̇θ <= 0.39)∧( ̇θ >−1.76)∧(θ >0.21)5:( ̇θ >0.39)∧(θ >0.21)Table 22: Logic program of GRU in classical control (Pendulum).26Model Neuron Logic ProgramLSTM00:(θ <=−2.16)1:(conflict )2:(θ <= 0.01)∧(θ >−2.16)3:( ̇θ <= 0.48)∧(θ >0.01)4:( ̇θ <= 3.00)∧( ̇θ >0.48)∧(θ >0.01)5:( ̇θ >3.00)∧(θ >0.01)10:( ̇θ <=−2.57)1:(conflict )2:( ̇θ >−2.57)∧(θ <= 0.35)3:( ̇θ >−2.57)∧(θ <= 2.03)∧(θ >0.35)4:( ̇θ >−2.57)∧(θ >2.03)20:( ̇θ <= 4.79)∧(θ <=−2.31)1:( ̇θ <= 4.79)∧(θ <= 1.72)∧(θ >−2.31)2:( ̇θ <=−3.13)∧(θ >1.72)3:( ̇θ <= 4.79)∧( ̇θ >−3.13)∧(θ >1.72)4:( ̇θ >4.79)30:(θ <=−2.32)1:(θ <= 0.93)∧(θ >−2.32)2:( ̇θ <=−3.98)∧(θ >0.93)3:( ̇θ >−3.98)∧(θ <= 2.04)∧(θ >0.93)4:( ̇θ >−3.98)∧(θ >2.04)Table 23: Logic program of LSTM in classical control (Pendulum).Model Neuron Logic ProgramODE-RNN00:( ̇θ <=−1.48)∧(θ <=−0.02)1:( ̇θ <=−1.48)∧(θ >−0.02)2:( ̇θ 
>−1.48)10:( ̇θ <=−0.08)∧(θ <=−1.45)1:( ̇θ >−0.08)∧(θ <=−1.45)2:(θ <= 2.05)∧(θ >−1.45)3:(θ <= 2.50)∧(θ >2.05)4:( ̇θ <=−0.40)∧(θ >2.50)5:( ̇θ <= 0.03)∧( ̇θ >−0.40)∧(θ >2.50)6:( ̇θ >0.03)∧(θ >2.50)20:( ̇θ <=−0.56)1:( ̇θ >−0.56)∧(θ <=−2.16)2:( ̇θ >−0.56)∧(θ <= 2.44)∧(θ >−2.16)3:(conflict )∧(θ >2.44)4:( ̇θ >−0.56)∧(θ >2.44)30:(θ <=−2.18)1:(θ <= 0.04)∧(θ >−2.18)2:(θ <= 2.65)∧(θ >0.04)3:( ̇θ <=−0.21)∧(θ >2.65)4:( ̇θ >−0.21)∧(θ >2.65)Table 24: Logic program of ODE-RNN in classical control (Pendulum).27Model Neuron Logic ProgramCfC00:(θ <=−0.03)1:(conflict )2:(conflict )3:(θ >−0.03)10:(θ <=−0.05)1:(conflict )2:(conflict )3:(θ >−0.05)20:(θ <=−2.02)1:(θ >−2.02)30:(θ <=−0.12)1:(θ <= 0.24)∧(θ >−0.12)2:(θ <= 2.14)∧(θ >0.24)3:(θ >2.14)Table 25: Logic program of CfC in classical control (Pendulum).Model Neuron Logic ProgramNCP00:( ̇θ <= 0.33)1:( ̇θ >0.33)10:(θ <=−0.07)1:(conflict )2:(θ <= 0.27)∧(θ >−0.07)3:(θ >0.27)20:( ̇θ <= 4.80)∧(θ <=−1.27)1:( ̇θ <= 4.80)∧(θ <= 1.66)∧(θ >−1.27)2:( ̇θ <= 4.80)∧(θ >1.66)3:( ̇θ >4.80)30:( ̇θ <=−0.33)1:( ̇θ <= 0.44)∧( ̇θ >−0.33)2:( ̇θ >0.44)∧(θ <=−1.31)3:( ̇θ >0.44)∧(θ <= 1.44)∧(θ >−1.31)4:( ̇θ >0.44)∧(θ >1.44)Table 26: Logic program of NCP in classical control (Pendulum).28Model Neuron Logic ProgramFC00:( ̇θR<=−0.22)∧( ̇θT,F<=−3.26)1:( ̇θR>−0.22)∧( ̇θT,F<=−3.26)2:( ̇θT,F<= 6.46)∧( ̇θT,F>−3.26)∧(θF,F<=−0.50)3:( ̇θT,F<= 6.46)∧( ̇θT,F>−3.26)∧(θF,F>−0.50)4:( ̇θT,F>6.46)∧(hR<= 0.05)5:( ̇θT,F>6.46)∧(hR>0.05)10:(θF,B<=−0.05)∧(θS,F<= 0.61)∧(θT,B<= 0.05)1:(θF,B<=−0.05)∧(θS,F>0.61)∧(θT,B<= 0.05)2:( ̇θT,F<=−11.24)∧(θF,B>−0.05)∧(θT,B<= 0.05)3:( ̇θT,F>−11.24)∧(θF,B>−0.05)∧(θT,B<= 0.05)4:(θT,B>0.05)∧(θT,F<= 0.40)5:(θT,B>0.05)∧(θT,F<= 0.62)∧(θT,F>0.40)6:( ̇θT,B<= 2.08)∧(θT,B>0.05)∧(θT,F>0.62)7:( ̇θT,B>2.08)∧(θT,B>0.05)∧(θT,F>0.62)20:( ̇θF,F<=−12.45)∧(θF,B<= 0.06)1:( ̇θF,F>−12.45)∧(θF,B<= 0.06)∧(θS,F<= 0.65)2:( ̇θF,F>−12.45)∧(θF,B<= 0.06)∧(θS,F>0.65)3:( ̇θT,B<= 6.33)∧(θF,B>0.06)∧(θT,F<= 0.33)4:( ̇θT,B<= 6.33)∧(θF,B>0.06)∧(θT,F>0.33)5:( ̇θT,B>6.33)∧(θF,B>0.06)30:(θF,B<= 0.38)∧(θS,F<= 0.17)∧(θT,F<= 0.58)1:(θF,B<= 0.38)∧(θS,F<= 0.17)∧(θT,F>0.58)2:(θF,B<= 0.38)∧(θS,F>0.17)∧(θT,B<= 0.07)3:(θF,B<= 0.38)∧(θS,F>0.17)∧(θT,B>0.07)4:(θF,B>0.38)40:( ̇θS,B<= 1.79)∧(θR<= 0.19)∧(θT,B<= 0.06)1:( ̇θS,B>1.79)∧(θR<= 0.19)∧(θT,B<= 0.06)2:( ̇θF,F<= 9.07)∧(θR>0.19)∧(θT,B<= 0.06)3:( ̇θF,F>9.07)∧(θR>0.19)∧(θT,B<= 0.06)4:(θR<=−0.03)∧(θT,B>0.06)5:(θR>−0.03)∧(θF,F<=−0.49)∧(θT,B>0.06)6:(θR>−0.03)∧(θF,F>−0.49)∧(θT,B>0.06)50:( ̇θT,B<= 1.10)∧( ̇θT,F<=−6.72)∧(θT,F<= 0.67)1:( ̇θT,B<= 1.10)∧( ̇θT,F>−6.72)∧(θT,F<= 0.67)2:( ̇θT,B<= 1.10)∧(θT,F>0.67)3:( ̇θT,B>1.10)∧( ̇hR<=−0.33)∧(θR<= 0.29)4:( ̇θT,B>1.10)∧( ̇hR>−0.33)∧(θR<= 0.29)5:( ̇θT,B>1.10)∧( ̇hR<=−0.48)∧(θR>0.29)6:( ̇θT,B>1.10)∧( ̇hR>−0.48)∧(θR>0.29)60:( ̇θR<=−0.83)∧( ̇θF,B<= 4.65)∧( ̇θF,F<=−0.07)1:( ̇θR<=−0.83)∧( ̇θF,B>4.65)∧( ̇θF,F<=−0.07)2:( ̇θR>−0.83)∧( ̇θF,F<=−0.07)∧(θS,B<=−0.33)3:( ̇θR>−0.83)∧( ̇θF,F<=−0.07)∧(θS,B>−0.33)4:( ̇θF,F>−0.07)∧(θR<= 0.52)5:( ̇θF,F>−0.07)∧(θR>0.52)70:( ̇θT,F<=−0.36)∧(θR<= 0.33)∧(θS,F<= 0.56)1:( ̇θT,F<=−0.36)∧(θR>0.33)∧(θS,F<= 0.56)2:( ̇θT,F<=−0.36)∧(θS,F>0.56)3:( ̇θT,F<= 6.91)∧( ̇θT,F>−0.36)∧(θS,B<= 0.09)4:( ̇θT,F>6.91)∧(θS,B<= 0.09)5:( ̇θT,F>−0.36)∧(θS,B>0.09)80:( ̇θR<=−0.10)∧( ̇θT,B<=−3.31)1:( ̇θR>−0.10)∧( ̇θT,B<=−3.31)2:( ̇θT,B<= 0.97)∧( ̇θT,B>−3.31)3:( ̇θT,B>0.97)∧(θS,B<=−0.35)4:( ̇θT,B>0.97)∧(θR<= 0.20)∧(θS,B>−0.35)5:( ̇θT,B>0.97)∧(θR>0.20)∧(θS,B>−0.35)90:( ̇θF,B<= 0.63)∧( ̇θS,F<= 3.07)1:( ̇θF,B<= 0.63)∧( ̇θS,F>3.07)∧(θR<= 0.46)2:( ̇θF,B<= 0.63)∧( ̇θS,F>3.07)∧(θR>0.46)3:( ̇θF,B>0.63)∧( ̇θT,F<= 5.96)∧(hR<= 0.03)4:( ̇θF,B>0.63)∧( 
̇θT,F<= 5.96)∧(hR>0.03)5:( ̇θF,B>0.63)∧( ̇θT,F>5.96)∧(θT,F<= 0.38)6:( ̇θF,B>0.63)∧( ̇θT,F>5.96)∧(θT,F>0.38)Table 27: Logic program of FC in locomotion (HalfCheetah).29Model Neuron Logic ProgramGRU00:( ̇θR<= 1.54)∧( ̇θT,B<=−2.33)∧(θR<= 0.52)1:( ̇θR>1.54)∧( ̇θT,B<=−2.33)∧(θR<= 0.52)2:( ̇θT,B>−2.33)∧(θR<= 0.11)3:( ̇θT,B>−2.33)∧(θR<= 0.52)∧(θR>0.11)4:( ̇θT,F<=−7.48)∧(θR>0.52)5:( ̇θT,F>−7.48)∧(θR<= 0.97)∧(θR>0.52)6:( ̇θT,F>−7.48)∧(θR>0.97)10:( ̇θS,B<= 10.33)∧(θR<= 0.50)∧(θF,B<=−0.41)1:( ̇θS,B<= 10.33)∧(θR<= 0.50)∧(θF,B>−0.41)2:( ̇θS,B<= 10.33)∧( ̇hR<=−0.07)∧(θR>0.50)3:( ̇θS,B<= 10.33)∧( ̇hR>−0.07)∧(θR>0.50)4:( ̇θS,B>10.33)20:( ̇θT,B<= 3.33)∧(θR<= 0.12)1:( ̇θT,B>3.33)∧(θR<= 0.12)2:( ̇θT,B<= 6.59)∧(θR>0.12)∧(θT,F<= 0.70)3:( ̇θT,B<= 6.59)∧(θR>0.12)∧(θT,F>0.70)4:( ̇θT,B>6.59)∧(θR<= 0.54)∧(θR>0.12)5:( ̇θT,B>6.59)∧(θR>0.54)30:( ̇θR<=−0.78)∧(θT,F<= 0.17)1:( ̇θR>−0.78)∧(θR<= 0.68)∧(θT,F<= 0.17)2:( ̇θR>−0.78)∧(θR>0.68)∧(θT,F<= 0.17)3:( ̇hR<= 0.64)∧(θT,B<=−0.14)∧(θT,F>0.17)4:( ̇hR<= 0.64)∧(θT,B>−0.14)∧(θT,F>0.17)5:( ̇hR>0.64)∧(θT,F>0.17)40:( ̇θS,B<= 1.92)∧(θS,F<= 0.02)1:( ̇θS,B>1.92)∧(θS,F<= 0.02)2:( ̇θS,B<= 6.10)∧( ̇θS,F<= 7.21)∧(θS,F>0.02)3:( ̇θS,B<= 6.10)∧( ̇θS,F>7.21)∧(θS,F>0.02)4:( ̇θS,B>6.10)∧(θS,F>0.02)50:( ̇θT,B<= 2.59)∧(θF,B<= 0.10)∧(θT,B<=−0.16)1:( ̇θT,B<= 2.59)∧(θF,B<= 0.10)∧(θT,B>−0.16)2:( ̇θT,B>2.59)∧(θF,B<= 0.10)3:( ̇θT,B<= 1.45)∧( ̇θT,F<= 6.43)∧(θF,B>0.10)4:( ̇θT,B>1.45)∧( ̇θT,F<= 6.43)∧(θF,B>0.10)5:(conflict )∧(θF,B>0.10)6:( ̇θT,F>6.43)∧(θF,B>0.10)60:(θR<= 0.17)∧(θF,B<=−0.12)1:(θR<= 0.62)∧(θR>0.17)∧(θF,B<=−0.12)2:( ̇θF,B<= 6.66)∧(θR<= 0.62)∧(θF,B>−0.12)3:( ̇θF,B>6.66)∧(θR<= 0.62)∧(θF,B>−0.12)4:( ̇θT,B<=−5.47)∧(θR>0.62)5:( ̇θT,B>−5.47)∧(θR<= 0.89)∧(θR>0.62)6:( ̇θT,B>−5.47)∧(θR>0.89)70:( ̇hR<=−0.27)∧(θT,F<= 0.20)1:( ̇θT,B<=−0.25)∧( ̇hR>−0.27)∧(θT,F<= 0.20)2:( ̇θT,B>−0.25)∧( ̇hR>−0.27)∧(θT,F<= 0.20)3:( ̇hR<= 0.12)∧(θR<= 0.77)∧(θT,F>0.20)4:( ̇hR<= 0.12)∧(θR>0.77)∧(θT,F>0.20)5:( ̇hR>0.12)∧(θT,F>0.20)80:( ̇hR<= 0.13)∧(θF,B<=−0.16)∧(hR<= 0.07)1:( ̇hR<= 0.13)∧(θF,B>−0.16)∧(hR<= 0.07)2:( ̇hR<= 0.13)∧(hR>0.07)3:( ̇hR<= 1.11)∧( ̇hR>0.13)∧(hR<= 0.08)4:( ̇hR>1.11)∧(hR<= 0.08)5:( ̇hR>0.13)∧(hR>0.08)90:( ̇θT,F<= 0.51)∧(θS,F<=−0.07)1:( ̇θF,F<= 3.81)∧( ̇θT,F<= 0.51)∧(θS,F>−0.07)2:( ̇θF,F>3.81)∧( ̇θT,F<= 0.51)∧(θS,F>−0.07)3:( ̇θT,F>0.51)∧(θF,B<= 0.34)4:( ̇θR<=−1.78)∧( ̇θT,F>0.51)∧(θF,B>0.34)5:( ̇θR>−1.78)∧( ̇θT,F>0.51)∧(θF,B>0.34)Table 28: Logic program of GRU in locomotion (HalfCheetah).30Model Neuron Logic ProgramLSTM00:( ̇θR<= 2.01)∧(θS,B<= 0.53)∧(θT,F<=−0.28)1:( ̇θR<= 2.01)∧(θS,B<= 0.53)∧(θT,F>−0.28)2:( ̇θR>2.01)∧(θS,B<= 0.53)3:(θS,B>0.53)10:(θF,F<= 0.02)∧(θT,F<=−0.35)1:( ̇θF,F<= 1.23)∧(θF,F<= 0.02)∧(θT,F>−0.35)2:( ̇θF,F>1.23)∧(θF,F<= 0.02)∧(θT,F>−0.35)3:(θF,F>0.02)∧(θT,F<=−0.07)4:( ̇θF,F<= 0.95)∧(θF,F>0.02)∧(θT,F>−0.07)5:( ̇θF,F>0.95)∧(θF,F>0.02)∧(θT,F>−0.07)20:(θF,F<=−0.46)∧(hR<=−0.07)1:( ̇θF,B<=−4.76)∧(θF,F>−0.46)∧(hR<=−0.07)2:( ̇θF,B>−4.76)∧(θF,F>−0.46)∧(hR<=−0.07)3:(θR<= 0.11)∧(hR>−0.07)4:(θR>0.11)∧(hR>−0.07)30:( ̇θS,B<= 0.14)∧(θT,F<=−0.01)∧(hR<=−0.11)1:( ̇θS,B<= 0.14)∧(θT,F>−0.01)∧(hR<=−0.11)2:( ̇θS,B>0.14)∧(θS,B<= 0.42)∧(hR<=−0.11)3:( ̇θS,B>0.14)∧(θS,B>0.42)∧(hR<=−0.11)4:(θT,B<= 0.27)∧(hR>−0.11)5:(θT,B<= 0.53)∧(θT,B>0.27)∧(hR>−0.11)6:(θT,B>0.53)∧(hR>−0.11)40:(θF,F<=−0.00)1:(conflict )2:(θR<= 0.12)∧(θF,F<= 0.39)∧(θF,F>−0.00)3:(θR>0.12)∧(θF,F<= 0.39)∧(θF,F>−0.00)4:( ̇θR<=−0.51)∧(θF,F>0.39)5:( ̇θR>−0.51)∧(θF,F>0.39)50:( ̇θR<= 0.07)∧(θF,F<=−0.05)1:( ̇θR<= 0.07)∧(θF,F<= 0.35)∧(θF,F>−0.05)2:( ̇θR>0.07)∧(θF,F<= 0.35)∧(hR<=−0.18)3:( ̇θR>0.07)∧(θF,F<= 0.35)∧(hR>−0.18)4:( ̇θS,B<=−2.15)∧(θF,F>0.35)5:( 
̇θS,B>−2.15)∧(θF,F>0.35)60:( ̇θT,F<= 1.81)∧( ̇hR<=−1.08)∧(θT,F<=−0.14)1:( ̇θT,F<= 1.81)∧( ̇hR>−1.08)∧(θT,F<=−0.14)2:( ̇θT,F>1.81)∧(θT,F<=−0.14)3:( ̇hR<=−0.47)∧(θT,F>−0.14)4:( ̇θT,B<=−1.89)∧( ̇hR>−0.47)∧(θT,F>−0.14)5:( ̇θT,B>−1.89)∧( ̇hR>−0.47)∧(θT,F>−0.14)70:(θF,F<=−0.07)∧(θT,F<= 0.36)1:(θF,F<=−0.07)∧(θT,F>0.36)2:( ̇xR<= 4.14)∧(θF,F>−0.07)∧(θT,B<= 0.34)3:( ̇xR>4.14)∧(θF,F>−0.07)∧(θT,B<= 0.34)4:( ̇θT,F<=−2.37)∧(θF,F>−0.07)∧(θT,B>0.34)5:( ̇θT,F>−2.37)∧(θF,F>−0.07)∧(θT,B>0.34)80:( ̇θF,B<= 11.09)∧( ̇θF,F<= 1.05)∧(θT,F<=−0.46)1:( ̇θF,B<= 11.09)∧( ̇θF,F<= 1.05)∧(θT,F>−0.46)2:( ̇θF,B>11.09)∧( ̇θF,F<= 1.05)3:( ̇θF,F>1.05)∧( ̇θT,B<=−11.38)4:( ̇θF,F>1.05)∧( ̇θT,B>−11.38)∧(θR<= 0.16)5:( ̇θF,F>1.05)∧( ̇θT,B>−11.38)∧(θR>0.16)90:(θF,F<=−0.40)∧(θS,B<=−0.33)1:(θF,F<=−0.40)∧(θS,B>−0.33)2:( ̇θT,F<=−10.21)∧( ̇hR<=−0.80)∧(θF,F>−0.40)3:( ̇θT,F>−10.21)∧( ̇hR<=−0.80)∧(θF,F>−0.40)4:( ̇hR>−0.80)∧(θF,B<=−0.14)∧(θF,F>−0.40)5:( ̇hR>−0.80)∧(θF,B>−0.14)∧(θF,F>−0.40)Table 29: Logic program of LSTM in locomotion (HalfCheetah).31Model Neuron Logic ProgramODE-RNN00:( ̇hR<=−0.27)∧(θS,B<=−0.49)1:( ̇hR>−0.27)∧(θS,B<=−0.49)2:( ̇hR<=−0.00)∧(θF,F<=−0.22)∧(θS,B>−0.49)3:( ̇hR<=−0.00)∧(θF,F>−0.22)∧(θS,B>−0.49)4:( ̇hR>−0.00)∧(θS,B>−0.49)∧(θT,B<= 0.22)5:( ̇hR>−0.00)∧(θS,B>−0.49)∧(θT,B>0.22)10:( ̇θT,B<=−2.91)∧(θF,B<=−0.39)1:( ̇θT,B>−2.91)∧( ̇hR<=−0.21)∧(θF,B<=−0.39)2:( ̇θT,B>−2.91)∧( ̇hR>−0.21)∧(θF,B<=−0.39)3:( ̇θR<=−0.00)∧(θF,B>−0.39)4:( ̇θR>−0.00)∧(θR<=−0.04)∧(θF,B>−0.39)5:( ̇θR>−0.00)∧(θR>−0.04)∧(θF,B>−0.39)20:(θR<= 0.02)∧(θT,B<=−0.16)1:(θR<= 0.02)∧(θT,B>−0.16)∧(hR<= 0.00)2:(θR<= 0.02)∧(θT,B>−0.16)∧(hR>0.00)3:(θR>0.02)∧(hR<=−0.02)4:(θR>0.02)∧(conflict )5:(θR>0.02)∧(hR>−0.02)30:( ̇hR<= 0.55)∧(θR<= 0.08)∧(θT,B<= 0.42)1:( ̇hR>0.55)∧(θR<= 0.08)∧(θT,B<= 0.42)2:(θR<= 0.08)∧(θS,B<= 0.16)∧(θT,B>0.42)3:(θR<= 0.08)∧(θS,B>0.16)∧(θT,B>0.42)4:(θR>0.08)∧(θT,B<= 0.42)∧(hR<=−0.05)5:(θR>0.08)∧(θT,B<= 0.42)∧(hR>−0.05)6:(θR>0.08)∧(θT,B>0.42)40:( ̇θT,B<=−3.83)∧( ̇θT,F<=−0.87)1:( ̇θT,B<=−3.83)∧( ̇θT,F>−0.87)∧(hR<=−0.08)2:( ̇θT,B<=−3.83)∧( ̇θT,F>−0.87)∧(hR>−0.08)3:( ̇θT,B>−3.83)∧(θS,B<=−0.11)∧(hR<=−0.09)4:( ̇θT,B>−3.83)∧(θS,B>−0.11)∧(hR<=−0.09)5:( ̇θT,B>−3.83)∧(θT,B<=−0.33)∧(hR>−0.09)6:( ̇θT,B>−3.83)∧(θT,B>−0.33)∧(hR>−0.09)50:( ̇xR<= 3.31)1:( ̇xR>3.31)∧(θS,B<= 0.01)∧(θT,F<= 0.57)2:( ̇xR>3.31)∧(θS,B>0.01)∧(θT,F<= 0.57)3:( ̇xR>3.31)∧(θT,F>0.57)60:( ̇θT,B<= 1.22)∧(θS,B<=−0.54)1:( ̇θT,B>1.22)∧(θS,B<=−0.54)2:( ̇θT,B<=−3.70)∧(θS,B>−0.54)∧(θT,B<= 0.16)3:( ̇θT,B<=−3.70)∧(θS,B>−0.54)∧(θT,B>0.16)4:( ̇θT,B>−3.70)∧(conflict )5:( ̇θT,B>−3.70)∧(θS,B>−0.54)70:( ̇hR<= 0.02)∧(θS,F<=−0.19)∧(θT,B<= 0.53)1:( ̇hR<= 0.02)∧(θS,F<=−0.19)∧(θT,B>0.53)2:( ̇hR>0.02)∧(θS,F<=−0.19)3:( ̇θR<= 0.53)∧(θS,F<= 0.16)∧(θS,F>−0.19)4:( ̇θR<= 0.53)∧(θS,F>0.16)5:( ̇θR>0.53)∧(θS,F>−0.19)80:( ̇θT,B<=−0.39)∧(θT,B<=−0.37)1:( ̇θT,B<=−0.39)∧(θS,B<= 0.04)∧(θT,B>−0.37)2:( ̇θT,B<=−0.39)∧(θS,B>0.04)∧(θT,B>−0.37)3:( ̇θT,B>−0.39)∧(θT,B<= 0.55)∧(θT,F<= 0.09)4:( ̇θT,B>−0.39)∧(θT,B<= 0.55)∧(θT,F>0.09)5:( ̇θT,B>−0.39)∧(θT,B>0.55)90:(θT,F<=−0.24)1:( ̇θT,F<=−6.60)∧(conflict )2:( ̇θT,F>−6.60)∧(conflict )3:(θT,F>−0.24)∧(hR<=−0.11)4:(θT,F<= 0.32)∧(θT,F>−0.24)∧(hR>−0.11)5:(θT,F>0.32)∧(hR>−0.11)Table 30: Logic program of ODE-RNN in locomotion (HalfCheetah).32Model Neuron Logic ProgramCfC00:( ̇θF,F<=−9.40)1:( ̇θF,F>−9.40)∧( ̇θT,B<=−1.68)∧(θF,F<= 0.23)2:( ̇θF,F>−9.40)∧( ̇θT,B>−1.68)∧(θF,F<= 0.23)3:( ̇θF,F>−9.40)∧( ̇θT,F<=−5.69)∧(θF,F>0.23)4:( ̇θF,F>−9.40)∧( ̇θT,F>−5.69)∧(θF,F>0.23)10:(θR<= 0.06)∧(θT,F<=−0.46)1:(θR>0.06)∧(θT,F<=−0.46)2:(θR<= 0.02)∧(θT,F>−0.46)3:(θR>0.02)∧(conflict )4:(θR>0.02)∧(θT,F>−0.46)20:( ̇θT,B<= 
1.92) ∧ (θ̇_{T,F} ≤ −0.59) ∧ (ḣ_R ≤ 0.43). The remaining entries of this table, and of Tables 32–38 below, all follow the same format: for each neuron of the policy network, a numbered list of decision rules, where each rule is a conjunction of threshold tests on the observation features (joint angles θ, their rates θ̇, and the root height/pitch h_R, θ_R with their rates for the HalfCheetah locomotion tables; the driving features δ, κ, μ, d, and v for the image-based driving tables), with a few entries marked (conflict) or None. For example, neuron 3 of the NCP locomotion table reads: 0: (θ̇_{T,F} ≤ −6.91) ∧ (h_R ≤ −0.08); 1: (θ̇_{T,F} ≤ −6.91) ∧ (h_R > −0.08); 2: (θ̇_{T,F} > −6.91) ∧ (θ_{F,F} ≤ −0.13); 3: (θ̇_{T,F} > −6.91) ∧ (θ_{F,F} > −0.13).
Table 31: Logic program of CfC in locomotion (HalfCheetah).
Table 32: Logic program of NCP in locomotion (HalfCheetah).
Table 33: Logic program of FC in end-to-end visual servoing (Image-based Driving).
Table 34: Logic program of GRU in end-to-end visual servoing (Image-based Driving).
Table 35: Logic program of LSTM in end-to-end visual servoing (Image-based Driving).
Table 36: Logic program of ODE-RNN in end-to-end visual servoing (Image-based Driving).
Table 37: Logic program of CfC in end-to-end visual servoing (Image-based Driving).
Table 38: Logic program of NCP in end-to-end visual servoing (Image-based Driving).
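To make the rule format above concrete, the following short Python sketch encodes the neuron-3 entry quoted from the NCP locomotion table as an executable branch selector. The threshold values are taken directly from that entry; the observation field names (dtheta_thigh_front, height_root, theta_foot_front) are placeholder names invented here for illustration, since the tables only refer to the symbols θ̇_{T,F}, h_R, and θ_{F,F}.

# A minimal sketch: evaluating one neuron's logic program as a branch selector.
# Thresholds follow the quoted NCP (HalfCheetah) neuron-3 entry; the observation
# keys are illustrative placeholders, not names used in the original tables.
def ncp_neuron3_branch(obs):
    dtheta_tf = obs["dtheta_thigh_front"]   # stands in for the dotted θ_{T,F}
    h_r = obs["height_root"]                # stands in for h_R
    theta_ff = obs["theta_foot_front"]      # stands in for θ_{F,F}
    if dtheta_tf <= -6.91:
        return 0 if h_r <= -0.08 else 1     # rules 0 and 1
    return 2 if theta_ff <= -0.13 else 3    # rules 2 and 3

example_obs = {"dtheta_thigh_front": -7.5, "height_root": 0.02, "theta_foot_front": -0.2}
print(ncp_neuron3_branch(example_obs))      # prints 1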
B7PnAw4ze0l | Precise Robotic Needle-Threading with TactilePerception and Reinforcement LearningZhenjun Yu*, Wenqiang Xu*, Siqiong Yao, Jieji Ren,Tutian Tang, Yutong Li, Guoying Gu, Cewu Lu§Shanghai Jiao Tong UniversityAbstract: This work presents a novel tactile perception-based method, namedT-NT, for performing the needle-threading task, an application of deformable lin-ear object (DLO) manipulation. This task is divided into two main stages: Tail-end Finding andTail-end Insertion . In the first stage, the agent traces the con-tour of the thread twice using vision-based tactile sensors mounted on the grip-per fingers. The two-run tracing is to locate the tail-end of the thread. In thesecond stage, it employs a tactile-guided reinforcement learning (RL) model todrive the robot to insert the thread into the target needle eyelet. The RL modelis trained in a Unity-based simulated environment. The simulation environmentsupports tactile rendering which can produce realistic tactile images and threadmodeling. During insertion, the position of the poke point and the center ofthe eyelet are obtained through a pre-trained segmentation model, Grounded-SAM, which predicts the masks for both the needle eye and thread imprints.These positions are then fed into the reinforcement learning model, aiding ina smoother transition to real-world applications. Extensive experiments on realrobots are conducted to demonstrate the efficacy of our method. More experi-ments and videos can be found in the supplementary materials and on the website:https://sites.google.com/view/tac-needlethreading .Keywords: tactile perception, needle threading1 IntroductionDeformable linear object (DLO) insertion is a common task in everyday life. From needle threadingto suturing in medical surgery scenarios. In this work, we are particularly interested in the needle-threading task as it requires precisely manipulating the highly deformable thread and locating thetarget eyelet with a small clearance, which is challenging, even for humans.The task of needle-threading can be subdivided into two stages: Tail-end Finding andTail-endInsertion . During the Tail-end Finding stage, the primary objective of the agent is to locate andsecure the terminal end of the thread. Compared to the thread, it is ostensibly less complex toidentify the tail-end of other DLOs such as network and USB cables. The tail-ends of these DLOsare easily distinguished from the rest of their line part and are often sufficiently large to be detectedby a standard RGB-D sensor from its working distance. Conversely, the tail-end of a thread bearsa uniform appearance with the remainder of the thread, rendering it virtually invisible to a standardcamera lens. In the Tail-end Insertion stage, the agent should accurately guide the tail-end of thethread into a specified eyelet with minimal clearance. In addition, the agent should be able to judgethe success or failure of the task execution.Based on the observations, we propose a Tactile perception-based approach to address the Needle-Threading task and name it T-NT . In comparison with previous solutions which adopt laser scanner[1], high-resolution camera [2], and dual cameras [3] to locate and discern the states of both the* indicates equal contributions. § Cewu Lu is the corresponding author.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Our task setting contains one thread robot with a RealSense camera for marker detectionand one eyelet robot. 
There are two stages for our task, Tail-end Finding andTail-end Insertion . Thebrake is connected to the scroll to control whether it can rotate or not.thread and eyelet, our approach utilizes the tactile sensor which is locally mounted on the gripperfingers. We make use of MC-Tac [4], a GelSlim[5]-like tactile sensor, to provide tactile percep-tion. The tactile imprints produced by the thread and eyelet are distinctly observable when they arepressed on the gel of the tactile sensor.To find the tail end, the agent is instructed to trace the contour of the thread twice. In the first run, thegoal is to measure the distance from the starting point to the extremity of the thread. The endpointof the thread imprint within the tactile image serves as a determinant of the gripper reaching the tip.It is crucial to note that, due to the pliable nature of the thread, it can get straightened as the gripperglides over it. Thus, once the robot gripper grasps the starting point, it only needs to follow a linearpath. The total thread length can be approximated by cumulatively adding the distance traversedby the gripper and the remaining length of the thread imprint visible on the tactile image. For thesecond run, the agent initiates from the identical starting point, proceeding to the terminal end. Thistime the gripper stops at a certain distance from the tip. This approach ensures that the tail-end isgrasped with a minimal residual segment ready for insertion into the eyelet.TheTail-end Insertion process is facilitated by employing a tactile-guided reinforcement learning(RL) model. The task is conceptualized as a goal-conditioned RL task, wherein the objective is forthe thread’s tip to poke the gel at the back of the eyelet and the poke point is within the eyelet’srange. To accomplish this, an RL model is trained from a simulated environment built on the Unityplatform and deployed to the real world. Tactile rendering is implemented in the simulator. It canproduce tactile images that resemble the images from the real-world sensor. The thread is repre-sented as chaining particles constrained by distance and bend effect, modeled by XPBD (ExtendedPosition-based Dynamics)[6]. To obtain the position of the poke point and the eyelet, we adopt apretrained language-grounded segmentation model, Grounded-SAM [7, 8] to predict the masks forboth the needle eye and thread imprints. And then we calculate the center of mass for the poke pointand sample several pixels to represent the eyelet area based on the masks. We feed the positioninformation of the poke point and the eyelet as inputs to train the reinforcement learning model,ensuring a smoother transition to real-world applications.We conduct experiments with needles of different sizes and threads of different sizes. We haveachieved an average 63.33% success rate in the real world.We summarize our contribution as follows:• We propose a tactile perception-based approach for the needle threading task, T-NT, withtwo stages, Tail-end Finding andTail-end Insertion .• We conducted real experiments on different kinds of threads and needles, with variousneedle positions and angles, and achieved a great success rate in every category.22 Related WorksOur work is directly related to deformable linear object (DLO) manipulation, especially the task ofrobotic insertion. Besides, since our work also relies on the tactile sensor, we also discuss relatedworks on tactile perception-based object manipulation.Robotic Insertion of Deformable Linear Object. 
Robotic manipulation of deformable linear ob-jects (DLOs) such as cables, wires, threads, or tubes has been a long-standing challenge in the fieldof robotics. The inherent flexibility of these objects, their interaction with the environment, and thehigh-dimensional nature of their state space pose unique challenges.Several manipulation tasks involving DLOs have been explored by previous researchers, such asshape control [9, 10, 11], cable routing [12] and knotting/unknotting [13, 14, 15]. A subset of the re-search has focused specifically on the task of robotic insertion of DLOs, including needle-threading[1, 2, 3, 16] and DLO-in-hole assembly tasks [17, 18]. Kim et al. [3] adopted a dual camera settingto capture both peripheral and foveated vision, and leveraged imitation learning to train a policy toexecute the needle threading task. Since imitation learning requires expert demonstrations, visualservoing is more direct. However, due to the challenge of locating the thread and needle in thecommon visual setting, many works adopted enhanced visual settings. Silverio et al. [1] adopteda laser scanner to track the tail-end of the thread. However, the laser scanner is not a commonchoice for general-purpose manipulation settings. Huang et al. [2] employed a high-speed camerato perceive the thread. They assumed that a rapidly rotating thread can be approximated as a rigidobject, thereby simplifying the control models. However, this study was based on a specifically de-signed two-degree-of-freedom mechanism and did not automate the thread mounting process. Lv etal. [16] propose a model-based RL method for the needle-threading task. They adopt differentiablesimulation and rendering techniques to synchronize the thread and needle configuration between thesimulation and real-world observations. However, they cannot determine whether the execution issuccessful unless manually examine the execution.Tactile perception-based object manipulation. Tactile perception-based object manipulation is of-ten studied accompanied by the development of tactile sensors. For example, Zanella [18] proposedto address the DLO-in-hole assembly tasks with customized low-resolution tactile sensors on thegripper finger. She et al. [19] explored cable manipulation with a proposed tactile-reactive gripper.Later, as more and more tactile sensors get commercially available or open-source [20, 21, 5], re-searchers can focus on developing algorithms for different tasks, such as object shape reconstruction[22, 23, 24, 25], contour following [12, 26] and dexterous manipulation [27].3 BackgroundIn this section, we will first describe the task settings and assumptions. Then we describe the prob-lems in the tactile-based needle-threading task.3.1 Task SettingThe overall task setting is illustrated in Fig. 1. In the experiments, we need two robot arms, onefor thread manipulation (denoted as thread robot ), and one for eyelet localization (denoted as eyeletrobot ). To grasp and sense the thread, we mount two tactile sensors on the inner side of the gripperfingers on the thread robot . To touch and sense the eyelet, we mount one tactile sensor on the outerside of one gripper finger on the eyelet robot . In this work, we adopt Franka Emika panda as thethread robot and Kinova Gen2 as the eyelet robot . We adopt MC-Tac sensor [4] as our tactile sensorsto provide tactile perception. 
We mount an Intel RealSense D435 on the thread robot , and conducteye-in-hand calibration with Easy-HandEye package [28].The thread is scrolled on a spool, and a part of the thread is hung down naturally. The spool iscontrolled by a brake device. When we trace the thread in Tail-end Finding stage, the spool will notbe able to scroll so that the length of the thread dropped can remain the same. While in Tail-endInsertion stage, the spool can rotate so that the thread can reach the eyelet.The needle is inserted into a base support because we do not consider picking the needle up. Thelocation of the base support and the angle of the needle can be varied.3The tactile sensor on the eyelet robot touches the eyelet from the beginning of the experiments, butthe exact location of the eyelet is not known.3.2 Problem StatementIn the Tail-end Finding stage, given a thread, we first find its rest length lthread by gliding it fromthe beginning point to the thread tip. The beginning point is defined at the center of 2cmbelow thespool. Then, we trace the thread from the same beginning point and stop the movement early, sothat the tail-end has a length of ltail. To note, when the gripper holds the tail-end and moves around,gravity might cause the tail-end to bend downwards. Thus, we need to estimate the tip position inconsideration of the gravity force. We solve this problem with a neural network f(·):ptip=f(ltail, θ, p tac) +ptac, (1)where θis the orientation of the tail-end measured from the tactile image, ptac∈SO(3)is theposition w.r.t thread robot ’s base, and ptip∈SO(3)is the position w.r.t thread robot ’s base, asshown in Fig. 2.In the Tail-end Insertion stage, we drive the tail towards the eyelet area. We first give a roughestimate of the eyelet area with peyelet =pmarker +T0.pmarker is the relative pose from the marker oneyelet robot tothread robot ’s base, which is easy to obtain after eye-in-hand calibration. T0is thetransformation between the marker and the tactile sensor on the eyelet robot , which only needs tomeasure once for all the experiments. Given the estimated ptipandpeyelet, we plan a trajectory to alocation perpendicular to the gel surface and 1cmaway. In this way, the thread robot later need onlymove along the surface to find the eye and perform insertion, the surface is denoted “ uv-plane”, asshown in Fig. 1.Then, we read the tactile image stream I∈R1600×1200×3from the tactile sensor on the eyeletrobot and model the insertion task as a goal-conditioned reinforcement learning problem. Givenan eyelet imprint Ieye,t, we segment the hole and the thread tip-gel contact point from Ieye,twith apretrained segmentation model, Grounded-SAM [7, 8]. We then calculate the center of mass (COM)of contact point mask ci,t, and we randomly sample Npixel positions from the mask of needleeyelet, and obtain Ceye,t={n1eye,t, ..., nNeye,t}. We formulate the problem of Tail-end Insertion aslearning a policy πthat sequences move actions at∈ A with a robot from tactile observations(ci,t, Ceye,t)∈ O.π(ci,t, Ceye,t)→at= (Tu,Tv)∈ A (2)withTuandTv, defined in SE(2), is the displacement of the end-effector of the thread robot in the uvdirections respectively. We calculate the total number of pixels of the poking thread, as well as thenumber that is within the eyelet mask. If the total number is lower than 500, or the number of stepstthat the agent has tried is larger than 5 (based on the average reward of training), we consider it afailure . 
If the number of pixels that are in the needle surpasses half of the total pixel number, weregard the state as successful . One of the steps in the real world is shown in Fig. 3.Figure 2: The pipeline of T-NT consists of two aforementioned parts: Tail-end Finding andTail-endInsertion . We first do the tip pose estimation with a neural network, and adapt our RL agent trainedin a simulation environment to conduct needle threading with the observation from the tactile sensor.44 Method4.1 Tactile Image ProcessingFor all the tactile images read during the whole process, we adopt the Grounded-SAM [7, 8] modelwith a prompt of “line” to locate the thread imprint, and ”bump” for the poke imprint. As for theeyelet, we use the prompt of “big hole”. In our experiments, we find these prompts are robustenough. Thanks to the generalizability of the segmentation model, we don’t have to train a specificsegmentation model to get the imprint mask for both thread and eyelet.4.2 Tip Pose EstimationAs mentioned in Sec. 3.2, once we trace the thread twice, we can obtain ltail, and then we canestimate the orientation θof the tail-end from the tactile image. ltail=d1−d2where d1andd2are the distances the gripper traverses in the two times respectively. The orientation θof the line isestimated by the incline of the bottom contour, conducted by OpenCV [29], of the line mask in thetactile image, as shown in Fig. 2.With ltailandθ, we need to estimate the tip position. According to Equ. 1, we train a 4-layer MLPf(·). To collect the training data, the thread robot grasps the tail-end of the thread and moves to arandom direction 500 times. Each time, we measure the ltailand the transformation between ptipandptacmanually, while we can automatically obtain the data of ptac,θ. More details about the neuralnetwork structure and its training can be found in the supplementary materials.4.3 Goal-conditioned InsertionTraining Setup in Simulation We adopt a Unity-based simulation environment to train the RLmodel. We set up the same hardware setting in the simulator and the rendering of tactile imageshas been implemented and adjusted to resemble the real images (See supplementary materials onour website). We model the needle as a rigid object and attach it to a cube as the base support.The clearances of the eyelets are replicated from the real items. As for the thread, we model it as aconstrained particle system with XPBD [6]. It allows for adjustment of the stiffness, thickness, andlength of the thread.We initialize the training by randomly setting the relative positions between the end-effector of thethread robot and the eyelet. Considering the calibration error in the real world, the randomizeddistance is set to at most 3cm. The thread robot moves along the uv-plane of the gel surface of thetactile sensor on eyelet robot , and tries to poke gel and insert the tip into the eyelet.As mentioned earlier, to mitigate the sim-to-real gap, instead of utilizing the tactile image as theinput to the policy model, we introduce an intermediate representation, i.e., the locations of theeyelet and thread tip calculated by the mask generated on the tactile image from eyelet Ieyewiththe segmentation model. We model the insertion problem as the goal-conditioned RL based on thisintermediate representation of the observation.The reward function we use is: reward t=−PNi=1d(ci,t,nieye,t)/ lN. When the task is successful,reward t=rand when the task is failed, reward t=−r. 
With r= 100 ,lis the pixel length of thetactile image diagonal, d(·)is the Euclidean distance. The clear definition of success and failure hasbeen mentioned in Sec. 3.2.Transfer to Real World After we train our RL model in the simulation environment, we directlyapply it in the real world (See Fig. 3). To note, in the real world, to accomplish the full pipeline ofinsertion, we need to conduct the Tail-end Finding process and move the tip around the eyelet in anear-perpendicular direction to the gel surface.5 Experimental SetupTactile Sensors The tactile sensor we adopt is MC-Tac sensor [4], the gel has a Young’s module of0.123 MPa. It is important when selecting the thread material since the thread needs to be stiffer toleave the imprint on the gel.5Figure 3: An example of the real-world steps for goal-conditioned insertion. The observations areobtained by the tactile sensor, and the model predicts a uv-plane transformation for the thread robot.We repeat the step until the agent judges the state a success or failure.Thread-Eyelets We select three kinds of threads made of nylon, metal, and glass fiber with Young’smodule of 8.3 GPa, 50 GPa, and 90 GPa respectively.Aside from the stiffness, the thickness of the threads and the size of the eyelet clearance are alsoimportant to show the adaptability of our method. We use threads with thicknesses of 0.2mm,0.5mm,1mm, and 2mm. The sizes of the eyelets clearance are 0.6×7.5mm2,1.6×15mm2, and2.4×9mm2. The thread and eyelets we use in the tasks are shown in Fig. 4.Figure 4: We have selected three kinds of eyelets and four kinds of threads with different sizes, toprove the ability of our method.As shown in Fig. 5, when we fix the needle onto a base, we vary the angles from 45◦to 90◦, and thebase support positions are randomly placed as long as it is within the reach of the robot.Figure 5: We conduct our experiments on different needle positions and angles, to show the abilityof our model to adopt different situations of the needle states.Training Details In our simulation training, we only train our model with needle #2 and line #2.The line in the simulation is modeled with Obi [6], an XPBD-based physics engine. For keeping theline straight, we use the damping value 0.9, and the particle resolution for modeling the line is 0.7.We train the RL model with Stable baselines3 [30] and RFUniverse [31] to interact with Unity. Wetrain three different models: PPO ,SAC , and DDPG with the same learning rate of 3e-4 and totaltimesteps 1e5. The sample number Nof needle pixels for reward function calculation is 500.66 Results6.1 MetricsSuccess Rate If the imprint of the tail-end is inside the eyelet, we consider it a success. We measurethe success rate in real-world experiments.Mean Distance Error We measure the distance between the predicted thread tip and the groundtruth, and calculate the average distance for every kind of thread.6.2 Evaluation on Tip Pose EstimationAs mentioned earlier, we have prepared 500 data samples for each thread. We split 400 of them totrain and 100 to test. We report the quantitative results with mean distance error in Table 1. Andthe qualitative results are shown in Fig. 6. Since we use the estimated tip position for planning adesired location to insert, thus as long as this location, which is influenced by a sum error of tipposition estimation, peyelet estimation, trajectory following for the robot to execute the plan, is stillin the range of the gel surface, the insertion process can continue. 
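For reference, the regressor f(·) of Eq. (1) and the mean-distance metric reported in Table 1 can be sketched in PyTorch as follows. The 4-layer MLP, the (l_tail, θ, p_tac) inputs, and the 400/100 train/test split match the description above; the hidden width of 128 and treating p_tac and p_tip as 3-D positions in metres are assumptions made only for illustration.

import torch
import torch.nn as nn

class TipPoseMLP(nn.Module):
    # 4-layer MLP f(.) mapping (l_tail, theta, p_tac) to the offset p_tip - p_tac.
    # The hidden width of 128 is an assumption; only a 4-layer MLP is specified above.
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),      # inputs: l_tail (1), theta (1), p_tac (3)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                 # predicted offset from p_tac to the tip
        )

    def forward(self, l_tail, theta, p_tac):
        # l_tail, theta: (B, 1) tensors; p_tac: (B, 3) tensor
        x = torch.cat([l_tail, theta, p_tac], dim=-1)
        return p_tac + self.net(x)                # p_tip = f(l_tail, theta, p_tac) + p_tac

def mean_distance_error_mm(model, l_tail, theta, p_tac, p_tip_gt):
    # Mean Euclidean distance between predicted and ground-truth tip positions,
    # reported in millimetres as in Table 1 (positions assumed to be in metres).
    with torch.no_grad():
        pred = model(l_tail, theta, p_tac)
    return (pred - p_tip_gt).norm(dim=-1).mean().item() * 1000.0

# Hypothetical usage on the 100 held-out samples of one thread:
# model = TipPoseMLP()
# err_mm = mean_distance_error_mm(model, l_tail_test, theta_test, p_tac_test, p_tip_test)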
The gel surface has an area of 15 mm × 15 mm, so the tip pose estimation error is small relative to this tolerance. Additionally, the two-run tracing takes about 1–2 minutes depending on the length of the thread (5 cm–20 cm).
Figure 6: Qualitative results for tail-end estimation and insertion. The tip pose estimation and goal-conditioned insertion allow us to successfully thread needles with different sizes and thicknesses.

Thread          Mean Distance (mm)
#1 (≈0.2 mm)    2.2
#2 (≈0.5 mm)    1.7
#3 (≈1 mm)      3.5
#4 (≈2 mm)      4.1
Table 1: Quantitative results for tail-end finding.

Angle (°)   Success Rate
45          57.89%
60          63.33%
90          62.89%
Table 2: Average success rate for different needle angles.

6.3 Results of Tail-end Insertion
We conduct experiments on all three kinds of needle eyelets and four threads, with 20 random base support locations. The needle is fixed on the base support at an angle of 60°. According to a small test on the angles (Table 2), the needle angle does not have a significant influence on the results.
We also conduct the experiments with a visual servoing method (denoted "VS" in Tab. 3) as a baseline. This method simply calculates the pixel distance between the COMs of the needle mask and the thread imprint, projects it to a real-world distance using the calibrated camera model, and then plans and moves the gripper accordingly with an off-the-shelf planner, MoveIt! [32].
The quantitative success rates are shown in Tab. 3. Due to needle size, line #3 cannot be threaded into needle #1, and line #4 cannot be threaded into needles #1 and #2. The results clearly show that, with accurate tail-end finding, the success rate of Tail-end Insertion is high, reaching 63.33%, while the visual servoing baseline only reaches 47.22%, which demonstrates the strong performance of our method. Although we only train on one kind of needle and thread, the intermediate representation shared between real-world tactile images and masks from our simulation environment successfully closes the sim2real gap and thus provides good generalizability and transferability across different needles and threads. We can also conclude that the sizes of the needles and threads affect the success rate considerably, since bigger needles give the thread more opportunity for trial and error. The choice of reinforcement learning policy has little impact on the results. The insertion process takes an average of 3.41 steps to complete, in less than 1 minute. We elaborate on and discuss more factors that influence our experiments and results in the supplementary materials.

Policy   Needle (clearance, mm²)   Thread #1 (≈0.2 mm)   #2 (≈0.5 mm)   #3 (≈1 mm)   #4 (≈2 mm)
VS       #1 (0.6×7.5)              35%                   35%            *            *
VS       #2 (1.6×15)               55%                   40%            45%          *
VS       #3 (2.4×9)                65%                   50%            50%          50%
PPO      #1 (0.6×7.5)              50%                   45%            *            *
PPO      #2 (1.6×15)               65%                   65%            50%          *
PPO      #3 (2.4×9)                70%                   80%            85%          60%
SAC      #1 (0.6×7.5)              45%                   50%            *            *
SAC      #2 (1.6×15)               65%                   65%            45%          *
SAC      #3 (2.4×9)                80%                   85%            75%          65%
DDPG     #1 (0.6×7.5)              60%                   50%            *            *
DDPG     #2 (1.6×15)               55%                   60%            55%          *
DDPG     #3 (2.4×9)                75%                   75%            75%          60%
Table 3: Quantitative results for the success rate of four kinds of lines and three different needles, with three different RL policies and 20 random base support poses.

6.4 Limitation
We briefly analyze the limitations of our method; more analysis and visual failure cases can be found on our website.
Thread stiffness requirement: The key limitation of our proposed methodology is its inability to work with soft threads such as sewing threads.
It is due to only the thread which has a stiffnessgreater than the gel being used can cause the imprint.Thread fixation to obtain the length: Our method takes a two-run process to locate the tail-end ofthe thread. During the Tail-end Finding stage, the thread has to be fixed in one end, ensuring that itsremaining length does not change. This could complicate the hardware setting.Tactile Sensor Positioning: The tactile sensor on eyelet robot needs to come in contact with theback side of the eyelet. Thus it requires the eyelet to be thin and cannot be applied to applicationssuch as USB insertion where the back side of the hole is not easily accessible. While it can be usedfor tasks like screw insertion into a nut.7 ConclusionIn this work, we present T-NT, a novel approach to deformable linear object (DLO) insertion tasks,particularly needle threading. Our strategy incorporates a tactile perception-based approach forboth Tail-end Finding andTail-end Insertion stages, utilizing the vision-based tactile sensor. Thereinforcement learning model for the insertion stage, trained in a simulated environment, was shownto be effective and adaptable to real-world scenarios. This tactile-based approach indicates that theuse of tactile perception, combined with reinforcement learning, can facilitate research involvinghighly deformable objects and requires precise manipulation. Further research may benefit frombuilding on this work to extend its application to other DLOs and complex manipulative tasks.8AcknowledgmentsThis work was supported by the National Key R&D Program of China (No. 2021ZD0110704),Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Qi ZhiInstitute, and Shanghai Science and Technology Commission (21511101200).References[1] J. Silv ́erio, G. Clivaz, and S. Calinon. A laser-based dual-arm system for precise control ofcollaborative robots. In 2021 IEEE International Conference on Robotics and Automation(ICRA) , pages 9183–9189. IEEE, 2021.[2] S. Huang, Y . Yamakawa, T. Senoo, and M. Ishikawa. Robotic needle threading manipulationbased on high-speed motion strategy using high-speed visual feedback. In 2015 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 4041–4046. IEEE,2015.[3] H. Kim, Y . Ohmura, and Y . Kuniyoshi. Gaze-based dual resolution deep imitation learningfor high-precision dexterous robot manipulation. IEEE Robotics and Automation Letters , 6(2):1630–1637, 2021.[4] J. Ren, J. Zou, and G. Gu. Mc-Tac: Modular camera-based tactile sensor for robot gripper. InThe 16th International Conference on Intelligent Robotics and Applications (ICIRA) , 2023.[5] I. H. Taylor, S. Dong, and A. Rodriguez. Gelslim 3.0: High-resolution measurement of shape,force and slip in a compact tactile-sensing finger. In 2022 International Conference on Roboticsand Automation (ICRA) , pages 10781–10787. IEEE, 2022.[6] V . M. Studio. URL http://obi.virtualmethodstudio.com/ .[7] S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, C. Li, J. Yang, H. Su, J. Zhu, and L. Zhang.Grounding dino: Marrying dino with grounded pre-training for open-set object detection.2023.[8] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead,A. C. Berg, W.-Y . Lo, P. Doll ́ar, and R. Girshick. Segment anything. arXiv:2304.02643 , 2023.[9] M. Yu, H. Zhong, and X. Li. Shape control of deformable linear objects with offline and onlinelearning of local linear deformation models. 
In 2022 International Conference on Roboticsand Automation (ICRA) , pages 1337–1343. IEEE, 2022.[10] R. Laezza, R. Gieselmann, F. T. Pokorny, and Y . Karayiannidis. Reform: A robot learningsandbox for deformable linear object manipulation. In 2021 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 4717–4723. IEEE, 2021.[11] Y . Yang, J. A. Stork, and T. Stoyanov. Online model learning for shape control of deformablelinear objects. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 4056–4062. IEEE, 2022.[12] A. Wilson, H. Jiang, W. Lian, and W. Yuan. Cable routing and assembly using tactile-drivenmotion primitives. arXiv preprint arXiv:2303.11765 , 2023.[13] K. Suzuki, M. Kanamura, Y . Suga, H. Mori, and T. Ogata. In-air knotting of rope using dual-arm robot based on deep learning. In 2021 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 6724–6731. IEEE, 2021.[14] H. Wakamatsu, E. Arai, and S. Hirai. Knotting/unknotting manipulation of deformable linearobjects. The International Journal of Robotics Research , 25(4):371–395, 2006.9[15] S. Lin, X. Jiang, and Y . Liu. Cable manipulation with partially occluded vision feedback.In2022 IEEE International Conference on Robotics and Biomimetics (ROBIO) , pages 1245–1250. IEEE, 2022.[16] J. Lv, Y . Feng, C. Zhang, S. Zhao, L. Shao, and C. Lu. Sam-rl: Sensing-aware model-based re-inforcement learning via differentiable physics-based simulation and rendering. arXiv preprintarXiv:2210.15185 , 2022.[17] K. Galassi, A. Caporali, and G. Palli. Cable detection and manipulation for dlo-in-hole assem-bly tasks. In 2022 IEEE 5th International Conference on Industrial Cyber-Physical Systems(ICPS) , pages 01–06. IEEE, 2022.[18] R. Zanella, D. De Gregorio, S. Pirozzi, and G. Palli. Dlo-in-hole for assembly tasks with tactilefeedback and lstm networks. In 2019 6th International Conference on Control, Decision andInformation Technologies (CoDIT) , pages 285–290. IEEE, 2019.[19] Y . She, S. Wang, S. Dong, N. Sunil, A. Rodriguez, and E. Adelson. Cable manipulation witha tactile-reactive gripper. The International Journal of Robotics Research , 40(12-14):1385–1401, 2021.[20] M. Lambeta, P.-W. Chou, S. Tian, B. Yang, B. Maloon, V . R. Most, D. Stroud, R. Santos,A. Byagowi, G. Kammerer, et al. Digit: A novel design for a low-cost compact high-resolutiontactile sensor with application to in-hand manipulation. IEEE Robotics and Automation Letters ,5(3):3838–3845, 2020.[21] W. Yuan, S. Dong, and E. H. Adelson. Gelsight: High-resolution robot tactile sensors forestimating geometry and force. Sensors , 17(12):2762, 2017.[22] E. Smith, R. Calandra, A. Romero, G. Gkioxari, D. Meger, J. Malik, and M. Drozdzal. 3d shapereconstruction from vision and touch. Advances in Neural Information Processing Systems , 33:14193–14206, 2020.[23] E. Smith, R. Calandra, A. Romero, G. Gkioxari, D. Meger, J. Malik, and M. Drozdzal. 3d shapereconstruction from vision and touch. Advances in Neural Information Processing Systems , 33:14193–14206, 2020.[24] S. Wang, J. Wu, X. Sun, W. Yuan, W. T. Freeman, J. B. Tenenbaum, and E. H. Adelson. 3dshape perception from monocular vision, touch, and shape priors. In 2018 IEEE/RSJ Interna-tional Conference on Intelligent Robots and Systems (IROS) , pages 1606–1613. IEEE, 2018.[25] W. Xu, Z. Yu, H. Xue, R. Ye, S. Yao, and C. Lu. Visual-tactile sensing for in-hand objectreconstruction. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 8803–8812, 2023.[26] L. Pecyna, S. Dong, and S. Luo. Visual-tactile multimodality for following deformable linearobjects using reinforcement learning. In 2022 IEEE/RSJ International Conference on Intelli-gent Robots and Systems (IROS) , pages 3987–3994. IEEE, 2022.[27] D. F. Gomes and S. Luo. Geltip tactile sensor for dexterous manipulation in clutter. In TactileSensing, Skill Learning, and Robotic Dexterous Manipulation , pages 3–21. Elsevier, 2022.[28] R. Y . Tsai, R. K. Lenz, et al. A new technique for fully autonomous and efficient 3 d roboticshand/eye calibration. IEEE Transactions on robotics and automation , 5(3):345–358, 1989.[29] G. Bradski. The opencv library. Dr. Dobb’s Journal: Software Tools for the ProfessionalProgrammer , 25(11):120–123, 2000.[30] A. Raffin, A. Hill, M. Ernestus, A. Gleave, A. Kanervisto, and N. Dormann. Stable baselines3.https://github.com/DLR-RM/stable-baselines3 , 2019.10[31] H. Fu, W. Xu, H. Xue, H. Yang, R. Ye, Y . Huang, Z. Xue, Y . Wang, and C. Lu. Rfuniverse:A physics-based action-centric interactive environment for everyday household tasks. arXivpreprint arXiv:2202.00199 , 2022.[32] D. Youakim, P. Ridao, N. Palomeras, F. Spadafora, D. Ribas, and M. Muzzupappa. Moveit!:Autonomous underwater free-floating manipulation. IEEE Robotics & Automation Magazine ,24(3):41–51, 2017.11 |
SgTPdyehXMA | Language to Rewards for Robotic Skill SynthesisWenhao Yu, Nimrod Gileadi, Chuyuan Fuy, Sean Kirmaniy, Kuang-Huei Leey,Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever,Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang,Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei XiaGoogle DeepMindhttps://language-to-reward.github.io/zAbstract: Large language models (LLMs) have demonstrated exciting progress inacquiring diverse new capabilities through in-context learning, ranging from logical rea-soning to code-writing. Robotics researchers have also explored using LLMs to advancethe capabilities of robotic control. However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applyingLLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered primitives to interface with the robot. On the other hand, reward function is aflexible representation that can be optimized for control policies to achieve diverse tasks,while their semantic richness makes them suitable to be specified by LLMs. In this work,we introduce a new paradigm that harnesses this realization by utilizing LLMs to definereward parameters that can be optimized and accomplish variety of robotic tasks. Usingreward as the intermediate interface, we can effectively bridge the gap between high-level language instructions to low-level robot actions. Meanwhile, combining this witha real-time optimizer, MuJoCo MPC, empowers an interactive behavior creation experi-ence where users can immediately observe the results and provide feedback to the system.To systematically evaluate the performance of our proposed method, we designed atotal of 17 tasks for a simulated quadruped robot and a dexterous manipulator robot. Wedemonstrate that our proposed method reliably tackles 90% of the designed tasks, whilea baseline using primitive skills as the interface with Code-as-policies achieves 50%of the tasks. We further validated our method on a real robot arm where complex ma-nipulation skills such as non-prehensile pushing emerge through our interactive system.Keywords: Large language model (LLM), Low-level skill learning, Legged locomotion,Dexterous manipulation1 IntroductionThe recent advancements in large language models (LLMs) pretrained on extensive internet data [ 1,2]have revolutionized the ability to interpret and act on user inputs in natural language. These LLMs exhibitremarkable adaptability to new contexts (such as APIs [ 3], task descriptions [ 4], or textual feedback [ 5]),empowering tasks from logical reasoning [ 6,7] to code generation [ 8]. These diverse applications haveextended to robotics as well, where substantial progress has been made in using LLMs to drive robotbehaviors [ 5,4,9,10,11]: from step-by-step planning [ 4,9,12], goal-oriented dialogue [ 10,11], to robot-code-writing agents [ 3,13]. While they impart new modes of generalization, they focus on using languageto concatenate together new behaviors from an existing library of control primitives that are either manually-engineered or learned a priori. Despite having internal knowledge about robot motions, LLMs struggle withdirectly outputting low-level robot commands due to the limited availability of relevant training data (Fig. 
1). As a result, the expression of these methods is bottlenecked by the breadth of the available primitives, the design of which often requires extensive expert knowledge or massive data collection [14, 15, 16].
∗Co-first authors, equal contribution. †Core contributors. ‡Corresponding emails: {magicmelon,nimrod,xiafei}@google.com. See Contributions in Appendix A.1.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.
Figure 1: LLMs have some internal knowledge about robot motions, but cannot directly translate them into actions (left). Low-level action code can be executed on robots, but LLMs know little about them (mid). We bridge this gap by proposing a system (right) consisting of the Reward Translator that interprets the user input and provides a reward specification, which is then consumed by a Motion Controller that interactively synthesizes a robot motion given the reward.
To tackle these challenges, we need to operate at a level of abstraction that allows harnessing the capabilities offered by LLMs. Our key insight is to use reward functions as an interface that bridges the gap between language and low-level robot actions. This is motivated by the fact that language instructions from humans often tend to describe behavioral outcomes instead of low-level behavioral details (e.g.
“robot standing up”versus “applying 15 Nm to hip motor”), and therefore we posit that it would be easier to connect instructionsto rewards than low-level actions given the richness of semantics in rewards. In addition, reward termsare usually modular and compositional, which enables concise representations of complex behaviors, goals,and constraints. This modularity further creates an opportunity for the user to interactively steer the robotbehavior. However, in many previous works in reinforcement learning (RL) or model predictive control(MPC), manual reward design requires extensive domain expertise [ 17,18,19,20,21]. This points to amissing link between the reward structures and task specification which is often in natural language. As such,we propose to utilize LLMs to automatically generate rewards, and leverage online optimization techniquesto solve them. Concretely, we explore using LLMs to translate task semantics to reward functions, and useMuJoCo MPC, a real-time optimization tool to synthesize robot behavior in real-time [ 22]. Thus rewardfunctions generated by LLMs can enable non-technical users to generate and steer novel and intricate robotbehaviors without the need for vast amounts of data nor the expertise to engineer low-level primitives.Across a span of 17 control problems on a simulated quadruped and a dexterous manipulator robot, we showthat this formulation delivers diverse and challenging locomotion and manipulation skills. Examples includegetting a quadruped robot to stand up, asking it to do a moonwalk, or tasking a manipulator with dexteroushand to open a faucet. We perform a large-scale evaluation to measure the overall performance of ourproposed method. We compare our method to a baseline that uses a fixed set of primitive skills and an alter-native formulation of grounding language to reward. We show that our proposed formulation can solve 40%more skills than baselines and is more stable in solving individual skills. We further deploy our approachto a real robot manipulator and demonstrate complex manipulation skills through language instructions.Our work makes the following core contributions: i) We explore a novel interactive framework of usingreward function as the interface to bridge language models and low-level robot actions. ii) We introducea two-layer prompting scheme that effectively improve the reliability of the system in harnessing internalmotion knowledge from LLMs. iii) We demonstrate superior performance of our proposed system tobaseline methods on two robot embodiments with 17 challenging control tasks and show validation ofthe approach on a robotic hardware.2 Related WorkLanguage to Actions. Directly predicting low-level control actions based on a language instruction hasbeen studied using various robot learning frameworks. Early work in the language community studiedmapping templated language to controllers with temporal logic [ 23] or learning a parser to motion prim-itives [24], while more recent work utilize end-to-end models that produce actions conditioned on naturallanguage descriptions. One example is instruction following methods in navigation [ 25]. However, theyoften assume low-dimensional actions navigating from one node of the graph to another [ 25,26]. 
To extendthe end-to-end approaches to manipulation, a common approach is to utilize latent embeddings of language2commands as multitask input context, and train with behavioral cloning [ 14,27,16], offline reinforcementlearning [ 28], goal-conditioned reinforcement learning [ 29], or in a shared autonomy paradigm [ 30]. Whileend-to-end trained policies can be performant, they require significant amount of data in the form ofoffline datasets or online environment interaction. In contrast, we study a less data hungry approach wherelow-level actions are not directly produced by an end-to-end policy but instead by an optimal controller.Language to Code. Code generation models have been widely studied both in and outside roboticscontext [31, 8, 32]. The capability of those models range from solving coding competition questions [33]and benchmarks [ 34], to drawing simple figures [ 35], generating policies that solve 2D tasks [ 36], andcomplex instruction following tasks [ 3]. In this work, we study LLMs for generating code for rewardfunctions, and show that the expression of the rewards can lead to expressive low-level policies.Language to Rewards. The idea of translating natural language instructions to rewards has been exploredby several prior work [ 37,38,39,40,41,42,43,44]. A common strategy in this direction is to train domain-specific reward models that map language instructions to reward values [ 38,42,40] or constraints [ 39].Although these methods can achieve challenging language conditioned robotic tasks such as object pushing[39], and drawer opening [ 40], they require considerable language-labeled data to train the reward model.Recent works investigated using LLMs directly as a reward function for inferring user intentions in negotia-tion games or collaborative human-AI interaction games [ 37,44]. By leveraging LLMs to assign reward val-ues during RL training, they demonstrate training agents that are aligned with user intentions and preferenceswithout explicit reward modeling. However, these works receive reward values of rollouts when training RLpolicies, which requires a large number of queries to LLMs during training. In contrast, we levrage LLMsto produce a parameterized reward function that can then be optimized. A similar direction to this work isautomated parameterization of reward functions, which had been explored in AutoRL [ 21], however, theydon’t provide a language interface. Finally, a concurrent work by Huang et al. explored extracting 3D valuemaps from large foundation models, which are used by motion planners to perform manipulation tasks [ 45].Incorporating Iterative Human Feedback. Correcting plans with iterative language feedback has alsobeen explored in the past. Broad et al. enable efficient online corrections using distributed correspondencegraphs to ground language [ 46]. However, this work relies on a semantic parser with pre-defined mappingsto ground language corrections. More end-to-end approaches have also demonstrated learning a languagecorrection conditioned policy, but they are similarly data hungry and thus fall back to shared autonomyto reduce complexity [ 47]. Later work explore mapping language corrections to composable cost functionssimilar to our work by training a prediction model from demonstration and apply trajectory optimizationto perform control [ 39]. 
Follow-up works further simplify the system by integrating language corrections to directly modify the waypoints of a trajectory, using extensive datasets of paired corrections and demonstrations [48, 49]. In contrast to these prior works, we demonstrate a flexible and data-efficient approach that leverages LLMs to allow for multi-step correction of reward functions based on human feedback.
3 Grounding Language to Actions Using Rewards
3.1 Background and Reward Interface
Our system takes a user instruction in natural language and synthesizes the corresponding reward function for the desired motion. We define the reward function in the context of a Markov Decision Process (MDP): (S, A, R, P, p_0), where S is the state space, A is the action space, R: S × A → ℝ is the reward function, P: S × A → S is the dynamics, and p_0 is the initial state distribution. Given a reward function R, an optimal controller finds a sequence of actions a_{1:H} = {a_1, ..., a_H} that maximizes the expected accumulated reward: J(a_{1:H}) = E_{τ=(s_0, a_0, ..., s_H)} [Σ_{t=0}^{H} R(s_t, a_t)], where H is the rollout horizon.
In this work, we assume the reward takes a particular form, suitable for use with MJPC. The reward is the sum of a set of individual terms:
R(s, a) = Σ_{i=0}^{M} w_i · n_i(r_i(s, a; ψ_i)),   (1)
where w_i ∈ ℝ⁺ is a non-negative weight, n_i(·): ℝ → ℝ⁺ is a twice-differentiable norm that takes its minimum at 0, r_i ∈ ℝ is a residual term that achieves optimality when r_i = 0, and ψ_i is the parameters of the i-th residual term. For example, if we want the robot to raise its body height h to a desired height, we may design a residual term r_h(h; ψ) = h − ψ, where the reward parameter ψ denotes the desired height, and use the l2 norm to construct the final reward function: R_h = w · ||r_h||_2. In principle, one may design task-specific residual terms that solve particular control tasks. However, designing these residuals requires domain expertise and may not generalize to novel tasks. In this work, we use a set of generic and simple residual terms, and leverage the power of LLMs to compose different terms to generate complex behaviors. The full set of residual terms used in this work can be found in Appendix A.6.
Our proposed system consists of two key components (Fig. 1 right): i) a Reward Translator, built upon pre-trained Large Language Models (LLMs) [10], that interacts with and understands user intents and modulates all reward parameters and weights w, and ii) a Motion Controller, based on MuJoCo MPC [22], that takes the generated reward and interactively optimizes the action sequence a_{1:H}. Below we provide more details on the design of the Reward Translator and Motion Controller.
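To make the reward interface of Eq. (1) concrete, the short Python sketch below composes weighted, normed residual terms and instantiates the body-height example r_h(h; ψ) = h − ψ from the text. The residual name, the state key, and the numeric values are illustrative assumptions; the actual residual set and norms used with MJPC are those listed in Appendix A.6.

import numpy as np

def l2_norm(residual):
    # A norm n(.) that is non-negative and attains its minimum at 0.
    return float(np.sum(np.square(residual)))

def compose_reward_terms(terms):
    # terms: list of (weight, norm, residual_fn, psi), mirroring Eq. (1):
    #   R(s, a) = sum_i w_i * n_i(r_i(s, a; psi_i)).
    # Each term is optimal at zero, so the planner drives the composed value toward 0.
    def objective(state, action):
        return sum(w * norm(res(state, action, psi)) for w, norm, res, psi in terms)
    return objective

def height_residual(state, action, psi):
    # Body-height example from the text: r_h(h; psi) = h - psi, psi = desired height.
    # "torso_height" is a placeholder state key used only in this sketch.
    return state["torso_height"] - psi

height_objective = compose_reward_terms([(1.0, l2_norm, height_residual, 0.3)])
print(height_objective({"torso_height": 0.25}, None))   # ~0.0025, approaching 0 as h -> 0.3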
3.2 Reward Translator
Figure 2: Detailed dataflow of the Reward Translator. A Motion Descriptor LLM takes the user input and describes the user-specified motion in natural language, and a Reward Coder translates the motion into the reward parameters.
Inspired by recent progress on Large Language Models (LLMs), we propose to build the Reward Translator based on LLMs to map user interactions to reward functions corresponding to the desired robot motion. As reward tuning is highly domain-specific and requires expert knowledge, it is unsurprising that LLMs trained on generic language datasets (e.g. [1]) cannot directly generate a reward for specific hardware. Instead, we explore the in-context learning ability of LLMs to achieve this goal, inspired by prior work that demonstrated a variety of in-context learning skills for LLMs [2, 50]. Furthermore, we decompose the problem of language to reward into two stages: a motion description task and a reward coding task, as illustrated in Fig. 2.
Motion Description In the first stage, we design a Motion Descriptor LLM that interprets and expands the user input into a natural language description of the desired robot motion following a pre-defined template (see example in Fig. 2). Although it is possible for LLMs to directly generate reasonable reward functions for relatively simple tasks, they often fail for tasks that necessitate complex reasoning. On the other hand, as observed in Fig. 1 left, LLMs can describe complex motions in detailed natural language successfully. Inspired by this observation, we design a template that describes common movements of a robot (see Fig. 2 top right for an example) to effectively harness LLMs' internal knowledge about motions. The Motion Descriptor produces more structured and predictable outputs and improves the stability of the overall system. In addition, as we are describing the motion in natural language, we do not need to provide any specific examples in the prompt and can rely entirely on LLMs to generate the result.
Reward Coding In the second stage, we translate the generated motion description into the reward function using a second LLM. We formulate the problem of language to reward function as a code-writing task to benefit from the LLMs' knowledge of coding and code structure, and thus we name the second LLM the Reward Coder. We design a prompt for instructing the LLM to generate reward-specifying code (see example in Fig. 2 bottom right). The prompt consists of three parts: i) a description of the reward APIs that the LLM can call to specify different parameters of the reward function, ii) an example response that we expect the Reward Coder to produce, and iii) the constraints and rules that the Reward Coder needs
Note that the in-prompt example demonstrates to the LLM what the response should look like, rather than teaching it how to perform a specific task. As such, the Reward Coder must specify the reward parameters based on its own knowledge of the motion, derived from the natural language description.

3.3 Motion Controller
The Motion Controller needs to map the reward function generated by the Reward Translator to low-level robot actions $a_{1:H}$ that maximize the accumulated reward $J(a_{1:H})$. There are several possible ways to achieve this, including reinforcement learning (RL), offline trajectory optimization, or, as in this work, model predictive control (MPC). At each control step, MPC plans a sequence of optimized actions $a_{1:H}$ and sends it to the robot. The robot applies the action corresponding to its current timestamp, advances to the next step, and sends the updated robot state to the MPC planner to initiate the next planning cycle. The frequent re-planning in MPC makes it robust to uncertainties in the system and, importantly, enables interactive motion synthesis and correction. We use MJPC [22], an open-source implementation based on the MuJoCo simulator [51]. MJPC has demonstrated the interactive creation of diverse behaviors such as legged locomotion, grasping, and finger-gaiting, while supporting multiple planning algorithms such as iLQG and Predictive Sampling. Following the observation by Howell et al. [22] that second-order planners such as iLQG produce smoother and more accurate actions, while zeroth-order planners such as Predictive Sampling are better at exploring non-smooth optimization landscapes, we use iLQG for the legged locomotion tasks and Predictive Sampling for the manipulation tasks in this work.
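The receding-horizon structure described above can be summarized by the following sketch. The `planner` and `robot` interfaces are hypothetical stand-ins, not the actual MJPC bindings; the key point is that only the first action of each plan is executed before re-planning, which is what allows a newly generated reward to take effect immediately during interactive corrections.

```python
# Minimal sketch of the MPC loop, assuming hypothetical planner/robot interfaces.

def mpc_control_loop(planner, robot, reward_fn, horizon_H, num_steps):
    planner.set_reward(reward_fn)                 # reward produced by the Reward Translator
    for _ in range(num_steps):
        state = robot.get_state()                 # latest robot state
        actions = planner.plan(state, horizon_H)  # optimized a_1:H for this cycle
        robot.apply_action(actions[0])            # execute only the first action
        # Re-planning from the updated state every step keeps the controller
        # robust to disturbances and responsive to reward changes.
```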
4 Experiments
We design experiments to answer the following questions: 1) Is our proposed method, combining LLMs and MJPC, able to generate diverse and complex robot motions through a natural language interface? 2) Does interfacing with the reward function result in a more expressive pipeline than interfacing directly with low-level or primitive actions, and is the Motion Descriptor necessary for achieving reliable performance? 3) Can our method be applied to real robot hardware?

4.1 Experiment Setup
We first evaluate our approach on two simulated robotic systems: a quadruped robot and a dexterous robot manipulator (Fig. 3). Both robots are modeled in MuJoCo MPC [22]. We use GPT-4 as the underlying LLM module, with the temperature parameter set to 0.3 [52]. Here we describe the key setup for each robot. More details regarding the full prompts and reward functions can be found in Appendix A.5 and A.6.

Quadruped Robot. In this example, we command a four-legged, 12-DoF robot (Fig. 3 (a)) to perform a variety of motor skills. Quadruped robots have been demonstrated to perform a large variety of skills including locomotion [35], hopping [18], biped standing [53, 54], and parkour [55]. We apply our system to the quadruped robot to perform a similar suite of skills, using only natural language as input.

Dexterous Manipulator. In the second example, we demonstrate our system on a dexterous manipulator robot consisting of a 7-DoF Franka Emika arm and a 20-DoF Shadow Hand as the end-effector (Fig. 3 (b)). This creates a large action space, making it challenging to manually design a controller for this robot.

4.2 Baselines
We compare our proposed system to two baseline methods: i) a variant of our approach that only uses the Reward Coder, without access to the Motion Descriptor, and ii) Code-as-Policies [3], where the LLM generates a plan using a set of pre-defined primitive skills. For the Code-as-Policies (CaP) baseline, we design the primitive skills based on common commands available to the robot. Due to limited space, we list the full set of primitives in Appendix A.3.

4.3 Tasks
We design nine tasks for the quadruped robot and eight tasks for the dexterous manipulator to evaluate the performance of our system. Fig. 3 shows samples of the tasks. The full list of tasks and sample videos can be found in Appendix A.2 and on the project website (language-to-reward.github.io).

Figure 3: The two robots used in our experiments and sampled tasks. (a) a quadruped robot with 12 DoFs. (b) a dexterous manipulator robot with 27 DoFs. (c) example rollouts produced by our algorithm.

4.4 Evaluation results
For each task and method considered, we generate 10 responses from the Reward Translator, each of which is evaluated in MJPC 50 times. Fig. 4 shows the results for both robots. Our proposed approach achieves a notably higher success rate on 11 of the 17 task categories and comparable performance on the remaining tasks, showing the effectiveness of the proposed method. Compared to the CaP baseline, our method achieves a better success rate on almost all tasks. This is because CaP performs well on tasks that can be expressed by the given primitives (e.g., Touch object) or that are very close to the examples given in the prompt (e.g., Sit down), but fails to generalize to novel skills. On the other hand, using the Reward Coder only can achieve success on some tasks but fails on those that require more reasoning. For example, when asked to open a drawer, this baseline often forgets to task the robot hand with approaching the drawer handle and only designs a reward encouraging the drawer to be open. Sampled responses from the different methods can be found in Appendix A.8.

To further understand the overall performance of the different systems, we report the pass rate in Fig. 4 right, a standard metric for analyzing code generation performance [8]. Each point in the plot represents the percentage of tasks the system can solve, given that it can generate N pieces of code for each task and pick the best performing one. As such, the pass rate curve measures both the stability of the system (the flatter the curve, the more stable the system) and its task coverage (the converged point represents how many tasks the system can solve given sufficient trials). It is clear from the results that for both embodiments, using reward as the interface empowers LLMs to solve more tasks more reliably, and the use of the structured motion description further boosts system performance.
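The text does not specify the exact pass-rate estimator, so the sketch below is one plausible reading under stated assumptions: `success[t][k]` holds the measured success rate of the k-th generated reward code for task t (here 10 codes per task, each evaluated 50 times), a task counts as solved at budget N if the best of N randomly drawn codes exceeds an assumed success threshold, and the curve is estimated by Monte Carlo sampling.

```python
import numpy as np

def pass_rate(success, N, threshold=0.5, trials=1000, rng=None):
    """Fraction of tasks solved when picking the best of N sampled codes per task."""
    rng = np.random.default_rng(rng)
    success = np.asarray(success)          # shape: (num_tasks, codes_per_task)
    num_tasks, codes_per_task = success.shape
    solved = np.zeros(trials)
    for i in range(trials):
        # For each task, draw N codes at random and keep the best one.
        picks = rng.integers(0, codes_per_task, size=(num_tasks, N))
        best = success[np.arange(num_tasks)[:, None], picks].max(axis=1)
        solved[i] = np.mean(best >= threshold)
    return solved.mean()

# Usage with random placeholder data (17 tasks, 10 codes each):
fake_success = np.random.rand(17, 10)
print([round(pass_rate(fake_success, N), 2) for N in (1, 3, 10)])
```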
4.5 Ablation Study
In this section, we perform an ablation study to investigate how our proposed pipeline responds to varying amounts of rules and reminders provided to the Reward Translator and Motion Controller. Specifically, for the Reward Translator, we removed all rules in the prompt except for rule 5, which specifies the output format. Similarly, for the Motion Controller, we removed all rules except for rule 1, to make sure the LLMs respond in the correct format. We compare our full prompt to three variants: 1) reduced rules for the Reward Translator only, 2) reduced rules for the Motion Controller only, and 3) reduced rules for both modules. The results are shown in Figure 5. We can see that the full prompts outperform the ablated variants, showing the usefulness of the rules. Furthermore, we find that reducing the rules in the Motion Controller causes a larger performance degradation than reducing the rules in the Reward Translator.

Figure 4: Comparison of our method and alternative methods in terms of pass rate: if we generate N pieces of code for each task and pick the best performing one, what is the percentage of tasks that the system can successfully tackle.
Figure 5: Ablation study for different amounts of rules in the designed prompts.

4.6 Interactive Motion Synthesis Results
One benefit of using a real-time optimization tool like MJPC is that humans can observe the motion being synthesized in real time and provide feedback. We showcase two examples where we teach the robot to perform complex tasks through multiple rounds of interaction. In the first example, we task the quadruped robot with standing up and performing a moon-walk skill (Fig. 6a). We give four instructions to achieve the task, as shown in Fig. 6; each instruction improves the behavior towards the desired one based on the interactively synthesized results. This showcases that users can interactively shape the behavior of the robot in natural language. In the second example, we showcase a different way of leveraging the interactivity of our system by sequentially commanding the dexterous manipulator robot to place an apple in a drawer, as seen in Fig. 6b. The interactive results are best viewed in the supplementary video, and the full code output from our method can be found in Appendix A.9.

Figure 6: The two interactive examples using our proposed system: (a) the quadruped performs a moon-walk; (b) the manipulator places an apple in the drawer.

4.7 Real-robot experiments
Figure 7: Implementation and rollouts of the proposed system in the real world.

We implement a version of our method on a mobile manipulator and test it on nonprehensile manipulation tasks in the real world. To obtain object states in the real world, we first use an open-vocabulary detector, F-VLM [56], to segment the object, then extract the associated points from the point cloud behind the mask and perform outlier rejection on points that might belong to the background. From a bird's-eye view, we fit a minimum-volume rectangle and take the extreme point values to determine the extent along the z-axis. We use the resulting 3D bounding box as the state estimate for the corresponding object in simulation.
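The following is a minimal sketch of this bounding-box fit. It is illustrative only: the segmentation and depth back-projection that produce the masked object points are omitted, the outlier-rejection heuristic is an assumption, and OpenCV's `minAreaRect` is used as one possible minimum-area rectangle fitter.

```python
import numpy as np
import cv2  # used only for the rotated minimum-area rectangle in the x-y plane

def estimate_object_box(object_points_xyz):
    """Fit a 3D box: rotated rectangle in x-y (bird's-eye view), extremes in z."""
    pts = np.asarray(object_points_xyz, dtype=np.float32)

    # Simple outlier rejection: drop points far from the median (assumed heuristic).
    med = np.median(pts, axis=0)
    dist = np.linalg.norm(pts - med, axis=1)
    pts = pts[dist < 3.0 * np.median(dist) + 1e-6]

    # Bird's-eye view: minimum-area rectangle over the x-y projection.
    (cx, cy), (w, h), angle_deg = cv2.minAreaRect(pts[:, :2])

    # Vertical extent from the extreme z values.
    z_min, z_max = float(pts[:, 2].min()), float(pts[:, 2].max())

    return {
        "center": (float(cx), float(cy), 0.5 * (z_min + z_max)),
        "size": (float(w), float(h), z_max - z_min),
        "yaw_deg": float(angle_deg),
    }
```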
To detect the surface of the table with the proper orientation, we use an AprilTag [57]. In addition, as seen in the supplementary video, MJPC sometimes discovers highly dexterous and dynamic maneuvers to accomplish the desired task that are beyond the capabilities of current real hardware. To mitigate this issue, we design a regularization residual term specifically to encourage steady and stable robot movements when applying our system to the real robot (set_sim2real_regularization_reward() in Fig. 7; see Appendix A.6.3 for details on this term).

We demonstrate sim-to-real transfer on three tasks: object pushing, grasping, and drawer opening. Our system is able to generate the relevant reward code and synthesize the correct motion. We also measured the success rate of our system on three scenarios: 1) picking up an apple, 2) picking up a Rubik's cube, and 3) opening the middle drawer, by repeating our approach 10 times for each. Our method achieved 70%, 70%, and 80% success rates, respectively. For example rollouts, please refer to the supplementary video/website and Fig. 8.

5 Discussion and Conclusion
We propose a new paradigm for interfacing an LLM with a robot through reward functions, powered by a low-level model predictive control tool, MuJoCo MPC. Using the reward function as the interface enables LLMs to work in a semantically rich space that plays to their strengths, while ensuring the expressiveness of the resulting controller. To further improve the performance of the system, we propose to use a motion description template to better extract internal knowledge about robot motions from LLMs. We evaluate our proposed system on two simulated robotic platforms, a quadruped robot and a dexterous manipulator robot, and apply our approach to both robots to acquire a wide variety of skills. Compared to alternative methods that do not use reward as the interface, or do not use the motion description template, our method achieves significantly better performance in terms of stability and the number of tasks it can solve.

Limitations and Future Work. Though we show that our system can obtain a diverse set of skills through natural language interactions, there are a few limitations. First, we currently design templates of motion descriptions for each type of robot morphology, which requires manual work. An interesting future direction is to unify or automate the template design to make the system easily extendable to novel robot morphologies. Second, our method currently relies on language as the interaction interface with human users. As such, it can be challenging to specify tasks that are not easily described in language (e.g., "walk gracefully"). One potential way to mitigate this issue is to extend the system to multi-modal inputs to allow richer forms of user interaction (e.g., showing a video of the desired behavior). Third, we currently use pre-defined reward terms whose weights and parameters are modulated by the LLMs. Constraining the reward design space helps improve the stability of the system while sacrificing some flexibility; for example, our current design does not support time-varying rewards and would require re-designing the prompt to support that. Enabling LLMs to reliably design reward functions from scratch is thus an important and fruitful research direction.

Acknowledgments
The authors would like to acknowledge Ken Caluwaerts, Kristian Hartikainen, Steven Bohez, Carolina Parada, Marc Toussaint, and the greater teams at Google DeepMind for their feedback and contributions.

References
[1] A. Chowdhery, S. Narang, J. Devlin, M.
Bosma, G. Mishra, A. Roberts, P . Barham, H. W. Chung,C. Sutton, S. Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprintarXiv:2204.02311 , 2022.[2]T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P . Dhariwal, A. Neelakantan, P . Shyam,G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural informationprocessing systems , 33:1877–1901, 2020.[3]J. Liang, W. Huang, F. Xia, P . Xu, K. Hausman, B. Ichter, P . Florence, and A. Zeng. Code as policies:Language model programs for embodied control. arXiv preprint arXiv:2209.07753 , 2022.[4]A. Zeng, A. Wong, S. Welker, K. Choromanski, F. Tombari, A. Purohit, M. Ryoo, V . Sindhwani,J. Lee, V . V anhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning withlanguage. arXiv preprint arXiv:2204.00598 , 2022.[5]W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P . Florence, A. Zeng, J. Tompson, I. Mordatch,Y . Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models.arXiv preprint arXiv:2207.05608 , 2022.[6]J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain ofthought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903 , 2022.[7]T. Kojima, S. S. Gu, M. Reid, Y . Matsuo, and Y . Iwasawa. Large language models are zero-shotreasoners. arXiv preprint arXiv:2205.11916 , 2022.[8]M. Chen, J. Tworek, H. Jun, Q. Y uan, H. P . d. O. Pinto, J. Kaplan, H. Edwards, Y . Burda,N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprintarXiv:2107.03374 , 2021.[9]M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances.arXiv preprint arXiv:2204.01691 , 2022.[10] S. V emprala, R. Bonatti, A. Bucker, and A. Kapoor. Chatgpt for robotics: Design principles andmodel abilities. Microsoft Auton. Syst. Robot. Res , 2:20, 2023.[11] C. Snell, S. Yang, J. Fu, Y . Su, and S. Levine. Context-aware language modeling for goal-orienteddialogue systems. arXiv preprint arXiv:2204.10198 , 2022.[12] W. Huang, P . Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extractingactionable knowledge for embodied agents. In International Conference on Machine Learning , pages9118–9147. PMLR, 2022.[13] I. Singh, V . Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg.Progprompt: Generating situated robot task plans using large language models. arXiv preprintarXiv:2209.11302 , 2022.[14] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-z:Zero-shot task generalization with robotic imitation learning. In 5th Annual Conference on RobotLearning , 2021. URL https://openreview.net/forum?id/equal.tosf/eight.tosfkbp/two.tosf/three.tosftSGYv .[15] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman,A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprintarXiv:2212.06817 , 2022.[16] C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P . Florence.Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407 , 2022.[17] J. Lee, J. Hwangbo, and M. Hutter. Robust recovery controller for a quadrupedal robot using deepreinforcement learning. arXiv preprint arXiv:1901.07517 , 2019.9[18] J. Siekmann, Y . Godse, A. Fern, and J. 
Hurst. Sim-to-real learning of all common bipedal gaits viaperiodic reward composition. In 2021 IEEE International Conference on Robotics and Automation(ICRA) , pages 7309–7315. IEEE, 2021.[19] F. Xia, C. Li, R. Martín-Martín, O. Litany, A. Toshev, and S. Savarese. Relmogen: Leveraging motiongeneration in reinforcement learning for mobile manipulation. arXiv preprint arXiv:2008.07792 ,2020.[20] M. Toussaint, J. Harris, J.-S. Ha, D. Driess, and W. Hönig. Sequence-of-constraints mpc: Reactivetiming-optimal control of sequential manipulation. In 2022 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 13753–13760. IEEE, 2022.[21] H.-T. L. Chiang, A. Faust, M. Fiser, and A. Francis. Learning navigation behaviors end-to-end withautorl. IEEE Robotics and Automation Letters , 4(2):2007–2014, 2019.[22] T. Howell, N. Gileadi, S. Tunyasuvunakool, K. Zakka, T. Erez, and Y . Tassa. Predictive Sampling:Real-time Behaviour Synthesis with MuJoCo. dec 2022. doi:10.48550/arXiv.2212.00541. URLhttps://arxiv.org/abs//two.tosf/two.tosf/one.tosf/two.tosf./zero.tosf/zero.tosf/five.tosf/four.tosf/one.tosf .[23] H. Kress-Gazit, G. E. Fainekos, and G. J. Pappas. Translating structured english to robot controllers.Advanced Robotics , 22(12):1343–1359, 2008.[24] C. Matuszek, E. Herbst, L. Zettlemoyer, and D. Fox. Learning to parse natural language commands toa robot control system. In Experimental robotics: the 13th international symposium on experimentalrobotics , pages 403–415. Springer, 2013.[25] A. Ku, P . Anderson, R. Patel, E. Ie, and J. Baldridge. Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. arXiv preprint arXiv:2010.07954 , 2020.[26] A. Kamath, P . Anderson, S. Wang, J. Y . Koh, A. Ku, A. Waters, Y . Yang, J. Baldridge, and Z. Parekh.A new path: Scaling vision-and-language navigation with synthetic instructions and imitationlearning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 10813–10823, 2023.[27] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding language with visual affordances overunstructured data. In Proceedings of the IEEE International Conference on Robotics and Automation(ICRA) , London, UK, 2023.[28] F. Ebert, Y . Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, and S. Levine.Bridge data: Boosting generalization of robotic skills with cross-domain datasets. arXiv preprintarXiv:2109.13396 , 2021.[29] J. Fu, A. Korattikara, S. Levine, and S. Guadarrama. From language to goals: Inverse reinforcementlearning for vision-based instruction following. arXiv preprint arXiv:1902.07742 , 2019.[30] S. Karamcheti, M. Srivastava, P . Liang, and D. Sadigh. Lila: Language-informed latent actions. InProceedings of the 5th Conference on Robot Learning (CoRL) , 2021.[31] J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry,Q. Le, et al. Program synthesis with large language models. arXiv:2108.07732 , 2021.[32] K. Ellis, C. Wong, M. Nye, M. Sable-Meyer, L. Cary, L. Morales, L. Hewitt, A. Solar-Lezama, andJ. B. Tenenbaum. Dreamcoder: Growing generalizable, interpretable knowledge with wake-sleepbayesian program learning. arXiv:2006.08381 , 2020.[33] Y . Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno,A. Dal Lago, et al. Competition-level code generation with alphacode. Science , 378(6624):1092–1097, 2022.[34] F. Alet, J. Lopez-Contreras, J. Koppel, M. Nye, A. Solar-Lezama, T. 
Lozano-Perez, L. Kaelbling, andJ. Tenenbaum. A large-scale benchmark for few-shot program induction and synthesis. In ICML , 2021.[35] L. Tian, K. Ellis, M. Kryven, and J. Tenenbaum. Learning abstract structure for drawing by efficientmotor program induction. NeurIPS , 2020.10[36] D. Trivedi, J. Zhang, S.-H. Sun, and J. J. Lim. Learning to synthesize programs as interpretableand generalizable policies. NeurIPS , 2021.[37] M. Kwon, S. M. Xie, K. Bullard, and D. Sadigh. Reward design with language models. InInternational Conference on Learning Representations (ICLR) , 2023.[38] J. Lin, D. Fried, D. Klein, and A. Dragan. Inferring rewards from language in context. arXiv preprintarXiv:2204.02515 , 2022.[39] P . Sharma, B. Sundaralingam, V . Blukis, C. Paxton, T. Hermans, A. Torralba, J. Andreas, and D. Fox.Correcting robot plans with natural language feedback. In Robotics: Science and Systems (RSS) , 2022.[40] S. Nair, E. Mitchell, K. Chen, b. ichter, S. Savarese, and C. Finn. Learning language-conditionedrobot behavior from offline data and crowd-sourced annotation. In A. Faust, D. Hsu, andG. Neumann, editors, Proceedings of the 5th Conference on Robot Learning , volume 164 ofProceedings of Machine Learning Research , pages 1303–1315. PMLR, 08–11 Nov 2022. URLhttps://proceedings.mlr.press/v/one.tosf/six.tosf/four.tosf/nair/two.tosf/two.tosfa.html .[41] L. Fan, G. Wang, Y . Jiang, A. Mandlekar, Y . Yang, H. Zhu, A. Tang, D.-A. Huang, Y . Zhu, andA. Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge.InThirty-sixth Conference on Neural Information Processing Systems Datasets and BenchmarksTrack , 2022. URL https://openreview.net/forum?id/equal.tosfrc/eight.tosfo_j/eight.tosfI/eight.tosfPX .[42] P . Goyal, S. Niekum, and R. J. Mooney. Using natural language for reward shaping in reinforcementlearning. arXiv preprint arXiv:1903.02020 , 2019.[43] D. Bahdanau, F. Hill, J. Leike, E. Hughes, A. Hosseini, P . Kohli, and E. Grefenstette. Learning tounderstand goal specifications by modelling reward. arXiv preprint arXiv:1806.01946 , 2018.[44] H. Hu and D. Sadigh. Language instructed reinforcement learning for human-ai coordination. In40th International Conference on Machine Learning (ICML) , 2023.[45] W. Huang, C. Wang, R. Zhang, Y . Li, J. Wu, and L. Fei-Fei. V oxposer: Composable 3d value mapsfor robotic manipulation with language models. arXiv preprint arXiv:2307.05973 , 2023.[46] A. Broad, J. Arkin, N. D. Ratliff, T. M. Howard, and B. Argall. Real-time natural languagecorrections for assistive robotic manipulators. International Journal of Robotics Research (IJRR) ,36:684–698, 2017.[47] Y . Cui, S. Karamcheti, R. Palleti, N. Shivakumar, P . Liang, and D. Sadigh. “no, to the right”–onlinelanguage corrections for robotic manipulation via shared autonomy. arXiv preprint arXiv:2301.02555 ,2023.[48] A. F. C. Bucker, L. F. C. Figueredo, S. Haddadin, A. Kapoor, S. Ma, and R. Bonatti. Reshaping robottrajectories using natural language commands: A study of multi-modal data alignment using trans-formers. In International Conference on Intelligent Robots and Systems (IROS) , pages 978–984, 2022.[49] A. F. C. Bucker, L. F. C. Figueredo, S. Haddadin, A. Kapoor, S. Ma, S. V emprala, and R. Bonatti.Latte: Language trajectory transformer. arXiv preprint arXiv:2208.02918 , 2022.[50] N. Ziems, W. Y u, Z. Zhang, and M. Jiang. Large language models are built-in autoregressive searchengines. arXiv preprint arXiv:2305.09612 , 2023.[51] E. Todorov, T. Erez, and Y . Tassa. 
Mujoco: A physics engine for model-based control. In 2012IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026–5033. IEEE,2012. doi:10.1109/IROS.2012.6386109.[52] OpenAI. Gpt-4 technical report. arXiv , 2023.[53] L. Smith, J. C. Kew, T. Li, L. Luu, X. B. Peng, S. Ha, J. Tan, and S. Levine. Learning and adaptingagile locomotion skills by transferring experience. arXiv preprint arXiv:2304.09834 , 2023.[54] Y . Fuchioka, Z. Xie, and M. van de Panne. Opt-mimic: Imitation of optimized trajectories fordynamic quadruped behaviors. arXiv preprint arXiv:2210.01247 , 2022.11[55] K. Caluwaerts, A. Iscen, J. C. Kew, W. Y u, T. Zhang, D. Freeman, K.-H. Lee, L. Lee, S. Saliceti,V . Zhuang, et al. Barkour: Benchmarking animal-level agility with quadruped robots. arXiv preprintarXiv:2305.14654 , 2023.[56] W. Kuo, Y . Cui, X. Gu, A. Piergiovanni, and A. Angelova. F-vlm: Open-vocabulary object detectionupon frozen vision and language models. arXiv preprint arXiv:2209.15639 , 2023.[57] E. Olson. Apriltag: A robust and flexible visual fiducial system. 2023.12A AppendixA.1 Author ContributionsAuthor contributions by type, ordered alphabetically within each category:Method (conception, implementation, iteration, evaluation) : Nimrod Gileadi, Kuang-Huei Lee, Y uvalTassa, Fei Xia, Peng Xu, Wenhao Y u.Infrastructure Development : Tom Erez, Nimrod Gileadi, Y uval Tassa, Fei Xia, Wenhao Y u.Hardware Deployment : Chuyuan Fu, Nimrod Gileadi, Leonard Hasenclever, Jan Humplik, SeanKirmani, Y uval Tassa, Fei Xia, Ted Xiao, Wenhao Y u.Project Advising : Tom Erez, Nicolas Heess, Brian Ichter, Dorsa Sadigh, Jie Tan, Y uval Tassa, Fei Xia,Andy Zeng, Tingnan Zhang.Paper Writing/Revision : Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Nimrod Gileadi, BrianIchter, Dorsa Sadigh, Fei Xia, Andy Zeng, Wenhao Y u, Tingnan Zhang.A.2 Full task listHere we show the list of tasks used in our evaluation as well as the instructions used for each task.Task Instructions Expected BehaviorFacing sunrise It’s early in the morning, make the robot head towards the sun. Robot face towards East.Facing sunset It’s late in the afternoon, make the robot head towards the sunset. Robot face towards West.Sit down Sit down low to ground with torso flat. Robot’s CoM drops lower and remain flat.Roll Over I want the robot to roll by 180 degrees. Robot’s belly faces up.Spin Spin fast. Robot reach a fast turning speed.Lift one paw I want the robot to lift its front right paw in the air. The front right paw of the robot lifts up in the air.Lift paw higherI want the robot to lift its front right paw in the air.Lift it even higher.The robot lifts its front right paw higher than before.Spin with lifted pawsLift front left paw.Good, now lift diagonal paw as well.Good, in addition I want the robot to spin fast.Robot lifts front left and rear right paws while spin fast.Stand up on two feet Make the robot stand upright on two back feet like a human. Robot stands on two back feet and keep balance.Trotting Describe trotting gait, then make the robot walk in a trotting gait. Robot walks with diagonal feet in synchronous.Pacing Describe pacing gait, then make the robot walk in a pacing gait. Robot walks with feet on the same side in synchronous.Bounding Describe bounding gait, then make the robot walk in a bounding gait. 
Robot walks with front and back feet in synchronous respectively.Table 1: List of tasks used in evaluation for the quadruped robot.Task Instructions Expected BehaviorTouch object Touch the {object} Robot fingers in contact with the object.Lift object Lift the {object} to0:5m The object needs to stay above 0:4mfor 1s.Move object Move the {object_a} to{object_b} The distance between object needs to be smaller than 0:1m.Upright object Place the {object} upright The z axis of the object needs to be parallel to x-y plane.Flip object Flip the {object} The local up vector of the object should be pointing downward.Lift two objects Lift the {object_a} and{object_b} at the same time. Both objects need to stay above 0:4mfor1s.Turn on the faucet Turn on the faucet. The valve of the faucet needs to be turned 90 degrees.Open the drawer Open the drawer. The drawer needs to be pulled fully open.Table 2: List of tasks used in evaluation for the dexterous manipulation.13A.3 Baseline detailsFor the quadruped robot, we use the following three primitive skills:•head_towards(direction) specifies a target heading direction direction for the robot toreach.•walk(forward_speed, sideway_speed, turning_speed) controls the robot to walk andturn in different directions. This is a common interface used in quadruped robots to navigatein different environments.•set_joint_poses(leg_name, joint_angles) directly sets the joint positions for each DoFon the robot. To help the LLMs understand the joint angles, we provide a set of examples inthe prompt.For the dexterous manipulator robot, we use three primitive skills to control the robot motion and alsoa function to get access to the position of an object in the scene:•end_effector_to(position) moves the center of the robot hand’s palm to the givenposition .•end_effector_open() opens the hand of the robot by extending all fingers.•end_effector_close() closes the hand to form a grasping pose.•get_object_position(obj_name) gets the position of a certain object in the scene.•get_joint_position(joint_name) gets the position of a certain joint in the scene.A.4 Additional illustrations for real-world resultsUser Open the middle drawer Figure 8: More illustrations for the real-world results for the proposed system.A.5 Full PromptsHere we list the full prompts used in Reward Translator for all experiments used in this work.i) Motion Descriptor Prompt for Quadruped14Describe the motion of a dog robot using the following form:[start of description]The torso of the robot should roll by [NUM: 0.0] degrees towards right, the torso should pitch upwardat [NUM: 0.0] degrees.The height of the robot’s CoM or torso center should be at [NUM: 0.3] meters.The robot should {CHOICE: [face certain direction, turn at certain speed]}. If facing certain direction,it should be facing {CHOICE: [east, south, north, west]}. If turning, it should turn at [NUM: 0.0]degrees/s.The robot should {CHOICE: [go to a certain location, move at certain speed]}. If going to certainlocation, it should go to (x=[NUM: 0.0], y=[NUM: 0.0]). 
If moving at certain speed, it should moveforward at [NUM: 0.0]m/s and sideways at [NUM: 0.0]m/s (positive means left).[optional] front_left foot lifted to [NUM: 0.0] meters high.[optional] back_left foot lifted to [NUM: 0.0] meters high.[optional] front_right foot lifted to [NUM: 0.0] meters high.[optional] back_right foot lifted to [NUM: 0.0] meters high.[optional] front_left foot extend forward by [NUM: 0.0] meters.[optional] back_left foot extend forward by [NUM: 0.0] meters.[optional] front_right foot extend forward by [NUM: 0.0] meters.[optional] back_right foot extend forward by [NUM: 0.0] meters.[optional] front_left foot shifts inward laterally by [NUM: 0.0] meters.[optional] back_left foot shifts inward laterally by [NUM: 0.0] meters.[optional] front_right foot shifts inward laterally by [NUM: 0.0] meters.[optional] back_right foot shifts inward laterally by [NUM: 0.0] meters.[optional] front _left foot steps on the ground at a frequency of [NUM: 0.0] Hz, during the steppingmotion, the foot will move [NUM: 0.0] meters up and down, and [NUM: 0.0] meters forward and back,drawing a circle as if it’s walking {CHOICE: forward, back}, spending [NUM: 0.0] portion of the timein the air vs gait cycle.[optional] back _left foot steps on the ground at a frequency of [NUM: 0.0] Hz, during the steppingmotion, the foot will move [NUM: 0.0] meters up and down, and [NUM: 0.0] meters forward and back,drawing a circle as if it’s walking {CHOICE: forward, back}, spending [NUM: 0.0] portion of the timein the air vs gait cycle.[optional] front _right foot steps on the ground at a frequency of [NUM: 0.0] Hz, during the steppingmotion, the foot will move [NUM: 0.0] meters up and down, and [NUM: 0.0] meters forward and back,drawing a circle as if it’s walking {CHOICE: forward, back}, spending [NUM: 0.0] portion of the timein the air vs gait cycle.[optional] back _right foot steps on the ground at a frequency of [NUM: 0.0] Hz, during the steppingmotion, the foot will move [NUM: 0.0] meters up and down, and [NUM: 0.0] meters forward and back,drawing a circle as if it’s walking {CHOICE: forward, back}, spending [NUM: 0.0] portion of the timein the air vs gait cycle.[optional] The phase offsets for the four legs should be front _left: [NUM: 0.0], back _left: [NUM: 0.0],front_right: [NUM: 0.0], back_right: [NUM: 0.0].[end of description]Rules:1. If you see phrases like [NUM: default_value], replace the entire phrase with a numerical value.2. If you see phrases like CHOICE: [choice1, choice2, ...], it means you should replace the entirephrase with one of the choices listed. Be sure to replace all of them. If you are not sure about thevalue, just use your best judgement.3. Phase offset is between [0, 1]. So if two legs’ phase offset differs by 0 or 1 they are moving insynchronous. If they have phase offset difference of 0.5, they are moving opposite in the gait cycle.4. The portion of air vs the gait cycle is between [0, 1]. So if it’s 0, it means the foot will always stayon the ground, and if it’s 1 it means the foot will always be in the air.5. I will tell you a behavior/skill/task that I want the quadruped to perform and you will provide the fulldescription of the quadruped motion, even if you may only need to change a few lines. Always startthe description with [start of description] and end it with [end of description].6. We can assume that the robot has a good low-level controller that maintains balance and stability aslong as it’s in a reasonable pose.7. 
Y ou can assume that the robot is capable of doing anything, even for the most challenging task.8. The robot is about 0.3m high in CoM or torso center when it’s standing on all four feet withhorizontal body. It’s about 0.65m high when it stand upright on two feet with vertical body. When therobot’s torso/body is flat and parallel to the ground, the pitch and roll angles are both 0.9. Holding a foot 0.0m in the air is the same as saying it should maintain contact with the ground.10. Do not add additional descriptions not shown above. Only use the bullet points given in thetemplate.1511. If a bullet point is marked [optional], do NOT add it unless it’s absolutely needed.12. Use as few bullet points as possible. Be concise.ii) Reward Coder Prompt for QuadrupedWe have a description of a robot’s motion and we want you to turn that into the corresponding programwith following functions:def set_torso_targets(target_torso_height,target_torso_pitch, target_torso_roll, target_torso_location_xy,target_torso_velocity_xy, target_torso_heading, target_turning_speed)target _torso_height: how high the torso wants to reach. When the robot is standing on all four feet ina normal standing pose, the torso is about 0.3m high.target _torso_pitch: How much the torso should tilt up from a horizontal pose in radians. A positivenumber means robot is looking up, e.g. if the angle is 0.5*pi the robot will be looking upward, if theangel is 0, then robot will be looking forward.target _torso_velocity _xy: target torso moving velocity in local space, x is forward velocity, y is sidewayvelocity (positive means left).target _torso_heading: the desired direction that the robot should face towards. The value oftarget_torso_heading is in the range of 0 to 2*pi, where 0 and 2*pi both mean East, pi being West, etc.target_turning_speed: the desired turning speed of the torso in radians per second.Remember: one of target _torso_location _xy and target _torso_velocity _xy must be None. one oftarget_torso_heading and target_turning_speed must be None. No other inputs can be None.def set_feet_pos_parameters(feet_name,lift_height, extend_forward, move_inward)feet_name is one of (“front_left", “back_left", “front_right", “back_right").lift_height: how high should the foot be lifted in the air. If is None, disable this term. If it’s set to 0,the foot will touch the ground.extend_forward: how much should the foot extend forward. If is None, disable this term.move_inward: how much should the foot move inward. If is None, disable this term.def set_feet_stepping_parameters(feet_name, stepping_frequency, air_ratio,phase_offset, swing_up_down, swing_forward_back, should_activate)feet_name is one of (“front_left", “rear_left", “front_right", “rear_right").air_ratio (value from 0 to 1) describes how much time the foot spends in the air versus the whole gaitcycle. If it’s 0 the foot will always stay on ground, and if it’s 1 it’ll always stay in the air.phase _offset (value from 0 to 1) describes how the timing of the stepping motion differs betweendifferent feet. For example, if the phase _offset between two legs differs by 0.5, it means one leg willstart the stepping motion in the middle of the stepping motion cycle of the other leg. swing _up_downis how much the foot swings vertical during the motion cycle.swing _forward _back is how much the foot swings horizontally during the motion cycle. 
Ifswing _forward _back is positive, the foot would look like it’s going forward, if it’s negative, the foot willlook like it’s going backward.If should_activate is False, the leg will not follow the stepping motion.def execute_plan(plan_duration/equal.tosf/two.tosf)This function sends the parameters to the robot and execute the plan for “plan _duration” seconds,default to be 2Example answer code:import numpy as np /numbersign.tosf import numpy because we are using it belowreset_reward()/numbersign.tosf This is a new task so reset reward; otherwise we don’t need itset_torso_targets(/zero.tosf./one.tosf,np.deg/two.tosfrad(/five.tosf), np.deg/two.tosfrad(/one.tosf/five.tosf), (/two.tosf, /three.tosf), None, None, np.deg/two.tosfrad(/one.tosf/zero.tosf))set_feet_pos_parameters("front_left", /zero.tosf./one.tosf, /zero.tosf./one.tosf, None)set_feet_pos_parameters("back_left", None, None, /zero.tosf./one.tosf/five.tosf)set_feet_pos_parameters("front_right", None, None, None)set_feet_pos_parameters("back_right", /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, None)16set_feet_stepping_parameters("front_right", /two.tosf./zero.tosf, /zero.tosf./five.tosf, /zero.tosf./two.tosf, /zero.tosf./one.tosf, -/zero.tosf./zero.tosf/five.tosf, True)set_feet_stepping_parameters("back_left", /three.tosf./zero.tosf, /zero.tosf./seven.tosf, /zero.tosf./one.tosf, /zero.tosf./one.tosf, /zero.tosf./zero.tosf/five.tosf, True)set_feet_stepping_parameters("front_left", /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, False)set_feet_stepping_parameters("back_right", /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, False)execute_plan(/four.tosf)Remember: 1. Always format the code in code blocks.2. Do not invent new functions or classes. The only allowed functions you can call are the ones listedabove. Do not leave unimplemented code blocks in your response.3. The only allowed library is numpy. Do not import or use any other library. If you use np, be sureto import numpy.4. If you are not sure what value to use, just use your best judge. Do not use None for anything.5. Do not calculate the position or direction of any object (except for the ones provided above). Justuse a number directly based on your best guess.6. For set _torso_targets, only the last four arguments (target _torso_location _xy, target _torso_velocity _xy,target_torso_heading, target_turning_speed) can be None. Do not set None for any other arguments.7. Don’t forget to call execute_plan at the end.iii) Baseline: Code-as-Policies Prompt for QuadrupedWe have a quadruped robot. It has 12 joints in total, three for each leg. 
We can use the followingfunctions to control its movements:def set_target_joint_angles(leg_name, target_joint_angles)leg_name is one of (“front_left", “back_left", “front_right", “back_right").target _joint_angles: a 3D vector that describes the target angle for the abduction/adduction, hip, andknee joint of the each leg.def walk(forward_speed, sideway_speed, turning_speed)forward_speed: how fast the robot should walk forwardsideway_speed: how fast the robot should walk sidewaysturning_speed: how fast the robot should be turning (positive means turning right)def head_towards(heading_direction)heading _direction: target heading for the robot to reach, in the range of 0 to 2pi, where 0 means East,0.5pi means North, pi means West, and 1.5pi means South.def execute_plan(plan_duration/equal.tosf/one.tosf/zero.tosf)This function sends the parameters to the robot and execute the plan for “plan _duration" seconds,default to be 2Details about joint angles of each leg: abduction/adduction joint controls the upper leg to swinginginward/outward. When it’s positive, legs will swing outward (swing to the right for right legs and leftfor left legs). When it’s negative, legs will swing inward.hip joint controls the upper leg to rotate around the shoulder. When it’s zero, the upper leg is parallel tothe torso (hip is same height as shoulder), pointing backward. When it’s positive, the upper leg rotatesdownward so the knee is below the shoulder. When it’s 0.5pi, it’s perpendicular to the torso, pointingdownward. When it’s negative, the upper leg rotates upward so the knee is higher than the shoulder.knee joint controls the lower leg to rotate around the knee. When it’s zero, the lower leg is foldedcloser to the upper leg. knee joint angle can only be positive. When it’s 0.5pi, the lower leg is perpen-dicular to the upper leg. 
When it’s pi, the lower leg is fully streching out and parallel to the upper leg.Here are a few examples for setting the joint angles to make the robot reach a few key poses: standingon all four feet:set_target_joint_angles("front_left", [/zero.tosf, /one.tosf, /one.tosf./five.tosf])set_target_joint_angles("back_left", [/zero.tosf, /zero.tosf./seven.tosf/five.tosf, /one.tosf./five.tosf])set_target_joint_angles("front_right", [/zero.tosf, /one.tosf, /one.tosf./five.tosf])set_target_joint_angles("back_right", [/zero.tosf, /zero.tosf./seven.tosf/five.tosf, /one.tosf./five.tosf])execute_plan()sit down on the floor:set_target_joint_angles("front_left", [/zero.tosf, /zero.tosf, /zero.tosf])17set_target_joint_angles("back_left", [/zero.tosf, /zero.tosf, /zero.tosf])set_target_joint_angles("front_right", [/zero.tosf, /zero.tosf, /zero.tosf])set_target_joint_angles("back_right", [/zero.tosf, /zero.tosf, /zero.tosf])execute_plan()lift front left foot:set_target_joint_angles("front_left", [/zero.tosf, /zero.tosf./four.tosf/five.tosf, /zero.tosf./three.tosf/five.tosf])set_target_joint_angles("back_left", [/zero.tosf, /one.tosf, /one.tosf./five.tosf])set_target_joint_angles("front_right", [/zero.tosf, /one.tosf./four.tosf, /one.tosf./five.tosf])set_target_joint_angles("back_right", [/zero.tosf, /one.tosf, /one.tosf./five.tosf])execute_plan()lift back left foot:set_target_joint_angles("front_left", [/zero.tosf, /zero.tosf./five.tosf, /one.tosf./five.tosf])set_target_joint_angles("back_left", [/zero.tosf, /zero.tosf./four.tosf/five.tosf, /zero.tosf./three.tosf/five.tosf])set_target_joint_angles("front_right", [/zero.tosf, /zero.tosf./five.tosf, /one.tosf./five.tosf])set_target_joint_angles("back_right", [/zero.tosf, /zero.tosf./five.tosf, /one.tosf./five.tosf])execute_plan()Remember:1. Always start your response with [start analysis]. Provide your analysis of the problem within 100words, then end it with [end analysis].2. After analysis, start your code response, format the code in code blocks.3. Do not invent new functions or classes. The only allowed functions you can call are the ones listedabove. Do not leave unimplemented code blocks in your response.4. The only allowed library is numpy. Do not import or use any other library. If you use np, be sureto import numpy.5. If you are not sure what value to use, just use your best judge. Do not use None for anything.6. Do not calculate the position or direction of any object (except for the ones provided above). Justuse a number directly based on your best guess.7. Write the code as concisely as possible and try not to define additional variables.8. If you define a new function for the skill, be sure to call it somewhere.9. 
Be sure to call execute_plan at the end.iv) Motion Descriptor Prompt for Dexterous ManipulatorWe have a dexterous manipulator and we want you to help plan how it should move to perform tasksusing the following template:[start of description]To perform this task, the manipulator’s palm should move close to {CHOICE: apple, banana, box, bowl,drawer_handle, faucet_handle, drawer_center, rest_position}.object1={CHOICE: apple, banana, box, bowl, drawer _handle, faucet _handle, drawer _center} should beclose to object2={CHOICE: apple, banana, box, bowl, drawer _handle, faucet _handle, drawer _center,nothing}.[optional] object1 needs to be rotated by [NUM: 0.0] degrees along x axis.[optional] object2 needs to be rotated by [NUM: 0.0] degrees along x axis.[optional] object1 needs to be lifted to a height of [NUM: 0.0]m at the end.[optional] object2 needs to be lifted to a height of [NUM: 0.0]m at the end.[optional] object3={CHOICE: drawer, faucet} needs to be {CHOICE: open, closed}.[end of description]Rules:1. If you see phrases like [NUM: default_value], replace the entire phrase with a numerical value.2. If you see phrases like {CHOICE: choice1, choice2, ...}, it means you should replace the entirephrase with one of the choices listed.3. If you see [optional], it means you only add that line if necessary for the task, otherwise remove thatline.4. The environment contains apple, banana, box, bowl, drawer _handle, faucet _handle. Do not inventnew objects not listed here.5. The bowl is large enough to have all other object put in there.6. I will tell you a behavior/skill/task that I want the manipulator to perform and you will provide thefull plan, even if you may only need to change a few lines. Always start the description with [start ofplan] and end it with [end of plan].187. Y ou can assume that the robot is capable of doing anything, even for the most challenging task.8. Y our plan should be as close to the provided template as possible. Do not include additional details.v) Reward Coder Prompt for Dexterous ManipulatorWe have a plan of a robot arm with palm to manipulate objects and we want you to turn that into thecorresponding program with following functions:def set_l/two.tosf_distance_reward(name_obj_A, name_obj_B)where name _obj_A and name _obj_B are selected from [“palm", “apple", “banana", “box", “bowl",“drawer _handle", “faucet _handle", “drawer _center", “rest _position"]. This term sets a reward forminimizing l2 distance between name _obj_A and name _obj_B so they get closer to each other.rest_position is the default position for the palm when it’s holding in the air.def set_obj_orientation_reward(name_obj, x_axis_rotation_radians)this term encourages the orientation of name _obj to be close to the target (specified byx_axis_rotation_radians).def execute_plan(duration/equal.tosf/two.tosf)This function sends the parameters to the robot and execute the plan for “duration” seconds, default tobe 2.def set_joint_fraction_reward(name_joint, fraction)This function sets the joint to a certain value between 0 and 1. 0 means close and 1 means open.name_joint needs to be select from [’drawer’, ’faucet’]def set_obj_z_position_reward(name_obj, z_height)this term encourages the orientation of name_obj to be close to the height (specified by z_height).def reset_reward()This function resets the reward to default values.Example plan: To perform this task, the manipulator’s palm should move close to object1=apple.object1 should be close to object2=bowl. 
object2 needs to be rotated by 30 degrees along x axis.object2 needs to be lifted to a height of 1.0.This is the first plan for a new task.Example answer code:import numpy as npreset_reward()/numbersign.tosf This is a new task so reset reward; otherwise we don’t need itset_l/two.tosf_distance_reward("palm", "apple")set_l/two.tosf_distance_reward("apple", "bowl")set_obj_orientation_reward("bowl", np.deg/two.tosfrad(/three.tosf/zero.tosf))set_obj_z_position_reward("bowl", /one.tosf./zero.tosf)execute_plan(/four.tosf)Remember:1. Always format the code in code blocks. In your response execute _plan should be called exactly onceat the end.2. Do not invent new functions or classes. The only allowed functions you can call are the ones listedabove. Do not leave unimplemented code blocks in your response.3. The only allowed library is numpy. Do not import or use any other library.4. If you are not sure what value to use, just use your best judge. Do not use None for anything.5. Do not calculate the position or direction of any object (except for the ones provided above). Justuse a number directly based on your best guess.6. Y ou do not need to make the robot do extra things not mentioned in the plan such as stopping therobot.vi) Baseline: Code-as-Policies Prompt for Dexterous Manipulator19We have a manipulator and we want you to help plan how it should move to perform tasks using thefollowing APIs:def end_effector_to(position_obj)position_obj is a list of 3 float numbers [x,y,z]def end_effector_open()Open the end effector.def end_effector_close()Close the end effector.def get_object_position(obj_name)Given an object name, return a list of 3 float numbers [x,y,z] for the object position. the objectcan come from a list of [“apple", “banana", “bowl", “box", “drawer _handle", “faucet _handle",“drawer_center", “rest_position"]def get_normalized_joint_position(joint_name)Given an joint name, return a float numbers x. the joint can come from a list of [“drawer", “faucet"]def reset()Reset the agent.Example answer code:import numpy as npreset()apple_pos /equal.tosf get_object_position("apple")end_effector_to(apple/dollar.tosf_pos)Remember:1. Always format the code in code blocks.2. Do not invent new functions or classes. The only allowed functions you can call are the ones listedabove. Do not leave unimplemented code blocks in your response.3. The only allowed library is numpy. Do not import or use any other library.4. If you are not sure what value to use, just use your best judge. Do not use None for anything.5. Y ou do not need to make the robot do extra things not mentioned in the plan such as stopping therobot.6. Try your best to generate code despite the lack of context.A.6 Reward functions used in our experimentsIn this work we use a set of generic reward functions for each embodiment that the LLMs can modulate.More specifically, we design a set of residual terms as in Equation 1 that are optimized to reach zero byinternally converting them to a l2 loss. Thus given a residual term r()a reward term can be recovered byjjr()jj22. Below we describe the full set of residual terms we use in our experiments for each embodiment.For each term we select the weights for them to have about the same magnitude. The reward coder canadjust the parameters in each term and optionally set the weight to zero to disable a term.A.6.1 QuadrupedTable 3 shows the residual terms used in the quadruped tasks. Note that for the foot-related terms, theyare repeated for all four feet respectively. 
Furthermore, LLMs can optionally set the target foot positionsfpdirectly or through a periodic function max(asin(b2+c);0)whereais the magnitude of the motion,bis the frequency, and cis the phase offset.20Residual Term Formulation Default weightCoM X-Y position jpxypxyj 0.3CoM height pzpz 1.0base yaw pyawpyaw 0.3base pitch ppitchppitch 0.6base roll prollproll 0.1forward velocity _ px_xp 0.1sideways velocity _ py_yp 0.1yaw speed _ pyaw_yawp 0.1foot local position x fpxfpx 1foot local position y fpyfpy 1foot local position z fpzfpz 2Table 3: List of residual terms used for the quadruped robot. pdenotes the position and orientation of the robot’s torso.fpdenotes the position of the robot’s foot (in local space).()means the target value and_()means the time-derivativeof the quantity.Residual Term Formulation Default weightmove obj1 close to obj2 jc1xyzc2xyzj 5move obj to target X-Y position jczczj 5move obj to target height jcxycxyj 10move obj to target orientation joobjojmove joint to target value qq 10Table 4: List of reward terms used for the dexterous manipulator robot. cdenotes the position of the object, odenotesthe orientation of the object, qis the degree of freedom to be manipulated.A.6.2 Dexterous ManipulatorA.6.3 Sim-to-Real residual termAs seen in the supplementary video, MuJoCo MPC can discover highly dynamic and dexterousmanipulation skills that exceeds the capabilities of existing hardwares. To enable successful deploymenton the hardware, we design a regularization term to help achieve steady motions on the robot. Specifically,we use the following residual term:rsim2real=3_pee;ifjj_peejj>0:30; otherwise+_q;ifjj_qjj>0:70;otherwise+0:05_pobj+0:1_pee_pobj;ifjjpeepobjjj<0:10; otherwise+0:4qgripper1:0;ifjjpeepobjjj>0:1qgripper; otherwise;where _peeis the robot end effector velocity, _qis the robot joint velocity, _pobjis the object velocity. Thefirst two terms regularizes the joint and end effector velocities to encourage them to stay at a low-speedregion to prevent jerky robot motions. The third and fourth term encourage objects to move slowly andmatch gripper speed. This is to discourage dynamic in-hand movements. The last term encourages gripperto open while being away from objects and close when approaching objects. This term helps improve thegrasping skill transfer by encouraging a more clean grasp motion.A.7 Detailed evaluation results for each taskFigure 10 and Figure 9 shows the full evaluation results for individual tasks in Section 4.4. For each task,we query the Reward Translator LLM 10 times. For each generated reward code, we evaluate on MJPCfor 50 times and report the success rate. Therefore the results here shows the individual performance ofReward Translator andMotion Controller . We can see that for most tasks Motion Controller achieves21near perfect success rate as long as the right reward is provided, while for tasks that require more dynamicmotion or more precise manipulation (e.g. biped standing, and upright banana) Motion Controller showsmore variance in performance.Figure 9: Full evaluation results for the Dexterous Manipulator robot. Note that to better interpret the results, weorder the generated reward code in the figures based on the mean success rate.22Figure 10: Full evaluation results for the Quadruped robot. 
Note that to better interpret the results, we order thegenerated reward code in the figures based on the mean success rate.A.8 Example outputsTo help understand the behavior of different methods discussed in this work, here we show sampledresponses for two tasks across different methods: Biped standing for quadruped, and Turn on faucet for dex-terous manipulator. For each method, we sample 2-3responses to show both successful and failed results.A.8.1 Sampled results for Biped standing taskUser instruction :Make the robot stand upright on two back feet like a human.OursSample 1import numpy as npreset_reward()set_torso_targets(/zero.tosf./six.tosf/five.tosf,np.deg/two.tosfrad(/nine.tosf/zero.tosf), np.deg/two.tosfrad(/zero.tosf), (/zero.tosf./zero.tosf, /zero.tosf./zero.tosf), None, /zero.tosf, None)set_feet_pos_parameters("front_left", /zero.tosf./six.tosf/five.tosf, None, None)set_feet_pos_parameters("back_left", /zero.tosf./zero.tosf, None, None)set_feet_pos_parameters("front_right", /zero.tosf./six.tosf/five.tosf, None, None)set_feet_pos_parameters("back_right", /zero.tosf./zero.tosf, None, None)execute_plan()23Sample 2import numpy as npreset_reward()set_torso_targets(/zero.tosf./six.tosf/five.tosf,np.deg/two.tosfrad(/nine.tosf/zero.tosf), np.deg/two.tosfrad(/zero.tosf), (/zero.tosf./zero.tosf, /zero.tosf./zero.tosf), None, /zero.tosf, None)set_feet_pos_parameters("front_left", /zero.tosf./six.tosf/five.tosf, None, None)set_feet_pos_parameters("back_left", /zero.tosf./zero.tosf, None, None)set_feet_pos_parameters("front_right", /zero.tosf./six.tosf/five.tosf, None, None)set_feet_pos_parameters("back_right", /zero.tosf./zero.tosf, None, None)Sample 3import numpy as npreset_reward()set_torso_targets(/zero.tosf./six.tosf/five.tosf,np.deg/two.tosfrad(/nine.tosf/zero.tosf), np.deg/two.tosfrad(/zero.tosf), (/zero.tosf, /zero.tosf), None, /zero.tosf, None)set_feet_pos_parameters("front_left", /zero.tosf./six.tosf/five.tosf, None, None)set_feet_pos_parameters("back_left", /zero.tosf./zero.tosf, None, None)set_feet_pos_parameters("front_right", /zero.tosf./six.tosf/five.tosf, None, None)set_feet_pos_parameters("back_right", /zero.tosf./zero.tosf, None, None)set_feet_stepping_parameters("front_left", /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, False)set_feet_stepping_parameters("back_left", /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, False)set_feet_stepping_parameters("front_right", /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, False)set_feet_stepping_parameters("back_right", /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, /zero.tosf./zero.tosf, False)execute_plan()Reward Coder onlySample 1import numpy as np/numbersign.tosf Set torso targets to stand on back feetset_torso_targets(target_torso_height/equal.tosf/zero.tosf./six.tosf,target_torso_pitch/equal.tosf/zero.tosf./five.tosf*np.pi, target_torso_roll/equal.tosf/zero.tosf./zero.tosf,target_torso_location_xy/equal.tosfNone, target_torso_velocity_xy/equal.tosfNone,target_torso_heading/equal.tosfNone, target_turning_speed/equal.tosfNone)/numbersign.tosf Lift front feet off the groundset_feet_pos_parameters(feet_name/equal.tosf"front_left",lift_height/equal.tosf/zero.tosf./three.tosf, extend_forward/equal.tosfNone, 
A.8 Example outputs

To help understand the behavior of the different methods discussed in this work, here we show sampled responses for two tasks across methods: Biped standing for the quadruped, and Turn on faucet for the dexterous manipulator. For each method, we sample 2-3 responses to show both successful and failed results.

A.8.1 Sampled results for Biped standing task

User instruction: Make the robot stand upright on two back feet like a human.

Ours

Sample 1

import numpy as np
reset_reward()
set_torso_targets(0.65, np.deg2rad(90), np.deg2rad(0), (0.0, 0.0), None, 0, None)
set_feet_pos_parameters("front_left", 0.65, None, None)
set_feet_pos_parameters("back_left", 0.0, None, None)
set_feet_pos_parameters("front_right", 0.65, None, None)
set_feet_pos_parameters("back_right", 0.0, None, None)
execute_plan()

Sample 2

import numpy as np
reset_reward()
set_torso_targets(0.65, np.deg2rad(90), np.deg2rad(0), (0.0, 0.0), None, 0, None)
set_feet_pos_parameters("front_left", 0.65, None, None)
set_feet_pos_parameters("back_left", 0.0, None, None)
set_feet_pos_parameters("front_right", 0.65, None, None)
set_feet_pos_parameters("back_right", 0.0, None, None)

Sample 3

import numpy as np
reset_reward()
set_torso_targets(0.65, np.deg2rad(90), np.deg2rad(0), (0, 0), None, 0, None)
set_feet_pos_parameters("front_left", 0.65, None, None)
set_feet_pos_parameters("back_left", 0.0, None, None)
set_feet_pos_parameters("front_right", 0.65, None, None)
set_feet_pos_parameters("back_right", 0.0, None, None)
set_feet_stepping_parameters("front_left", 0.0, 0.0, 0.0, 0.0, 0.0, False)
set_feet_stepping_parameters("back_left", 0.0, 0.0, 0.0, 0.0, 0.0, False)
set_feet_stepping_parameters("front_right", 0.0, 0.0, 0.0, 0.0, 0.0, False)
set_feet_stepping_parameters("back_right", 0.0, 0.0, 0.0, 0.0, 0.0, False)
execute_plan()

Reward Coder only

Sample 1

import numpy as np
# Set torso targets to stand on back feet
set_torso_targets(target_torso_height=0.6,
    target_torso_pitch=0.5*np.pi, target_torso_roll=0.0,
    target_torso_location_xy=None, target_torso_velocity_xy=None,
    target_torso_heading=None, target_turning_speed=None)
# Lift front feet off the ground
set_feet_pos_parameters(feet_name="front_left",
    lift_height=0.3, extend_forward=None, move_inward=None)
set_feet_pos_parameters(feet_name="front_right",
    lift_height=0.3, extend_forward=None, move_inward=None)
# Execute the plan
execute_plan(plan_duration=3)

Sample 2

import numpy as np
# Set the torso targets to stand on two back feet
set_torso_targets(target_torso_height=0.6,
    target_torso_pitch=0.0, target_torso_roll=0.0,
    target_torso_location_xy=None, target_torso_velocity_xy=(0, 0),
    target_torso_heading=None, target_turning_speed=None)
# Lift the front feet off the ground
set_feet_pos_parameters("front_left",
    lift_height=0.3, extend_forward=None, move_inward=None)
set_feet_pos_parameters('front_right',
    lift_height=0.3, extend_forward=None, move_inward=None)
# Execute the plan
execute_plan(plan_duration=3)

Code-as-Policies

Sample 1

set_target_joint_angles("front_left", [0, 1, 1.5])
set_target_joint_angles("back_left", [0, 0.5, 0])
set_target_joint_angles("front_right", [0, 1, 1.5])
set_target_joint_angles("back_right", [0, 0.5, 0])
execute_plan()

Sample 2

set_target_joint_angles("front_left", [0, 0.5, 1.5])
set_target_joint_angles("back_left", [0, 1.2, 0.5])
set_target_joint_angles("front_right", [0, 0.5, 1.5])
set_target_joint_angles("back_right", [0, 1.2, 0.5])
execute_plan()

Sample 3

import numpy as np

def set_legs_to_pose(pose):
    set_target_joint_angles("front_left", pose[0])
    set_target_joint_angles("back_left", pose[1])
    set_target_joint_angles("front_right", pose[2])
    set_target_joint_angles("back_right", pose[3])

def execute_pose(pose, duration=2):
    set_legs_to_pose(pose)
    execute_plan(duration)

# set initial pose
initial_pose = [
    [0, 0, 0],  # front_left
    [0, 0, 0],  # back_left
    [0, 0, 0],  # front_right
    [0, 0, 0]   # back_right
]
set_legs_to_pose(initial_pose)
execute_plan()

# set target pose
target_pose = [
    [0, 0.3, 0.5],  # front_left
    [0, 1.2, 1.5],  # back_left
    [0, 0, 0],      # front_right
    [0, 1.2, 1.5]   # back_right
]
execute_pose(target_pose)
A.8.2 Sampled results for Turn on faucet task

User instruction: Turn on faucet.

Ours

Sample 1

import numpy as np
reset_reward()  # This is a new task so reset reward; otherwise we don't need it
set_l2_distance_reward("palm", "faucet_handle")
set_obj_orientation_reward("faucet_handle", np.deg2rad(90))
set_joint_fraction_reward("faucet", 1)
execute_plan(2)

Sample 2

import numpy as np
reset_reward()  # This is a new task so reset reward; otherwise we don't need it
set_l2_distance_reward("palm", "faucet_handle")
set_joint_fraction_reward("faucet", 1)  # 1 means open
execute_plan(2)

Reward Coder only

Sample 1

import numpy as np
reset_reward()
set_joint_fraction_reward("faucet", 1)
execute_plan()

Sample 2

import numpy as np
reset_reward()  # Reset reward for a new task
set_joint_fraction_reward("faucet", 1)  # Turn on the faucet by setting the joint fraction to 1 (open)
execute_plan(2)

Code-as-Policies

Sample 1

reset()
faucet_handle_position = get_object_position("faucet_handle")
end_effector_to(faucet_handle_position)
end_effector_close()
normalized_joint_position = get_normalized_joint_position("faucet")
end_effector_open()

Sample 2

#python
import numpy as np
reset()
faucet_handle_pos = get_object_position("faucet_handle")
end_effector_to(faucet_handle_pos)
end_effector_open()
end_effector_close()
normalized_faucet_position = get_normalized_joint_position("faucet")
target_faucet_position = np.clip(normalized_faucet_position + 0.5, 0, 1)
end_effector_to([faucet_handle_pos[0], faucet_handle_pos[1], faucet_handle_pos[2] + 0.1])
end_effector_to([faucet_handle_pos[0], faucet_handle_pos[1], faucet_handle_pos[2]])
end_effector_open()

A.9 Full outputs for the interactive examples

Below we show the output reward code from our system for the two interactive examples: tasking a quadruped robot to perform a moonwalk, and tasking a dexterous manipulator robot to place an apple in a drawer.

Quadruped moonwalk

Instruction 1: Make the robot stand upright on two back feet like a human.

import numpy as np
reset_reward()
set_torso_targets(0.65, np.deg2rad(90), np.deg2rad(0), (0.0, 0.0), None, 0, None)
set_feet_pos_parameters("front_left", 0.65, None, None)
set_feet_pos_parameters("back_left", 0.0, None, None)
set_feet_pos_parameters("front_right", 0.65, None, None)
set_feet_pos_parameters("back_right", 0.0, None, None)
execute_plan()

Instruction 2: Good, you actually don't need to keep the front paws at certain height, just leave it to the controller.

import numpy as np
reset_reward()
set_torso_targets(0.65, np.deg2rad(90), np.deg2rad(0), (0.0, 0.0), None, 0, None)
set_feet_pos_parameters("back_left", 0.0, None, None)
set_feet_pos_parameters("back_right", 0.0, None, None)
# Set stepping parameters for all feet to disable stepping motion
for foot in ["front_left", "rear_left", "front_right", "rear_right"]:
    set_feet_stepping_parameters(foot, 0.0, 0.0, 0.0, 0.0, 0.0, False)
execute_plan()
Instruction 3: Good, now make the robot do a moon-walk while stand up like this.

import numpy as np
reset_reward()
set_torso_targets(0.65, np.deg2rad(90), np.deg2rad(0), None, (-0.2, 0.0), 0, None)
set_feet_pos_parameters("back_left", None, None, None)
set_feet_pos_parameters("back_right", None, None, None)
set_feet_stepping_parameters("back_left", 0.5, 0.5, 0.5, 0.1, -0.2, True)
set_feet_stepping_parameters("back_right", 0.5, 0.5, 0.0, 0.1, -0.2, True)
set_feet_stepping_parameters("front_left", 0.0, 0.0, 0.0, 0.0, 0.0, False)
set_feet_stepping_parameters("front_right", 0.0, 0.0, 0.0, 0.0, 0.0, False)
execute_plan()

Instruction 4: Moon walk means the robot should walk backward while the feet swings as if they are moving forward. Correct your answer and also make it walk at a speed of 0.5m/s.

import numpy as np
reset_reward()
set_torso_targets(0.65, np.deg2rad(90), np.deg2rad(0), None, (-0.5, 0.0), 0, None)
set_feet_pos_parameters("back_left", None, None, None)
set_feet_pos_parameters("back_right", None, None, None)
set_feet_stepping_parameters("back_left", 0.5, 0.5, 0.5, 0.1, 0.2, True)
set_feet_stepping_parameters("back_right", 0.5, 0.5, 0.0, 0.1, 0.2, True)
set_feet_stepping_parameters("front_left", 0.0, 0.0, 0.0, 0.0, 0.0, False)
set_feet_stepping_parameters("front_right", 0.0, 0.0, 0.0, 0.0, 0.0, False)
execute_plan()

Dexterous Manipulator place an apple in a drawer

Instruction 1: Open the drawer.

import numpy as np
reset_reward()  # This is a new task so reset reward; otherwise we don't need it
set_l2_distance_reward("palm", "drawer_handle")
set_joint_fraction_reward("drawer", 1)  # 1 means open
execute_plan()

Instruction 2: Good, now put the apple inside the drawer while keep it open. Assume drawer is on the floor so no need to lift up the apple.

import numpy as np
reset_reward()  # This is a new task so reset reward; otherwise we don't need it
set_l2_distance_reward("palm", "apple")
set_l2_distance_reward("apple", "drawer_center")
set_joint_fraction_reward("drawer", 1)
execute_plan()

Instruction 3: Good, now release the apple and move hand away.

import numpy as np
reset_reward()  # This is a new task so reset reward; otherwise we don't need it
set_l2_distance_reward("palm", "rest_position")
execute_plan(4)

Instruction 4: Now close the drawer.

import numpy as np
reset_reward()  # This is a new task so reset reward; otherwise we don't need it
set_l2_distance_reward("palm", "drawer_handle")
set_joint_fraction_reward("drawer", 0)
execute_plan()
NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities

Ruohan Zhang*1,4, Sharon Lee*1, Minjune Hwang*1, Ayano Hiranaka*2, Chen Wang1, Wensi Ai1, Jin Jie Ryan Tan1, Shreya Gupta1, Yilun Hao1, Gabrael Levine1, Ruohan Gao1, Anthony Norcia3, Li Fei-Fei1,4, Jiajun Wu1,4
*Equally contributed; zharu@stanford.edu
1Department of Computer Science, Stanford University
2Department of Mechanical Engineering, Stanford University
3Department of Psychology, Stanford University
4Institute for Human-Centered AI (HAI), Stanford University

Abstract: We present Neural Signal Operated Intelligent Robots (NOIR), a general-purpose, intelligent brain-robot interface system that enables humans to command robots to perform everyday activities through brain signals. Through this interface, humans communicate their intended objects of interest and actions to the robots using electroencephalography (EEG). Our novel system demonstrates success in an expansive array of 20 challenging, everyday household activities, including cooking, cleaning, personal care, and entertainment. The effectiveness of the system is improved by its synergistic integration of robot learning algorithms, allowing NOIR to adapt to individual users and predict their intentions. Our work enhances the way humans interact with robots, replacing traditional channels of interaction with direct, neural communication. Project website: https://noir-corl.github.io/

Keywords: Brain-Robot Interface; Human-Robot Interaction

Figure 1: NOIR is a general-purpose brain-robot interface that allows humans to use their brain signals (1) to control robots to perform daily activities, such as making Sukiyaki (2), ironing clothes (7), playing Tic-Tac-Toe with friends (17), and petting a robot dog (21).

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

1 Introduction

Brain-robot interfaces (BRIs) are a pinnacle achievement in the realm of art, science, and engineering. This aspiration, which features prominently in speculative fiction, innovative artwork, and groundbreaking scientific studies, entails creating robotic systems that operate in perfect synergy with humans. A critical component of such systems is their ability to communicate with humans. In human-robot collaboration and robot learning, humans communicate their intents through actions [1], button presses [2, 3], gaze [4–7], facial expression [8], language [9, 10], etc. [11, 12]. However, the prospect of direct communication through neural signals stands out as the most thrilling but challenging medium.

We present Neural Signal Operated Intelligent Robots (NOIR), a general-purpose, intelligent BRI system with non-invasive electroencephalography (EEG). The primary principle of this system is hierarchical shared autonomy, where humans define high-level goals while the robot actualizes the goals through the execution of low-level motor commands. Taking advantage of the progress in neuroscience, robotics, and machine learning, our system distinguishes itself by extending beyond previous attempts to make the following contributions.

First, NOIR is general-purpose in its diversity of tasks and accessibility. We show that humans can accomplish an expansive array of 20 everyday activities, in contrast to existing BRI systems that are typically specialized for one or a few tasks or exist solely in simulation [13–22]. Additionally, the system can be used by the general population with a minimal amount of training.

Second, the "I" in NOIR means that our robots are intelligent and adaptive.
The robots are equippedwith a library of diverse skills, allowing them to perform low-level actions without dense human su-pervision. Human behavioral goals can naturally be communicated, interpreted, and executed by therobots with parameterized primitive skills , such as Pick(obj-A) orMoveTo(x,y) . Additionally,our robots are capable of learning human intended goals during their collaboration. We show thatby leveraging the recent progress in foundation models, we can make such a system more adaptivewith limited data. We show that this can significantly increase the efficiency of the system.The key technical contributions of NOIR include a modular neural signal decoding pipeline forhuman intentions. Decoding human intended goals (e.g., “pick up the mug from the handle”) fromneural signals is extremely challenging. We decompose human intention into three components:What object to manipulate, How to interact with the object, and Where to interact, and show thatsuch signals can be decoded from different types of neural data. These decomposed signals naturallycorrespond to parameterized robot skills and can be communicated effectively to the robots.In 20 household activities involving tabletop or mobile manipulations, three human subjects suc-cessfully used our system to accomplish these tasks with their brain signals. We demonstrate thatfew-shot robot learning from humans can significantly improve the efficiency of our system. Thisapproach to building intelligent robotic systems, which utilizes human brain signals for collabora-tion, holds immense potential for the development of critical assistive technologies for individualswith or without disabilities and to improve the quality of their life.2 Brain-Robot Interface (BRI): BackgroundSince Hans Berger’s discovery of EEG in 1924, several types of devices have been developed torecord human brain signals. We chose non-invasive, saline-based EEG due to its cost and acces-sibility to the general population, signal-to-noise ratio, temporal and spatial resolutions, and typesof signals that can be decoded (see Appendix 2). EEG captures the spontaneous electrical activityof the brain using electrodes placed on the scalp. EEG-based BRI has been applied to prosthetics,wheelchairs, as well as navigation and manipulation robots. For comprehensive reviews, see [22–25]. We utilize two types of EEG signals that are frequently employed in BRI, namely, steady-statevisually evoked potential (SSVEP) and motor imagery (MI).SSVEP is the brain’s exogenous response to periodic external visual stimulus [26], wherein the braingenerates periodic electrical activity at the same frequency as flickering visual stimulus. The appli-2Figure 2: NOIR has two components, a modular pipeline for decoding goals from human brainsignals, and a robotic system with a library of primitive skills. The robots possess the ability to learnto predict human intended goals hence reducing the human effort required for decoding.cation of SSVEP in assistive robotics often involves the usage of flickering LED lights physicallyaffixed to different objects [27, 28]. Attending to an object (and its attached LED light) will increasethe EEG response at that stimulus frequency, allowing the object’s identity to be inferred. 
Inspiredby prior work [15], our system utilizes computer vision techniques to detect and segment objects,attach virtual flickering masks to each object, and display them to the participants for selection.Motor Imagery (MI) differs from SSVEP due to its endogenous nature, requiring individuals tomentally simulate specific actions, such as imagining oneself manipulating an object. The decodedsignals can be used to indicate a human’s intended way of interacting with the object. This approachis widely used for rehabilitation, and for navigation tasks [29] in BRI systems. This approach oftensuffers from low decoding accuracy [22].Much existing BRI research focuses on the fundamental problem of brain signal decoding, whileseveral existing studies focus on how to make robots more intelligent and adaptive [13–17, 30]. In-spired by this line of work, we leverage few-shot policy learning algorithms to enable robots to learnhuman preferences and goals. This minimizes the necessity for extensive brain signal decoding,thereby streamlining the interaction process and enhancing overall efficiency.Our study is grounded in substantial advancements in both the field of brain signal decoding androbot learning. Currently, many existing BRI systems target only one or a few specific tasks. To thebest of our knowledge, no previous work has presented an intelligent, versatile system capable ofsuccessfully executing a wide range of complex tasks, as demonstrated in our study.3 The NOIR SystemThe challenges we try to tackle are: 1) How do we build a general-purpose BRI system that worksfor a variety of tasks? 2) How do we decode relevant communication signals from human brains? 3)How do we make robots more intelligent and adaptive for more efficient collaboration? An overviewof our system is shown in Fig. 2. Humans act as planning agents to perceive, plan, and communicatebehavioral goals to the robot, while robots use pre-defined primitive skills to achieve these goals.The overarching goal of building a general-purpose BRI system is achieved by synergistically inte-grating two designs together. First, we propose a novel modular brain decoding pipeline for humanintentions, in which the human intended goal is decomposed into three components: what, how, andwhere (Sec. 3.1). Second, we equip the robots with a library of parameterized primitive skills toaccomplish human-specified goals (Sec. 3.2). This design enables humans and robots to collaborateto accomplish a variety of challenging, long-horizon everyday tasks. At last, we show a key featureof NOIR to allow robots to act more efficiently and to be capable of adapting to individual users, weadopt few-shot imitation learning from humans (Sec. 3.3).3.1 The brain: A modular decoding pipelineWe hypothesize that the key to building a general-purpose EEG decoding system is modularization.Decoding complete behavioral goals (e.g., in the form of natural language) is only feasible with ex-pensive devices like fMRI, and with many hours of training data for each individual [31]. As shownin Fig. 3, we decompose human intention into three components: (a) What object to manipulate; (b)How to interact with the object; (c) Where to interact. The decoding of specific user intents from3Figure 3: A modular pipeline for decoding human intended goals from EEG signals: (a) What objectto manipulate, decoded from SSVEP signals using CCA classifiers; (b) How to interact with theobject, decoded from MI signals using CSP+QDA algorithms; (c) Where to interact, decoded fromMI signals. 
A safety mechanism that captures muscle tension from jaw clench is used to confirm orreject decoding results.EEG signals is challenging but can be done with steady-state visually evoked potential and motorimagery, as introduced in Sec. 2. For brevity, details of decoding algorithms are in Appendix 6.Selecting objects with steady-state visually evoked potential (SSVEP). Upon showing the taskset-up on a screen, we first infer the user’s intended object. We make objects on the screen flickerwith different frequencies (Fig. 3a), which, when focused on by the user, evokes SSVEP [26]. Byidentifying which frequency is stronger in the EEG data, we may infer the frequency of the flick-ering visual stimulus, and hence the object that the user focuses on. We apply modern computervision techniques to circumvent the problem of having to physically attach LED lights [27, 28].Specifically, we use the foundation model OWL-ViT [32] to detect and track objects, which takes inan image and object descriptions and outputs object segmentation masks. By overlaying each maskof different flickering frequencies ( 6Hz,7.5Hz,8.57Hz, and 10Hz[33, 34]), and having the userfocus on the desired object for 10 seconds, we are able to identify the attended object.We use only the signals from the visual cortex (Appendix 6) and preprocess the data with a notchfilter. We then use Canonical Correlation Analysis (CCA) for classification [35]. We create a Canon-ical Reference Signal (CRS), which is a set of sinandcoswaves, for each of our frequencies andtheir harmonics. We then use CCA to calculate the frequency whose CRS has the highest correlationwith the EEG signal, and identify the object that was made to flicker at that frequency.Selecting skill and parameters with motor imagery (MI). The user then chooses a skill and itsparameters. We frame this as a k-way ( k≤4) MI classification problem, where we aim to decodewhich of the kpre-decided actions the user was imagining. Unlike SSVEP, a small amount ofcalibration data (10-min) is required due to the distinct nature of each user’s MI signals. The fourclasses are: Left Hand ,Right Hand ,Legs , and Rest ; the class names describe the body parts thatusers imagine using to execute some skills (e.g. pushing a pedal with feet). Upon being presentedwith the list of kskill options, we record a 5-second EEG signal, and use a classifier trained on thecalibration data. The user then guides a cursor on the screen to the appropriate location for executingthe skill. To move the cursor along the xaxis, the user is prompted to imagine moving their Lefthand for leftward cursor movement. We record another five seconds of data and utilize a 2-wayclassifier. This process is repeated for x,y, and zaxes.For decoding, we use only EEG channels around the brain areas related to motor imagery (Appendix6). The data is band-pass-filtered between 8Hzand30Hzto include μ-band and β-band frequencyranges correlated with MI activity [36]. The classification algorithm is based on the common spatial4pattern (CSP) [37–40] algorithm and quadratic discriminant analysis (QDA). Due to its simplicity,CSP+QDA is explainable and amenable to small training datasets. Contour maps of electrode con-tributions to the top few CSP-space principal components are shown in the middle row of Fig. 3.There are distinct concentrations around the right and left motor areas, as well as the visual cortex(which correlates with the Rest class).Confirming or interrupting with muscle tension. 
Safety is critical in BRI due to noisy decoding.We follow a common practice and collect electrical signals generated from facial muscle tension(Electromyography, or EMG). This signal appears when users frown or clench their jaws, indicatinga negative response. This signal is strong with near-perfect decoding accuracy, and thus we use itto confirm or reject object, skill, or parameter selections. With a pre-determined threshold valueobtained through the calibration stage, we can reliably detect muscle tension from 500-ms windows.3.2 The robot: Parameterized primitive skillsOur robots must be able to solve a diverse set of manipulation tasks under the guidance of humans,which can be achieved by equipping them with a set of parameterized primitive skills. The benefitsof using these skills are that they can be combined and reused across tasks. Moreover, these skillsare intuitive to humans. Since skill-augmented robots have shown promising results in solving long-horizon tasks, we follow recent works in robotics with parameterized skills [41–52], and augmentthe action space of our robots with a set of primitive skills and their parameters. Neither the humannor the agent requires knowledge of the underlying control mechanism for these skills, thus the skillscan be implemented in any method as long as they are robust and adaptive to various tasks.We use two robots in our experiment: A Franka Emika Panda arm for tabletop manipulation tasks,and a PAL Tiago robot for mobile manipulation tasks (see Appendix for hardware details). Skills forthe Franka robot use the operational space pose controller (OSC) [53] from the Deoxys API [54].For example, Reaching skill trajectories are generated by numerical 3D trajectory interpolationconditioned on the current robot end-effector 6D pose and target pose. Then OSC controls therobot to reach the waypoints along the trajectory orderly. The Tiago robot’s navigation skill isimplemented using the ROS MoveBase package, while all other skills are implemented using MoveItmotion planning framework [55]. A complete list of skills for both robots is in Appendix 3. Later,we will show that humans and robots can work together using these skills to solve all the tasks.3.3 Leveraging robot learning for efficient BRIThe modular decoding pipeline and the primitive skill library lay the foundation for NOIR. How-ever, the efficiency of such a system can be further improved. During the collaboration, the robotsshould learn the user’s object, skill, and parameter selection preferences, hence in future trials, therobot can predict users’ intended goals and be more autonomous, hence reducing the effort requiredfor decoding. Learning and generalization are required since the location, pose, arrangement, andinstance of the objects could differ from trial to trial. Meanwhile, the learning algorithms should besample-efficient since human data is expensive to collect.Retrieval-based few-shot object and skill selection. In NOIR, human effort can be reduced if therobot intelligently learns to propose appropriate object-skill selections for a given state in the task.Inspired by retrieval-based imitation learning [56–58], our proposed method learns a latent staterepresentation from observed states. Given a new state observation, it finds the most similar state inthe latent space and the corresponding action. Our method is shown in Fig. 4. 
During task execu-tion, we record data points that consist of images and the object-skill pairs selected by the human.The images are first encoded by a pre-trained R3M model [59] to extract useful features for robotmanipulation tasks, and are then passed through several trainable, fully-connected layers. Theselayers are trained using contrastive learning with a triplet loss[60] that encourages the images withthe same object-skill label to be embedded closer in the latent space. The learned image embeddingsand object-skill labels are stored in the memory. During test time, the model retrieves the nearestdata point in the latent space and suggests the object-action pair associated with that data point tothe human. Details of the algorithm can be found in Appendix 7.1.One-shot skill parameter learning. Parameter selection requires a lot of human effort as it needsprecise cursor manipulation through MI. To reduce human effort, we propose a learning algorithm5Figure 4: Left: Retrieval-based few-shot object and skill selection model. The model learns a latentrepresentation for observations. Given a new observation, it finds the most relevant experience inthe memory and selects the corresponding skill and object. Right: One-shot skill parameter learningalgorithm, which finds a semantically corresponding point in the test image given a reference pointin the training image. The feature visualization shows 3 of the 768 DINOv2 tokens used.for predicting parameters given an object-skill pair as an initial point for cursor control. Assumingthat the user has once successfully pinpointed the precise key point to pick a mug’s handle, doesthis parameter need to be specified again in the future? Recent advancement in foundation modelssuch as DINOv2 [61] allows us to find corresponding semantic key points, eliminating the needfor parameter re-specification. Compared to previous works, our algorithm is one-shot [62–66] andpredicts specific 2D points instead of semantic segments [67, 68]. As shown in Fig. 4, given atraining image ( 360×240) and parameter choice (x, y), we predict the semantically correspondingpoint in the test images, in which positions, orientations, instances of the target object, and contextsmay vary. We utilize a pre-trained DINOv2 model to obtain semantic features [61]. We input bothtrain and test images into the model and generate 768 patch tokens, each as a pixel-wise featuremap of dimension 75×100. We then extract a 3×3patch centered around the provided trainingparameter and search for a matching feature in the test image, using cosine similarity as the distancemetric. Details of this algorithm can be found in Appendix 7.2.4 ExperimentsTasks. NOIR can greatly benefit those who require assistance with everyday activities. We selecttasks from the BEHA VIOR benchmark [69] and Activities of Daily Living [70] to capture actualhuman needs. The tasks are shown in Fig. 1, and consist of 16 tabletop tasks and four mobilemanipulation tasks. The tasks encompass various categories, including eight meal preparation tasks,six cleaning tasks, three personal care tasks, and three entertainment tasks. For systematic evaluationof task success, we provide formal definitions of these activities in the BDDL language format [69,71], which specifies the initial and goal conditions of a task using first-order logic. Task definitionsand figures can be found in Appendix 4.Procedure. 
The human study conducted has received approval from Institutional Review Board.Three healthy human participants (2 male, 1 female) performed all 15 Franka tasks. Sukiyaki, fourTiago tasks, and learning tasks are performed by one user. We use the EGI NetStation EEG system,which is completely non-invasive, making almost everyone an ideal subject. Before experiments,users are familiarized with task definitions and system interfaces. During the experiment, users stayin an isolated room, remain stationary, watch the robot on a screen, and solely rely on brain signalsto communicate with the robots (more details about the procedure can be found in Appendix 5).5 ResultsWe seek to provide answers to the following questions through extensive evaluation: 1) Is NOIRtruly general-purpose, in that it allows all of our human subjects to accomplish the diverse set ofeveryday tasks we have proposed? 2) Does our decoding pipeline provide accurate decoding results?3) Does our proposed robot learning and intention prediction algorithm improve NOIR’s efficiency?System performance. Table 1 summarizes the performance based on two metrics: the number ofattempts until success and the time to complete the task in successful trials. When the participantreached an unrecoverable state in task execution, we reset the environment and the participant re-attempted the task from the beginning. Task horizons (number of primitive skills executed) are6Task WipeSpill CollectToy SweepTrash CleanBook IronCloth OpenBasket PourTea SetTable GrateCheese CutBananaTask horizon 4.33 7.67 5.67 7.00 4.67 5.33 4.00 8.33 7.00 5.33# Attempts 1.00 1.33 2.33 3.33 2.33 1.67 1.67 5.67 1.33 1.67Time (min) 14.74 25.24 20.59 27.73 16.95 15.90 13.53 20.91 24.98 17.68Human time (%) 79.02 83.97 82.34 80.00 79.56 82.03 83.15 81.15 81.79 81.21Task CookPasta Sandwich Hockey OpenGift TicTacToe Sukiyaki TrashDisposal CovidCare WaterPlant PetDogTask horizon 8.33 9.00 5.00 7.00 14.33 13.00 8.00 8.00 4.00 6.00# Attempts 1.67 1.67 1.33 2.67 2.00 1.00 1.00 1.00 1.00 1.00Time (min) 30.06 27.87 15.83 23.57 43.08 43.45 7.25 8.80 3.00 4.58Human time (%) 83.26 82.71 82.00 79.90 80.54 84.85 55.32 62.29 87.41 87.53Table 1: NOIR system performance. Task horizon is the average number of primitive skills executed.# attempts indicate the average number of attempts until the first success (1 means success on the firstattempt). Time indicates the task completion time in successful trials. Human time is the percentageof the total time spent by human users, this includes decision-making time and decoding time. Withonly a few attempts, all users can accomplish these challenging tasks.included as a reference. Although these tasks are long-horizon and challenging, NOIR shows veryencouraging results: on average, tasks can be completed with only 1.83 attempts. The reason fortask failures is human errors in skill and parameter selection, i.e., the users pick the wrong skills orparameters, which leads to non-recoverable states and needs manual resets. Decoding errors or robotexecution errors are avoided thanks to our safety mechanism with confirmation and interruption.Although our primitive skill library is limited, human users find novel usage of these skills to solvetasks creatively. Hence we observe emerging capabilities such as extrinsic dexterity. For example,in task CleanBook (Fig. 1.6), Franka’s Pick skill is not designed to grasp a book from the table, butusers learn to push the book towards the edge of the table and grasp it from the side. In CutBanana(Fig. 
1.12), users utilize Push skill to cut. The average task completion time is 20.29 minutes.Note that the time humans spent on decision-making and decoding is relatively long (80% of totaltime), partially due to the safety mechanism. Later, we will show that our proposed robot learningalgorithms can address this issue effectively.Decoding accuracy. A key to our system’s success is the accuracy in decoding brain signals. Ta-ble 2 summarizes the decoding accuracy of different stages. We find that CCA on SSVEP produces ahigh accuracy of 81.2%, meaning that object selection is mostly accurate. As for CSP + QDA on MIfor parameter selection, the 2-way classification model performs at 73.9%accuracy, which is con-sistent with current literature [36]. The 4-way skill-selection classification models perform at about42.2%accuracy. Though this may not seem high, it is competitive considering inconsistencies at-tributed to long task duration (hence the discrepancy between calibration and task-time accuracies).Our calibration time is only 10 minutes, which is significantly shorter compared to the duration oftypical MI calibration and training sessions by several orders of magnitude [21]. More calibrationprovides more data for training more robust classifiers, and allows human users to practice morewhich typically yields stronger brain signals. Overall, the decoding accuracy is satisfactory, andwith the safety mechanism, there has been no instance of task failure caused by incorrect decoding.Object and skill selection results. We then answer the third question: Does our proposed robotlearning algorithm improve NOIR’s efficiency? First, we evaluate object and skill selection learn-ing. We collect a dataset offline with 15 training samples for each object-skill pair in MakePastatask. Given an image, a prediction is considered correct if both the object and the skill are pre-dicted correctly. Results are shown in Table 3. While a simple image classification model usingResNet [72] achieves an average accuracy of 0.31, our method with a pre-trained ResNet backboneachieves significantly higher accuracy at 0.73, highlighting the importance of contrastive learningand retrieval-based learning. Using R3M as the feature extractor further improves the performanceto 0.94. The generalization ability of the algorithm is tested on the same MakePasta task. Forinstance-level generalization, 20 different types of pasta are used; for context generalization, werandomly select and place 20 task-irrelevant objects in the background. Results are shown in Table3. In all variations, our model achieves accuracy over 93%, meaning that the human can skip theskill and object selection 93% of the time, significantly reducing their time and effort. We furthertest our algorithm during actual task execution (Fig. 5). A human user completes the task with andwithout object-skill prediction two times each. With object and skill learning, the average time re-quired for each object-skill selection is reduced by 60% from 45.7to18.1seconds. More detailsabout the experiments and visualization of learned representation can be found in Appendix 7.1.7Decoding Stage Signal Technique Calibration Acc. Task-Time Acc.Object selection (What?) SSVEP CCA (4-way) - 0.812Skill selection (How?) MI CSP + QDA (4-way) 0.580 0.422Parameter selection (Where?) MI CSP + QDA (2-way) 0.882 0.739Confirmation / interruption EMG Thresholding (2-way) 1.0 1.0Table 2: Decoding accuracy at different stages of the experiment.One-shot parameter learning results. 
First, using our pre-collected dataset (see Appendix 7.2),we compare our algorithm against multiple baselines. The MSE values of the predictions are shownin Table 4. Random sample shows the average error when randomly predicting points in the 2Dspace. Sample on objects randomly predicts a point on objects and not on the background; the ob-ject masks here are detected with the Segment Anything Model (SAM) [73]. For Pixel similarity ,we employ the cosine similarity and sliding window techniques used in our algorithm, but on rawimages without using DINOv2 features. All of the baselines are drastically outperformed by ouralgorithm. Second, our one-shot learning method demonstrates robust generalization capability, astested on the respective dataset; table 4 presents the results. The low prediction error means thatusers spend much less effort in controlling the cursor to move to the desired position. We fur-ther demonstrate the effectiveness of the parameter learning algorithm in actual task execution forSetTable , quantified in terms of saved human effort in controlling the cursor movement (Fig. 5).Without learning, the cursor starts at the chosen object or the center of the screen. The predictedresult is used as the starting location for cursor control which led to a considerable decrease in cursormovement, with the mean distance reduced by 41%. These findings highlight the potential of pa-rameter learning in improving efficiency and reducing human effort. More results and visualizationscan be found in Appendix 7.2.Method Acc. ↑ Generalization Acc. ↑Random 0.12 ±0.02 Position 0.95 ±0.04Classfication (ResNet) 0.31 ±0.11 Pose 0.94 ±0.04Ours (ResNet) 0.73 ±0.09 Instance 0.93 ±0.02Ours (R3M) 0.94 ±0.04 Context 0.98 ±0.02Table 3: Object-skill learning results. Our methodis highly accurate and robust.Method MSE ↓ Generalization MSE ↓Random sample 175.8 ±29.7 Position 5.6 ±6.0Sample on objects 137.2 ±55.7 Orientation 12.0 ±11.7Pixel similarity 45.9 ±50.1 Instance 16.4 ±22.2Ours 15.8 ±23.8 Context 26.8 ±62.5Table 4: One-shot parameter learning results. Ourmethod is highly accurate and generalizes well.Figure 5: Left: Object and skill selection learn-ing reduces the decoding time by 60%. Right:Parameter learning decreases cursor movementdistance by 41%.6 Conclusion, Limitations, and Ethical ConcernsIn this work, we presented a general-purpose, intelligent BRI system that allows human users tocontrol a robot to accomplish a diverse, challenging set of real-world activities using brain signals.NOIR enables human intention prediction through few-shot learning, thereby facilitating a moreefficient collaborative interaction. NOIR holds a significant potential to augment human capabilitiesand enable critical assistive technology for individuals who require everyday support.NOIR represents a pioneering effort in the field, unveiling potential opportunities while simultane-ously raising questions about its limitations and potential ethical risks which we address in Appendix1. The decoding speed, as it currently stands, restricts tasks to those devoid of time-sensitive inter-actions. However, advancements in the field of neural signal decoding hold promise for alleviatingthis concern. Furthermore, the compilation of a comprehensive library of primitive skills presents along-term challenge in robotics, necessitating additional exploration and development. 
Nonetheless,we maintain that once a robust set of skills is successfully established, human users will indeed becapable of applying these existing skills to complete new tasks.8AcknowledgmentsThe work is in part supported by NSF CCRI #2120095, ONR MURI N00014-22-1-2740, N00014-23-1-2355, N00014-21-1-2801, AFOSR YIP FA9550-23-1-0127, the Stanford Institute for Human-Centered AI (HAI), Amazon, Salesforce, and JPMC.References[1] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learningmethods. ACM Computing Surveys (CSUR) , 50(2):1–35, 2017.[2] R. Zhang, D. Bansal, Y . Hao, A. Hiranaka, J. Gao, C. Wang, R. Mart ́ın-Mart ́ın, L. Fei-Fei,and J. Wu. A dual representation framework for robot learning with human guidance. InConference on Robot Learning , pages 738–750. PMLR, 2023.[3] L. Guan, M. Verma, S. S. Guo, R. Zhang, and S. Kambhampati. Widening the pipeline inhuman-guided reinforcement learning with explanation and context-aware data augmentation.Advances in Neural Information Processing Systems , 34:21885–21897, 2021.[4] H. Admoni and B. Scassellati. Social eye gaze in human-robot interaction: a review. Journalof Human-Robot Interaction , 6(1):25–63, 2017.[5] A. Saran, R. Zhang, E. S. Short, and S. Niekum. Efficiently guiding imitation learning agentswith human gaze. In Proceedings of the 20th International Conference on Autonomous Agentsand MultiAgent Systems , pages 1109–1117, 2021.[6] R. Zhang, Z. Liu, L. Zhang, J. A. Whritner, K. S. Muller, M. M. Hayhoe, and D. H. Ballard.Agil: Learning attention from human for visuomotor tasks. In Proceedings of the europeanconference on computer vision (eccv) , pages 663–679, 2018.[7] R. Zhang, A. Saran, B. Liu, Y . Zhu, S. Guo, S. Niekum, D. Ballard, and M. Hayhoe. Humangaze assisted artificial intelligence: A review. In IJCAI: Proceedings of the Conference , volume2020, page 4951. NIH Public Access, 2020.[8] Y . Cui, Q. Zhang, B. Knox, A. Allievi, P. Stone, and S. Niekum. The empathic frameworkfor task learning from implicit human feedback. In Conference on Robot Learning , pages604–626. PMLR, 2021.[9] A. Brohan, Y . Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang,R. Julian, et al. Do as i can, not as i say: Grounding language in robotic affordances. InConference on Robot Learning , pages 287–318. PMLR, 2023.[10] W. Huang, C. Wang, R. Zhang, Y . Li, J. Wu, and L. Fei-Fei. V oxposer: Composable 3d valuemaps for robotic manipulation with language models. In 7th Annual Conference on RobotLearning , 2023.[11] R. Zhang, F. Torabi, L. Guan, D. H. Ballard, and P. Stone. Leveraging human guidance fordeep reinforcement learning tasks. arXiv preprint arXiv:1909.09906 , 2019.[12] R. Zhang, F. Torabi, G. Warnell, and P. Stone. Recent advances in leveraging human guidancefor sequential decision-making tasks. Autonomous Agents and Multi-Agent Systems , 35(2):31,2021.[13] I. Akinola, Z. Wang, J. Shi, X. He, P. Lapborisuth, J. Xu, D. Watkins-Valls, P. Sajda, andP. Allen. Accelerated robot learning via human brain signals. In 2020 IEEE internationalconference on robotics and automation (ICRA) , pages 3799–3805. IEEE, 2020.[14] Z. Wang, J. Shi, I. Akinola, and P. Allen. Maximizing bci human feedback using active learn-ing. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) ,pages 10945–10951. IEEE, 2020.9[15] I. Akinola, B. Chen, J. Koss, A. Patankar, J. Varley, and P. Allen. Task level hierarchicalsystem for bci-enabled shared autonomy. 
In 2017 IEEE-RAS 17th International Conferenceon Humanoid Robotics (Humanoids) , pages 219–225. IEEE, 2017.[16] A. F. Salazar-Gomez, J. DelPreto, S. Gil, F. H. Guenther, and D. Rus. Correcting robot mis-takes in real time using eeg signals. In 2017 IEEE international conference on robotics andautomation (ICRA) , pages 6570–6577. IEEE, 2017.[17] L. Schiatti, J. Tessadori, N. Deshpande, G. Barresi, L. C. King, and L. S. Mattos. Human in theloop of robot learning: Eeg-based reward signal for target identification and reaching task. In2018 IEEE International Conference on Robotics and Automation (ICRA) , pages 4473–4480.IEEE, 2018.[18] M. Aljalal, R. Djemal, and S. Ibrahim. Robot navigation using a brain computer interfacebased on motor imagery. Journal of Medical and Biological Engineering , 39:508–522, 2019.[19] Y . Xu, C. Ding, X. Shu, K. Gui, Y . Bezsudnova, X. Sheng, and D. Zhang. Shared controlof a robotic arm using non-invasive brain–computer interface and computer vision guidance.Robotics and Autonomous Systems , 115:121–129, 2019.[20] X. Chen, B. Zhao, Y . Wang, S. Xu, and X. Gao. Control of a 7-dof robotic arm system with anssvep-based bci. International journal of neural systems , 28(08):1850018, 2018.[21] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He. Noninvasive electroencephalo-gram based control of a robotic arm for reach and grasp tasks. Scientific Reports , 6(1):38565,2016.[22] M. Aljalal, S. Ibrahim, R. Djemal, and W. Ko. Comprehensive review on brain-controlledmobile robots and robotic arms based on electroencephalography signals. Intelligent ServiceRobotics , 13:539–563, 2020.[23] L. F. Nicolas-Alonso and J. Gomez-Gil. Brain computer interfaces, a review. sensors , 12(2):1211–1279, 2012.[24] L. Bi, X.-A. Fan, and Y . Liu. Eeg-based brain-controlled mobile robots: a survey. IEEEtransactions on human-machine systems , 43(2):161–176, 2013.[25] N. M. Krishnan, M. Mariappan, K. Muthukaruppan, M. H. A. Hijazi, and W. W. Kitt. Elec-troencephalography (eeg) based control in assistive mobile robots: A review. In IOP Confer-ence Series: Materials Science and Engineering , volume 121, page 012017. IOP Publishing,2016.[26] E. D. Adrian and B. H. Matthews. The berger rhythm: potential changes from the occipitallobes in man. Brain , 57(4):355–385, 1934.[27] C. J. Perera, I. Naotunna, C. Sadaruwan, R. A. R. C. Gopura, and T. D. Lalitharatne. Ssvepbased bmi for a meal assistance robot. In 2016 IEEE International Conference on Systems,Man, and Cybernetics (SMC) , pages 002295–002300. IEEE, 2016.[28] J. Ha, S. Park, C.-H. Im, and L. Kim. A hybrid brain–computer interface for real-life meal-assist robot control. Sensors , 21(13):4578, 2021.[29] J. Zhang and M. Wang. A survey on robots controlled by motor imagery brain-computerinterfaces. Cognitive Robotics , 1:12–24, 2021.[30] X. Mao, M. Li, W. Li, L. Niu, B. Xian, M. Zeng, and G. Chen. Progress in eeg-based brainrobot interaction systems. Computational intelligence and neuroscience , 2017, 2017.[31] J. Tang, A. LeBel, S. Jain, and A. G. Huth. Semantic reconstruction of continuous languagefrom non-invasive brain recordings. Nature Neuroscience , pages 1–9, 2023.10[32] M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Ma-hendran, A. Arnab, M. Dehghani, Z. Shen, X. Wang, X. Zhai, T. Kipf, and N. Houlsby. Simpleopen-vocabulary object detection with vision transformers, 2022.[33] D. Zhu, J. Bieger, G. G. Molina, and R. M. Aarts. A survey of stimulation methods used inssvep-based bcis. 
Computational intelligence and neuroscience , 2010:1–12, 2010.[34] R. Ku ́s, A. Duszyk, P. Milanowski, M. Łabecki, M. Bierzy ́nska, Z. Radzikowska, M. Michal-ska, J. ̇Zygierewicz, P. Suffczy ́nski, and P. J. Durka. On the quantification of ssvep frequencyresponses in human eeg in realistic bci conditions. PloS one , 8(10):e77536, 2013.[35] L. Shao, L. Zhang, A. N. Belkacem, Y . Zhang, X. Chen, J. Li, and H. Liu. Eeg-controlledwall-crawling cleaning robot using ssvep-based brain-computer interface, 2020.[36] N. Padfield, J. Zabalza, H. Zhao, V . Masero, and J. Ren. Eeg-based brain-computer interfacesusing motor-imagery: Techniques and challenges. Sensors , 19(6):1423, 2019.[37] S.-L. Wu, C.-W. Wu, N. R. Pal, C.-Y . Chen, S.-A. Chen, and C.-T. Lin. Common spatial patternand linear discriminant analysis for motor imagery classification. In 2013 IEEE Symposium onComputational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB) , pages 146–151.IEEE, 2013.[38] S. Kumar, A. Sharma, and T. Tsunoda. An improved discriminative filter bank selection ap-proach for motor imagery eeg signal classification using mutual information. BMC bioinfor-matics , 18:125–137, 2017.[39] Y . Zhang, Y . Wang, J. Jin, and X. Wang. Sparse bayesian learning for obtaining sparsity of eegfrequency bands based feature vectors in motor imagery classification. International journalof neural systems , 27(02):1650032, 2017.[40] B. Yang, H. Li, Q. Wang, and Y . Zhang. Subject-based feature extraction by using fisherwpd-csp in brain–computer interfaces. Computer methods and programs in biomedicine , 129:21–28, 2016.[41] R. Chitnis, T. Silver, J. B. Tenenbaum, T. Lozano-Perez, and L. P. Kaelbling. Learning neuro-symbolic relational transition models for bilevel planning. In 2022 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 4166–4173. IEEE, 2022.[42] S. Nasiriany, H. Liu, and Y . Zhu. Augmenting reinforcement learning with behavior primitivesfor diverse manipulation tasks. In 2022 International Conference on Robotics and Automation(ICRA) , pages 7477–7484. IEEE, 2022.[43] Y . Zhu, J. Tremblay, S. Birchfield, and Y . Zhu. Hierarchical planning for long-horizon manip-ulation with geometric and symbolic scene graphs. In 2021 IEEE International Conference onRobotics and Automation (ICRA) , pages 6541–6548. IEEE, 2021.[44] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipu-lation. In Conference on Robot Learning , pages 894–906. PMLR, 2022.[45] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. arXiv preprint arXiv:2209.05451 , 2022.[46] W. Liu, C. Paxton, T. Hermans, and D. Fox. Structformer: Learning spatial structure forlanguage-guided semantic rearrangement of novel objects. In 2022 International Conferenceon Robotics and Automation (ICRA) , pages 6322–6329. IEEE, 2022.[47] D. Xu, A. Mandlekar, R. Mart ́ın-Mart ́ın, Y . Zhu, S. Savarese, and L. Fei-Fei. Deep affor-dance foresight: Planning through what can be done in the future. In 2021 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 6206–6213. IEEE, 2021.11[48] C. Wang, D. Xu, and L. Fei-Fei. Generalizable task planning through representation pretrain-ing. IEEE Robotics and Automation Letters , 7(3):8299–8306, 2022.[49] S. Cheng and D. Xu. Guided skill learning and abstraction for long-horizon manipulation.arXiv preprint arXiv:2210.12631 , 2022.[50] C. Agia, T. Migimatsu, J. Wu, and J. Bohg. 
Taps: Task-agnostic policy sequencing. arXivpreprint arXiv:2210.12250 , 2022.[51] C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Mart ́ın-Mart ́ın, C. Wang, G. Levine,M. Lingelbach, J. Sun, et al. Behavior-1k: A benchmark for embodied ai with 1,000 everydayactivities and realistic simulation. In 6th Annual Conference on Robot Learning , 2022.[52] A. Hiranaka, M. Hwang, S. Lee, C. Wang, L. Fei-Fei, J. Wu, and R. Zhang. Primitive skill-based robot learning from human evaluative feedback. arXiv preprint arXiv:2307.15801 , 2023.[53] O. Khatib. A unified approach for motion and force control of robot manipulators: The opera-tional space formulation. IEEE Journal on Robotics and Automation , 3(1):43–53, 1987.[54] Y . Zhu, A. Joshi, P. Stone, and Y . Zhu. Viola: Object-centric imitation learning for vision-basedrobot manipulation. In Conference on Robot Learning , pages 1199–1210. PMLR, 2023.[55] D. Coleman, I. Sucan, S. Chitta, and N. Correll. Reducing the barrier to entry of complexrobotic software: a moveit! case study, 2014.[56] E. Mansimov and K. Cho. Simple nearest neighbor policy method for continuous control tasks,2018. URL https://openreview.net/forum?id=ByL48G-AW .[57] S. Nasiriany, T. Gao, A. Mandlekar, and Y . Zhu. Learning and retrieval from prior data forskill-based imitation learning. In 6th Annual Conference on Robot Learning , 2022.[58] M. Du, S. Nair, D. Sadigh, and C. Finn. Behavior retrieval: Few-shot imitation learning byquerying unlabeled datasets, 2023.[59] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. In 6th Annual Conference on Robot Learning , 2021.[60] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighborclassification. Journal of machine learning research , 10(2), 2009.[61] M. Oquab, T. Darcet, T. Moutakanni, H. V o, M. Szafraniec, V . Khalidov, P. Fernandez, D. Haz-iza, F. Massa, A. El-Nouby, M. Assran, N. Ballas, W. Galuba, R. Howes, P.-Y . Huang, S.-W. Li,I. Misra, M. Rabbat, V . Sharma, G. Synnaeve, H. Xu, H. Jegou, J. Mairal, P. Labatut, A. Joulin,and P. Bojanowski. Dinov2: Learning robust visual features without supervision, 2023.[62] T. Luddecke and F. Worgotter. Learning to segment affordances. In 2017 IEEE InternationalConference on Computer Vision Workshops (ICCVW) , pages 769–776. IEEE, 2017.[63] A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis. Object-based affordancesdetection with convolutional neural networks and dense conditional random fields. In 2017IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages 5908–5915. IEEE, 2017.[64] A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis. Translating videos to com-mands for robotic manipulation with deep recurrent neural networks. In IEEE InternationalConference on Robotics and Automation (ICRA) , pages 3782–3788. IEEE, 2018.[65] T. Mar, V . Tikhanoff, G. Metta, and L. Natale. Self-supervised learning of grasp dependenttool affordances on the icub humanoid robot. In IEEE International Conference on Roboticsand Automation (ICRA) , pages 3200–3206. IEEE, 2015.12[66] T. Mar, V . Tikhanoff, G. Metta, and L. Natale. Self-supervised learning of tool affordancesfrom 3d tool representation through parallel som mapping. In IEEE International Conferenceon Robotics and Automation (ICRA) , pages 894–901. IEEE, 2017.[67] H. Luo, W. Zhai, J. Zhang, Y . Cao, and D. Tao. One-shot affordance detection. 
In Proceedingsof the 30th International Joint Conference on Artificial Intelligence (IJCAI) , 2021.[68] D. Hadjivelichkov, S. Zwane, M. P. Deisenroth, L. Agapito, and D. Kanoulas. One-shot transferof affordance regions? affcorrs! In 6th Conference on Robot Learning (CoRL) , 2022.[69] C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Mart ́ın-Mart ́ın, C. Wang, G. Levine,M. Lingelbach, J. Sun, et al. Behavior-1k: A benchmark for embodied ai with 1,000 everydayactivities and realistic simulation. In Conference on Robot Learning , pages 80–93. PMLR,2023.[70] S. Katz. Assessing self-maintenance: activities of daily living, mobility, and instrumentalactivities of daily living. Journal of the American Geriatrics Society , 1983.[71] S. Srivastava, C. Li, M. Lingelbach, R. Mart ́ın-Mart ́ın, F. Xia, K. E. Vainio, Z. Lian, C. Gok-men, S. Buch, K. Liu, et al. Behavior: Benchmark for everyday household activities in virtual,interactive, and ecological environments. In Conference on Robot Learning , pages 477–490.PMLR, 2022.[72] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Pro-ceedings of the IEEE conference on computer vision and pattern recognition , pages 770–778,2016.[73] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead,A. C. Berg, W.-Y . Lo, P. Doll ́ar, and R. Girshick. Segment anything. arXiv:2304.02643 , 2023.[74] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y . Ng, et al.Ros: an open-source robot operating system. In ICRA Workshop on Open Source Software ,volume 3, page 5. Kobe, Japan, 2009.[75] B. Blankertz, K.-R. Muller, G. Curio, T. M. Vaughan, G. Schalk, J. R. Wolpaw, A. Schlogl,C. Neuper, G. Pfurtscheller, T. Hinterberger, et al. The bci competition 2003: progress and per-spectives in detection and discrimination of eeg single trials. IEEE transactions on biomedicalengineering , 51(6):1044–1051, 2004.[76] R. Scherer and C. Vidaurre. Motor imagery based brain–computer interfaces. In SmartWheelchairs and Brain-Computer Interfaces , pages 171–195. Elsevier, 2018.[77] A. Ravi, N. H. Beni, J. Manuel, and N. Jiang. Comparing user-dependent and user-independenttraining of cnn for ssvep bci. In Journal of Neural Engineering , 2020.[78] A. Gramfort, M. Luessi, E. Larson, D. A. Engemann, D. Strohmeier, C. Brodbeck, R. Goj,M. Jas, T. Brooks, L. Parkkonen, et al. Meg and eeg data analysis with mne-python. Frontiersin neuroscience , page 267, 2013.[79] H. Ramoser, J. Muller-Gerking, and G. Pfurtscheller. Optimal spatial filtering of single trialeeg during imagined hand movement. IEEE transactions on rehabilitation engineering , 8(4):441–446, 2000.[80] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighborclassification. Journal of Machine Learning Research , 10(9):207–244, 2009. URL http://jmlr.org/papers/v10/weinberger09a.html .[81] L. van der Maaten and G. Hinton. Viualizing data using t-sne. Journal of Machine LearningResearch , 9:2579–2605, 11 2008.13Appendix 1: Questions and Answers about NOIRQ (Safety) : Is EEG safe to use? Are there any potential risks or side effects of using the EEG forextended periods of time?A: EEG devices are generally safe with no known side effects and risks, especially when comparedto invasive devices like implants. We use saline solution to lower electrical impedance and improveconductance. 
The solution could cause minor skin irritation when the net is used for extendedperiods of time, hence we mix the solution with baby shampoo to mitigate this.Q (Safety) : How does the system ensure user safety, particularly in the context of real-world taskswith varying environments and unpredictable events?A: We implement an EEG-controlled safety mechanism to confirm or interrupt robot actions withmuscle tension, as decoded through clenching. Nevertheless, it is important to note that the currentimplementation entails a 500ms delay when interrupting robot actions which might lead to a poten-tial risk in more dynamic tasks. With more training data using a shorter decoding window, the issuecan be potentially mitigated.Q (Universality) : Can EEG / NOIR be applied to different people? Given that the paper has onlybeen tested on three human subjects, how can the authors justify the generalizability of the findings?A: The EEG device employed in our research is versatile, catering to both adults and children asyoung as five years old. Accompanied by SensorNets of varying sizes, the device ensures compati-bility with different head dimensions. Our decoding methods have been thoughtfully designed withdiversity and inclusion in mind, drawing upon two prominent EEG signals: steady-state visuallyevoked potential and motor imagery. These signals have exhibited efficacy across a wide range ofindividuals. However, it is important to acknowledge that the interface of our system, NOIR, isexclusively visual in nature, rendering it unsuitable for individuals with severe visual impairments.Q (Portability) : Can EEG be used outside the lab?A: While mobile EEG devices offer portability, it is worth noting that they often exhibit a com-paratively much lower signal-to-noise ratio. Various sources contribute to the noise present in EEGsignals, including muscle movements, eye movements, power lines, and interference from other de-vices. These sources of noise exist in and outside of the lab; consequently, though we’ve chosento implement robust decoding techniques based on classical statistics, more robust further filteringtechniques to mitigate these unwanted artifacts and extract meaningful information accurately areneeded for greater success in more chaotic environments.Q (Privacy) : How does the system differentiate between intentional brain signals for task executionand other unrelated brain activity? How will you address potential issues of privacy and security?A: The decoding algorithms employed in our study were purposefully engineered to exclusivelycapture task-relevant signals, ensuring the exclusion of any extraneous information. Adhering tothe principles of data privacy and in compliance with the guidelines set by the Institutional ReviewBoard (IRB) for human research, the data collected from participants during calibration and ex-perimental sessions were promptly deleted following the conclusion of each experiment. Only thedecoded signals, stripped of any identifying information, were retained for further analysis.Q (Scalability) : How scalable is the robotics system? Can it be easily adapted to different robotplatforms or expanded to accommodate a broader range of tasks beyond the 20 household activitiestested?A: Within the context of our study, two notable constraints are the speed of decoding and theavailability of primitive skills. The former restricts the range of tasks to those that do not involvetime-sensitive and dynamic interactions. 
However, advances in decoding accuracy and a reduction of the decoding window duration may eventually address this limitation. These improvements can potentially be achieved through larger training datasets and machine-learning-based decoding models that leverage the high temporal resolution offered by EEG.
The development of a comprehensive library of primitive skills stands as a long-term objective in the field of robotics research. This entails creating a repertoire of fundamental abilities that can be adapted and combined to address new tasks. Additionally, our findings indicate that human users possess the ability to innovate and devise novel applications of existing skills to accomplish tasks, akin to the way humans employ tools.
Q (Potential impact): How exactly do both individuals with and without disabilities benefit from this BRI system?
A: The potential applications of systems like NOIR in the future are vast and diverse. One significant area where these systems can have a profound impact is in assisting individuals with disabilities, particularly those with mobility-related impairments. By enabling these individuals to accomplish Activities of Daily Living and Instrumental Activities of Daily Living [70] tasks, such systems can greatly enhance their independence and overall quality of life. Currently, individuals without disabilities may initially find the BRI pipeline to have a learning curve, resulting in inefficiencies compared to their own performance in daily activities in their first few attempts. However, robot learning methods hold the promise of addressing these inefficiencies over time and of enabling robots to help their users when needed.
Appendix 2: Comparison between Different Brain Recording Devices
We use the EGI NetStation EEG system with rapid-application 128-channel saline-based EGI SensorNets. Here we justify our choice of non-invasive, saline-based EEG as the recording device for brain signals. A comparison of different brain recording devices (gel-based EEG, dry EEG, MEG, fMRI, fNIRS, implant) and their advantages and disadvantages is shown in Table 5, using our device as the baseline.
Property | gel-based EEG | dry EEG | MEG | fMRI | fNIRS | implant
Invasive? | No | No | No | No | No | Yes
Cost | similar | lower | higher | higher | varies | higher
Universality | similar | better | similar | similar | similar | worse
Setup time | longer | shorter | similar | similar | longer | longer
Signal-to-noise ratio | better | worse | - | - | - | better
Temporal resolution | similar | lower | similar | lower | lower | -
Spatial resolution | similar | lower | higher | higher | higher | -
Table 5: A comparison between brain recording devices, using our saline-based EEG device as the baseline. Note that the comparison is based on the average products that are available on the market for research, and does not account for specialized or customized devices. Universality considers whether the device can be used by the general population. For signal-to-noise ratio, MEG, fMRI, and fNIRS record different types of neural signals which are not directly comparable to EEG. For implants, the temporal and spatial resolution largely depends on the particular type of implant device used.
Two noticeable alternatives are functional magnetic resonance imaging (fMRI) and invasive implants. fMRI measures the small changes in blood flow that occur with brain activity and has a very high spatial resolution, so fine-grained information such as object categories and language [31] can be decoded from it.
But fMRI suffers from low temporal resolution, and the recording device is extremely costly and cannot be used in daily scenarios. Brain implants have a very good signal-to-noise ratio and great potential. However, the main concern is that they require surgery to be applied, and the health-related risks are not negligible.
Appendix 3: System Setup
Robot platform. The robot we use in our tabletop manipulation tasks is a standard Franka Emika robot arm with three RealSense cameras. For mobile manipulation, we use a Tiago++ model from PAL Robotics, with an omnidirectional base, two 7-degrees-of-freedom arms with parallel-jaw grippers, a 1-degree-of-freedom prismatic torso, two SICK LiDAR sensors (back and front of the base), and an ASUS Xtion RGB-D camera mounted on the robot's head, which can be controlled in yaw and pitch. All sensors and actuators are connected through the Robot Operating System, ROS [74]. The code runs on a laptop with an Nvidia GTX 1070 that sends the commands to the onboard robot computer to be executed.
Primitive skills list. A list of primitive skills along with their parameters can be found in Table 6, eight for Franka (16 tasks) and five for Tiago (four tasks). Human users can accomplish all 20 tasks, which are long-horizon and challenging, using these skills.
Robot | Skill | Parameters
Franka | Reaching | 6D goal pose in world
Franka | Picking | 3D world pos to pick, gripper orientation (choose from 4)
Franka | Placing | 3D world pos to place, gripper orientation (choose from 3)
Franka | Pushing | 3D world pos to start pushing, axis of motion (choose from 3)
Franka | Wiping | 3D world pos to start wiping
Franka | Drawing | 3D world pos
Franka | Pouring | 3D world pos, gripper orientation (choose from 3)
Franka | Pulling | 3D world pos, gripper orientation (choose from 2), pull direction (choose from 2)
Franka | Grating | 3D world pos
Tiago | Navigating | ID of pre-defined positions and poses
Tiago | Picking | ID of the object
Tiago | Placing | ID of the object
Tiago | Pouring | ID of the object
Tiago | Dropping | ID of object to drop the grasped object by
Table 6: Parameterized primitive skills for Franka and Tiago robots.
Appendix 4: Task Definitions
For systematic evaluation of task success, we provide formal definitions of our tasks in the BEHAVIOR Domain Definition Language (BDDL) [69, 71]. BDDL is a predicate-logic-based language that establishes a symbolic state representation built on predefined, meaningful predicates grounded in physical states [71]. Each task is defined in BDDL as an initial and a goal condition parametrizing sets of possible initial states and satisfactory goal states, as shown in the figures at the end of the appendix. Compared to scene- or pose-specific definitions, which are too restricted, BDDL is more intuitive to humans while providing concrete evaluation metrics for measuring task success.
Appendix 5: Experimental Procedure
EEG device preparation. In our experiments, we use the 128-channel HydroCel Geodesic SensorNet from Magstim EGI, which has sponge tips in its electrode channels. Prior to experiments, the EEG net is soaked in a solution containing dissolved conductive salt (Potassium Chloride) and baby shampoo for 15 minutes. After the soaking, the net is worn by the experiment subject, and an impedance check is done. This impedance check entails ensuring that the impedance of each channel electrode is ≤ 50.0 kΩ, by using a syringe to add more conductive fluid between the electrodes and the scalp.
We then carefully put on a shower cap to minimize the drying of conductive fluid over the course of the experiment.
Instructions to subjects. Before commencing the experiments, subjects are given instructions on how to execute the SSVEP, MI, and muscle tension (jaw-clenching) tasks. For SSVEP, they are instructed to simply focus on the flickering object of interest without getting distracted by the other objects on the screen. For MI, similar to datasets such as BCI Competition 2003 [75], and as per an extensive literature review [76], we instruct subjects to either imagine continually bending their hands at the wrist (wrist dorsiflexion) or squeezing a ball for the hand actions ("Left", "Right"), and to imagine depressing a pedal with both feet (feet dorsiflexion) for the "Legs" action. For the "Rest" class, as is common practice in EEG experiments in general, we instruct users to focus on a fixation cross displayed on the screen. Subjects were told to stick with their actions of choice throughout the experiment, for consistency. For muscle tension, subjects were told to simply clench their jaw without too much or too little effort.
Interface. For SSVEP, subjects are told in writing on the screen to focus on the object of interest. Thereafter, a scene image of the objects with flickering masks overlaid on each object is presented, and we immediately begin recording the EEG data over this period of time. For MI, the cues are different during calibration and task time. During calibration, subjects are presented with a warning symbol (.) on screen for 1 second, before being presented with the symbol representing the action they are to imagine (<-: "Left", ->: "Right", v: "Legs", +: "Rest"), which lasts on screen for 5500 ms. We record the latter 5000 ms of EEG data. After this, there is a randomized rest period that lasts between 0.5 and 2 seconds, before the process repeats for another randomly chosen action class. This is done in 4 blocks of 5 trials per action, for a total of 20 trials per action. This procedure is again similar to datasets like BCI Competition 2003 [75] that use non-linguistic cues and randomization of rest / task. At task time, similar to SSVEP, subjects are told in writing on the screen to perform MI to select a robot skill to execute. Thereafter, a written mapping of a class symbol ({<-, ->, v, +}) to skill ({pick from top, pick from side, ...}) is presented, and we begin recording EEG data after a 2-second delay. For muscle tension, there is also a calibration phase, similar to MI, which entails collecting three 500 ms-long trials for each class ("Rest" and "Clench") at the start of each experiment. The cues are written on the screen in words. At task time, when appropriate, written prompts are also presented on the screen (e.g., "clench if incorrect"), followed by a written countdown, after which the user has a 500 ms window to clench (or not).
Appendix 6: Decoding Algorithms Details
For both SSVEP and MI, we select a subset of channels and discard the signals from the rest, as shown in Figure 6. They correspond to the visual cortex for SSVEP, and the motor and visual areas for MI (with peripheral areas). For muscle tension (jaw clenching), we retain all channels.
Figure 6: Map of relevant electrodes we use during SSVEP (Left) and Motor Imagery (Right).
SSVEP. To predict the object of interest, we apply Canonical Correlation Analysis (CCA) as shown in [77] to the collected SSVEP data.
As each potential object of interest is flashing at a different frequency, we are able to generate reference signals Y_{f_n} for each frequency f_n:

Y_{f_n} = \big[\, \sin(2\pi f_n t);\ \cos(2\pi f_n t);\ \sin(4\pi f_n t);\ \cos(4\pi f_n t) \,\big], \qquad t = \big[\, \tfrac{1}{f_s},\ \tfrac{2}{f_s},\ \ldots,\ \tfrac{N_s}{f_s} \,\big]    (1)

where f_s is the sampling frequency and N_s is the number of samples.
Let X refer to the collected SSVEP data, and Y refer to the set of reference signals for a given frequency. The linear combinations of X and Y can be represented as x = X^\top W_x and y = Y^\top W_y, and CCA finds the weights W_x and W_y that maximize the correlation between x and y by solving the following equation:

\max_{W_x, W_y} \rho(x, y) = \frac{E(W_x^\top X Y^\top W_y)}{\sqrt{E(W_x^\top X X^\top W_x)\, E(W_y^\top Y Y^\top W_y)}}    (2)

By calculating the maximum correlation \rho_{f_n} for each frequency f_n used for the potential objects of interest, we can predict the output class by finding \arg\max_{f_n}(\rho_{f_n}) and matching the result to the object of interest flickering at that frequency. Furthermore, we can return a list of predicted objects of interest in descending order of likelihood by matching each object to the list of maximum correlations \rho_{f_n} sorted in descending order.
Motor imagery. To perform MI classification, we first band-pass filter the data between 8 Hz and 30 Hz, as this is the frequency range that includes the \mu-band and \beta-band signals relevant to MI. The data is then transformed using the Common Spatial Pattern (CSP) algorithm. CSP is a linear transformation technique that applies a rotation to the data to orthogonalize the components whose over-timestep variance differs the most across classes. We then extract features by taking the normalized log-variance of the transformed data (the "CSP-space data") and perform Quadratic Discriminant Analysis (QDA) on these features. To calculate our calibration accuracy, we perform K-fold cross-validation with K_{CV} = 4, but we use the entire calibration dataset to fit the classifier for deployment at task time.
CSP can be briefly described as a process which orthogonalizes variance. To illustrate in the 2-class case, suppose the i-th calibration EEG time series for class k can be written as X_k^{(i)} \in \mathbb{R}^{C \times T}, where C is the number of channels, T is the number of time steps, i \in [1, 20], and k \in \{1, 2\}. Suppose further that the data is mean-normalized. Then:

\hat{\mathrm{Cov}}(X_k) = \frac{1}{20} \sum_{i=1}^{20} \mathrm{Cov}\big(X_k^{(i)}\big) = \frac{1}{20} \sum_{i=1}^{20} \frac{1}{T} X_k^{(i)} X_k^{(i)\top}    (3)

And we perform a simultaneous diagonalization of \{\hat{\mathrm{Cov}}(X_k)\}: \mathrm{Cov}(X_2)^{-1}\,\mathrm{Cov}(X_1) = Q \Lambda Q^\top. The transformation of any time series X into the CSP space is simply:

X_{\mathrm{CSP}} = X Q    (4)

Note that we only keep the first N_{\mathrm{CSP}} = 4 columns of XQ. The Python mne package [78] provides a multi-class generalization of this algorithm that we use. Feature extraction can be readily done by taking the component-wise variance of X_{\mathrm{CSP}}, but we find that taking the normalized component-wise log-variance is better, as corroborated by previous studies [79]:

f_p(X) = \log\!\left( \frac{\mathrm{Var}(X_{\mathrm{CSP},p})}{\sum_{j=1}^{N_{\mathrm{CSP}}} \mathrm{Var}(X_{\mathrm{CSP},j})} \right)    (5)

f(X) = \big(f_1(X), \ldots, f_{N_{\mathrm{CSP}}}(X)\big)    (6)

where X_{\mathrm{CSP},j} denotes the j-th column of X_{\mathrm{CSP}}. The QDA step is straightforward: given our calibration dataset \{f(X_k^{(i)})\}, we simply fit a quadratic discriminant using the Python sklearn package, which allows us to recover a list of MI class predictions in decreasing order of likelihood.
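To make the two decoding pipelines above concrete, the following is a minimal sketch of CCA-based SSVEP scoring (Eqs. (1)-(2)) and CSP+QDA fitting for MI, written with numpy, scikit-learn, and mne as named in the text. The function names, the number of harmonics, and the helper structure are our own illustrative assumptions, not the authors' released code.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from mne.decoding import CSP

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    # Reference matrix Y_f of Eq. (1): sin/cos pairs at the stimulus frequency and its harmonics.
    t = np.arange(1, n_samples + 1) / fs
    refs = [fn(2 * np.pi * h * freq * t)
            for h in range(1, n_harmonics + 1) for fn in (np.sin, np.cos)]
    return np.stack(refs, axis=1)                       # (n_samples, 2 * n_harmonics)

def ssvep_rank(eeg, candidate_freqs, fs):
    # eeg: (n_samples, n_channels) window from the occipital channel subset.
    # Returns candidate frequencies sorted by their maximal canonical correlation rho (Eq. (2)).
    scores = []
    for f in candidate_freqs:
        Y = ssvep_reference(f, fs, eeg.shape[0])
        x_c, y_c = CCA(n_components=1).fit_transform(eeg, Y)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    order = np.argsort(scores)[::-1]                    # descending likelihood, as in the text
    return [candidate_freqs[i] for i in order], np.asarray(scores)[order]

def fit_mi_classifier(epochs, labels, n_csp=4):
    # epochs: (n_trials, n_channels, n_times), already band-pass filtered to 8-30 Hz.
    # CSP with log=True yields log band-power features, matching the log-variance features of Eq. (5).
    csp = CSP(n_components=n_csp, log=True)
    feats = csp.fit_transform(epochs, labels)
    clf = QuadraticDiscriminantAnalysis().fit(feats, labels)
    return csp, clf       # at task time: clf.predict_proba(csp.transform(new_epoch[None]))
```

Sorting the canonical correlations in descending order yields the ranked object list described above, and the QDA posterior gives the corresponding ranking over MI classes.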
Muscle tension. To detect jaw clenches, electromyography (EMG) data is relevant. This is also picked up by our EEG net and, for succinctness, we will refer to it as EEG data as well. Facial muscle tension results in a very significant high-variance signal across almost all channels that is readily detectable using simple variance-based threshold filters, without having to apply any frequency filtering. Recall that we record three 500 ms-long trials for each class ("Rest", "Clench"). In short, for each of the calibration time series, we take the variance of the channel with the median variance; call this variance m. Then, we take the midpoint between the maximum m among the rest samples and the minimum m among the clench samples, and use this as our threshold variance level.
Let X_k^{(i)} \in \mathbb{R}^{C \times T}, where C is the number of channels, T is the number of time steps, i \in [1, 3], and k \in \{\mathrm{Rest}, \mathrm{Clench}\}.

m_k^{(i)} = \mathrm{median}_c \big\{ \mathrm{Var}\big(X_{k,c}^{(i)}\big) \big\}, \quad c \in [1, C]    (7)

where X_{k,c}^{(i)} denotes the c-th row of X_k^{(i)}.

\mathrm{Threshold} = \frac{1}{2}\Big( \max_i \big\{ m_{\mathrm{Rest}}^{(i)} \big\} + \min_i \big\{ m_{\mathrm{Clench}}^{(i)} \big\} \Big)    (8)

Appendix 7: Robot Learning Algorithm Details
Object and skill learning details
We utilize pre-trained R3M as the feature extractor. Our training procedure aims to learn a latent representation of an input image for inferring the correct object-skill pair in the given scene. The feature embedding model is a fully-connected neural network that further encodes the outputs of the foundation model. Model parameters and training hyperparameters are summarized in Table 7.
Hyperparameter | Value
Input dimension | 2048
Number of hidden layers | 5
Hidden layer dimension | 1024
Output dimension | 1024
Number of epochs | 100
Batch size | 40
Optimizer | Adam
Learning rate | 0.001
Table 7: Feature embedding model and training hyperparameters for object and skill learning.
Collecting human data using a BRI system is expensive. To enable few-shot learning, the feature embedding model is trained using a triplet loss [80], which operates on three input vectors: an anchor, a positive (with the same label as the anchor), and a negative (with a different label). The triplet loss pulls inputs with the same label together by penalizing their distance in the latent space, and pushes inputs with different labels apart. The loss function is defined as:

J(a, p, n) = \max\big( \lVert f(a) - f(p) \rVert^2 - \lVert f(a) - f(n) \rVert^2 + \alpha,\ 0 \big)    (9)

where a is the anchor vector, p the positive vector, n the negative vector, f the model, and \alpha = 1 is our separation margin (a minimal code sketch of this objective is given below, after Figure 7).
Generalization test set. We test our algorithm in the following generalization settings:
Position and pose. For position generalization, we randomize the initial positions of all objects in the scene with fixed orientation and collect 20 different trajectories. For pose generalization, we randomize both the initial positions and orientations of all objects.
Context. The context-level generalization refers to placing the target object in different environments, defined by different backgrounds, object orientations, and the inclusion of different objects in the scene. We collect 20 different trajectories with these variations.
Instance. The instance generalization aims to assess the model's capability to generalize across different types of objects present in the scene. For our target task (MakePasta), we collect 20 trajectories with 20 different kinds of pasta with different shapes, sizes, and colors.
Figure 7: t-SNE visualization of the latent representation generated by the object and skill learning embedding model for the MakePasta task pose generalization dataset.
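To make the embedding objective of Eq. (9) concrete, here is a minimal PyTorch-style sketch using the hyperparameters of Table 7 (input dimension 2048, hidden width 1024, output dimension 1024, Adam with learning rate 0.001, batch size 40, margin α = 1). The exact layer arrangement, the use of squared Euclidean distances, and the batch construction are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EmbeddingMLP(nn.Module):
    # Fully-connected head on top of frozen R3M features (dimensions follow Table 7).
    def __init__(self, in_dim=2048, hidden=1024, out_dim=1024, n_hidden=5):
        super().__init__()
        dims = [in_dim] + [hidden] * n_hidden
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers.append(nn.Linear(dims[-1], out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def triplet_loss(f_a, f_p, f_n, margin=1.0):
    # Eq. (9): hinge on anchor-positive vs. anchor-negative distances in the latent space.
    d_ap = (f_a - f_p).pow(2).sum(dim=-1)
    d_an = (f_a - f_n).pow(2).sum(dim=-1)
    return torch.clamp(d_ap - d_an + margin, min=0.0).mean()

# One training step: anchor/positive share an object-skill label, the negative does not.
model = EmbeddingMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
a, p, n = (torch.randn(40, 2048) for _ in range(3))   # stand-ins for R3M features of one batch
loss = triplet_loss(model(a), model(p), model(n))
opt.zero_grad(); loss.backward(); opt.step()
```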
Latent representation visualization. To understand the separability of latent representations generated by our object-skill learning model, we visualize the 1024-dimensional final image representations using the t-SNE data visualization technique [81]. Results for the MakePasta pose generalization test set are shown in Figure 7. The model can well separate each of the different stages of the task, allowing us to retrieve the correct object-skill pair for an unseen image.
One-shot parameter learning details
Design choices. We empirically found that using DINOv2's ViT-B model, alongside a 75×100 feature map and a 3×3 sliding window, with cosine similarity as the distance metric, resulted in the best performance for our image resolutions (a schematic sketch of this matching step is given at the end of this appendix).
Generalization test set. We test the generalization ability of our algorithm on 1008 unique training and test pairs, encompassing four types of generalization: 8 position trials, 8 orientation trials, 32 context trials, and 960 instance trials.
Position and orientation. The position and orientation generalizations, shown in Fig. 8 and Fig. 9 respectively, are tested in isolation, e.g., when the position is varied, the orientation is kept the same.
Context. The context-level generalization, shown in Fig. 10, refers to placing the target object in different environments, e.g., the training image might show the target object in the kitchen while the test image shows the target object in a workspace. Here, we allow position and orientation to vary as well.
Instance. To test our algorithm's capability of instance-level generalization, shown in Fig. 11, we collected a set of four different object categories, each containing five unique object instances. Our object categories consist of mug, pen, bottle, and medicine bottle, where the bottle and medicine bottle categories consist of images from both the top-down and side views. We test all permutations within each object category, including train and test pairs with different camera views. Here, we allow position and orientation to vary as well.
Test set for comparing our method against baselines. We test our method against baselines on 1080 unique training and test pairs, encompassing 8 position trials, 8 orientation trials, 32 context trials, 960 instance trials, 48 trials where we vary all four generalizations simultaneously, and 24 trials from the SetTable task.
Position, orientation, context, and instance simultaneously. Finally, we test our algorithm's ability to generalize when all four variables differ between the training and test image, shown in Fig. 12. Here, the only object category we use is a mug.
Figure 8: Position generalization. The first train parameter is set on the mug handle. The second train parameter is set on the spoon grip.
Figure 9: Orientation generalization. The first train parameter is set on the mug handle. The second train parameter is set on the pen grip.
Figure 10: Context generalization. The first train parameter is set on the mug handle. The second train parameter is set on the spoon grip.
Figure 11: Instance generalization. First pair shows instance generalization with different camera views: from the top and from the side. The first train parameter is set on the bottle cap. The second train parameter is set on the pen grip.
Figure 12: Position, orientation, instance, and context generalization. Both train parameters are set on the mug handle.
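As referenced under "Design choices" above, the following is a schematic sketch of the one-shot parameter matching step: a parameter annotated in a single training image is transferred to a test image by comparing dense visual features with a small sliding window and cosine similarity. It assumes the dense feature maps (e.g., from a DINOv2 ViT-B backbone, at roughly 75×100 resolution) have already been computed; the window averaging, padding mode, and function names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def match_parameter(train_feats, test_feats, train_yx, window=3):
    """Transfer a 2D parameter location from the training image to the test image.

    train_feats, test_feats: (C, H, W) dense feature maps of the two images.
    train_yx: (row, col) of the annotated parameter in the training feature grid.
    Returns the (row, col) in the test feature grid with the highest windowed cosine similarity.
    """
    C, H, W = train_feats.shape
    half = window // 2
    # Average the features in a small window around the annotated training location.
    padded = F.pad(train_feats.unsqueeze(0), (half, half, half, half), mode="replicate")[0]
    r, c = train_yx
    query = padded[:, r:r + window, c:c + window].mean(dim=(1, 2))        # (C,)
    # Cosine similarity between the query descriptor and every test-image patch.
    sims = F.cosine_similarity(test_feats, query[:, None, None], dim=0)   # (H, W)
    best = torch.argmax(sims).item()
    return divmod(best, W)                                                # (row, col)
```

Mapping the selected feature-grid cell back to pixel coordinates only requires rescaling by the backbone's patch stride.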
ES_TOp4YJeD | ADU-Depth: Attention-based Distillation withUncertainty Modeling for Depth EstimationZizhang Wu1,2†Zhuozheng Li2†Zhi-Gang Fan2Yunzhe Wu2Xiaoquan Wang3Rui Tang2Jian Pu11Fudan University,2ZongmuTech,3ExploAIwuzizhang87@gmail.com, {zhuozheng.li, zhigang.fan, nelson.wu, rui.tang }@zongmutech.com,rocky.wang@exploai.com, jianpu@fudan.edu.cn∗Abstract: Monocular depth estimation is challenging due to its inherent ambi-guity and ill-posed nature, yet it is quite important to many applications. Whilerecent works achieve limited accuracy by designing increasingly complicated net-works to extract features with limited spatial geometric cues from a single RGBimage, we intend to introduce spatial cues by training a teacher network that lever-ages left-right image pairs as inputs and transferring the learned 3D geometry-aware knowledge to the monocular student network. Specifically, we present anovel knowledge distillation framework, named ADU-Depth, with the goal ofleveraging the well-trained teacher network to guide the learning of the studentnetwork, thus boosting the precise depth estimation with the help of extra spatialscene information. To enable domain adaptation and ensure effective and smoothknowledge transfer from teacher to student, we apply both attention-adapted fea-ture distillation and focal-depth-adapted response distillation in the training stage.In addition, we explicitly model the uncertainty of depth estimation to guide dis-tillation in both feature space and result space to better produce 3D-aware knowl-edge from monocular observations and thus enhance the learning for hard-to-predict image regions. Our extensive experiments on the real depth estimationdatasets KITTI and DrivingStereo demonstrate the effectiveness of the proposedmethod, which ranked 1st on the challenging KITTI online benchmark2.Keywords: Monocular depth estimation, Camera perception, Distillation1 IntroductionMonocular depth estimation, with the goal of measuring the per-pixel distance from a single cam-era perception, is absolutely crucial to unlocking exciting robotic applications such as autonomousdriving [1, 2], 3D scene understanding [3, 4] and augmented reality [5, 6]. However, it is quite dif-ficult to obtain precise depth values from only a single 2D input, since monocular depth estimationis ill-posed and inherently ambiguous [7, 8] where many 3D scenes can actually give the same inputpicture [9, 10]. Early depth estimation methods relied primarily on hand-crafted features [11, 12],geometric assumptions [12], and non-parametric depth transfer [13] and achieved limited success.With the rise of deep learning techniques, the performance of monocular depth estimation has seena significant boost [14, 15, 16]. For supervised learning, a list of approaches focus on designingdeeper and more complicated networks to improve depth prediction under the supervision of ground-truth depth [17, 18, 19]. The current trend has been to combine convolutional neural networks(CNNs) with vision transformers and attention mechanisms [20, 21, 22, 23], which can extract moremeaningful features in an image and capture long-range dependencies. Inspired by the works of∗The denotion †means these authors contributed equally and means the corresponding author.2From 24 Jun. 2022 to 22 Feb. 2023.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Illustration of ADU-Depth, which consists of a student network, a teacher network and thedistillation scheme. 
We design an attention-adapted feature distillation and a focal-depth-adapted response distillation. Meanwhile, we introduce an uncertainty estimation module to guide the left-right knowledge transfer in feature space and result space (UMF-Distillation and UMR-Distillation, respectively). During inference, only the monocular student is employed, with a single image as input.
Conditional Random Fields (CRFs) [24, 11, 25], a recent method [9] was proposed to split the input into windows and incorporate the neural window fully-connected CRFs with a multi-head attention mechanism [26] to better capture the geometric relationships between nodes in the graph. However, considering that a single image contains limited 3D scene geometric information, this renders the monocular depth estimation task a difficult fitting problem without extra depth cues [27, 10].
In this work, we present a novel distillation framework named ADU-Depth. We show that it is able to assist top depth predictors to infer more accurate depth maps, benefiting from 3D-aware knowledge transfer of left-right views as humans do. We first train a teacher network to learn 3D knowledge by concatenating the left-right images together and modify the model structure to accommodate the image-pair inputs. This intuitive yet quite effective mechanism leads to a powerful teacher network and can be easily deployed for recent depth networks. Considering the strength of capturing geometric relationships between nodes in a graph, we use an encoder-decoder network similar to [9] as the student network and add an uncertainty branch. To ensure effective and smooth knowledge distillation, the proposed distillation framework should be capable of producing 3D-aware features and responses under the guidance of the teacher. Therefore, we introduce a self-attention adaptation module for feature imitation to shrink the feature domain gap. Next, we design a focal-depth loss to more effectively distill high-quality responses from the soft labels provided by the teacher. Furthermore, we combine distillation with an uncertainty estimation module to learn the prediction distribution and better produce 3D-aware features and responses from monocular observations.
To the best of our knowledge, this is the first time in the supervised depth estimation community that left-right view knowledge is transferred from a teacher network to a monocular student. In summary, our contributions include: (i) a novel ADU-Depth framework that distills 3D-aware knowledge from a teacher network into a supervised monocular student; (ii) an attention adaptation module for both encoded and decoded feature distillation; (iii) a distillation-score-based focal-depth loss for response distillation that focuses on hard-to-transfer knowledge; and (iv) an uncertainty estimation module to guide the distillation in both feature and result spaces. The above techniques enable our method to achieve state-of-the-art monocular depth estimation performance across all metrics on the KITTI and DrivingStereo datasets and to rank 1st on the KITTI online benchmark.
2 Related Work
Monocular Depth Estimation. Neural network based methods have dominated most monocular depth estimation benchmarks. Eigen et al. [14, 15] first proposed a depth regression approach by utilizing two stacked CNNs to integrate global and local information. To model the ambiguous mapping between images and depth maps, Laina et al. [28] presented an end-to-end convolutional architecture with residual learning. In contrast, Godard et al.
[29] formulated depth estimation asa pixel-level classification task and also obtained depth prediction confidence. Besides, severalmulti-view methods [30, 2] employed a recurrent neural network based optimization technique thatrepeatedly alternates depth and pose updates to minimize the metric cost. Furthermore, many self-supervised monocular works took left-right image pairs [31, 32, 33, 34, 35, 36] or multi-frameimages [37, 38, 39] as input for the network training. By benefiting from the vision transformers,recent supervised works [18, 21, 23, 22] employed the attention mechanism for more accurate depthestimation. More recently, the neural window FC-CRFs with multi-head attention was proposed tobetter capture the relationship between nodes in the graph and reduce the computational cost [9].However, in the absence of 3D geometric information, these complicated transformer and attentionbased methods achieved limited performance gain for monocular depth estimation.Knowledge Distillation Methods. Knowledge distillation (KD) was first proposed in [40] totransfer the learned knowledge from a cumbersome teacher model to a light-weight student modelthrough teacher-generated soft targets. Besides its success in image classification, KD strategy hasbeen widely applied in objection detection [41, 42, 43], semantic segmentation [44], and depth esti-mation [45, 46, 47, 36, 48]. Pilzer et al. [46] claimed that distillation is particularly relevant to depthestimation due to the limitations of network size in practical real-time applications. They attemptedto exploit distillation to transfer knowledge from the refinement teacher network to a self-supervisedstudent. For fast depth estimation on mobile devices, Wang et al. [47] utilized knowledge distillationto transfer the knowledge of a stronger complex teacher network to a light-weight student network.More recently, DistDepth [48] distilled depth-domain structure knowledge into a self-supervised stu-dent to obtain accurate indoor depth maps. Unlike existing distillation methods for depth estimation,our ADU-Depth is the first to distill knowledge from left-right views into a supervised monoculardepth predictor and incorporates novel distillation techniques into the framework.Uncertainty Estimation Methods. Uncertainty estimation plays a critical role in improving therobustness and performance of deep learning models [49, 50, 51, 52]. Particularly, it was also intro-duced into monocular depth estimation [53, 54, 55], which was mostly in self- and semi-supervisedmethods due to their high uncertainty. Poggi et al. [56] examined several manners of uncertaintymodeling in monocular depth estimation and suggested the optimum strategy in various scenar-ios. Nie et al. [55] employed a student-teacher network to iteratively improve uncertainty perfor-mance and depth estimation accuracy through uncertainty-aware self-improvement. Inspired by thework [57] that introduces uncertainty estimation to better produce high-quality-like features fromlow-quality observations for degraded image recognition, we design an uncertainty estimation mod-ule and incorporate it into the proposed distillation framework for depth estimation, which can betterproduce 3D-aware features and responses from monocular observations.3 Proposed Method3.1 Overall FrameworkAs illustrated in Fig. 1, the framework of ADU-Depth can be separated into three parts (i.e., a teachernetwork, a monocular student, and the distillation scheme). 
We first pre-train the teacher network on left-right image pairs with dense ground-truth depth. We then freeze it to produce features and responses for training the student network on a single image via the proposed distillation strategy. The goal of our framework is to train the monocular student to learn to generate 3D-aware features and responses under the guidance of the left-right view based teacher network. To meet this goal, we first introduce attention-adapted feature distillation and focal-depth-adapted response distillation to encourage feature and response imitation, respectively. Then, an uncertainty estimation module (UEM) is designed to model the uncertainty of both the feature and response distributions.
Student model. We employ an encoder-decoder network similar to the work [9] as our baseline depth predictor due to its promising performance and ability to capture relationships between the nodes in the graph. In detail, it adopts a multi-level encoder-decoder architecture, where four levels of neural window FC-CRFs modules are stacked. The model employs the swin-transformer [58] encoder to extract multi-level image features and the CRFs optimization decoder to predict the next-level depth according to the image features and coarse depth. We consider that the neural window FC-CRFs module and its internal multi-head attention mechanism can better capture the critical scene geometry information of the left-right view data, hence outputting an optimized depth map. In particular, we further introduce a novel uncertainty estimation module alongside the depth head as another branch, which generates the per-pixel uncertainty to better guide the knowledge distillation.
Teacher model. To make the teacher network more accessible and flexible to deploy, and to ensure training efficiency in practical applications while reducing the structural domain gap between the teacher and student networks, our teacher model learns 3D knowledge by concatenating the left-view and right-view images together and modifies the initial three channels of the swin-transformer to six channels to accommodate the image-pair inputs. This mechanism is simple yet quite effective and avoids the heavy computation and domain gaps brought by complex structural changes. Furthermore, the concatenation operation is also used in the stereo version of the unsupervised method [32], where the additional 3D information of left-right images enables a more complete understanding of the input scene. Note that left-right image pairs are commonly used in practical robotic applications, and many existing datasets are available [59, 60, 61].
3.2 Knowledge Distillation Strategy
In this section, we describe our three complementary distillation schemes: attention-adapted feature distillation, focal-depth-adapted response distillation, and uncertainty modeling based distillation.
Attention-adapted feature distillation. Due to the domain gap between features extracted from a single image and a left-right (stereo) image pair, we consider it sub-optimal to directly force a monocular student to learn the feature representation of the teacher. While the 3D scene geometry information learned from the left-right data domain greatly enhances teacher performance, the effective transfer of learned knowledge from teacher to student is an obvious challenge.
Since the number of input channels of the student is not compatible with those of the teacher, an adaptation module is important to align the former to the latter for calculating the distance metric and enabling an effective feature approximation between the student and teacher. Unlike the work in [43], which uses a full convolution adaptation layer, our experiments show that self-attention adaptation layers are more conducive to bridging the data domain gap and forcing the student to more effectively adapt to and imitate the teacher's features. They are able to capture long-range feature dependencies and focus on the critical spatial cues of the image itself [26]. Specifically, the feature encoder extracts four levels of feature maps following Fig. 1. For each level of feature maps, we first flatten it and then employ a self-attention operation to obtain the attention-adapted feature F_{se}:

F_{se} = \mathrm{softmax}(Q_{se} \cdot K_{se}) \cdot X_{se},    (1)

where Q_{se}, K_{se}, and X_{se} are the learned query, key, and value of the student encoder, respectively, and (\cdot) denotes the dot product. Similarly, the attention-adapted decoded feature F_{sd} can also be obtained via Eq. (1), where Q_{sd}, K_{sd}, and X_{sd} are learned from the multi-level decoded features. Then, given the multi-level extracted feature maps of the teacher model F_{te} and the attention-adapted encoded student feature F_{se}, the proposed encoded feature distillation loss \mathcal{L}_e can be formulated as:

\mathcal{L}_e = \frac{1}{N} \left\lVert F_{te} - F_{se} \right\rVert_2^2.    (2)

Similarly, given the multi-level decoded feature maps of the teacher model F_{td} and the attention-adapted decoded student feature F_{sd}, our decoded feature distillation loss \mathcal{L}_d can be formulated as:

\mathcal{L}_d = \frac{1}{N} \left\lVert F_{td} - F_{sd} \right\rVert_2^2.    (3)

Focal-depth-adapted response distillation. We use the predicted depth from the teacher as extra soft labels to transfer the learned knowledge to the student in the result space. Furthermore, inspired by the focal loss [62], we design a focal-depth loss to enhance the learning of hard-to-predict samples. Note that the focal loss is originally for classification, and the p \in [0, 1] in the focal loss is the estimated probability for the class with label y = 1. To enhance the student's learning of hard-to-transfer knowledge of depth estimation from the teacher network, we introduce a depth distillation score p_d \in [0, 1] to indicate the degree of prediction approximation between the student and teacher:

p_d = 1 - \frac{1}{N} \sum_{i=1}^{N} \frac{\lvert d_i^t - d_i^s \rvert}{d_i^t},    (4)

where N is the number of pixels and i is the pixel index. Then, we use the variant \alpha_d to balance the importance of high distillation-score examples. The proposed focal-depth loss can be described as:

\mathcal{L}_{focal}(p_d) = -\alpha_d (1 - p_d)^{\gamma} \log(p_d),    (5)

where \gamma \geq 0 is a tunable focusing parameter, and d_i^t and d_i^s denote the depth predictions of the teacher and student, respectively. Furthermore, we also use a soft-label L1 loss for distillation in the result space and then combine it with uncertainty modeling, which is represented in Eq. (8). For each pixel i in an image, the initial soft-label response distillation loss is computed as:

\mathcal{L}_{rd} = \sum_{i=1}^{N} \left\lVert d_i^t - d_i^s \right\rVert_1.    (6)
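To make these distillation terms concrete, the following is a minimal PyTorch-style sketch of Eqs. (1)-(6). The tensor shapes, the absence of a scaling factor inside the softmax, the focusing parameter γ = 2, the weight α_d = 1, and the small ε used to keep log(p_d) finite are our own assumptions rather than values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAdapter(nn.Module):
    # Self-attention adaptation of Eq. (1): learned query/key/value projections over flattened features.
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, feat):                        # feat: (B, C, H, W) student feature map
        x = feat.flatten(2).transpose(1, 2)         # (B, HW, C)
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2), dim=-1)
        return attn @ self.v(x)                     # adapted feature, shape (B, HW, C)

def feature_distill_loss(f_teacher, f_student_adapted):
    # Eqs. (2)/(3): mean squared error between teacher features and attention-adapted student features.
    return F.mse_loss(f_student_adapted, f_teacher)

def focal_depth_loss(d_teacher, d_student, alpha_d=1.0, gamma=2.0, eps=1e-6):
    # Eq. (4): depth distillation score p_d; Eq. (5): focal re-weighting of hard-to-transfer predictions.
    p_d = 1.0 - (torch.abs(d_teacher - d_student) / (d_teacher + eps)).mean()
    p_d = p_d.clamp(eps, 1.0 - eps)                 # numerical safeguard for log(p_d)
    return -alpha_d * (1.0 - p_d) ** gamma * torch.log(p_d)

def soft_label_l1_loss(d_teacher, d_student):
    # Eq. (6): per-pixel L1 distance to the teacher's soft depth labels.
    return torch.abs(d_teacher - d_student).sum()
```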
Uncertainty modeling based distillation. To estimate the pixel-wise uncertainty, we design an uncertainty estimation module (UEM) as a new branch at the end of the backbone network. The UEM consists of a convolution layer, a sigmoid activation function, and a scaling layer. Based on the Bayesian deep learning framework [49], we train the depth network to predict the per-pixel log variance s_i = \log \sigma_i^2. The predicted uncertainty map has the same dimension as the depth map and is then rearranged to the sizes of the multi-level feature maps, i.e., 1/4, 1/8, 1/16, and 1/32 of the original size. Given the Gaussian assumption for the \ell_2 loss, the uncertainty modeling based feature distillation loss \mathcal{L}_{umf} is defined as:

\mathcal{L}_{umf} = \frac{1}{N} \sum_{i=1}^{N} \left[ \frac{1}{2} \exp(-s_i) \left\lVert F_i^t - g\big(\tilde{F}_i^s; s_i\big) \right\rVert_2^2 + \frac{1}{2} s_i \right],    (7)

where F^t denotes the left-right feature maps from the teacher network and \tilde{F}^t = g(\tilde{F}_i^s; s_i) denotes the restored 3D-aware feature maps from the student network with estimated log variances s_i through the uncertainty estimation module. For those \tilde{F}^t far away from F^t, the network will predict larger variances to reduce the error term \exp(-s_i) \lVert F_i^t - g(\tilde{F}_i^s; s_i) \rVert_2^2, instead of overfitting to those erroneous regions. When \tilde{F}_i^t is easy to learn, the second term \frac{1}{2} s_i plays the major role in the loss function, and the network tends to make the variances smaller. In this way, it works similarly to an attention mechanism, which enables the network to focus on the hard regions in the image or hard samples in the training set [63, 57]. \mathcal{L}_{umf} is employed on both encoded and decoded features.
For the response distillation in result space, we choose a Laplace uncertainty loss since it is more appropriate to model the variances of residuals with the \ell_1 loss. Given the Laplace assumption, we use \tilde{d}_i^t = g(\tilde{d}_i^s; s_i) to denote the learned stereo-aware student depth prediction. Then, the uncertainty modeling based response distillation (UMR-Distillation) loss \mathcal{L}_{umr} is defined as:

\mathcal{L}_{umr} = \frac{1}{N} \sum_{i=1}^{N} \left[ \sqrt{2} \exp(-s_i) \left\lVert d_i^t - g\big(\tilde{d}_i^s; s_i\big) \right\rVert_1 + s_i \right].    (8)

(A short code sketch of these two uncertainty-weighted terms is given after Table 2 below.)
Method | Reference | Sup | Distill | Sq Rel↓ | Abs Rel↓ | RMSE↓ | RMSE log↓ | δ1<1.25↑ | δ2<1.25²↑ | δ3<1.25³↑
Xu et al. [64] | CVPR 2018 | ✓ | ✗ | 0.897 | 0.122 | 4.677 | – | 0.818 | 0.954 | 0.985
Guo et al. [45] | ECCV 2018 | ✓ | ✓ | 0.515 | 0.092 | 3.163 | 0.159 | 0.901 | 0.971 | 0.988
Refine and Distill [46] | CVPR 2019 | ✗ | ✓ | 0.831 | 0.098 | 4.656 | 0.202 | 0.882 | 0.948 | 0.973
Yin et al. [65] | ICCV 2019 | ✓ | ✗ | – | 0.072 | 3.258 | 0.117 | 0.938 | 0.990 | 0.998
PackNet-SAN [66] | CVPR 2021 | ✓ | ✗ | – | 0.062 | 2.888 | – | 0.955 | – | –
PWA [20] | AAAI 2021 | ✓ | ✗ | 0.221 | 0.060 | 2.604 | 0.093 | 0.958 | 0.994 | 0.999
DPT [21] | ICCV 2021 | ✓ | ✗ | – | 0.062 | 2.573 | 0.092 | 0.959 | 0.995 | 0.999
SingleNet [36] | ICCV 2021 | ✗ | ✓ | 0.681 | 0.094 | 4.392 | 0.185 | 0.892 | 0.962 | 0.981
AdaBins [18] | CVPR 2021 | ✓ | ✗ | 0.190 | 0.058 | 2.360 | 0.088 | 0.964 | 0.995 | 0.999
NeWCRFs [9] | CVPR 2022 | ✓ | ✗ | 0.155 | 0.052 | 2.129 | 0.079 | 0.974 | 0.997 | 0.999
P3Depth [10] | CVPR 2022 | ✓ | ✗ | 0.270 | 0.071 | 2.842 | 0.103 | 0.953 | 0.993 | 0.998
Liu et al. [67] | TCSVT 2023 | ✗ | ✓ | 0.635 | 0.096 | 4.158 | 0.171 | 0.905 | 0.969 | 0.985
ADU-Depth | – | ✓ | ✓ | 0.147 | 0.049 | 2.080 | 0.076 | 0.976 | 0.997 | 0.999
Table 1: Quantitative results on the KITTI Eigen split with a cap of 0-80 m. Seven widely-used metrics are reported and calculated strictly following NeWCRFs [9]. The methods are divided by whether they are supervised (Sup) and whether they use distillation (Distill).
Method | Dataset | SILog↓ | Sq Rel↓ | Abs Rel↓ | iRMSE↓ | RMSE↓ | δ1<1.25↑ | δ2<1.25²↑ | δ3<1.25³↑
DORN [17] | val | 12.22 | 3.03 | 11.78 | 11.68 | 3.80 | 0.913 | 0.985 | 0.995
BA-Full [19] | val | 10.64 | 1.81 | 8.25 | 8.47 | 3.30 | 0.938 | 0.988 | 0.997
NeWCRFs [9] | val | 8.31 | 0.89 | 5.54 | 6.34 | 2.55 | 0.968 | 0.995 | 0.998
ADU-Depth | val | 6.64 | 0.61 | 4.61 | 5.25 | 1.98 | 0.981 | 0.997 | 0.999
DORN [17] | online test | 11.77 | 2.23 | 8.78 | 12.98 | – | – | – | –
BA-Full [19] | online test | 11.61 | 2.29 | 9.38 | 12.23 | – | – | – | –
PackNet-SAN [66] | online test | 11.54 | 2.35 | 9.12 | 12.38 | – | – | – | –
PWA [20] | online test | 11.45 | 2.30 | 9.05 | 12.32 | – | – | – | –
NeWCRFs [9] | online test | 10.39 | 1.83 | 8.37 | 11.03 | – | – | – | –
ADU-Depth | online test | 9.69 | 1.69 | 7.26 | 9.61 | – | – | – | –
Table 2: Quantitative results on the KITTI official split. Eight widely-used metrics are calculated for the validation set and four metrics from the KITTI official online server are used for the test set. Our method ranked 1st on the KITTI depth prediction benchmark (initially named "ZongDepth").
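As an illustration of the uncertainty-weighted terms in Eqs. (7) and (8), the following is a minimal sketch. Treating the UEM output as a per-pixel log-variance map and averaging over all positions follows the text; the tensor shapes and the bilinear resizing of the log-variance map to each feature resolution are our own assumptions.

```python
import torch
import torch.nn.functional as F

def umf_loss(f_teacher, f_student_restored, log_var):
    # Eq. (7): Gaussian (L2) uncertainty weighting of the feature residual plus the 0.5 * s_i penalty.
    # log_var: (B, 1, H, W) per-pixel log variance from the UEM, resized to the feature resolution.
    s = F.interpolate(log_var, size=f_teacher.shape[-2:], mode="bilinear", align_corners=False)
    residual = (f_teacher - f_student_restored).pow(2).sum(dim=1, keepdim=True)
    return (0.5 * torch.exp(-s) * residual + 0.5 * s).mean()

def umr_loss(d_teacher, d_student_restored, log_var):
    # Eq. (8): Laplace (L1) uncertainty weighting of the depth residual plus the s_i penalty.
    residual = torch.abs(d_teacher - d_student_restored)
    return (2.0 ** 0.5 * torch.exp(-log_var) * residual + log_var).mean()
```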
3.3 End-to-End Network Training
For the training of the teacher network, only the loss \mathcal{L}_b from the baseline method is adopted [9]. We train our student network in an end-to-end manner using the following loss function:

\mathcal{L} = \mathcal{L}_b + \lambda_1 \cdot \mathcal{L}_{umr} + \lambda_2 \cdot \mathcal{L}_{umf} + \lambda_3 \cdot \mathcal{L}_{focal},    (9)

where \lambda_1, \lambda_2, \lambda_3 are the hyper-parameters that balance the loss of each module.
4 Experimental Results
4.1 Implementation Details
Our work is implemented in PyTorch and experimented on Nvidia RTX A6000 GPUs. The network is trained for 40 epochs with a mini-batch size of 4. The input image is resized to 352×1120 before being fed into the network and then rearranged and downscaled to four levels in the encoder-decoder network, i.e., 1/4, 1/8, 1/16, 1/32. The learning rate is 2×10^{-5}. We set \beta_1 = 0.9 and \beta_2 = 0.999 in the Adam optimizer for network optimization. For the KITTI dataset, \lambda_1 = 0.9, \lambda_2 = 0.6, \lambda_3 = 0.8. The output depth map is 1/4 × 1/4 of the size of the original image and is finally resized to the full resolution. Regarding computation time, the inference speed of our ADU-Depth is 3.82 FPS, which is faster than the 3.48 FPS of NeWCRFs [9], without introducing additional inference cost.
4.2 Evaluations
Evaluation on KITTI. We first evaluate our method on the KITTI Eigen split [14]. As shown in Table 1, our ADU-Depth achieves new state-of-the-art performance with significant improvements over other top performers and existing distillation-based depth estimation methods [45, 46, 36]. We then evaluate our method on the KITTI official split [59], as shown in Table 2. Our method achieves state-of-the-art performance across all the evaluation metrics compared to existing depth estimation approaches.
Figure 2: Qualitative results generated by the official server on the KITTI online benchmark (columns: input image, NeWCRFs depth map, our depth map, our error map, NeWCRFs error map; per-example SILog scores are shown in the figure).
Method | Sq Rel↓ | Abs Rel↓ | RMSE↓
DORN [17] | 0.126 | 0.055 | 2.173
AdaBins [18] | 0.083 | 0.036 | 1.459
NeWCRFs [9] | 0.071 | 0.032 | 1.310
ADU-Depth | 0.064 | 0.028 | 1.146
Table 3: Quantitative results on the DrivingStereo dataset.
Settings | Sq Rel↓ | Abs Rel↓ | RMSE↓
Teacher | 0.074 | 0.040 | 1.418
Baseline | 0.188 | 0.056 | 2.325
- Focal | 0.152 | 0.050 | 2.103
- UEM | 0.154 | 0.051 | 2.108
- Attention | 0.161 | 0.052 | 2.127
Full-Setting | 0.147 | 0.049 | 2.080
Table 4: Ablation study on the KITTI Eigen split.
Evaluation on KITTI online test. On the challenging KITTI online benchmark, our method ranked 1st among all submissions of depth prediction for seven months, while NeWCRFs [9] ranked 12th. The performance gain is more obvious than its improvement on the Eigen split, since our method is better at predicting the depth of hard regions or test images with a domain gap. The scale-invariant logarithmic (SILog) error is the main ranking metric used to measure the relationships between points in the scene [14]. The lowest SILog error indicates that our method better captures the relationships between nodes in the scene, which we attribute to the learned stereo-aware knowledge. We also provide qualitative comparison results in Fig. 2, including the predicted depth maps and the error maps. The error map depicts correct estimates in blue and wrong estimates in red color tones [68].
Dark regions denote occluded pixels that fall outside the image boundaries. Our method predicts more accurate and reliable depths in various scenes, especially for hard-to-predict image regions, e.g., distant objects, repeated textures, and weak-light scenes. On the one hand, attention-adapted feature distillation allows our model to effectively learn more about the spatial information of the scene from the teacher network. On the other hand, the introduction of uncertainty and focal-depth makes the model focus on the learning of difficult areas.
Evaluation on DrivingStereo. DrivingStereo [60] is a large-scale stereo dataset that consists of over 180k images covering a diverse set of driving scenarios. We further compare our method with several well-known methods on a subset of the DrivingStereo dataset with 7000 image pairs for training and 600 images only for testing. The quantitative results of our method compared with others are shown in Table 3, where four widely used evaluation metrics are calculated for the test set. Our ADU-Depth outperforms these monocular depth predictors in all evaluation metrics.
4.3 Qualitative Results of Uncertainty Estimation
The uncertainty estimation module (UEM) should be capable of producing uncertainty values that are well aligned with the depth estimation errors. Fig. 3 shows several examples of our predicted uncertainty maps, where brighter colors indicate high uncertainty. We can see that the high uncertainty values are distributed on the distant background regions and the object boundaries. These uncertainty values are quite reasonable according to the estimated depth maps and sparse ground truths.
Figure 3: Qualitative results of our ADU-Depth with UEM (columns: input image, estimated depth map, estimated uncertainty map, ground truth; color scales run from close to far for depth and from low to high for uncertainty). High uncertainty values are displayed in far background regions and object boundaries according to the estimated uncertainty maps.
4.4 Ablation Study
To better inspect the effect of each novel component in our distillation strategy, we conduct an ablation study over each module in Table 4. We also report the results produced by our teacher network in the first row as a reference. The simple yet effective design of the teacher network achieves promising performance by capturing additional 3D scene information in left-right image pairs.
Since the proposed distillation framework contains three main components, we conduct independent experiments with different combinations to further verify the effectiveness of each module. The attention-adapted feature distillation module shows a significant performance gain. This implies that the key to shrinking the monocular-stereo feature domain gap is to adaptively learn the teacher-student feature approximation. UEM-guided distillation also achieves lower estimation errors due to learning for regions of high uncertainty and hard regions. Even though these two components already show powerful effects, where the "Sq Rel" error is reduced from 0.188 to 0.154, the introduction of focal-depth adapted response distillation enables our ADU-Depth to achieve a further accuracy boost and the optimal performance with a "Sq Rel" error of 0.147.
It benefits from theenhanced learning of hard-to-transfer knowledge from the teacher model.5 LimitationsAlthough the proposed ADU-Depth can boost the performance of monocular depth estimation andlearn 3D-aware knowledge from the left-right view based teacher model with novel distillation tech-niques (as shown in Table 4), our method cannot theoretically guarantee the applicability of depthestimation to 3D understanding. In future work, we will predict a volumetric scene representationand leverage novel view synthesis to provide spatial cues for monocular depth estimation.6 ConclusionWe propose a novel distillation framework, ADU-Depth, to improve the performance of the monoc-ular depth predictor, especially on distant objects as well as the silhouette of objects. To the bestof our knowledge, we are the first supervised monocular depth estimation method that simulatesthe 3D-aware knowledge learned by a teacher network from left-right image pairs on a monocularstudent. To guarantee effective and smooth knowledge transfer to the monocular student, we designan attention-adapted feature imitation and a focal-depth based response imitation. In addition, wedesign an uncertainty estimation module to model the uncertainty of depth estimation, which guidesthe distillation in both feature space and result space. Extensive experimental results on the KITTIand DrivingStereo datasets show that our method achieves a new state-of-the-art performance.8References[1] Y . Li, Y . Chen, J. He, and Z. Zhang. Densely constrained depth estimator for monocular 3dobject detection. In European Conf. on Computer Vision (ECCV) , pages 718–734. Springer,2022.[2] Z. Wu, Z.-G. Fan, Z. Li, J. Wang, T. Xu, Q. Tang, F. Wang, and Z. Luo. Monocular fisheyedepth estimation for automated valet parking: Dataset, baseline and deep optimizers. In 2022IEEE 25th International Conference on Intelligent Transportation Systems (ITSC) , pages 01–07. IEEE, 2022.[3] T. Laidlow, J. Czarnowski, and S. Leutenegger. Deepfusion: Real-time dense 3d reconstructionfor monocular slam using single-view depth and gradient predictions. In 2019 InternationalConference on Robotics and Automation (ICRA) , pages 4068–4074. IEEE, 2019.[4] F. Wimbauer, N. Yang, L. V on Stumberg, N. Zeller, and D. Cremers. Monorec: Semi-supervised dense reconstruction in dynamic environments from a single moving camera. InIEEE Conf. on Computer Vision and Pattern Recognition (CVPR) , pages 6112–6122, 2021.[5] F. El Jamiy and R. Marsh. Survey on depth perception in head mounted displays: distanceestimation in virtual reality, augmented reality, and mixed reality. IET Image Processing , 13(5):707–712, 2019.[6] R. Du, E. Turner, M. Dzitsiuk, L. Prasso, I. Duarte, J. Dourgarian, J. Afonso, J. Pascoal,J. Gladstone, N. Cruces, et al. Depthlab: Real-time 3d interaction with depth maps for mobileaugmented reality. In Proceedings of the 33rd Annual ACM Symposium on User InterfaceSoftware and Technology (UIST) , pages 829–843, 2020.[7] M. Rey-Area, M. Yuan, and C. Richardt. 360monodepth: High-resolution 360deg monoculardepth estimation. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) , pages3762–3772, 2022.[8] V . Guizilini, R. Ambrus ,, D. Chen, S. Zakharov, and A. Gaidon. Multi-frame self-superviseddepth with transformers. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) ,pages 160–170, 2022.[9] W. Yuan, X. Gu, Z. Dai, S. Zhu, and P. Tan. Newcrfs: Neural window fully-connected crfsfor monocular depth estimation. In IEEE Conf. 
XsWGVbPfB4Z | 4D-Former: Multimodal 4D Panoptic SegmentationAli Athar1,3†∗Enxu Li1,2∗Sergio Casas1,2Raquel Urtasun1,21Waabi2University of Toronto3RWTH Aachen University{aathar, tli, sergio, urtasun }@waabi.aiAbstract: 4D panoptic segmentation is a challenging but practically useful taskthat requires every point in a LiDAR point-cloud sequence to be assigned a se-mantic class label, and individual objects to be segmented and tracked over time.Existing approaches utilize only LiDAR inputs which convey limited informationin regions with point sparsity. This problem can, however, be mitigated by uti-lizing RGB camera images which offer appearance-based information that canreinforce the geometry-based LiDAR features. Motivated by this, we propose4D-Former: a novel method for 4D panoptic segmentation which leverages bothLiDAR and image modalities, and predicts semantic masks as well as temporallyconsistent object masks for the input point-cloud sequence. We encode semanticclasses and objects using a set of concise queries which absorb feature informa-tion from both data modalities. Additionally, we propose a learned mechanismto associate object tracks over time which reasons over both appearance and spa-tial location. We apply 4D-Former to the nuScenes and SemanticKITTI datasetswhere it achieves state-of-the-art results. For more information, visit the projectwebsite: https://waabi.ai/4dformer .Keywords: Panoptic Segmentation, Sensor Fusion, Temporal Reasoning, Au-tonomous Driving1 IntroductionPerception systems employed in self-driving vehicles (SDVs) aim to understand the scene both spa-tially and temporally. Recently, 4D panoptic segmentation has emerged as an important task whichinvolves assigning a semantic label to each observation, as well as an instance ID representing eachunique object consistently over time, thus combining semantic segmentation, instance segmentationand object tracking into a single, comprehensive task. Potential applications of this task includebuilding semantic maps, auto-labelling object trajectories, and onboard perception. The task is,however, challenging due to the sparsity of the point-cloud observations, and the computationalcomplexity of 4D spatio-temporal reasoning.Traditionally, researchers have tackled the constituent tasks in isolation, i.e., segmenting classes[1, 2, 3, 4], identifying individual objects [5, 6], and tracking them over time [7, 8]. However,combining multiple networks into a single perception system makes it error-prone, potentially slow,and cumbersome to train. Recently, end-to-end approaches [9, 10, 11] for 4D panoptic segmentationhave emerged, but they utilize only LiDAR data which provides accurate 3D geometry, but is sparseat range and lacks visual appearance information that might be important to disambiguate certainclasses (e.g., a pedestrian might look like a pole at range). Nonetheless, combining LiDAR andcamera data effectively and efficiently is non-trivial as the observations are very different in nature.In this paper, we propose 4D-Former, a novel approach for 4D panoptic segmentation that effec-tively fuses information from LiDAR and camera data to output high quality semantic segmentationlabels as well as temporally consistent object masks for the input point cloud sequence. To thebest of our knowledge, this is the first work that explores multi-sensor fusion for 4D panoptic pointcloud segmentation. 
Towards this goal, we propose a novel transformer-based architecture that∗Indicates equal contribution.†Work done while an intern at Waabi.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.fuses features from both modalities by efficiently encoding object instances and semantic classes asconcise queries . Moreover, we propose a learned tracking framework that maintains a history ofpreviously observed object tracks, allowing us to overcome occlusions without hand-crafted heuris-tics. This gives us an elegant way to reason in space and time about all the tasks that constitute 4Dpanoptic segmentation. We demonstrate the effectiveness of 4D-Former on both nuScenes [12] andSemanticKITTI [13] benchmarks and show that we significantly outperform the state-of-the-art.2 Related Work3D Panoptic Segmentation: This task combines semantic and instance segmentation, but doesnot require temporally consistent object tracks. Current approaches often utilize a multi-brancharchitecture to independently predict semantic and instance labels. A backbone network is usedto extract features from the LiDAR point cloud with various representations e.g. points [14], vox-els [1, 3], 2D range views [15, 16], or birds-eye views [17]. Subsequently, the network branchesinto two paths to generate semantic and instance segmentation predictions. Typically, instance pre-dictions are obtained through deterministic [18, 19, 20] or learnable clustering [6, 21], proposalgeneration [22], or graph-based methods [23, 24]. These methods are not optimized end-to-end.Several recent work [25, 26, 27] extends the image-level approach from Cheng et al. [28] to per-form panoptic segmentation in the LiDAR domain in an end-to-end fashion. We adopt a similarapproach to predict semantic and instance masks from learned queries, however, our queries attendto multi-modal features whereas the former utilizes only LiDAR inputs.LiDAR Tracking: This task involves predicting temporally consistent bounding-boxes for theobjects in the input LiDAR sequence. We classify existing approaches into two main groups:tracking-by-detection and end-to-end methods. The tracking-by-detection paradigm [7, 8, 29] hasbeen widely researched, and generally consists of a detection framework followed by a trackingmechanism. Since LiDAR point clouds typically lack appearance information but offer more spatialand geometric cues, existing approaches usually rely on motion cues for tracking ( e.g. Kalman Fil-ters [30], Hungarian matching [31] or Greedy Algorithm [8] for association). Recently, end-to-endframeworks [32] have also emerged where a single network performs per-frame detection and tem-poral association. In contrast to these, 4D-Former utilizes both LiDAR and image modalities, andperforms point-level instance tracking and semantic segmentation with a single unified framework.4D Panoptic Segmentation: This is the task we tackle in our work, and it involves extending3D panoptic segmentation to include temporally consistent instance segmentation throughout theinput sequence. Most existing methods [9, 11, 33] employ a sliding-window approach which tracksinstances within a short clip of upto 5 frames. 4D-PLS [9] models object tracklets as Gaussian dis-tributions and segments them by clustering per-point spatio-temporal embeddings over the 4D inputvolume. 4D-StOP [11] proposes a center-based voting technique to generate track proposals whichare then aggregated using learned geometric features. 
These methods associate instances across clips using mask IoU in overlapping frames. CA-Net [10], on the other hand, learns contrastive embeddings for objects to associate per-frame predictions over time. Recently, concurrent work from Zhu et al. [34] develops rotation-equivariant networks which provide more robust feature learning for 4D panoptic segmentation. Different to these, 4D-Former utilizes multimodal inputs, and adopts a transformer-based architecture which models semantic classes and objects as concise queries.
LiDAR and Camera Fusion: Multimodal approaches have recently become popular for object detection and semantic segmentation. Existing methods can be grouped into two categories: (1) point-level fusion methods, which typically involve appending camera features to each LiDAR point [35, 36, 37] or fusing the two modalities at the feature level [38, 39, 40]; and (2) proposal-level fusion, where object detection approaches [41, 42] employ transformer-based architectures which represent objects as queries and then fuse them with camera features. Similarly, Li et al. [43] perform semantic segmentation by modeling semantic classes as queries which attend to scene features from both modalities. 4D-Former, on the other hand, tackles 4D panoptic segmentation, whereas the aforementioned methods perform single-frame semantic segmentation or object detection.
Figure 1: 4D-Former inference at iteration i. Note that tracking history from i−1 is used in the Tracklet Association Module.
3 Multimodal 4D Panoptic Segmentation
In this paper we propose 4D-Former to tackle 4D panoptic segmentation. The task consists of labelling each 4D LiDAR point with a semantic class and a track ID that specifies a consistent instance over time. Camera images provide rich additional context to help make more accurate predictions, particularly in regions where LiDAR is sparse. To this end, we propose a novel transformer-based architecture that effectively combines sparse geometric features from LiDAR with dense contextual features from cameras. In particular, it models object instances and semantic classes using concise, learnable queries, followed by iterative refinement via self-attention and cross-attention to LiDAR and camera image features. Using these queries, our method is able to attend only to regions of the sensor data that are relevant, making the multimodal fusion of multiple cameras and LiDAR tractable. In order to handle sequences of arbitrary length as well as continuous streams of data (e.g., in the onboard setting), 4D-Former operates in a sliding-window fashion, as illustrated in Fig. 1. At each iteration, 4D-Former takes as input the current LiDAR scan at time t, the past scan at t−1, and the camera images at time t. It then generates semantic and tracklet predictions for these two LiDAR scans.
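As a rough illustration of this sliding-window scheme, the sketch below shows the per-iteration flow in Python. The `run_clip` and `associate` interfaces are hypothetical names introduced here for clarity; they are not part of any released 4D-Former code.

```python
def run_sequence(lidar_scans, camera_images, model):
    """Process a LiDAR sequence with overlapping two-frame clips."""
    track_memory = []   # history of previously observed tracklets (cf. Sec. 3.3)
    outputs = []
    for t in range(1, len(lidar_scans)):
        clip = (lidar_scans[t - 1], lidar_scans[t])           # scans at t-1 and t
        semantics, tracklets = model.run_clip(clip, camera_images[t])
        # Match the new tracklets against the memory bank so that the same
        # physical object keeps the same track ID across iterations.
        track_ids, track_memory = model.associate(tracklets, track_memory)
        outputs.append((semantics, track_ids))
    return outputs
```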
To make the tracklet predictions consistent over the entire input sequence, we propose a novel Tracklet Association Module (TAM) which maintains a history of previously observed object tracks, and associates them based on a learning-based matching objective.
3.1 Multimodal Encoder
Our input encoder extracts image features from the camera images, and point-level and voxel-level features by fusing information from the LiDAR point clouds and camera features. These features are then utilized in our transformer-based panoptic decoder presented in Sec. 3.2.
Image feature extraction: Assume the driving scene is captured by a set of images of size H×W captured from multiple cameras mounted on the ego-vehicle. We employ a ResNet-50 [44] backbone, followed by a Feature Pyramid Network (FPN) [45], to produce a set of multi-scale, D-dimensional feature maps {I_s | s = 4, 8} for each of the images, where I_s ∈ R^{H/s×W/s×D}.
Figure 2: Overview of point and voxel feature extraction. p2v: point-to-voxel; v2p: voxel-to-point.
Point/voxel feature extraction: The network architecture is inspired by [46] and consists of a point-branch and a voxel-branch. The point-branch learns point-level embeddings, thus preserving fine details, whereas the voxel-branch performs contextual reasoning using 3D sparse convolutional blocks [47] and provides multi-scale feature maps. Each of the N points in the input LiDAR point-cloud is represented as an 8-D feature which includes the xyz coordinates, relative timestamp, intensity, and 3D relative offsets to the nearest voxel center. An MLP is applied to obtain initial point embeddings which are then averaged over a voxel to obtain voxel features. These voxel features are processed through four residual blocks with 3D sparse convolutions, each of which downsamples the feature map by 2×. Four additional residual blocks are then used to upsample the sparse feature maps back to the original resolution, thus yielding a set of D-dimensional voxel features at various strides V = {V_i ∈ R^{N_i×D} | i = 1, 2, 4, 8}, where N_i denotes the number of non-empty voxels at the i-th stride. At various stages in this network, point-level features are updated with image features via point-level fusion (as explained in the next paragraph). Moreover, we exploit point-to-voxel and voxel-to-point operations to fuse information between the point and voxel branches at different scales, as illustrated in Fig. 2. We denote the final point-level features as Z ∈ R^{N×D}.
Point-level fusion: We enrich the geometry-based LiDAR features with appearance-based image features by performing a fine-grained, point-level feature fusion. This is done by taking the point features Z_lidar ∈ R^{N×D} at intermediate stages inside the LiDAR backbone, and projecting their corresponding (x, y, z) coordinates to the highest-resolution image feature map I_4. Note that this can be done since the image and LiDAR sensors are calibrated, which is typically the case in modern self-driving vehicles. This yields a set of image features Z_img ∈ R^{M×D}, where M ≤ N since generally not all LiDAR points have valid image projections. We use Z⁺_lidar ∈ R^{M×D} to denote the subset of features in Z_lidar which have valid image projections, and Z⁻_lidar ∈ R^{(N−M)×D} for the rest. We then perform point-level fusion between image and LiDAR features as follows:
Z⁺_lidar ← MLP_fusion([Z⁺_lidar, Z_img]),   Z⁻_lidar ← MLP_pseudo(Z⁻_lidar)    (1)
where both MLPs contain 3 layers, and [·, ·] denotes channel-wise concatenation.
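A minimal PyTorch sketch of the point-level fusion step in Eq. 1 is given below. Tensor shapes follow the notation above; the module and variable names (and the 3-layer MLP with ReLU) are our own illustrative choices rather than the released implementation.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, depth=3):
    """Simple 3-layer MLP used as a stand-in for MLP_fusion / MLP_pseudo."""
    layers = []
    for i in range(depth):
        layers.append(nn.Linear(d_in if i == 0 else d_out, d_out))
        if i < depth - 1:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class PointLevelFusion(nn.Module):
    """Toy version of Eq. 1: fuse projectable points with their image
    features, and map the remaining points to pseudo-fused embeddings."""

    def __init__(self, d=128):
        super().__init__()
        self.mlp_fusion = mlp(2 * d, d)   # input: [Z_lidar, Z_img] concatenated
        self.mlp_pseudo = mlp(d, d)       # input: Z_lidar only

    def forward(self, z_lidar, z_img, projectable):
        # z_lidar: [N, D], z_img: [M, D], projectable: boolean mask over the N points
        z = torch.empty_like(z_lidar)
        z[projectable] = self.mlp_fusion(
            torch.cat([z_lidar[projectable], z_img], dim=-1))
        z[~projectable] = self.mlp_pseudo(z_lidar[~projectable])
        return z

# N LiDAR points, of which the first M project into a camera image.
n, m, d = 1000, 700, 128
mask = torch.zeros(n, dtype=torch.bool)
mask[:m] = True
fused = PointLevelFusion(d)(torch.randn(n, d), torch.randn(m, d), mask)
```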
Intuitively, MLP_fusion performs pairwise fusion for corresponding image and LiDAR features. On the other hand, MLP_pseudo updates the non-projectable LiDAR point features to resemble fused embeddings.
3.2 Transformer-based Panoptic Decoder
We propose a novel decoder which predicts per-point semantic and object track masks with a unified architecture. This stands in contrast with existing methods [9, 11, 48, 6] which generally have separate heads for each output. Our architecture is inspired by image-level object detection/segmentation methods [49, 28], but the key difference is that our decoder performs multimodal fusion.
Figure 3: LiDAR to image projection.
We initialize a set of queries Q ∈ R^{T×D} randomly at the start of training, where the number of queries (T) is assumed to be an upper bound on the number of objects in a given scene. The idea is to use these queries to segment a varying number of objects as well as the non-instantiable ‘stuff’ classes in the scene. The queries are input to a series of ‘fusion blocks’. Each block is composed of multiple layers where the queries Q are updated by: (1) cross-attending to the voxel features V_i ∈ R^{N_i×C} at a given stride, (2) cross-attending to the set of image features F_i ∈ R^{M_i×C} which are obtained by projecting the (x, y, z) coordinates for the voxel features V_i into each of the multi-scale image feature maps {I_4, I_8} (see Fig. 3 for an illustration; note that M_i ≤ 2N_i, since each voxel feature is projected into 2 image feature maps but not all LiDAR voxel features have valid image projections), and (3) self-attending to each other twice intermittently, and also passing through 2× Feedforward Networks (FFN). The architecture of these fusion blocks is illustrated in Fig. 4.
Figure 4: Fusion block architecture.
These queries distill information about the objects and semantic classes present in the scene. To this end, self-attention enables the queries to exchange information with one another, and cross-attention allows them to learn global context by attending to the features from both modalities across the entire scene. This mitigates the need for dense feature interaction between the two modalities which, if done naively, would be computationally intractable since N_i and M_i are on the order of 10^4. Our fusion block avoids this by leveraging a set of concise queries which attend to the scene features from both modalities in a sequential fashion, where the computational footprint of each operation is manageable since T ≪ N_i and T ≪ M_i.
Our transformer-based panoptic decoder is composed of four such fusion blocks, each involving cross-attention to voxel features at different strides, and their corresponding image features. We proceed in a coarse-to-fine manner where the inputs to the fusion blocks are ordered as: (V_8, F_8), (V_4, F_4), (V_2, F_2), (V_1, F_1). Note that this query-level fusion complements the fine-grained, point-level fusion in the LiDAR backbone explained in Sec. 3.1. The updated queries output by the decoder, denoted by Q′ ∈ R^{T×D}, are used to obtain logits for the object tracklet and semantic masks, where each logit represents the log probability of a Bernoulli distribution capturing whether the query represents a specific instance or class. Per-point object tracklet masks M_p are calculated as the dot-product of the updated queries Q′ with the point-level features Z ∈ R^{N×D}:
M_p ← Z · Q′ᵀ ∈ R^{N×T}    (2)
Semantic (per-class) confidence scores are obtained by passing Q′ through a linear layer. This layer has a fan-out of 1 + C to predict a classification score for each of the C semantic classes, and an additional ‘no-object’ score which is used during inference to detect inactive queries that represent neither an object nor a ‘stuff’ class. We use the semantic prediction to decide whether the query mask belongs to an object track, or to one of the ‘stuff’ classes.
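To make these prediction heads concrete, the short sketch below computes per-point mask logits (Eq. 2) and per-query class scores from the updated queries. Shapes follow the text; the code, the sigmoid threshold, and the tensor names are our own illustrative simplification, not the paper's exact inference procedure.

```python
import torch
import torch.nn as nn

N, T, D, C = 50_000, 100, 128, 16   # points, queries, feature dim, semantic classes

point_feats = torch.randn(N, D)     # Z:  per-point features from the encoder
queries = torch.randn(T, D)         # Q': updated queries from the decoder

# Eq. 2: per-point mask logits, one column per query.
mask_logits = point_feats @ queries.T            # [N, T]

# Classification head with fan-out C + 1 (C classes plus a 'no-object' score).
classifier = nn.Linear(D, C + 1)
class_logits = classifier(queries)               # [T, C + 1]

# Illustrative decoding: each query yields a point mask and a semantic label;
# queries whose argmax is the 'no-object' index would be discarded.
point_masks = mask_logits.sigmoid() > 0.5        # [N, T] boolean masks
query_labels = class_logits.argmax(dim=-1)       # [T]
```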
Soft-masked Cross-attention: Inspired by [50, 28], we employ soft-masked cross-attention to improve convergence. Given a set of queries Q, the output Q_x-attn of cross-attention is computed as:
Q_x-attn ← softmax( (Q(K + E)ᵀ + α M_vᵀ) / √D ) V    (3)
Here, K ∈ R^{{N_i,M_i}×D} and V ∈ R^{{N_i,M_i}×D} denote the keys and values (derived as linear projections from V_i or F_i), respectively, E ∈ R^{{N_i,M_i}×D} denotes positional encodings (explained in the next paragraph), α is a scalar weighting factor, and M_vᵀ is the voxel-level query mask computed by applying Eq. 2 to Q, followed by voxelization and downsampling to the required stride. Intuitively, the term “α M_vᵀ” amplifies the correspondence between queries and voxel/image features based on the mask prediction from the previous layer. This makes the queries focus on their respective object/class targets.
Positional Encodings: We impart the cross-attention operation with 3D coordinate information of the features in V_i by using positional encodings (E in Eq. 3). These contain two components: (1) Fourier encodings [51] of the (x, y, z) coordinates, and (2) a depth component which is obtained by applying sine and cosine activations at various frequencies to the Euclidean distance of each voxel feature from the LiDAR sensor. Although the depth can theoretically be inferred from the xyz coordinates, we find it beneficial to explicitly encode it. Intuitively, in a multi-modal setup the depth provides a useful cue for how much the model should rely on features from different modalities, e.g., for far-away points the image features are more informative as the LiDAR is very sparse. Both components have D/2 dimensions and are concatenated to obtain the final positional encoding E ∈ R^{{N_i,M_i}×D}. For the image features F_i, we use the encoding of the corresponding voxel.
3.3 Tracklet Association Module (TAM)
The 4D panoptic task requires object track IDs to be consistent over time. Since 4D-Former processes overlapping clips, one way to achieve temporal consistency is to associate tracklet masks across clips based on their respective mask IoUs, as done by existing works [9, 11, 33]. However, this approach cannot resolve even brief occlusions which frequently arise due to inaccurate mask predictions and/or objects moving out of view. To mitigate this shortcoming, we propose a learnable Tracklet Association Module (TAM) which can associate tracklets across longer frame gaps and reason over the objects’ appearances and spatial locations.
The TAM is implemented as an MLP which predicts an association score for a given pair of tracklets. The input to our TAM is constructed by concatenating the following attributes of the input tracklet pair along the feature dimension: (1) their (x, y, z) mask centroid coordinates, (2) their respective tracklet queries, (3) the frame gap between them, and (4) their mask IoU. We refer the readers to the supplementary material for an illustration. Intuitively, the tracklet queries encode object appearances, whereas the frame gap, mask centroid and mask IoU provide strong spatial cues.
Figure 5: Qualitative results on nuScenes sequence 0798 with both LiDAR and image views.
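A compact sketch of such a pairwise scoring MLP is shown below. For brevity it feeds the raw 3-D centroids directly and assumes a precomputed 64-D sine/cosine frame-gap encoding; the hidden sizes and names are placeholders of our own, not the exact TAM configuration.

```python
import torch
import torch.nn as nn

class TrackletAssociationScorer(nn.Module):
    """Toy pairwise scorer: concatenated tracklet attributes -> scalar score."""

    def __init__(self, d_query=128, d_gap=64):
        super().__init__()
        d_in = 2 * d_query + 2 * 3 + d_gap + 1   # queries, centroids, frame-gap enc., IoU
        self.net = nn.Sequential(
            nn.Linear(d_in, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, q_a, q_b, c_a, c_b, gap_enc, iou):
        x = torch.cat([q_a, q_b, c_a, c_b, gap_enc, iou], dim=-1)
        return self.net(x).squeeze(-1)   # higher score -> more likely the same object

scorer = TrackletAssociationScorer()
score = scorer(torch.randn(8, 128), torch.randn(8, 128),   # tracklet queries
               torch.randn(8, 3), torch.randn(8, 3),       # mask centroids
               torch.randn(8, 64),                          # frame-gap encoding
               torch.rand(8, 1))                            # mask IoU in overlapping frames
```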
Our TAM contains 4 fully connected layers and produces a scalar association score as the final output. The mask IoU is set to zero for tracklet pairs with no overlapping frames. Furthermore, the mask centroid coordinates and frame gap are expanded to 64-D each by applying sine and cosine activations at various frequencies, similar to the depth encodings discussed in Sec. 3.2.
During inference, we maintain a memory bank containing object tracks. Each track is represented by the tracklet query, mask centroid and frame number for its most recent occurrence. This memory bank is maintained for the past T_hist frames. At frame t, we compute association scores for all pairwise combinations of the tracks in the memory bank with the predicted tracklets in the current clip. Subsequently, based on the association score, each tracklet in the current clip is either associated with a previous track ID, or is used to start a new track ID.
3.4 Learning
We employ a two-stage training approach for the proposed architecture. During the first stage, we exclude the TAM and optimize the network for a single input clip. The predicted masks are matched to ground-truth objects and stuff class segments with bi-partite matching based on their mask IoUs and classification scores. Note that during training, each stuff class present in the scene is treated as an object track. Subsequently, the masks are supervised with a combination of binary cross entropy and DICE losses, denoted by L_ce and L_dice respectively, and the classification output is supervised with a cross-entropy loss L_cls. These losses are computed for the output of each of the B fusion blocks in the Panoptic Decoder, followed by summation. Lastly, the point-level pseudo-fusion output discussed in Sec. 3.1 is supervised by the following L2 regression loss L_pf:
L_pf ← ‖ MLP_fusion([Z⁺_lidar, Z_img]) − MLP_pseudo(Z⁺_lidar) ‖_2    (4)
The final loss L_total is computed by taking the sum of the individual losses with empirically chosen weights. The superscript in L^b_(·) denotes the loss for the output of the b-th fusion block in the decoder.
L_total ← L_pf + Σ_{b=1}^{B} ( 5 L^b_ce + 2 L^b_dice + 2 L^b_cls )    (5)
The second stage involves optimizing the TAM with the remaining network frozen. We generate tracklet predictions for multiple clips separated by different frame gaps, and then optimize the TAM using all pairwise combinations of tracklets in the given clip set. The predicted association scores are supervised with a binary cross-entropy loss.
4 Experiments
Implementation Details: We process clips containing 2 frames each, with voxel size of 0.1 m, and the feature dimensionality D = 128. The images are resized in an aspect-ratio preserving manner such that the lower dimension is 480px.
Method | Validation (PAT / LSTQ / PTQ / PQ / TQ / Sassoc) | Test (PAT / LSTQ / PTQ / PQ / TQ / Sassoc)
PanopticTrackNet [53] | 44.0 / 43.4 / 50.9 / 51.6 / 38.5 / 32.3 | 45.7 / 44.8 / 51.6 / 51.7 / 40.9 / 36.7
4D-PLS [9] | 59.2 / 56.1 / 55.5 / 56.3 / 62.3 / 51.4 | 60.5 / 57.8 / 55.6 / 56.6 / 64.9 / 53.6
Cylinder3D++ [1] + OGR3MOT [54] | - / - / - / - / - / - | 62.7 / 61.7 / 61.3 / 61.6 / 63.8 / 59.4
(AF)2-S3Net [3] + OGR3MOT [54] | - / - / - / - / - / - | 62.9 / 62.4 / 60.9 / 61.3 / 64.5 / 59.9
EfficientLPS [22] + KF [30] | 64.6 / 62.0 / 60.6 / 62.0 / 67.6 / 58.6 | 67.1 / 63.7 / 62.3 / 63.6 / 71.2 / 60.2
EfficientLPT [33] | - / - / - / - / - / - | 70.4 / 66.0 / 67.0 / 67.9 / 71.2 / -
4D-Former | 78.3 / 76.4 / 75.2 / 77.3 / 79.4 / 73.9 | 79.4 / 78.2 / 75.5 / 78.0 / 75.5 / 76.1
Table 1: Benchmark results for nuScenes validation and test set.
For training time data augmentation, we randomlysubsample the LiDAR pointcloud to 105points, and also apply random rotation and point jitter.The images undergo SSD-based color augmentation [52], and are randomly cropped to 70% of theiroriginal size. In the first stage, we train for 80 epochs with AdamW optimizer with batch size 8across 8x Nvidia T4 GPUs (14GB of usable VRAM). The learning rate is set to 3×10−3for theLiDAR feature extractor, and 10−4for the rest of the network. The rate is decayed in steps of 0.1after 30 and 60 epochs. For the second stage, we train the TAM for 2 epochs on a single GPU withlearning rate 10−4. During inference, we associate tracklets over temporal history Thist= 4.Datasets: To verify the efficacy of our approach, we apply it to two popular benchmarks:nuScenes [12] and SemanticKITTI [13]. nuScenes [12] contains 1000 sequences, each 20s long andannotated at 2Hz. The scenes are captured with a 32-beam LiDAR sensor and 6 cameras mounted atdifferent angles around the ego vehicle. The training set contains 600 sequences, whereas validationand test each contain 150. The primary evaluation metric is Panoptic Tracking (PAT). Compared tonuScenes, SemanticKITTI [9] contains fewer but longer sequences, and uses LiDAR Segmentationand Tracking Quality (LSTQ) as the primary evaluation metric. One caveat is that image input isonly available from a single, forward-facing camera. As a result, only a small fraction ( ∼15%)of LiDAR points are visible in the camera image. For this reason, following existing multimodalmethods [43], we evaluate only those points which have valid camera image projections.Method LSTQ Sassoc Scls IoUstIoUth4D-PLS [9] 65.4 72.3 59.1 62.6 61.84D-StOP [11] 71.0 82.5 61.0 63.0 66.0Ours 73.9 80.9 67.6 64.9 71.3Table 2: SemanticKITTI validation results.Comparison to state-of-the-art: Results onnuScenes are shown in Tab. 1 and visualizedin Fig. 5. We see that 4D-Former outperformsexisting methods across all metrics. In termsPAT, 4D-Former achieves 78.3 and 79.4 on theval and test sets, respectively. This is signifi-cantly better than the 70.4 (+9.0) achieved byEfficientLPT [33] on the test set and the 64.6(+13.7) achieved by EfficientLPS [22]+KF on val. We attribute this to 4D-Former’s ability to reasonover multimodal inputs and segment both semantic classes and object tracks in an end-to-end learnedfashion. The results on SemanticKITTI validation set are reported in Tab. 2. For a fair compari-son, we also evaluated existing top-performing methods on the same sub-set of camera-projectablepoints. We see that 4D-Former achieves 73.9 LSTQ which is higher than the 71.0 (+2.8) achievedby 4D-StOP and also the 65.4 (+8.4) achieved by 4D-PLS [9]. Aside from S assoc, 4D-Former is alsobetter for other metrics.Setting PAT LSTQ PTQ PQMask IoU 76.3 74.6 73.9 77.3TAM 78.3 76.4 75.2 77.3Table 3: Ablation results for temporal as-sociation on nuScenes validation set.Effect of Tracklet Association Module: The effec-tiveness of the TAM is evident from Tab. 3 where wecompare it to a baseline which uses only use mask IoUin the overlapping frame for association. This resultsin the PAT dropping from 78.3 to 76.3. This highlightsthe importance of using a learned temporal associationmechanism with both spatial and appearance cues.Next, we ablate other aspects of our method in Tab. 4.For these experiments, we subsample the training set by using only every fourth frame to save timeand resources. 
The final model is also re-trained with this setting for a fair comparison (row 6).7#PFCAFDEPCPAT LSTQ PTQ PQ1. ✓ ✓ 59.7 64.3 60.8 63.62.✓ ✓ ✓ 61.8 65.2 64.3 67.63. ✓ ✓ ✓ 63.2 66.1 63.1 66.34.✓ ✓ ✓ 64.1 66.4 65.7 69.15.✓ ✓ ✓ 64.6 66.7 66.0 69.46.✓ ✓ ✓ ✓ 66.1 67.4 66.2 69.9Table 4: Ablation results on nuScenes val set. PF:Point Fusion, CAF: Cross-attention Fusion, DE:Depth encodings, PC: Pseudo-camera feature loss.Effect of Image Fusion: Row 1 is a LiDAR-only model which does not utilize image inputin any way. This achieves 59.7 PAT which issignificantly worse than the final model’s 66.1.This shows that using image information yieldssignificant performance improvements. Row 2utilizes point-level fusion (Sec. 3.1), but doesnot apply cross-attention to image features inthe decoder (Sec. 3.2). This setting achieves61.8 PAT which is better than the LiDAR-onlysetting (59.7), but still much worse than the fi-nal model (66.1). Row 3 tests the opposite con-figuration: the decoder includes cross-attentionto image features, but no point-level fusion is applied. This yields 63.2 PAT which is slightly higherthan row 2 (61.8) but worse than the final setting (66.1). We conclude that while both types of fusionare beneficial in a standalone setting, combining them yields a larger improvement.Effect of Depth Encodings: As discussed in Sec. 3.2, our positional encodings contain a depthcomponent which is calculated by applying sine/cosine activations with multiple frequencies to thedepth value of each voxel feature. Row 4 omits this component and instead only uses Fourierencodings based on the xyz coordinates. This setting yields 64.1 PAT which is lower than the fullmodel (66.1), thus showing that explicitly encoding depth is beneficial.Effect of Pseudo-camera Feature Loss: Recall from Sec. 3.4 that we supervise pseudo-camerafeatures for point fusion with an L 2regression loss. Row 5 shows that without this loss the PATreduces from 66.1 to 64.6. Other metrics also reduce, though to a lesser extent.5 LimitationsOur method performs less effectively on SemanticKITTI compared to nuScenes, particularly incrowded scenes with several objects. In addition to lower camera image coverage, this is due to thelimited number of moving actors in the SemanticKITTI training set which, on average, contains only0.63 pedestrians and 0.18 riders per frame. Existing LiDAR-only methods [2, 20, 55] overcome thisby using instance cutmix augmentation which involves randomly inserting LiDAR scan cutouts ofactors into training scenes. Doing the same in a multimodal setting is, however, non-trivial sinceit would require the camera images to also be augmented accordingly. Consequently, a promisingfuture direction is to develop more effective augmentation techniques for multimodal training.Our tracking quality is generally good for vehicles, but is comparatively worse for smaller objectclasses e.g.bicycle ,pedestrian (see class-wise results in the supplementary material), and althoughthe TAM is more effective than mask IoU, the improvement plateaus at Thist= 4 (i.e. 2s into thepast). Another area for future work thus involves improving the tracking mechanism to handle longertime horizons and challenging object classes.6 ConclusionWe proposed a novel, online approach for 4D panoptic segmentation which leverages both LiDARscans and RGB images. We employ a transformer-based Panoptic Decoder which segments semanticclasses and object tracklets by attending to scenes features from both modalities. 
Furthermore, ourTracklet Association Module (TAM) accurately associates tracklets over time in a learned fashionby reasoning over spatial and appearance cues. 4D-Former achieves state-of-the-art results on thenuScenes and SemanticKITTI benchmarks, thus demonstrating its efficacy on large-scale, real-worlddata. We hope our work will spur advancement in SDV perception systems, and encourage otherresearchers to develop multi-sensor methods for further improvement.8AcknowledgmentsWe thank the anonymous reviewers for the insightful comments and suggestions. We would alsolike to thank the Waabi team for their valuable support.References[1] X. Zhu, H. Zhou, T. Wang, F. Hong, Y . Ma, W. Li, H. Li, and D. Lin. Cylindrical and asym-metrical 3d convolution networks for lidar segmentation. In CVPR , 2021.[2] J. Xu, R. Zhang, J. Dou, Y . Zhu, J. Sun, and S. Pu. Rpvnet: A deep and efficient range-point-voxel fusion network for lidar point cloud segmentation. In ICCV , 2021.[3] R. Cheng, R. Razani, E. Taghavi, E. Li, and B. Liu. af2-s3net: Attentive feature fusion withadaptive feature selection for sparse semantic segmentation network. In CVPR , 2021.[4] E. Li, S. Casas, and R. Urtasun. Memoryseg: Online lidar semantic segmentation with a latentmemory. In ICCV , 2023.[5] Y . Zhao, X. Zhang, and X. Huang. A divide-and-merge point cloud clustering algorithm forlidar panoptic segmentation. In ICRA , 2022.[6] E. Li, R. Razani, Y . Xu, and B. Liu. Smac-seg: Lidar panoptic segmentation via sparse multi-directional attention clustering. In ICRA , 2022.[7] X. Weng, J. Wang, D. Held, and K. Kitani. 3d multi-object tracking: A baseline and newevaluation metrics. In IROS , 2020.[8] H.-k. Chiu, J. Li, R. Ambrus ̧, and J. Bohg. Probabilistic 3d multi-modal, multi-object trackingfor autonomous driving. In ICRA , 2021.[9] M. Aygun, A. Osep, M. Weber, M. Maximov, C. Stachniss, J. Behley, and L. Leal-Taix ́e. 4dpanoptic lidar segmentation. In CVPR , 2021.[10] R. Marcuzzi, L. Nunes, L. Wiesmann, I. Vizzo, J. Behley, and C. Stachniss. Contrastiveinstance association for 4d panoptic segmentation using sequences of 3d lidar scans. IEEERobotics and Automation Letters , 7(2):1550–1557, 2022.[11] L. Kreuzberg, I. E. Zulfikar, S. Mahadevan, F. Engelmann, and B. Leibe. 4d-stop: Panopticsegmentation of 4d lidar using spatio-temporal object proposal generation and aggregation. InECCV Workshops , 2023.[12] H. Caesar, V . Bankiti, A. H. Lang, S. V ora, V . E. Liong, Q. Xu, A. Krishnan, Y . Pan, G. Baldan,and O. Beijbom. nuscenes: A multimodal dataset for autonomous driving. In CVPR , 2020.[13] J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall. Se-mantickitti: A dataset for semantic scene understanding of lidar sequences. In ICCV , 2019.[14] H. Thomas, C. R. Qi, J.-E. Deschaud, B. Marcotegui, F. Goulette, and L. J. Guibas. Kpconv:Flexible and deformable convolution for point clouds. In ICCV , 2019.[15] A. Milioto, I. Vizzo, J. Behley, and C. Stachniss. Rangenet++: Fast and accurate lidar semanticsegmentation. In IROS , 2019.[16] C. Xu, B. Wu, Z. Wang, W. Zhan, P. Vajda, K. Keutzer, and M. Tomizuka. Squeezesegv3:Spatially-adaptive convolution for efficient point-cloud segmentation. In ECCV , 2020.[17] Y . Zhang, Z. Zhou, P. David, X. Yue, Z. Xi, B. Gong, and H. Foroosh. Polarnet: An improvedgrid representation for online lidar point clouds semantic segmentation. In CVPR , 2020.9[18] A. Milioto, J. Behley, C. McCool, and C. Stachniss. Lidar panoptic segmentation for au-tonomous driving. 
In IROS , 2020.[19] J. Li, X. He, Y . Wen, Y . Gao, X. Cheng, and D. Zhang. Panoptic-phnet: Towards real-time andhigh-precision lidar panoptic segmentation via clustering pseudo heatmap. In CVPR , 2022.[20] Z. Zhou, Y . Zhang, and H. Foroosh. Panoptic-polarnet: Proposal-free lidar point cloud panopticsegmentation. In CVPR , 2021.[21] S. Gasperini, M.-A. N. Mahani, A. Marcos-Ramiro, N. Navab, and F. Tombari. Panoster: End-to-end panoptic segmentation of lidar point clouds. IEEE Robotics and Automation Letters , 6(2):3216–3223, 2021.[22] K. Sirohi, R. Mohan, D. B ̈uscher, W. Burgard, and A. Valada. Efficientlps: Efficient lidarpanoptic segmentation. In IEEE Transactions on Robotics , 2021.[23] R. Razani, R. Cheng, E. Li, E. Taghavi, Y . Ren, and L. Bingbing. Gp-s3net: Graph-basedpanoptic sparse semantic segmentation network. In ICCV , 2021.[24] E. Li, R. Razani, Y . Xu, and B. Liu. Cpseg: Cluster-free panoptic segmentation of 3d lidarpoint clouds. In ICRA , 2023.[25] R. Marcuzzi, L. Nunes, L. Wiesmann, J. Behley, and C. Stachniss. Mask-based panoptic lidarsegmentation for autonomous driving. IEEE Robotics and Automation Letters , 8(2):1141–1148, 2023.[26] S. Su, J. Xu, H. Wang, Z. Miao, X. Zhan, D. Hao, and X. Li. Pups: Point cloud unified panopticsegmentation. In AAAI , 2023.[27] Z. Xiao, W. Zhang, T. Wang, C. C. Loy, D. Lin, and J. Pang. Position-guided point cloudpanoptic segmentation transformer. In arXiv , 2023.[28] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar. Masked-attention mask trans-former for universal image segmentation. In CVPR , 2022.[29] C. Zheng, X. Yan, J. Gao, W. Zhao, W. Zhang, Z. Li, and S. Cui. Box-aware feature enhance-ment for single object tracking on point clouds. In ICCV , 2021.[30] R. E. Kalman. A new approach to linear filtering and prediction problems. 1960.[31] H. W. Kuhn. The hungarian method for the assignment problem. Naval research logisticsquarterly , 2(1-2):83–97, 1955.[32] C. Luo, X. Yang, and A. Yuille. Exploring simple 3d multi-object tracking for autonomousdriving. In ICCV , 2021.[33] R. Mohan and A. Valada. 7th ai driving olympics: 1st place report for panoptic tracking. arXiv ,2021.[34] M. Zhu, S. Han, H. Cai, S. Borse, M. G. Jadidi, and F. Porikli. 4d panoptic segmentation asinvariant and equivariant field prediction. In arXiv , 2023.[35] T. Yin, X. Zhou, and P. Kr ̈ahenb ̈uhl. Multimodal virtual point 3d detection. NeurIPS , 34:16494–16507, 2021.[36] S. V ora, A. H. Lang, B. Helou, and O. Beijbom. Pointpainting: Sequential fusion for 3d objectdetection. In CVPR , 2020.[37] C. Wang, C. Ma, M. Zhu, and X. Yang. Pointaugmenting: Cross-modal augmentation for 3dobject detection. In CVPR , pages 11794–11803, 2021.10[38] M. Liang, B. Yang, S. Wang, and R. Urtasun. Deep continuous fusion for multi-sensor 3dobject detection. In ECCV , 2018.[39] Z. Zhuang, R. Li, K. Jia, Q. Wang, Y . Li, and M. Tan. Perception-aware multi-sensor fusionfor 3d lidar semantic segmentation. In ICCV , 2021.[40] Y . Li, A. W. Yu, T. Meng, B. Caine, J. Ngiam, D. Peng, J. Shen, Y . Lu, D. Zhou, Q. V . Le, et al.Deepfusion: Lidar-camera deep fusion for multi-modal 3d object detection. In CVPR , 2022.[41] X. Bai, Z. Hu, X. Zhu, Q. Huang, Y . Chen, H. Fu, and C.-L. Tai. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In CVPR , 2022.[42] X. Chen, T. Zhang, Y . Wang, Y . Wang, and H. Zhao. Futr3d: A unified sensor fusion frameworkfor 3d detection. In arXiv , 2022.[43] J. Li, H. Dai, H. Han, and Y . Ding. 
Mseg3d: Multi-modal 3d semantic segmentation forautonomous driving. In arXiv , 2023.[44] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR ,2016.[45] T.-Y . Lin, P. Doll ́ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramidnetworks for object detection. In CVPR , 2017.[46] H. Tang, Z. Liu, S. Zhao, Y . Lin, J. Lin, H. Wang, and S. Han. Searching efficient 3d architec-tures with sparse point-voxel convolution. In ECCV , 2020.[47] H. Tang, Z. Liu, X. Li, Y . Lin, and S. Han. Torchsparse: Efficient point cloud inference engine.InMLSys , 2022.[48] F. Hong, H. Zhou, X. Zhu, H. Li, and Z. Liu. Lidar-based panoptic segmentation via dynamicshifting network. In CVPR , 2021.[49] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-endobject detection with transformers. In ECCV , 2020.[50] A. Athar, J. Luiten, A. Hermans, D. Ramanan, and B. Leibe. Hodor: High-level object de-scriptors for object re-segmentation in video learned from static images. In CVPR , 2022.[51] M. Tancik, P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ra-mamoorthi, J. Barron, and R. Ng. Fourier features let networks learn high frequency functionsin low dimensional domains. In NeurIPS , 2020.[52] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y . Fu, and A. C. Berg. Ssd: Singleshot multibox detector. In ECCV , 2016.[53] J. V . Hurtado, R. Mohan, W. Burgard, and A. Valada. Mopt: Multi-object panoptic tracking.InCVPR-W , 2020.[54] J.-N. Zaech, A. Liniger, D. Dai, M. Danelljan, and L. Van Gool. Learnable online graphrepresentations for 3d multi-object tracking. IEEE Robotics and Automation Letters , 2022.[55] X. Yan, J. Gao, C. Zheng, C. Zheng, R. Zhang, S. Cui, and Z. Li. 2dpass: 2d priors assistedsemantic segmentation on lidar point clouds. In ECCV , 2022.11Supplementary MaterialsA Tracklet Association ModuleWe provide an illustration of the proposed Tracklet Association Module (TAM) in Fig.6. The inputto our TAM is constructed by concatenating the following attributes of the input tracklet pair alongthe feature dimension: (1) their (x, y, z )mask centroid coordinates, (2) their respective trackletqueries, (3) the frame gap between them, and (4) their mask IoU. The frame gap and mask centroidcoordinates are expanded to 64-D each by applying sine/cosine activations with various frequencies.The concatenated set of features is input to a 4-layer MLP which produces a scalar association scorefor the input tracklet pair.t1t2 concatenate Association Score MLP x x-Mask Centroids Queries Frame Gap Mask IoU Figure 6: Illustration of the Tracklet Association Module (TAM).B Detailed Quantitative ResultsIn this section, we first present the 3D panoptic metrics on the two benchmarks for reference andthen provide the detailed class-wise metrics on the two datasets.Specifically, we present the 3D panoptic metrics on nuScenes validation set in Tab. 5. Please notethat we did not include any other methods in the table since it’s unfair to directly compare withother single-scan based methods on the 3D benchmark. For completeness, we evaluate our Se-manticKITTI results using 3D panoptic metrics and report the results in Tab. 6 for both cases: eval-uating only those points which are projectable into the camera (Camera FoV) and also the Full Scanwhich includes all LiDAR scan points. 
Unsurprisingly, because of the missing camera image input,our performance on the full scan (60.7 PQ) is lower than that on the camera FoV only (64.3 PQ).Lastly, we present the detailed per-class results for: nuScenes val set (Tab. 7), nuScenes test set(Tab. 8), and SemanticKITTI val set (Tab.9).PQ PQ†PQStPQThRQ RQStRQThSQ SQStSQTh4D-Former [Ours] 77.3 80.9 73.5 79.6 86.5 84.1 87.8 89.0 86.7 90.4Table 5: Results on nuScenes 3D panoptic segmentation validation benchmarkPQ PQ†PQStPQThRQ RQStRQThSQ SQStSQThmIoUFull Scan 60.7 65.4 56.6 66.4 70.3 68.8 72.4 76.0 72.9 80.1 66.3Camera FoV only 64.3 66.7 60.6 69.5 73.6 72.1 75.6 80.6 80.6 80.5 67.6Table 6: Results on SemanticKITTI 3D panoptic segmentation validation benchmark12MetricmeanBarrierBicycleBusCarConstructionMotorcyclePedestrainTraffic ConeTrailerTruckDrivableOther FlatSidewalkTerrainManmadeVegetationPTQ 75.17 64.11 74.33 79.05 90.89 64.64 81.87 88.03 83.04 58.67 76.92 95.61 51.92 68.92 54.61 82.46 87.59sPTQ 75.50 65.25 75.33 79.39 91.16 64.94 82.43 88.47 83.65 59.14 77.26 95.61 51.92 68.92 54.61 82.46 87.59IoU 78.86 82.74 52.69 90.41 94.31 54.95 88.96 82.66 68.98 65.41 82.57 96.32 71.33 73.36 75.54 91.80 89.75PQ 77.34 68.59 79.51 80.98 93.51 67.63 86.77 91.71 87.74 61.00 78.91 95.61 51.92 68.92 54.61 82.46 87.59SQ 89.02 82.53 87.83 93.95 95.73 88.56 91.36 93.59 90.53 86.60 93.66 96.13 84.50 79.75 78.79 91.11 89.64RQ 86.46 83.11 90.52 86.19 97.68 76.37 94.98 98.00 96.92 70.44 84.26 99.45 61.45 86.42 69.30 90.51 97.72Table 7: Class-wise results on nuScenes val set. Metrics are provided in [%]MetricmeanBarrierBicycleBusCarConstructionMotorcyclePedestrainTraffic ConeTrailerTruckDrivableOther FlatSidewalkTerrainManmadeVegetationPTQ 75.47 63.20 73.20 75.21 90.14 62.44 81.01 89.11 84.95 65.46 75.13 97.10 46.13 71.44 58.00 85.16 89.85sPTQ 75.90 64.63 73.98 75.42 90.45 63.73 81.92 89.57 85.48 66.14 75.43 97.10 46.13 71.44 58.00 85.16 89.85IoU 80.42 86.66 48.99 92.24 91.72 68.22 79.79 79.84 77.24 85.54 73.81 97.41 66.51 78.50 76.62 93.04 90.62PQ 77.99 68.63 78.30 77.48 93.01 69.07 86.69 92.64 89.13 68.17 77.05 97.10 46.13 71.44 58.00 85.16 89.85SQ 89.66 81.69 89.13 94.74 95.80 87.12 92.62 93.94 91.63 88.30 94.29 97.36 85.46 81.85 78.04 91.08 91.51RQ 86.59 84.01 87.85 81.78 97.09 79.28 93.60 98.62 97.28 77.19 81.71 99.73 53.98 87.28 74.31 93.50 98.19Table 8: Class-wise results on nuScenes test set. Metrics are provided in [%]C Qualitative Comparison (LiDAR-only vs. Fusion)Figures 7 and 8 provide a qualitative comparison of our proposed method with the LiDAR-onlybaseline (Tab. 4, row 1 in the main text). We provide the segmentation results in the LiDAR domainfor both LiDAR-only and fusion models in the first two columns, respectively, and the correspondingcamera view in the third column. The region of interest in each case is highlighted in red.In the first example (Fig. 7), the baseline wrongly segments the building at range as vegetation dueto the limited information obtained from the LiDAR input. By contrast, the final model with fusioneffectively leverages the rich contextual information from the camera (highlighted by the red box)and segments the correct class.In the second example (Fig. 8), the baseline fails to track pedestrians when they are close to eachother (the two pedestrians on the left are merged together as a single instance). 
By contrast, thecamera view provides distinct appearance cues for each pedestrian, enabling our model to accuratelysegment and track them.13MetricmeanCarBicycleMotorcycleTruckOther VehiclePersonBicyclistMotorcyclistRoadParkingSidewalkOther GroundBuildingFenceVegetationTrunkTerrainPoleTraffic SignAssoc 80.9 89.0 32.0 63.0 88.0 56.0 49.0 82.0 31.0 - - - - - - - - - - -IoU 67.6 97.0 61.0 78.0 84.0 73.0 83.0 95.0 0.0 96.0 44.0 80.0 4.0 88.0 56.0 89.0 71.0 76.0 66.0 45.0Table 9: Class-wise results on SemanticKITTI val set. Metrics are provided in [%]. Note thatassociation metrics are not available for ‘stuff’ classes.T = 1 T = 2 LiDAR-only With Fusion Camera View Figure 7: Qualitative comparison of semantic segmentation for LiDAR-only vs. fusion model onsequence 0105 from nuScenes.T = 1 T = 2 LiDAR-only With Fusion Camera View Figure 8: Qualitative comparison of instance segmentation and tracking for LiDAR-only vs. fusionmodel on sequence 0003 from nuScenes.14 |
HDYMjiukjn | ROBOPIANIST : Dexterous Piano Playing with DeepReinforcement LearningKevin Zakkaβ, δ, Philipp Wuβ, Laura Smithβ,Nimrod Gileadiδ, Taylor Howellσ, Xue Bin Pengψ, Sumeet Singhδ, Yuval Tassaδ,Pete Florenceδ, Andy Zengδ, Pieter AbbeelββUC Berkeley,δGoogle DeepMind,σStanford University,ψSimon Fraser UniversityCorrespondence to: zakka@berkeley.eduAbstract: Replicating human-like dexterity in robot hands represents one of thelargest open problems in robotics. Reinforcement learning is a promising ap-proach that has achieved impressive progress in the last few years; however, theclass of problems it has typically addressed corresponds to a rather narrow def-inition of dexterity as compared to human capabilities. To address this gap, weinvestigate piano-playing, a skill that challenges even the human limits of dexter-ity, as a means to test high-dimensional control, and which requires high spatialand temporal precision, and complex finger coordination and planning. We intro-duce R OBOPIANIST , a system that enables simulated anthropomorphic hands tolearn an extensive repertoire of 150piano pieces where traditional model-basedoptimization struggles. We additionally introduce an open-sourced environment,benchmark of tasks, interpretable evaluation metrics, and open challenges forfuture study. Our website featuring videos, code, and datasets is available athttps://kzakka.com/robopianist/ .Keywords: high-dimensional control, bi-manual dexterity1 IntroductionDespite decades-long research into replicating the dexterity of the human hand, high-dimensionalcontrol remains a grand challenge in robotics. This topic has inspired considerable research fromboth mechanical design [1, 2, 3] and control theoretic points of view [4, 5, 6, 7, 8]. Learning-basedapproaches have dominated the recent literature, demonstrating proficiency with in-hand cube orien-tation and manipulation [9, 10, 11] and have scaled to a wide variety of geometries [12, 13, 14, 15].These tasks, however, correspond to a narrow set of dexterous behaviors relative to the breadth ofhuman capabilities. In particular, most tasks are well-specified using a single goal state or termina-tion condition, limiting the complexity of the solution space and often yielding unnatural-lookingbehaviors so long as they satisfy the goal state. How can we bestow robots with artificial embodiedintelligence that exhibits the same precision and agility as the human motor control system?In this work, we seek to challenge our methods with tasks commensurate with this complexity andwith the goal of emergent human-like dexterous capabilities. To this end, we introduce a family oftasks where success exemplifies many of the properties that we seek in high-dimensional controlpolicies. Our unique desiderata are (i) spatial and temporal precision, (ii) coordination, and (iii)planning. We thus built an anthropomorphic simulated robot system, consisting of two robot handssituated at a piano, whose goal is to play a variety of piano pieces, i.e., correctly pressing sequencesof keys on a keyboard, conditioned on sheet music, in the form of a Musical Instrument DigitalInterface (MIDI) transcription (see Figure 1). The robot hands exhibit high degrees of freedom (22actuators per hand, for a total of 44), and are partially underactuated, akin to human hands. 
Control-ling this system entails sequencing actions so that the hands are able to hit exactly the right notesat exactly the right times; simultaneously achieving multiple different goals, in this case, fingers on7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.CAnthropomorphic hands with44 DOF position control EmbodimentFingeringAgent is rewarded for hitting a key with the corresponding(same color) fingerDPiano KeyBJoint with linear spring that generates sound when depressedPianoA88-key digital pianowith a sustain “pedal”Figure 1: ROBOPIANIST simulation featuring a full-size digital keyboard (A) with 88 piano keys modeled aslinear springs (B). In the piano playing task, two (left and right) anthropomorphic Shadow hands (C) are taskedwith playing a musical piece encoded as a trajectory of key presses (D).each hand hitting different notes without colliding; planning how to press keys in anticipation ofhow this would enable the hands to reach later notes under space and time constraints.We propose R OBOPIANIST , an end-to-end system that leverages deep reinforcement learning (RL)to synthesize policies capable of playing a diverse repertoire of musical pieces on the piano. Weshow that a combination of careful system design and human priors (in the form of fingering anno-tations) is crucial to its performance. Furthermore, we introduce R OBOPIANIST -REPERTOIRE -150,a benchmark of 150 songs, which allows us to comprehensively evaluate our proposed system andshow that it surpasses a strong model-based approach by over 83%. Finally, we demonstrate theeffectiveness of multi-task imitation learning in training a single policy capable of playing multiplesongs. To facilitate further research and provide a challenging benchmark for high-dimensional con-trol, we open source the piano-playing environment along with R OBOPIANIST -REPERTOIRE -150 athttps://kzakka.com/robopianist/ .2 Related WorkWe address related work within two primary areas: dexterous high-dimensional control, and roboticpianists. For a more comprehensive related work, please see Appendix A.Dexterous Manipulation as a High-Dimensional Control Problem. The vast majority of the ma-nipulation literature uses lower-dimensional systems (i.e., single-arm, simple end-effectors), whichcircumvents challenges that arise in more complex systems. Specifically, only a handful of general-purpose policy optimization methods have been shown to work on high-dimensional hands, even fora single hand [10, 9, 12, 11, 16, 14, 17, 7], and of these, only a subset has demonstrated results inthe real world [10, 9, 12, 11, 16]. Results with bi-manual hands are even rarer [15, 18]. In addi-tion, the class of problems generally tackled in these settings corresponds to a definition of dexteritypertaining to traditional manipulation skills [19], such as re-orientation, relocation, manipulatingsimply-articulated objects (e.g., door-opening, ball throwing and catching), and using simple tools(e.g., hammer) [20, 21, 15, 11, 22, 12]. This effectively reduces the search space for controls to pre-dominantly a single “basin-of-attraction" in behavior space per task. In contrast, our piano-playingtask encompasses a more complex notion of a goal, extendable to arbitrary difficulty by only varyingthe musical score.Robotic Piano Playing. 
Robotic pianists have a rich history within the literature, with severalworks dedicated to the design of specialized hardware [23, 24, 25, 26, 27, 28], and/or customizedcontrollers for playing back a song using pre-programmed commands [29, 30]. The works of Scholz[31], Yeon [32] use a dexterous hand to play the piano by leveraging a combination of inverse kine-matics and offline trajectory planning. In Xu et al. [33], the authors formulate piano playing as anRL problem for a single Allegro hand on a miniature piano and leverage tactile sensor feedback. Thepiano playing tasks considered in these prior works are relatively simple (e.g., play up to six succes-2sive notes, or three successive chords with only two simultaneous keys pressed for each chord). Onthe other hand, R OBOPIANIST allows a general bi-manual controllable agent to emulate a pianist’sgrowing proficiency by providing a large range of musical pieces with graded difficulties.3 Experimental SetupIn this section, we introduce the simulated piano-playing environment as well as the musical suiteused to train and evaluate our agent.Simulation details. We build our simulated piano-playing environment (depicted in Figure 1) us-ing the open-source MuJoCo [34, 35] physics engine. The piano model is a full-size digital keyboardwith 52 white keys and 36 black keys. We use a Kawai manual [36] as reference for the keys’ po-sitioning and dimensions. Each key is modeled as a joint with a linear spring and is considered“active” when its joint position is within 0.5◦of its maximum range, at which point a synthesizeris used to generate a musical note. We also implement a mechanism to sustain the sound of anycurrently active note to mimic the mechanical effect of a sustain pedal on a real piano. The left andright hands are Shadow Dexterous Hand [37] models from MuJoCo Menagerie [38], which havebeen designed to closely reproduce the kinematics of the human hand.Musical representation. We use the Musical Instrument Digital Interface (MIDI) standard torepresent a musical piece as a sequence of time-stamped messages corresponding to note-on ornote-off events. A message carries additional pieces of information such as the pitch of a noteand its velocity. We convert the MIDI file into a time-indexed note trajectory (a.k.a, piano roll),where each note is represented as a one-hot vector of length 88. This trajectory is used as the goalrepresentation for the agent, informing it which keys must be pressed at each time step.Musical evaluation. We use precision, recall, and F1 scores to evaluate the proficiency of ouragent. These metrics are computed by comparing the state of the piano keys at every time step withthe corresponding ground-truth state, averaged across all time steps. If at any given time there arekeys that should be “on” and keys that should be “off”, precision measures how good the agent is atnot hitting any of the “off” keys, while recall measures how good the agent is at hitting the “on” keys.The F1 score combines precision and recall into one metric, and ranges from 0 (if either precisionor recall is 0) to 1 (perfect precision and recall). We primarily use the F1 score for our evaluationsas it is a common heuristic accuracy score in the audio information retrieval literature [39], and wefound empirically that it correlates with qualitative performance on our tasks.MDP formulation. 
We model piano-playing as a finite-horizon Markov Decision Process (MDP) defined by a tuple (S, A, ρ, p, r, γ, H) where S ⊂ R^n is the state space, A ⊂ R^m is the action space, ρ(·) is the initial state distribution, p(·|s, a) governs the dynamics, r : S × A → R defines the rewards, γ ∈ [0, 1) is the discount factor, and H is the horizon. The goal of an agent is to maximize its total expected discounted reward over the horizon: E[ Σ_{t=0}^{H} γ^t r(s_t, a_t) ].

Observations | Unit | Size
Hand and forearm joints | rad | 52
Forearm Cartesian position | m | 6
Piano key joints | rad | 88
Active fingers | discrete | L·10
Piano key goal state | discrete | L·88
Table 1: The agent's observation space. L corresponds to the lookahead horizon.

The agent's observations consist of proprioceptive and goal state information. The proprioceptive state contains hand and keyboard joint positions. The goal state information contains a vector of key goal states obtained by indexing the piano roll at the current time step, as well as a discrete vector indicating which fingers of the hands should be used at that timestep. To successfully play the piano, the agent must be aware of at least a few seconds' worth of its next goals in order to be able to plan appropriately. Thus the goal state is stacked for some lookahead horizon L. A detailed description of the observation space is given in Table 1. The agent's action is 45 dimensional and consists of target joint angles for the hand with an additional scalar value for the sustain pedal. The agent predicts target joint angles at 20 Hz and the targets are converted to torques using PD controllers running at 500 Hz. Since the reward function is crucial for learning performance, we discuss its design in Section 4.

Figure 2 (panels: Twinkle Twinkle, Clair de Lune, Nocturne; legend: Sparse Rew., + Fingering, + Min. Energy, + Lookahead, + Reduced Action): ROBOPIANIST Design Considerations. The F1 performance for 3 songs of increasing difficulty. From left to right as specified by the legend, each curve inherits all MDP attributes of the curve before it but makes one additional modification as described by its label.

Fingering labels and dataset. Piano fingering refers to the assignment of fingers to notes, e.g., "C4 played by the index finger of the right hand". Sheet music will typically provide sparse fingering labels for the tricky sections of a piece to help guide pianists, and pianists will often develop their own fingering preferences for a given piece. Since fingering labels aren't available in MIDI files by default, we used annotations from the PIG dataset [40] to create a corpus of 150 annotated MIDI files for use in the simulated environment. Overall this dataset, which we call REPERTOIRE-150, contains piano pieces from 24 Western composers spanning baroque, classical and romantic periods. The pieces vary in difficulty, ranging from relatively easy (e.g., Mozart's Piano Sonata K 545 in C major) to significantly harder (e.g., Scriabin's Piano Sonata No. 5) and range in length from tens of seconds to over 3 minutes.

4 ROBOPIANIST System Design
Our aim is to enable robots to exhibit sophisticated, high-dimensional control necessary for successfully performing challenging musical pieces.
Mastery of the piano requires (i) spatial and temporalprecision (hitting the right notes, at the right time), (ii) coordination (simultaneously achieving mul-tiple different goals, in this case, fingers on each hand hitting different notes, without colliding), and(iii) planning (how a key is pressed should be conditioned on the expectation of how it would enablethe policy to reach future notes). These behaviors do not emerge if we solely optimize with a sparsereward for pressing the right keys at the right times. The main challenge is exploration, which isfurther exacerbated by the high-dimensional nature of the control space.We overcome this challenge with careful system design and human priors, which we detail in thissection. The main results are illustrated in Figure 2. We pick 3 songs in increasing difficulty fromROBOPIANIST -REPERTOIRE -150. We note that “Twinkle Twinkle” is the easiest while “Nocturne”is the hardest. We train for 5M samples, 3 seeds. We evaluate the F1 every 10K training steps for 1episode (no stochasticity in the environment).4.1 Human priorsWe found that the agent struggled to play the piano with a sparse reward signal due to the explo-ration challenge associated with the high-dimensional action space. To overcome this issue, weincorporated the fingering labels within the reward formulation (Table 2). When we remove thisprior and only reward the agent for the key press, the agent’s F1 stays at zero and no substantiallearning progress is made. We suspect that the benefit of fingering comes not only from helpingthe agent achieve the current goal, but facilitating key presses in subsequent timesteps. Having thepolicy discover its own preferred fingering, like an experienced pianist, is an exciting direction forfuture research.44.2 Reward designWe first include a reward proportional to how depressed the keys that should be active are. Wethen add a constant penalty if any inactive keys are pressed hard enough to produce sound. Thisgives the agent some leeway to rest its fingers on inactive keys so long as they don’t produce sound.We found that giving a constant penalty regardless of the number of false positives was crucial forlearning; otherwise, the agent would become too conservative and hover above the keyboard withoutpressing any keys. In contrast, the smooth reward for pressing active keys plays an important rolein exploration by providing a dense learning signal. We introduce two additional shaping terms:(i) we encourage the fingers to be spatially close to the keys they need to press (as prescribed bythe fingering labels) to help exploration, and (ii) we minimize energy expenditure, which reducesvariance (across seeds) and erratic behavior control policies trained with RL are prone to generate.The total reward at a given time step is a weighted sum over the aforementioned components. Adetailed description of the reward function can be found in Appendix B.4.3 Peeking into the futureWe observe additional improvements in performance and variance from including future goal statesin the observation, i.e., increasing the lookahead horizon L. 
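As a concrete illustration of this stacked goal representation, the sketch below assembles the L future goal slices from a piano roll. It is a minimal sketch under assumed array layouts (a (T, 88) piano roll and a (T, 10) fingering matrix); the function and variable names are illustrative rather than taken from the ROBOPIANIST codebase.

```python
import numpy as np

def stacked_goal(piano_roll, fingering, t, lookahead):
    """Build the goal part of the observation at time t.

    piano_roll: (T, 88) 0/1 array, 1 where a key should be sounding at that step.
    fingering:  (T, 10) 0/1 array, 1 for the fingers assigned to the active keys
                (an assumed encoding of the "active fingers" vector in Table 1).
    Returns a flat vector of size lookahead * (88 + 10), matching the L*88 and
    L*10 goal entries of the observation space.
    """
    T = piano_roll.shape[0]
    steps = np.clip(np.arange(t, t + lookahead), 0, T - 1)  # clamp near the end of the piece
    keys = piano_roll[steps].reshape(-1)       # lookahead * 88 goal key states
    fingers = fingering[steps].reshape(-1)     # lookahead * 10 active-finger flags
    return np.concatenate([keys, fingers])
```

The proprioceptive part of the observation (hand, forearm, and key joint positions) would simply be concatenated with this goal vector.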
Intuitively, this allows the policy to better plan for future notes – for example by placing the non-finger joints (e.g., the wrist) in a manner that allows more timely reaching of notes at the next timestep.

4.4 Constraining the action space
To alleviate exploration even further, we explore disabling degrees of freedom [41] in the Shadow Hand that either do not exist in the human hand (e.g., the little finger being opposable) or are not strictly necessary for most songs. We additionally reduce the joint range of the thumb. While this speeds up learning considerably, we observe that with additional training time, the full action space eventually achieves similar F1 performance.

5 Results
In this section, we present our experimental findings on ROBOPIANIST-ETUDE-12, a subset of ROBOPIANIST-REPERTOIRE-150 consisting of 12 songs. The results on the full ROBOPIANIST-REPERTOIRE-150 can be found in Appendix C. We design our experiments to answer the following questions:
(1) How does our method compare to a strong baseline in being able to play individual pieces?
(2) How can we enable a single policy to learn to play more than one song?
(3) What effects do our design decisions have on the feasibility of acquiring highly complex, dexterous control policies?

5.1 Specialist Policy Learning
For our policy optimizer, we use a state-of-the-art model-free RL algorithm, DroQ [42], one of several regularized variants of the widely-used Soft-Actor-Critic [43] algorithm. We evaluate online predictive control (MPC) as a meaningful point of comparison. Specifically, we use the implementation from Howell et al. [44] that leverages the physics engine as a dynamics model, and which was shown to solve previously-considered-challenging dexterous manipulation tasks [9, 13, 11] in simulation. Amongst various planner options in [44], we found the most success with Predictive Sampling, a derivative-free sampling-based method. A detailed discussion of this choice of baseline and its implementation can be found in Appendix F. Our method uses 5 million samples to train for each song using the same set of hyperparameters, and the MPC baseline is run at one-tenth of real-time speed to give the planner adequate search time.

Figure 3: Policies in the ROBOPIANIST environment displaying skilled piano behaviors such as (left) simultaneously controlling both hands to reach for notes on opposite ends of the keyboard, (middle) playing a chord with the left hand by precisely and simultaneously hitting a note triplet, and (right) playing a trill with the right hand, which involves rapidly alternating between two adjacent notes.

Figure 4 (tasks on the x-axis: Piano Sonata D845, Partita No26, Bagatelle Op3 No4, French Suite No5, Waltz Op64 No1, Piano Sonata K279, French Suite No1, Piano Sonata No21, Kreisleriana Op16 No8, Golliwogg's Cakewalk, Piano Sonata No23, French Suite No5): The F1 scores achieved by ROBOPIANIST (blue) and MPC (green) for each of the ROBOPIANIST-ETUDE-12 tasks.

The quantitative results are shown in Figure 4. We observe that the ROBOPIANIST agent significantly outperforms the MPC baseline, achieving an average F1 score of 0.79 compared to 0.43 for MPC. We hypothesize that the main bottleneck for MPC is compute: the planner struggles with the large search space, which means the quality of the solutions that can be found in the limited time budget is poor.
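For reference, the precision, recall, and F1 numbers quoted in these comparisons can be computed from per-timestep key states. The following is a minimal sketch that aggregates counts over the whole episode (the paper averages per-timestep scores), assuming boolean arrays for the produced and desired key states.

```python
import numpy as np

def episode_f1(pressed, goal):
    """pressed, goal: (T, 88) boolean arrays of produced vs. desired key states."""
    tp = np.sum(pressed & goal)    # "on" keys that were hit
    fp = np.sum(pressed & ~goal)   # "off" keys that were hit
    fn = np.sum(~pressed & goal)   # "on" keys that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```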
Qualitatively, our learned agent displays remarkably skilled piano behaviors such as (1) simultaneously controlling both hands to reach for notes on opposite ends of the keyboard, (2) playing chords by precisely and simultaneously hitting note triplets, and (3) playing trills by rapidly alternating between adjacent notes (see Figure 3). We encourage the reader to listen to these policies on the supplementary website¹.

5.2 Multi-Song Policy Learning

Figure 5 (left: (Train) Für Elise F1; right: (Test) Etude-12 F1; legend: number of training songs 1, 2, 4, 8, 16): Multi-song training with RL. We test the ability to learn multiple songs simultaneously. The training performance on the shared song, Für Elise, is shown on the left. We see adding more songs degrades performance on individual songs. On the right, we evaluate on the held out ROBOPIANIST-ETUDE-12 songs, suggesting the difficulty of generalization with multi-task RL training.

Multitask RL. Ideally, a single agent should be able to learn how to play all songs. Secondly, given enough songs to practice on, we would like this agent to zero-shot generalize to new ones. To investigate whether this is possible, we create a multi-task environment where the number of tasks in the environment corresponds to the number of songs available in the training set. Note that in this multi-song setting, environments of increasing size are additive (i.e., a 2-song environment contains the same song as the 1-song environment plus an additional one). We use Für Elise as the base song and report the performance of a single agent trained on increasing amounts of training songs in Figure 5. We observe that training on an increasing amount of songs is significantly harder than training specialist policies on individual songs. Indeed, the F1 score on Für Elise continues to drop as the number of training songs increases, from roughly 0.7 F1 for 1 song (i.e., the specialist) to almost 0 F1 for 16 songs. We also evaluate the agent's zero-shot performance (i.e., no fine-tuning) on ROBOPIANIST-ETUDE-12 (which does not have overlap with the training songs) to test our RL agent's ability to generalize. We see, perhaps surprisingly, that multitask RL training fails to positively transfer on the test set regardless of the size of the pre-training tasks.

¹ https://kzakka.com/robopianist/

Figure 6 ((a) Scaling songs for a model with a hidden dimension of 1024, legend: # train songs 1, 2, 4, 8, 16, 32, 64; (b) Scaling the model size trained on 64 songs, legend: hidden dimension 64, 128, 256, 512, 1024; each panel shows (Train) Für Elise F1 and (Test) Etude-12 F1): Here we show the learning curves of BC, showing the interplay between the attempted number of training songs and model size. The song Für Elise is in the training dataset for each experiment. We show the F1 score across Für Elise and ROBOPIANIST-ETUDE-12.
We see that for a small number of songs, themodel quickly overfits, suggesting that for the given model size the amount of data and diversity is lacking.We see that adding more songs results in a higher test F1, suggesting that the model is beginning to generalize.However, zero-shot test performance is far from that of the expert.As we increase the model size, the F1 score on the training song improves, suggesting that largermodels can better capture the complexity of the piano playing task across multiple songs.Multitask Behavioral Cloning Since multitask training with RL is challenging, we insteaddistill the specialist RL policies trained in Subsection 5.1 into a single multitask policy withBehavioral Cloning (BC) [45]. To do so, we collect 100 trajectories for each song in theROBOPIANIST -REPERTOIRE -150, hold out 2 for validation, and use the R OBOPIANIST -ETUDE -12trajectories for testing. We then train a feed-forward neural network to predict the expert actionsconditioned on the state and goal using a mean squared error loss. For a more direct comparisonwith multi-task RL, the dataset subsets use trajectories from the same songs used in the equivalentlysized multi-task RL experiments. We observe in Figure 6a that as we increase the number of songsin the training dataset, the model’s ability to generalize improves, resulting in higher test F1 scores.We note that for a large model (hidden size of 1024), training on too few songs results in overfittingbecause there isn’t enough data. Using a smaller model alleviates this issue, as shown in the moredetailed multitask BC results found in Appendix E, but smaller models are unable to perform well onmultiple songs. Despite having better generalization performance than RL, zero-shot performanceon R OBOPIANIST -ETUDE -12 falls far below the performance achieved by the specialist policy. Ad-ditionally, we investigate the effect of model size on the multitask BC performance. We train modelswith different hidden dimensions (fixing the number of hidden layers) on a dataset of 64 songs. Asshown in Figure 6b, a smaller hidden dimension results in lower F1 performance which most likelyindicates underfitting.5.3 Further analysisIn this section, we discuss the effect of certain hyperparameters on the performance of the RL agent.Control frequency and lookahead horizon: Figure 7 illustrates the interplay between the controlfrequency (defined as the reciprocal of control timestep) and the lookahead horizon L, and theireffect on the F1 score. Too large of a control timestep can make it impossible to play faster songs,while a smaller control timestep increases the effective task horizon, thereby increasing the compu-tational complexity of the task. Lookahead on the other hand controls how far into the future theagent can see goal states. We observe that as the control timestep decreases, the lookahead must beincreased to maintain the agent ability to see and reason about future goal states. A control frequencyof 20Hz (0.05 seconds) is a sweet spot, with notably 100Hz (0.01 seconds) drastically reducing thefinal score. 
At 100 Hz, the MDP becomes too long-horizon, which complicates exploration, and at 10 Hz, the discretization of the MIDI file becomes too coarse, which negatively impacts the timing of the notes.

Figure 7: Lookahead & Control Timestep. These parameters jointly modulate the foresight and control granularity of the agent. We find a control timestep of 0.05 offers optimal control granularity, and performance generally improves with the lookahead up to a point. F1 values (rows: lookahead 0, 1, 5, 10, 20; columns: control timestep 0.01, 0.025, 0.05, 0.1):
Lookahead 0 | 0.08 | 0.32 | 0.43 | 0.42
Lookahead 1 | 0.14 | 0.43 | 0.50 | 0.47
Lookahead 5 | 0.21 | 0.52 | 0.59 | 0.49
Lookahead 10 | 0.26 | 0.55 | 0.63 | 0.56
Lookahead 20 | 0.25 | 0.53 | 0.64 | 0.56

Figure 8 (panels: Twinkle Twinkle, Für Elise; legend: discount factors 0.84, 0.88, 0.92, 0.96, 0.99): Discount factor. We find the discount factor to have a high impact on F1, especially as the task complexity increases. While on simpler songs (Twinkle Twinkle) the discount factor does not have any visible effect, it is critical for faster-paced songs (Für Elise). The curves show that any discount in the range of 0.84-0.92 can be used with similar effect.

Discount factor: As shown in Figure 8, the discount factor has a significant effect on the F1 score. We sweep over a range of discount factors on two songs of varying difficulty. Notably, we find that discounts in the range 0.84 to 0.92 produce policies with roughly the same performance, and high discount factors (e.g., 0.99, 0.96) result in lower F1 performance. Qualitatively, on Für Elise, we noticed that agents trained with higher discounts were often conservative, opting to skip entire note sub-segments. However, agents trained with lower discount factors were willing to risk making mistakes in the early stages of training and thus quickly learned how to correctly strike the notes and attain higher F1 scores.

6 Discussion
Limitations: While ROBOPIANIST produces agents that push the boundaries of bi-manual dexterous control, it does so in a simplified simulation of the real world. For example, the velocity of a note, which modulates the strength of the key press, is ignored in the current reward formulation. Thus, the dynamic markings of the composition are ignored. Furthermore, our RL training approach can be considered wasteful, in that we learn by attempting to play the entire piece at the start of every episode, rather than focusing on the parts of the song that need more practicing. Finally, our results highlight the challenges of multitask learning, especially in the RL setting.

Conclusion: In this paper, we introduced ROBOPIANIST, which provides a simulation framework and suite of tasks in the form of a corpus of songs, together with a high-quality baseline and various axes of evaluation, for studying the challenging high-dimensional control problem of mastering piano-playing with two hands. Our results demonstrate the effectiveness of our approach in learning a broad repertoire of musical pieces, and highlight the importance of various design choices required for achieving this performance. There is an array of exciting future directions to explore with ROBOPIANIST including: leveraging human priors to accelerate learning (e.g., motion priors from YouTube), studying zero-shot generalization to new songs, and incorporating multimodal data such as sound and touch.
We believe that R OBOPIANIST serves as a valuable platform for the research com-munity, enabling further advancements in high-dimensional control and dexterous manipulation.AcknowledgmentsWe would like to thank members of the Robot Learning Lab and anonymous reviewers for theirfeedback and helpful comments. This project was supported in part by ONR #N00014-22-1-2121under the Science of Autonomy program.8References[1] S. Yuan, L. Shao, C. L. Yako, A. M. Gruebele, and J. K. Salisbury. Design and control of rollergrasper v2 for in-hand manipulation. 2020 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 9151–9158, 2020.[2] Z. Xu and E. Todorov. Design of a highly biomimetic anthropomorphic robotic hand towardsartificial limb regeneration. In 2016 IEEE International Conference on Robotics and Automa-tion (ICRA) , pages 3485–3492, 2016. doi:10.1109/ICRA.2016.7487528.[3] C. McCann, V . Patel, and A. Dollar. The stewart hand: A highly dexterous, six-degrees-of-freedom manipulator based on the stewart-gough platform. icra, 28(2):23–36, 2021. doi:10.1109/MRA.2021.3064750.[4] R. Fearing. Implementing a force strategy for object re-orientation. In Proceedings. 1986IEEE International Conference on Robotics and Automation , volume 3, pages 96–102, 1986.doi:10.1109/ROBOT.1986.1087655.[5] D. Rus. In-hand dexterous manipulation of piecewise-smooth 3-d objects. The InternationalJournal of Robotics Research , 18(4):355–381, 1999.[6] A. Okamura, N. Smaby, and M. Cutkosky. An overview of dexterous manipulation. In Pro-ceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics andAutomation. Symposia Proceedings (Cat. No.00CH37065) , volume 1, pages 255–262 vol.1,2000. doi:10.1109/ROBOT.2000.844067.[7] T. Pang, H. J. T. Suh, L. Yang, and R. Tedrake. Global planning for contact-rich manipulationvia local smoothing of quasi-dynamic contact models. ArXiv , abs/2206.10787, 2022.[8] R. R. Ma and A. M. Dollar. On dexterity and dexterous manipulation. In 2011 15th Interna-tional Conference on Advanced Robotics (ICAR) , pages 1–7, 2011. doi:10.1109/ICAR.2011.6088576.[9] M. Andrychowicz, B. Baker, M. Chociej, R. Józefowicz, B. McGrew, J. W. Pachocki,A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder,L. Weng, and W. Zaremba. Learning dexterous in-hand manipulation. The International Jour-nal of Robotics Research , 39:20 – 3, 2018.[10] OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron,A. Paino, M. Plappert, G. Powell, R. Ribas, J. Schneider, N. A. Tezak, J. Tworek, P. Welinder,L. Weng, Q. Yuan, W. Zaremba, and L. M. Zhang. Solving rubik’s cube with a robot hand.ArXiv , abs/1910.07113, 2019.[11] A. Handa, A. Allshire, V . Makoviychuk, A. Petrenko, R. Singh, J. Liu, D. Makoviichuk,K. Van Wyk, A. Zhurkevich, B. Sundaralingam, et al. Dextreme: Transfer of agile in-handmanipulation from simulation to reality. arXiv preprint arXiv:2210.13702 , 2022.[12] T. Chen, J. Xu, and P. Agrawal. A system for general in-hand object re-orientation. In Confer-ence on Robot Learning , pages 297–307, 2022.[13] A. Nagabandi, K. Konolige, S. Levine, and V . Kumar. Deep dynamics models for learningdexterous manipulation. Conference on Robot Learning (CoRL) , abs/1909.11652, 2019.[14] T. Chen, M. Tippur, S. Wu, V . Kumar, E. H. Adelson, and P. Agrawal. Visual dexterity: In-handdexterous manipulation from depth. ArXiv , abs/2211.11744, 2022.[15] Y . Chen, Y . Yang, T. Wu, S. Wang, X. Feng, J. Jiang, S. M. McAleer, H. 
Dong, Z. Lu, and S.-C. Zhu. Towards human-level bimanual dexterous manipulation with reinforcement learning.arXiv preprint arXiv:2206.08686 , 2022.9[16] H. Qi, A. Kumar, R. Calandra, Y . Ma, and J. Malik. In-Hand Object Rotation via Rapid MotorAdaptation. In Conference on Robot Learning (CoRL) , 2022.[17] I. Mordatch, E. Todorov, and Z. Popovi ́c. Discovery of complex behaviors through contact-invariant optimization. ACM Trans. Graph. , 31(4), jul 2012. ISSN 0730-0301. doi:10.1145/2185520.2185539. URL https://doi.org/10.1145/2185520.2185539 .[18] A. M. Castro, F. N. Permenter, and X. Han. An unconstrained convex formulation of compliantcontact. IEEE Transactions on Robotics , 2022.[19] R. R. Ma and A. M. Dollar. On dexterity and dexterous manipulation. In 2011 15th Interna-tional Conference on Advanced Robotics (ICAR) , pages 1–7. IEEE, 2011.[20] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4RL: Datasets for deep data-drivenreinforcement learning. arXiv preprint arXiv:2004.07219 , 2020.[21] C. Smith, Y . Karayiannidis, L. Nalpantidis, X. Gratal, P. Qi, D. V . Dimarogonas, and D. Kragic.Dual arm manipulation—a survey. Robotics and Autonomous systems , 60(10):1340–1353,2012.[22] H. J. Charlesworth and G. Montana. Solving challenging dexterous manipulation tasks withtrajectory optimisation and reinforcement learning. In International Conference on MachineLearning , pages 1496–1506. PMLR, 2021.[23] I. Kato, S. Ohteru, K. Shirai, T. Matsushima, S. Narita, S. Sugano, T. Kobayashi, and E. Fuji-sawa. The robot musician ‘wabot-2’(waseda robot-2). Robotics , 3(2):143–155, 1987.[24] J.-C. Lin, H.-H. Huang, Y .-F. Li, J.-C. Tai, and L.-W. Liu. Electronic piano playing robot. In2010 International Symposium on Computer, Communication, Control and Automation (3CA) ,volume 2, pages 353–356. IEEE, 2010.[25] T. Maloney. Piano-playing robotic arm. Technical report, 2019.[26] J. Hughes and P. Maiolino. An anthropomorphic soft skeleton hand exploiting conditionalmodels for piano playing. Science Robotics , 3(25), 2018.[27] R. Castro Ornelas. Robotic Finger Hardware and Controls Design for Dynamic Piano Playing .PhD thesis, Massachusetts Institute of Technology, 2022.[28] D. Zhang, J. Lei, B. Li, D. Lau, and C. Cameron. Design and analysis of a piano playing robot.In2009 International Conference on Information and Automation , pages 757–761, 2009. doi:10.1109/ICINFA.2009.5205022.[29] Y .-F. Li and L.-L. Chuang. Controller design for music playing robot—applied to the anthro-pomorphic piano robot. In 2013 IEEE 10th International Conference on Power Electronicsand Drive Systems (PEDS) , pages 968–973. IEEE, 2013.[30] A. Zhang, M. Malhotra, and Y . Matsuoka. Musical piano performance by the act hand. In 2011IEEE international conference on robotics and automation , pages 3536–3541. IEEE, 2011.[31] B. Scholz. Playing Piano with a Shadow Dexterous Hand . PhD thesis, Universität Hamburg,2019.[32] S. Yeon. Playing piano with a robotic hand, 2022. URL https://github.com/seongho-yeon/playing-piano-with-a-robotic-hand .[33] H. Xu, Y . Luo, S. Wang, T. Darrell, and R. Calandra. Towards learning to play piano withdexterous hands and touch. In 2022 IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , pages 10410–10416. IEEE, 2022.10[34] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In2012 IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026–5033. IEEE, 2012. doi:10.1109/IROS.2012.6386109.[35] S. Tunyasuvunakool, A. 
Muldal, Y . Doron, S. Liu, S. Bohez, J. Merel, T. Erez, T. Lil-licrap, N. Heess, and Y . Tassa. dm_control: Software and tasks for continuous con-trol. Software Impacts , 6:100022, 2020. ISSN 2665-9638. doi:https://doi.org/10.1016/j.simpa.2020.100022. URL https://www.sciencedirect.com/science/article/pii/S2665963820300099 .[36] K. M. Instruments. Kawai Vertical Piano Regulation Manual, 2019. URL https://kawaius.com/wp-content/uploads/2019/04/Kawai-Upright-Piano-Regulation-Manual.pdf.[37] Shadow Dexterous Hand, 2005. URL https://www.shadowrobot.com/dexterous-hand-series/ .[38] M. M. Contributors. MuJoCo Menagerie: A collection of high-quality simulation models forMuJoCo, 2022. URL http://github.com/deepmind/mujoco_menagerie .[39] C. Raffel, B. McFee, E. J. Humphrey, J. Salamon, O. Nieto, D. Liang, and D. P. W. Ellis.Mir_eval: A transparent implementation of common mir metrics. In International Society forMusic Information Retrieval Conference , 2014.[40] E. Nakamura, Y . Saito, and K. Yoshii. Statistical learning and estimation of piano fingering.ArXiv , abs/1904.10237, 2019.[41] L. Smith, I. Kostrikov, and S. Levine. A walk in the park: Learning to walk in 20 minutes withmodel-free reinforcement learning. ArXiv , abs/2208.07860, 2022.[42] T. Hiraoka, T. Imagawa, T. Hashimoto, T. Onishi, and Y . Tsuruoka. Dropout q-functions fordoubly efficient reinforcement learning. International Conference on Learning Representa-tions (ICLR) , 2022.[43] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropydeep reinforcement learning with a stochastic actor. In ICML , 2018.[44] T. A. Howell, N. Gileadi, S. Tunyasuvunakool, K. Zakka, T. Erez, and Y . Tassa. Predictivesampling: Real-time behaviour synthesis with mujoco. ArXiv , abs/2212.00541, 2022.[45] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In NIPS , 1988.[46] A. M. Okamura, N. Smaby, and M. R. Cutkosky. An overview of dexterous manipulation. InProceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Roboticsand Automation. Symposia Proceedings (Cat. No. 00CH37065) , volume 1, pages 255–262.IEEE, 2000.[47] A. Rajeswaran, V . Kumar, A. Gupta, J. Schulman, E. Todorov, and S. Levine. Learning com-plex dexterous manipulation with deep reinforcement learning and demonstrations. ArXiv ,abs/1709.10087, 2017.[48] B. Sundaralingam and T. Hermans. Relaxed-rigidity constraints: kinematic trajectory opti-mization and collision avoidance for in-grasp manipulation. Autonomous Robots , 43:469–483,2019.[49] I. Radosavovic, X. Wang, L. Pinto, and J. Malik. State-only imitation learning for dexterousmanipulation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 7865–7871. IEEE, 2021.11[50] N. Heess, D. TB, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y . Tassa, T. Erez, Z. Wang,S. Eslami, et al. Emergence of locomotion behaviours in rich environments. arXiv preprintarXiv:1707.02286 , 2017.[51] A. Sharma, S. Gu, S. Levine, V . Kumar, and K. Hausman. Dynamics-aware unsuperviseddiscovery of skills. In International Conference on Learning Representations , 2020.[52] S. Tunyasuvunakool, A. Muldal, Y . Doron, S. Liu, S. Bohez, J. Merel, T. Erez, T. Lillicrap,N. Heess, and Y . Tassa. dm_control: Software and tasks for continuous control. SoftwareImpacts , 6:100022, 2020.[53] X. B. Peng, Z. Ma, P. Abbeel, S. Levine, and A. Kanazawa. AMP: Adversarial motion priorsfor stylized physics-based character control. 
ACM Transactions on Graphics (TOG) , 40(4):1–20, 2021.[54] J. Merel, L. Hasenclever, A. Galashov, A. Ahuja, V . Pham, G. Wayne, Y . W. Teh, and N. Heess.Neural probabilistic motor primitives for humanoid control. In International Conference onLearning Representations , 2019.[55] J. Merel, S. Tunyasuvunakool, A. Ahuja, Y . Tassa, L. Hasenclever, V . Pham, T. Erez, G. Wayne,and N. Heess. Catch & carry: reusable neural controllers for vision-guided whole-body tasks.ACM Transactions on Graphics (TOG) , 39(4):39–1, 2020.[56] A. Escontrela, X. B. Peng, W. Yu, T. Zhang, A. Iscen, K. Goldberg, and P. Abbeel. Adver-sarial motion priors make good substitutes for complex reward functions. In 2022 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 25–32. IEEE, 2022.[57] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula,A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transfor-mations of Python+NumPy programs, 2018. URL http://github.com/google/jax .[58] H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning.arXiv e-prints , 2015.[59] S. Fujimoto, H. van Hoof, and D. Meger. Addressing function approximation error in actor-critic methods. In Proceedings of the 35th International Conference on Machine Learning,ICML 2018, Stockholmsmassan, Stockholm, Sweden, July 10-15, 2018 , 2018.[60] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: asimple way to prevent neural networks from overfitting. J. Mach. Learn. Res. , 15:1929–1958,2014.[61] J. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. ArXiv , abs/1607.06450, 2016.[62] X. Glorot and Y . Bengio. Understanding the difficulty of training deep feedforward neuralnetworks. In International Conference on Artificial Intelligence and Statistics , 2010.[63] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR , abs/1412.6980,2014.[64] Y . Tassa, T. Erez, and E. Todorov. Synthesis and stabilization of complex behaviors throughonline trajectory optimization. 2012 IEEE/RSJ International Conference on Intelligent Robotsand Systems , pages 4906–4913, 2012.12A Extended Related WorkWe address related work within two primary areas: dexterous high-dimensional control, and roboticpianists.Dexterous Manipulation and High-Dimensional Control The vast majority of the control lit-erature uses much lower-dimensional systems (i.e., single-arm, simple end-effectors) than high-dimensional dexterous hands. Specifically, only a handful of general-purpose policy optimizationmethods have been shown to work on high-dimensional hands, even for a single hand [10, 9, 12, 11,16, 14, 17, 7], and of these, only a subset has demonstrated results in the real world [10, 9, 12, 11, 16].Results with bi-manual hands are even rarer, even in simulation only [15, 18].As a benchmark, perhaps the most distinguishing aspect of R OBOPIANIST is in the definition of“task success”. As an example, general manipulation tasks are commonly framed as the continualapplication of force/torque on an object for the purpose of a desired change in state (e.g., SE(3) poseand velocity). Gradations of dexterity are predominantly centered around the kinematic redundancyof the arm or the complexity of the end-effector, ranging from parallel jaw-grippers to anthropomor-phic hands [46, 15]. 
A gamut of methods have been developed to accomplish such tasks, rangingfrom various combinations of model-based and model-free RL, imitation learning, hierarchical con-trol, etc. [47, 10, 13, 12, 48, 49]. However, the class of problems generally tackled correspondsto a definition of dexterity pertaining to traditional manipulation skills [19], such as re-orientation,relocation, manipulating simply-articulated objects (e.g., door opening, ball throwing and catching),and using simple tools (e.g., hammer) [20, 21, 15, 11, 22]. The only other task suite that we know ofthat presents bi-manual tasks, the recent Bi-Dex [15] suite, presents a broad collection of tasks thatfall under this category.While these works represent an important class of problems, we explore an alternative notion ofdexterity and success. In particular, for most all the aforementioned suite of manipulation tasks,the “goal” state is some explicit, specific geometric function of the final states; for instance, anopen/closed door, object re-oriented, nail hammered, etc. This effectively reduces the search spacefor controls to predominantly a single “basin-of-attraction" in behavior space per task. In contrast,the R OBOPIANIST suite of tasks encompasses a more complex notion of a goal, which is encodedthrough a musical performance. In effect, this becomes a highly combinatorially variable sequenceof goal states, extendable to arbitrary difficulty by only varying the musical score. “Success” isgraded on accuracy over an entire episode; concretely, via a time-varying non-analytic output ofthe environment, i.e., the music. Thus, it is not a matter of the “final-state” that needs to satisfycertain termination/goal conditions, a criterion which is generally permissive of less robust executionthrough the rest of the episode, but rather the behavior of the policy throughout the episode needs tobe precise and musical.Similarly, the literature on humanoid locomotion and more broadly, “character control", another im-portant area of high-dimensional control, primarily features tasks involving the discovery of stablewalking/running gaits [50, 51, 52], or the distillation of a finite set of whole-body movement pri-ors [53, 54, 55], to use downstream for training a task-level policy. Task success is typically encodedvia rewards for motion progress and/or reaching a terminal goal condition. It is well-documented thatthe endless pursuit of optimizing for these rewards can yield unrealistic yet “high-reward" behaviors.While works such as [53, 56] attempt to capture stylistic objectives via leveraging demonstrationdata, these reward functions are simply appended to the primary task objective. This scalarizationof multiple objectives yields an arbitrarily subjective Pareto curve of optimal policies. In contrast,performing a piece of music entails both objectively measurable precision with regards to melodicand rhythmic accuracy, as well as a subjective measure of musicality. Mathematically, this translatesasstylistic constraint satisfaction, paving the way for innovative algorithmic advances.Robotic Piano Playing Robotic pianists have a rich history within the literature, with several worksdedicated to the design of specialized hardware [23, 24, 25, 26, 27, 28], and/or customized con-trollers for playing back a song using pre-programmed commands (open-loop) [29, 30]. The work13in [31] leverages a combination of inverse kinematics and trajectory stitching to play single keysand playback simple patterns and a song with a Shadow hand [37]. 
More recently, in [32], the author simulated robotic piano playing using offline motion planning with inverse kinematics for a 7-DoF robotic arm, along with an Iterative Closest Point-based heuristic for selecting fingering for a four-fingered Allegro hand. Each hand is simulated separately, and the audio results are combined post-hoc. Finally, in [33], the authors formulate piano playing as an RL problem for a single Allegro hand (four fingers) on a miniature piano, and additionally leverage tactile sensor feedback. However, the tasks considered are rather simplistic (e.g., play up to six successive notes, or three successive chords with only two simultaneous keys pressed for each chord). The ROBOPIANIST benchmark suite is designed to allow a general bi-manual controllable agent to emulate a pianist's growing proficiency on the instrument by providing a curriculum of musical pieces, graded in difficulty. Leveraging two underactuated anthropomorphic hands as actuators provides a level of realism and exposes the challenge of mastering this suite of high-dimensional control problems.

B Detailed Reward Function
Key Press (weight 1): 0.5 · g(‖k_s − k_g‖_2) + 0.5 · (1 − 1{false positive}); press the right keys and only the right keys.
Energy Penalty (weight −5e-3): |τ_joints|^⊤ |v_joints|; minimize energy expenditure.
Finger Close to Key (weight 1): g(‖p_f − p_k‖_2); shaped reward to bring fingers to key.
Table 2: The reward function used to train ROBOPIANIST agents. τ represents the joint torque, v is the joint velocity, p_f and p_k represent the position of the finger and key in the world frame respectively, k_s and k_g represent the current and the goal states of the key respectively, and g is a function that transforms the distances to rewards in the [0, 1] range.

C Full Repertoire Results
Figure 9 (x-axis: F1; y-axis: task; legend: RoboPianist, MPC): Results on the full repertoire of 150 songs.

D ROBOPIANIST Training details
Computing infrastructure and experiment running time
Our model-free RL codebase is implemented in JAX [57]. Experiments were performed on a Google Cloud n1-highmem-64 machine with an Intel Xeon E5-2696V3 Processor hardware with 32 cores (2.3 GHz base clock), 416 GB RAM and 4 Tesla K80 GPUs. Each "run", i.e., the training and evaluation of a policy on one task with one seed, took an average of 5 hrs wall clock time. These run times are recorded while performing up to 8 runs in parallel.

Network architecture
We use a regularized variant of clipped double Q-learning [58, 59], specifically DroQ [42], for the critic. Each Q-function is parameterized by a 3-layer multi-layer perceptron (MLP) with ReLU activations. Each linear layer is followed by dropout [60] with a rate of 0.01 and layer normalization [61]. The actor is implemented as a tanh-diagonal-Gaussian, and is also parameterized by a 3-layer MLP that outputs a mean and covariance. Both actor and critic MLPs have hidden layers with 256 neurons and their weights are initialized with Xavier initialization [62], while their biases are initialized to zero.

Training and evaluation
We first collect 5000 seed observations with a uniform random policy, after which we sample actions using the RL policy. We then perform one gradient update every time we receive a new environment observation. We use the Adam [63] optimizer for neural network optimization. Evaluation happens in parallel in a background thread every 10000 steps. The latest policy checkpoint is rolled out by taking the mean of the output (i.e., no sampling). Since our environment is "fixed", we perform only one rollout per evaluation.

Reward formulation
The reward function for training the RL agent consists of three terms: 1) a key press term r_key, 2) a move finger to key term r_finger, and 3) an energy penalty term r_energy.

r_key encourages the policy to press the keys that need to be pressed and discourages it from pressing keys that shouldn't be pressed. It is implemented as:
r_key = 0.5 · (1/K) Σ_{i=1}^{K} g(‖k_s^i − 1‖_2) + 0.5 · (1 − 1{false positive}),
where K is the number of keys that need to be pressed at the current timestep, k_s is the normalized joint position of the key between 0 and 1, and 1{false positive} is an indicator function that is 1 if any key that should not be pressed creates a sound. g is the tolerance function from the dm_control [52] library: it takes the L2 distance of k_s and 1 and converts it into a bounded positive number between 0 and 1. We use the parameters bounds=0.05 and margin=0.5.

r_finger encourages the fingers that are active at the current timestep to move as close as possible to the keys they need to press. It is implemented as:
r_finger = (1/K) Σ_{i=1}^{K} g(‖p_f^i − p_k^i‖_2),
where p_f is the Cartesian position of the finger and p_k is the Cartesian position of a point centered at the surface of the key. g for this reward is parameterized by bounds=0.01 and margin=0.1.

Finally, r_energy penalizes high energy expenditure and is implemented as:
r_energy = |τ_joints|^⊤ |v_joints|,
where τ_joints is a vector of joint torques and v_joints is a vector of joint velocities.

The final reward function sums up the aforementioned terms as follows:
r_total = r_key + r_finger − 0.005 · r_energy
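To make the terms above concrete, here is a minimal sketch of a per-step reward computation using dm_control's tolerance helper as the g(·) above. The argument names, shapes, and the handling of timesteps with no active keys are illustrative assumptions, not the repository's exact implementation.

```python
import numpy as np
from dm_control.utils.rewards import tolerance  # plays the role of g(.)

def step_reward(key_pos, goal_keys, finger_pos, key_surface, tau, qvel, sounding):
    """One-step reward following the formulas above (illustrative names/shapes).

    key_pos:     (88,) normalized key joint positions in [0, 1].
    goal_keys:   (88,) boolean, keys that should be pressed at this timestep.
    finger_pos:  (K, 3) positions of the fingers assigned to the K active keys.
    key_surface: (K, 3) surface points of those K keys.
    tau, qvel:   joint torques and joint velocities.
    sounding:    (88,) boolean, keys currently producing sound.
    """
    active = np.flatnonzero(goal_keys)
    if active.size:
        press = tolerance(1.0 - key_pos[active], bounds=(0, 0.05), margin=0.5).mean()
        dists = np.linalg.norm(finger_pos - key_surface, axis=-1)
        near = tolerance(dists, bounds=(0, 0.01), margin=0.1).mean()
    else:
        press, near = 1.0, 1.0  # assumed behavior when no key should be pressed

    false_positive = bool(np.any(sounding & ~goal_keys))
    r_key = 0.5 * press + 0.5 * (1.0 - float(false_positive))
    r_finger = near
    r_energy = np.abs(tau) @ np.abs(qvel)
    return r_key + r_finger - 0.005 * r_energy
```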
Other hyperparameters
For a comprehensive list of hyperparameters used for training the model-free RL policy, see Table 3.

Hyperparameter | Value
Total train steps | 5M
Optimizer: Type | ADAM
Optimizer: Learning rate | 3 × 10^-4
Optimizer: β1 | 0.9
Optimizer: β2 | 0.999
Critic: Hidden units | 256
Critic: Hidden layers | 3
Critic: Non-linearity | ReLU
Critic: Dropout rate | 0.01
Actor: Hidden units | 256
Actor: Hidden layers | 3
Actor: Non-linearity | ReLU
Misc.: Discount factor | 0.99
Misc.: Minibatch size | 256
Misc.: Replay period | every 1 step
Misc.: Eval period | every 10000 steps
Misc.: Number of eval episodes | 1
Misc.: Replay buffer capacity | 1M
Misc.: Seed steps | 5000
Misc.: Critic target update frequency | 1
Misc.: Actor update frequency | 1
Misc.: Critic target EMA momentum (τ_Q) | 0.005
Misc.: Actor log std dev. bounds | [−20, 2]
Misc.: Entropy temperature | 1.0
Misc.: Learnable temperature | True
Table 3: Hyperparameters for all model-free RL experiments.

E Multitask BC Results
Figure 10 (rows: hidden dimension 64, 128, 256, 512, 1024; columns: (Train) Für Elise F1, (Test) Etude-12 F1, Train Loss, Validation Loss, Test Loss; legend: number of training songs 1, 2, 4, 8, 16, 32, 64): Caption for the figure.

F Baselines
Computing infrastructure and experiment running time
Our MPC codebase is implemented in C++ with MJPC [44]. Experiments were performed on a 2021 M1 Max Macbook Pro with 64 GB of RAM.

Algorithm
We use MPC with Predictive Sampling (PS) as the planner.
PS is a derivative-free sampling-based algorithm that iteratively improves a nominal sequence of actions using random search. Concretely, N candidates are created at every iteration by sampling from a Gaussian with the nominal as the mean and a fixed standard deviation σ. The returns from the candidates are evaluated, after which the highest scoring candidate is set as the new nominal. The action sequences are represented with cubic splines to reduce the search space and smooth the trajectory. In our experiments, we used N = 10, σ = 0.05, and a spline dimension of 2. We plan over a horizon of 0.2 seconds, use a planning time step of 0.01 seconds and a physics time step of 0.005 seconds.

Cost formulation
The cost function for the MPC baseline consists of two terms: 1) a key press term c_key, and 2) a move finger to key term c_finger. The costs are implemented similarly to the model-free baseline, but don't make use of the g function, i.e., they solely consist of unbounded L2 distances. The total cost is thus:
c_total = c_key + c_finger
Note that we experimented with a control cost and an energy cost, but they decreased performance so we disabled them.

Alternative baselines
We also tried the optimized derivative-based implementation of iLQG [64] also provided by [44], but this was not able to make substantial progress even at significantly slower than real-time speeds. iLQG is difficult to make real time because the action dimension is large and the algorithm's theoretical complexity is O(|A|^3 · H). The piano task presents additional challenges due to the large number of contacts that are generated at every time step. This makes computing derivatives for iLQG very expensive (particularly for our implementation, which used finite-differencing to compute them). A possible solution would be to use analytical derivatives and differentiable collision detection.

Besides online MPC, we could have used offline trajectory optimization to compute short reference trajectories for each finger press offline and then track these references online (in real time) using LQR. We note, however, that the (i) high dimensionality, (ii) complex sequence of goals adding many constraints, and (iii) overall temporal length (tens of seconds) of the trajectories pose challenges for this sort of approach.
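As a rough sketch of the Predictive Sampling loop described above (omitting the cubic-spline action parameterization and treating the model rollout as an abstract callable), one planning iteration could look like this; `rollout_return`, the clipping bounds, and the warm-start handling are assumptions for illustration.

```python
import numpy as np

def predictive_sampling_step(rollout_return, nominal, n_candidates=10, sigma=0.05,
                             action_low=-1.0, action_high=1.0, rng=None):
    """One Predictive Sampling iteration over an (H, A) action sequence.

    rollout_return: callable mapping an (H, A) action sequence to a scalar return,
                    e.g. by rolling the physics model forward from the current state.
    nominal:        (H, A) current best action sequence.
    Returns the improved nominal; its first action would be executed on the robot.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=(n_candidates,) + nominal.shape)
    candidates = np.clip(nominal + noise, action_low, action_high)
    candidates[0] = nominal                      # always re-evaluate the unperturbed nominal
    returns = np.array([rollout_return(c) for c in candidates])
    return candidates[int(np.argmax(returns))]
```

In the configuration above this would correspond to N = 10 candidates, σ = 0.05, and a 0.2 s planning horizon, with the nominal sequence shifted by one step and reused at the next planning call.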
_DYsYC9smK | DYNAMO-GRASP: DYNAMics-aware Optimizationfor GRASP Point Detection in Suction GrippersBoling Yang*1, Soofiyan Atar*1, Markus Grotz1, Byron Boots1, Joshua R. Smith11The University of Washington*Authors Contributed EquallyAbstract: In this research, we introduce a novel approach to the challenge of suc-tion grasp point detection. Our method, exploiting the strengths of physics-basedsimulation and data-driven modeling, accounts for object dynamics during thegrasping process, markedly enhancing the robot’s capability to handle previouslyunseen objects and scenarios in real-world settings. We benchmark DYNAMO-GRASP against established approaches via comprehensive evaluations in bothsimulated and real-world environments. DYNAMO-GRASP delivers improvedgrasping performance with greater consistency in both simulated and real-worldsettings. Remarkably, in real-world tests with challenging scenarios, our methoddemonstrates a success rate improvement of up to 48% over SOTA methods.Demonstrating a strong ability to adapt to complex and unexpected object dynam-ics, our method offers robust generalization to real-world challenges. The resultsof this research set the stage for more reliable and resilient robotic manipulationin intricate real-world situations. Experiment videos, dataset, model, and code areavailable at: https://sites.google.com/view/dynamo-grasp .1Keywords: Suction Grasping, Manipulation, Deep Learning, Vision1 IntroductionGrasp point detection is essential for successful robotic manipulation, as it requires identifying theoptimal location on an object for a robot to securely grasp and manipulate. Rapid and reliable grasp-ing capabilities for a wide range of objects can benefit various applications, such as warehouse andservice robots. Suction grasping is a popular grasping modality in real-world settings due to its sim-plicity and reliability when handling objects with nonporous, flat surfaces compared to parallel-jawor multi-finger grasping. Existing methods for finding suitable grasping areas for suction gripperstypically focus on maximizing suction seal quality and robustness against wrenches, taking intoaccount the object’s shape, size, and surface properties [1, 2].Most existing methods for suction grasping assume a top-down manipulation setting, where objectsare initially placed on a stable, flat surface before being grasped, and the robot attempts to graspthe object from above. This is due to the suction cup gripper requiring the robot to apply a specificamount of force to press the suction cup against the object’s surface, which causes the cup to deformand create an air seal, resulting in a secure suction grasp. Consequently, an object being graspedneeds sufficient and stable support in the direction opposite the robot’s pushing. Without suchsupport, the object may move in an unfavorable direction, leading to the suction cup’s failure toform the air seal. However, numerous real-world scenarios require a robot to grasp objects withoutstable support, such as grasping from a container with a side opening or from an unstable pile ofobjects. In these situations, the objects may exhibit significantly more complex dynamics duringthe manipulation process due to the displacement caused by the robot’s motion and the objects’interactions with one another. 
State-of-the-art grasp point detection methods for suction graspingcould suffer from these complex object-picking scenarios because they do not consider the objects’1Our dataset and code are open-sourced at: https://github.com/dynamo-grasp/dynamo-grasp7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.movement during the manipulation process. This limitation greatly restricts the range of scenariosin which suction grippers can be applied, preventing them from reaching their full potential in real-world manipulation tasks. Fig.1.a. illustrates a real-world manipulation task.In this work, our goal is to fully exploit the potential of suction grippers by developing a grasppoint detection model that not only examines quantitative metrics such as suction quality but, moreimportantly, considers the object dynamics during the picking process. This paper makes the fol-lowing contributions: 1. Suction Grasping by Taking Object Movement into Consideration:We describe the challenge of complex object movement during suction grasping, which no currentstate-of-the-art method adequately addresses. 2. An Open Source Novel Suction Grasping Sim-ulation: To address this challenge, we developed a high-performance suction grasping simulationenvironment using Isaac Gym[3]. This simulation environment models the influence of object dy-namics on the success of suction grasps throughout the grasping process. 3. A Dataset and LearnedModel: Utilizing the simulation environment, we generated a dataset that contains more than onemillion simulated grasps and trained a grasp point detection model that takes into account how themovement of objects and the robot’s kinematics impact the success of grasping. 4. Evaluation ina Real-world Warehouse Setting: We assessed two grasp point detection approaches alongsideour model. In both simulated and real-world experiments, our method surpassed the alternatives interms of accuracy and consistency.2 Related WorkFigure 1: a. Suction grasping for real-world sce-narios remains challenging due to limited analy-sis of object movements. b. SOTA methods onlyreason for object’s surface properties. Left: Thequasi-static spring model. Right : Wrench basis forthe suction cup. [1] c. Left: A warehouse pickingscenario. Middle : DexNet failing the grasp due toobject toppling. Right : An effective grasp pointthat prevents unfavorable object movements. Seethe project website for experiment videos.Suction-based robot manipulators have gainedwidespread popularity in real-world applica-tions. For instance, suction grasping methodsare used in manufacturing [4, 5, 6], warehous-ing [7, 8], underwater manipulation [9, 10],food and fruit manipulation [11, 12, 13, 14],etc. Another major direction where suctiongrasping has been applied is in the explorationof end-effector modalities [15, 16, 17, 18].Analytic Models. In the realm of conventionalsuction cup grippers, the effective analysis ofgrasp quality necessitates the modeling of var-ious cup properties. Given that these suctioncups are typically fashioned from elastic mate-rials, such as rubber or silicone, researchers fre-quently employ spring-mass systems to repre-sent their deformations [1, 2, 19]. Upon estab-lishing a secure grasp on an object using a suc-tion gripper, the suction cup is typically mod-eled as a rigid entity. The subsequent analysisinvolves assessing the forces imposed on theobject, encompassing those along the surfacenormal, friction-induced tangential forces andsuction-generated pulling forces [20]. Mahleret al. 
[1] introduced a combined model inDexNet3.0, incorporating both torsional fric-tion and contact moment within a compliantmodel of the contact ring between the cupand the object. This amalgamated model hasdemonstrated its efficacy and is employed insubsequent works [2, 21]. Additionally, this work adapts the analytic models from DexNet3.0 forthe purpose of data annotation.2Learning Suction Grasps. Machine learning research in robotics has been actively exploring the se-lection of optimal grasp points to enhance suction grasping for intricate manipulation tasks [22, 23].These tasks include novel object picking, object stewing, picking from containers, etc. Existingapproaches generate training data through either human expertise [24] or simulations [1, 2, 22, 25].DexNet3.0 [1], for instance, synthesizes training data and proposes suction grasp points that aidin forming an effective suction seal and ensuring wrench resistance. Several other studies cen-ter around clustered scenarios by creating models that take RGB-D input and predict graspablepoints [2, 25, 24]. Jiang et al. [22] proposed a methodology that simultaneously considers graspingquality and robot reachability for bin-picking tasks. Despite these studies primarily focusing on an-alyzing surface properties or robot configuration, they largely overlook how the displacement of theobject during the picking process might impact the success of the task. Addressing this particularaspect is the main focus of our work.Visual Pushing. This project also shares relevance with the active research area of object dis-placement modeling during manipulation [26, 27]. Effective non-prehensile manipulation strategieshave been successfully applied to enhance grasping operations [28, 29, 30]. Recently, reasoningobject translation via visual input has gained huge advances. Transporter and its variants [31, 32]have introduced a data-efficient learning paradigm that links visual inputs to desired robotic actions.Nonetheless, these methods are underpinned by a strong assumption of translational equivariancein visual representation, a condition that is often not met in non-table-top settings. Visual foresightmethods [33, 34] have offered a model-based framework that predicts future observations based on astate-action pair. However, these approaches necessitate searching through the action space given aspecific task, which can be time-consuming for intricate real-world problems. Other existing studies[35, 36, 37] have examined robotic manipulation from a sideways perspective. However, none ofthem have explicitly modeled the complex dynamics caused by the interaction between the robotand the objects.3 Problem StatementOur objective is to identify grasp points on a target object within a container filled with multipleitems, using a single-view depth image observation. The identified grasp points should enable arobot to successfully establish a suction grasp, even when the object lacks stable support in thedirection opposite to the robot’s push. Consistent with previous suction grasp point detection stud-ies [1, 2, 21], a grasp point is defined by a target point [p,v]. Here, p∈R3represents the center ofthe contact ring between the suction cup and the object, while v∈S2denotes the gripper’s approachdirection. The grasp labeling function is defined as 1if the grasp successfully forms a suction graspon the target object, and 0if it does not. 
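To make this parameterization concrete, the following is a minimal Python sketch of the grasp-point representation [p, v] and its binary success label as defined above; the class and function names are illustrative and not taken from the released code.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SuctionGrasp:
    """A suction grasp candidate [p, v] as defined in Sec. 3."""
    p: np.ndarray  # center of the suction-cup contact ring, a point in R^3
    v: np.ndarray  # approach direction of the gripper, a unit vector on S^2

    def __post_init__(self):
        self.p = np.asarray(self.p, dtype=float).reshape(3)
        v = np.asarray(self.v, dtype=float).reshape(3)
        self.v = v / np.linalg.norm(v)  # project the direction onto the unit sphere

def grasp_label(seal_formed_on_target: bool) -> int:
    """Grasp labeling function: 1 if a suction grasp forms on the target object, else 0."""
    return int(seal_formed_on_target)
```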
This section discusses the crucial criteria in order to form asuccessful suction grasp.3.1 Seal Quality and Wrench ResistanceA suction cup is capable of lifting objects owing to a differential in air pressure. This differential iscreated across the cup’s membrane by a vacuum generator, which pulls the object toward the cup.Ensuring a tight seal between the suction cup and the target object is crucial for successful operation.For sealing evaluation, we follow the highly effective quasi-static spring-based model proposed inDexNet 3.0 [1]. As shown in Fig.1.b., this model uses a combination of three spring systems torepresent suction cup deformation. A perimeter springs system is used to assess the deformationbetween adjacent vertices, namely viandvi+1. The cone springs system signifies the deformationof the suction cup’s physical structure, as determined by the distance between vianda. Lastly, theflexion springs, which connect vertex vitovi+2, are employed to resist bending along the surface ofthe cup.When the suction cup forms an air seal with the object, the suction gripper should be able to resistwrenches that are caused by gravity or other disturbances. The suction ring contact model proposedin DexNet 3.0 [1] efficiently encapsulates the forces experienced by a suction cup during a grasp.As depicted in Fig.1.b, this model takes into account five forces. The actuated normal force (fz)and3Figure 2: An overview of the proposed pipeline: a.We conducted system identification using 19everyday objects of diverse shapes, weights, volumes, and materials to ascertain the function Fdiscussed in Section 4. b.Calculation of deformation score at each simulation time step. c.&d. Gen-erating dataset with our simulation environment. e.&f. Trained DYNAMO-GRASP model outputsan affordance map highlighting optimal grasp areas.vacuum force Vrepresent the gripper pressing into the object along the contact axis and air suctionpulling the object, respectively. The friction forces (fx, fy)and torsional friction (τz)result fromthe normal force exerted between the suction cup and the object, acting as resistive forces. Lastly,the elastic restoring torques (τx, τy)result from the elastic restoring forces within the suction cup,which apply torque on the object along the boundary of the contact ring.3.2 Object MovementMost established suction grasping techniques assume little to no movement of the object during theprocess, which facilitates the deformation of the suction cup, thereby enabling the formation of anair seal for a secure grasp. However, in various practical manipulation scenarios, the target objectmight not have ample and steady support opposite the robot’s push. This lack of support can leadto undesirable shifts in the object’s position, preventing the successful creation of the air seal. Thesituation becomes even more complex when other objects are located near the target, due to theinteractions among them. This work addresses these complexities by modeling the movement ofobjects during the picking process, which enhances the applicability and efficiency of suction-basedgrippers in real-world manipulation tasks. Assuming that an object’s state is denoted by its Cartesianpose and velocity in a workspace, represented as s= (p, δp), the states of iobjects in a containerat time tcan be represented as st={st0, st1, ..., s ti}. 
At each time step, a robot equipped with asuction gripper performs a pushing action at= (ft,p,v), applying a force ftto a specific location,p, on the object’s surface in the direction of v. The state transition model p(st+1) =T(st, at)provides a distribution over the potential movements of the objects during the picking process.4 DYNAMO-GRASPThis section proposes a robot learning pipeline designed to create a grasp point detection model.This model suggests suction grasp points by analyzing combined information regarding object sur-face properties and object movement during the picking process. We first implemented a newsuction grasping simulation environment that accurately simulate suction cup properties and ob-jects’ displacement caused by the robot’s motion and the objects’ interactions with one another. Atransformer-based model is trained to take a depth image as input and generates an affordance mapover the target object’s surface, indicating the likelihood of a successful suction grasp if a robotexecutes a pushing action along the surface normal across various areas of the object. Please notethat our method primarily focuses on analyzing the impact of physical interaction between robots4and objects on the quality of a suction grasp. During execution, we filter out grasp points that of-fer inadequate air seals and wrench resistance, based on DexNet’s output. Fig.2 shows the systemarchitecture.4.1 Simulation Environment and Data GenerationIn order to eliminate the need for expensive real robot data collection, we carefully designed a sim-ulation environment that accurately replicates the physical properties of the suction cup, the motionof objects caused by robot grasping, as well as the robot’s kinematics during the picking process.We chose to implement our grasping simulation environment based on Isaac Gym, allowing allcomputations to be accelerated via GPUs. While Isaac Gym lacks important features that emulatedetailed suction grasping properties, our environment integrates several custom-implemented func-tional modules. It provides a pipeline that accurately simulates a suction-picking process by takinginto account factors such as suction cup properties, robot kinematic constraints, collisions, controlnoise, and object dynamics. Additional implementation details can be found in Appendix.A.1Modeling Suction Properties. The majority of popular physics simulations for robotics merelysimulate suction grasp through simplistic mechanisms. These mechanisms typically involve directlyattaching the object to the robot’s end-effector or creating an attracting force between the object andthe effector. However, these approaches neglect critical physical details. Specifically, to successfullyregister a suction grasp, the suction cup must be pushed and deformed to a sufficient extent that therim of the suction cup attaches to the surface of the object, thereby forming an air seal. Modelingthe amount of force required to form an air seal is crucial for this problem. This is because when thetarget object lacks rigid support, exerting sufficient force directly causes the object’s displacement.Understanding the magnitude of the force that the robot exerts on the target object is instrumentalin recreating accurate object dynamics. To model the deformation properties of the suction cup, wefirst adopted the Perimeter Springs in the quasi-static spring system, as discussed in Sec 3.1. 
Givena grasp point pon the object’s surface and the angle of incident v, this model calculates a suctiondeformation score Sdeform = 1−max( r1, r2, ..., r n), where ri= min(1 ,|(l′i−li)/li|). Here, lirepresents the original length of the perimeter spring linking vertex viandvi+1, and l′iis the lengthafter projecting the vertices onto the object’s surface. Using real-world data, we then conduct asystem identification process to ascertain the function F. This function signifies how forcefully therobot needs to press the suction cup to achieve a successful grasp, given a deformation score of aspecific grasp point: F(Sdeform )→fgrasp .Simulating Grasping Physics. (1) Kinematics: Our simulation accepts a robot’s model as input andcontrols the robot using an end-effector controller to attempt various suction grasps. This approachenables the simulation to demonstrate how the robot’s form factor and kinematic properties impactits grasp. For instance, some grasp points might be physically unattainable for the robot due to itsmanipulability and reachability constraints or collisions. (2) Generating Scenarios: Our experimentprimarily focuses on a warehouse lateral picking scenario. During our data generation process, thesimulation randomly selects one to three objects from our object set and spawns them into the samecontainer with random positions and orientations. We also implement domain randomization forobservation noise, objects’ weights, and controller parameters, ensuring the dataset reflects a rangeof diverse physical properties and robot behaviors. One of the objects in the container is randomlyassigned as the target object to be picked. (3) Sampling Grasp Points: Given a picking scenario,we sample two sets of candidate grasp points from the visible surface of the target object. Thefirst set is derived from uniform sampling across the entire surface, ensuring that the robot exploresdiverse picking strategies. The second set contains the grasp points with the highest score returnedby DexNet via the Cross-Entropy Method (CEM) sampling strategy, ensuring the robot exploresareas that DexNet deems preferable.Labeling Data. After sampling the candidate grasp points, our simulated robot ‘physically’ exe-cutes each candidate pby performing a sequence of pushing actions A={at}Tt=0, where eachaction at= (ft,p,v). Here, vis determined by the surface normal at p. The robot exerts a con-stant force ft=fcif the target object is unstable and moves in response to the gripper’s push.5Once the object finds a position with adequate support against the push, ftgradually increases untilthe suction cup deforms enough to form an air seal or until the object starts moving again. Thesimulation of the suction cup’s deformation and the precise estimation of suction grasp registrationinvolves a continuous calculation of Sdeform andfgrasp at each timestep. This process considersthe suction cup’s current position relative to the target object, as shown in Fig.2.b. A force sensoron the end-effector continuously monitors ft, and a grasp is deemed successful if ft≥fgrasp . Anyfailure to meet this condition, such as collisions or inaccuracies in end-effector positioning due tomanipulability or reachability issues, results in the grasp point being marked as unsuccessful. Forsuccessful grasps, we incorporated a penalization term pmove into the label to penalize unnecessaryobject movements. 
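The per-timestep success check just described can be summarized in a short sketch: compute the deformation score from the perimeter-spring lengths, map it to the required seal force through the identified function F, and compare it against the sensed pushing force. The piecewise constants in `required_seal_force` are read from the system-identification fit reported in Appendix A.1; the assumption that the score is rescaled to a 0-100 range for that fit, as well as the function names, are ours and not from the released code.

```python
import numpy as np

def deformation_score(rest_lengths, projected_lengths):
    """S_deform = 1 - max_i r_i with r_i = min(1, |(l'_i - l_i) / l_i|),
    where l_i are the rest lengths of the perimeter springs and l'_i their
    lengths after projecting the cup vertices onto the object surface."""
    l = np.asarray(rest_lengths, dtype=float)
    lp = np.asarray(projected_lengths, dtype=float)
    r = np.minimum(1.0, np.abs((lp - l) / l))
    return 1.0 - r.max()

def required_seal_force(s_deform, scale=100.0):
    """Identified mapping F(S_deform) -> f_grasp (in Newtons). The constants follow
    the piecewise-linear fit reported in Appendix A.1; `scale` converts the [0, 1]
    score to the 0-100 range assumed by that fit (an assumption made here)."""
    s = s_deform * scale
    return 7.66 - 0.06 * s if s <= 80.0 else 22.2 - 0.18 * s

def grasp_registered(measured_force, rest_lengths, projected_lengths):
    """A grasp registers once the sensed pushing force f_t reaches the force
    needed to deform the cup into an air seal, i.e. f_t >= f_grasp."""
    f_grasp = required_seal_force(deformation_score(rest_lengths, projected_lengths))
    return measured_force >= f_grasp
```

In the simulated labeling loop, a check of this form would run at every timestep alongside the collision and reachability checks before a candidate grasp point receives its label.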
Further details are discussed in Appendix.A.1.4.2 Model TrainingWe employ the suction grasping simulation, as described above, to generate a dataset. This datasetrepresents a warehouse scenario where a robot equipped with a suction gripper is tasked with extract-ing a target object from a small container filled with multiple unorganized items. The experimentalsetting is detailed in Sec.5. This dataset was used to train a model for grasp point selection. Themodel takes a single-view point cloud of the container’s interior, a segmentation mask identifyingeach object within the container and its boundaries as inputs. It then outputs an affordance maprepresenting the estimated probability of successful grasps at all potential grasp points on the targetobject. The largest value in the affordance map indicates the optimal grasp point, (p∗,v∗), for thegiven scenario. As shown in Fig.2.e, our model employs an auto-encoder architecture, integrating atransformer encoder and a deconvolutional decoder. As previously mentioned, our data generationprocess is designed to capture the inherent variabilities of complex real-world robotic suction grasp-ing tasks. These include variations stemming from the physical properties of different objects, robotconstraints, and stochasticity in the controller, among others. Empirically, we discovered that thefollowing loss function can effectively mitigate the adverse effects of the high aleatoric uncertaintyin our dataset during training: LYmax=1NPNi=1(yi−ˆyi)2,∀yi∈Ymax. Here, Ymax representsa subset of samples containing the nhighest-scored grasp points on an object. More details arediscussed in Appendix.A.2.5 ExperimentFigure 3: Left: The simulation environment for data gen-eration and experiments. The simulated objects with differ-ent weights, sizes, and shapes are displayed on the left sideof the robot. Right: In Section 5.1, challenging test casesare presented where only DYNAMO-GRASP was success-ful in grasping the target object. The orange, blue, and yel-low points indicate the grasp points proposed by DYNAMO-GRASP, DexNet, and the Centroid method, respectively.Our experiment focuses on roboticsuction grasping for industrial ware-house shelves [38]. Fig.1 depicts therobot setup and the industrial shelv-ing unit which is packed with ob-jects. The opening of these shelv-ing units is located on the side, whichmakes suction grasping significantlymore challenging compared to top-down manipulation scenarios, as therobot’s movements can trigger a se-ries of object displacements, leadingto objects being shifted or even top-pled over. Consequently, this sce-nario serves as an excellent evalua-tion environment for our work. Oursystem setup is as follows. Through-out our evaluation, we employed aUniversal Robots UR16e robot equipped with a Robotiq EPick suction gripper and an Intel Re-alsense L515 camera mounted on its wrist. A large variety of objects with different shapes, dimen-sions, and physical properties were used in our experiment, details can be found in Appendix.A.1.6Dyn(Full) Dyn w/ MSR Dyn w/o PEN Dex CenTotal Success Rate 88.05% 86.75% 82.93% 81.12% 78.78%Success Std 0.30 0.32 0.36 0.36 0.40Table 1: The first row of the table displays the grasping success rate for each method, calculatedfrom all 1300 picks. The second row provides the standard deviation of the success rate for eachmethod across various scenarios. 
The first three columns of the table present an ablation comparisonfor our DYNAMO-GRASP ( DYN ) method, while DexandCen represent the DexNet and Centroidmethods, respectively.In our experiment, we focus on evaluating three methods: 1. our method DYNAMO-GRASP (Dyn),2. DexNet3.0 (Dex), and 3. the Centroid method (Cen). DexNet3.0 is a SOTA suction-picking tech-nique, serving as a strong baseline. Meanwhile, the Centroid method, a straightforward approachinvolving suctioning on the object’s centroid, has proven effective in similar tasks at the AmazonRobotics Challenge [37, 39].5.1 Large-scale, Diverse Scenario Assessment, and Ablation TestTo comprehensively assess the performance and robustness of various methods for the suction grasp-ing challenge, we generated 260 diverse picking scenarios. We use the same simulation environmentas we used to generate our training dataset. Each of the three methods was tested with five suctiongrasps per scenario in simulation, resulting in 1300 simulated suction grasps for each method’s eval-uation. The scenarios were generated by sampling from a distribution that incorporates even greaterrandomness in object orientation than the dataset used for model training. These scenarios incorpo-rate a wide range of object configurations, leading to potentially complex object movements duringpicking.Comparing the first, fourth, and fifth columns of Table.1, it is evident that our method exhibits amarked improvement over both DexNet and the Centroid method in terms of overall success rateand consistent performance across various scenarios. Our method achieved the highest successrate of 88.05% and exhibited the least variance in success across different scenarios. The first,second, and third columns of Table.1 presents an ablation test that illustrates the contributions ofvarious components in our learning pipeline to the effective training of our model. Dyn(Full) is ourfinal model, Dyn w/ MSR represents a model trained with standard MSR loss instead of the LYmaxdescribed in Sec.4.2, and Dyn w/o PEN further remove the use of penalization term pmove in thelabeling process.5.2 Real-world EvaluationTo assess real-world efficacy, we executed 375 real-world suction grasps to evaluate the variousmethods. In this experiment, we curated three sets of scenarios: the Common set ,Challengingset, and Adversarial set , each embodying a distinct level or type of challenge for suction grasping.The statistic of all experimental trials and their comparison to the simulated trials are detailed inTable.2, 3, and 4 in Appendix.A.3.1.The Common Set: In this experiment, we sampled ten scenarios from the 260randomly generatedones as detailed in Sec.5.1. We then recreated these scenarios in the real world using objects withdimensions similar to those in the simulations. Each method was used to perform five grasps on eachof these scenarios. This evaluation set captures the typical challenges of most picking tasks in thisspecific warehouse environment. As shown in Fig.4, our model demonstrates an advantage witha total success rate of 94%, averaging 4.7 successful grasps out of five attempts and a standarddeviation of 0.67 . In contrast, both DexNet and the Centroid method average 4.2successful graspsout of five attempts. Their higher standard deviations, 0.92and1.03respectively, point to lessconsistent performance.The Challenging Set and Adversarial Set: We are particularly interested in the more challengingcases. 
Consequently, we devised two sets of scenarios in the real world to further test the capa-bilities of the three methods. The Challenging Set comprises five scenarios from the 260scenarios7Figure 4: Comparison of the total success rates of different methods underscores their real-worldperformance on the three evaluation sets described Sec.5.2. The total success rate is computed bydividing the number of successful grasps by the total number of attempts within an evaluation set.described in Sec.5.1. These scenarios exhibit the lowest combined success rate for the three methodsin simulation, representing the most challenging situations our simulation generated without humanbias. In contrast, the Adversarial Set comprises five scenarios designed by a human operator, specif-ically tailored to challenge these grippers. The objects featured in this set are everyday items thatwere not included during the training phase. As depicted in Fig.4 and Table.3,4 in Appendix.A.3.1,DYNAMO-GRASP markedly outperforms the two baseline methods in both total success rateand performance consistency in more challenging scenarios. On the challenging set, our methodachieved a success rate of 60%, whereas, on the adversarial set, it reached 76%. In stark contrast,DexNet and the Centroid method’s success rates are 24% and 36% for the challenging set, with bothachieving 28% on the adversarial set. Furthermore, DYNAMO-GRASP consistently executed morethan four successful grasps out of five attempts in over half of the scenarios in both sets. Meanwhile,the other two methods faltered, rarely managing even three successful grasps in any scenario withinthese evaluation sets.Qualitative Analysis. The Fig.7 in Appendix.A.3.1 depicts the grasp points chosen by variousmethods and indicates the success of each attempt during the adversarial evaluation. The figureoffers insights into the areas chosen by each method for grasping and sheds light on which areas aremore likely to lead to successful grasps. For example, in the first scenario, a tall bottle is partiallypropped up by a box in the back. The test checks the grasp method’s awareness of potential objecttoppling. DYNAMO-GRASP chose the bottle’s lower part, ensuring the box supported the pick.Some grasp points chosen by the other two methods were higher up on the bottle leading to topplingmovements. Similarly, in scenarios two, four, and five, DYNAMO-GRASP tends to select grasppoints from regions that are overlooked by the other methods, resulting in more successfulgrasps in these scenarios.6 Conclusion and LimitationThis paper discusses the challenge of complex object movement during suction grasping, whichno current state-of-the-art method adequately addresses. We introduced DYNAMO-GRASP, adynamic-aware grasp point detection method that selects grasp points by factoring in the impactof object movement on the success of suction grasping. DYNAMO-GRASP delivers improvedgrasping performance with greater consistency in both simulated and real-world settings. Notably,in real-world experiments involving challenging scenarios, our method exhibits an improvement ofup to 48% in success rate compared to alternative methods. Limitations and future work: Firstly,the dataset used in our simulation environment primarily includes objects with relatively simple ge-ometric shapes. This aspect could limit the efficacy of our method when dealing with objects ofuncommon or complex shapes. 
Similarly, our real-world experiments primarily involved simple ge-ometric objects, such as boxes and bottles. In future research, there’s potential to develop effectiveheuristics that combine information from both DYNAMO-GRASP and DexNet. While our methodemphasizes modeling object movement, DexNet primarily targets suction quality based on objectsurface geometry. Integrating the strengths of both methods could lead to enhanced performance inspecific applications.8AcknowledgmentsThis research is funded by the UW + Amazon Science Hub as part of the project titled, “RoboticManipulation in Densely Packed Containers.” We would like to thank Dr. Michael Wolf from Ama-zon for valuable discussions. We would like to thank Yi Li for many constructive inputs to themachine learning aspect of this paper. We thank all members of the Amazon manipulation projectwho generously lent us their computing resources and robot time.References[1] J. Mahler, M. Matl, X. Liu, A. Li, D. Gealy, and K. Goldberg. Dex-net 3.0: Computing robustvacuum suction grasp targets in point clouds using a new analytic model and deep learning. In2018 IEEE International Conference on robotics and automation (ICRA) , pages 5620–5627.IEEE, 2018.[2] H. Cao, H.-S. Fang, W. Liu, and C. Lu. Suctionnet-1billion: A large-scale benchmark forsuction grasping. IEEE Robotics and Automation Letters , 6(4):8718–8725, 2021.[3] V . Makoviychuk, L. Wawrzyniak, Y . Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin,A. Allshire, A. Handa, and G. State. Isaac gym: High performance gpu-based physics simula-tion for robot learning, 2021.[4] T. Zhang, C. Zhang, and T. Hu. A robotic grasp detection method based on auto-annotateddataset in disordered manufacturing scenarios. Robotics and Computer-Integrated Manufac-turing , 76:102329, 2022.[5] M. Yang, L. Yu, C. Wong, C. Mineo, E. Yang, I. Bomphray, and R. Huang. A cooperativemobile robot and manipulator system (co-mrms) for transport and lay-up of fibre plies in mod-ern composite material manufacture. The International Journal of Advanced ManufacturingTechnology , pages 1–17, 2021.[6] A. S. Olesen, B. B. Gergaly, E. A. Ryberg, M. R. Thomsen, and D. Chrysostomou. A collab-orative robot cell for random bin-picking based on deep learning policies and a multi-gripperswitching strategy. Procedia Manufacturing , 51:3–10, 2020.[7] S. Hasegawa, K. Wada, K. Okada, and M. Inaba. A three-fingered hand with a suction grippingsystem for warehouse automation. Journal of Robotics and Mechatronics , 31(2):289–304,2019.[8] M. Schwarz, A. Milan, C. Lenz, A. Munoz, A. S. Periyasamy, M. Schreiber, S. Sch ̈uller, andS. Behnke. Nimbro picking: Versatile part handling for warehouse automation. In 2017 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 3032–3039. IEEE, 2017.[9] H. S. Stuart, M. Bagheri, S. Wang, H. Barnard, A. L. Sheng, M. Jenkins, and M. R. Cutkosky.Suction helps in a pinch: Improving underwater manipulation with gentle suction flow. In 2015IEEE/RSJ international conference on intelligent robots and systems (IROS) , pages 2279–2284. IEEE, 2015.[10] H. Kumamoto, N. Shirakura, J. Takamatsu, and T. Ogasawara. Underwater suction gripperfor object manipulation with an underwater robot. In 2021 IEEE International Conference onMechatronics (ICM) , pages 1–7. IEEE, 2021.[11] P. Y . Chua, T. Ilschner, and D. G. Caldwell. Robotic manipulation of food products–a review.Industrial Robot: An International Journal , 30(4):345–354, 2003.[12] R. Morales, F. Badesa, N. Garcia-Aracil, J. Sabater, and L. 
Zollo. Soft robotic manipulation ofonions and artichokes in the food industry. Advances in Mechanical Engineering , 6:345291,2014.9[13] K. Gilday, J. Lilley, and F. Iida. Suction cup based on particle jamming and its performancecomparison in various fruit handling tasks. In 2020 IEEE/ASME International Conference onAdvanced Intelligent Mechatronics (AIM) , pages 607–612. IEEE, 2020.[14] C. Blanes, M. Mellado, C. Ortiz, and A. Valera. Technologies for robot grippers in pick andplace operations for fresh fruits and vegetables. Spanish Journal of Agricultural Research , 9(4):1130–1141, 2011.[15] S. Jeong, P. Tran, and J. P. Desai. Integration of self-sealing suction cups on the flexotendonglove-ii robotic exoskeleton system. IEEE Robotics and Automation Letters , 5(2):867–874,2020.[16] T. M. Huh, K. Sanders, M. Danielczuk, M. Li, Y . Chen, K. Goldberg, and H. S. Stuart. A multi-chamber smart suction cup for adaptive gripping and haptic exploration. In 2021 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 1786–1793. IEEE,2021.[17] B. Mazzolai, A. Mondini, F. Tramacere, G. Riccomi, A. Sadeghi, G. Giordano, E. Del Dottore,M. Scaccia, M. Zampato, and S. Carminati. Octopus-inspired soft arm with suction cupsfor enhanced grasping tasks in confined environments. Advanced Intelligent Systems , 1(6):1900041, 2019.[18] J. Nakahara, B. Yang, and J. R. Smith. Contact-less manipulation of millimeter-scale objectsvia ultrasonic levitation. In 2020 8th IEEE RAS/EMBS International Conference for Biomedi-cal Robotics and Biomechatronics (BioRob) , pages 264–271. IEEE, 2020.[19] X. Provot et al. Deformation constraints in a mass-spring model to describe rigid cloth be-haviour. In Graphics interface , pages 147–147. Canadian Information Processing Society,1995.[20] R. Kolluru, K. P. Valavanis, and T. M. Hebert. Modeling, analysis, and performance evaluationof a robotic gripper system for limp material handling. IEEE Transactions on Systems, Man,and Cybernetics, Part B (Cybernetics) , 28(3):480–486, 1998.[21] J. Mahler, M. Matl, V . Satish, M. Danielczuk, B. DeRose, S. McKinley, and K. Goldberg.Learning ambidextrous robot grasping policies. Science Robotics , 4(26):eaau4984, 2019.[22] P. Jiang, J. Oaki, Y . Ishihara, J. Ooga, H. Han, A. Sugahara, S. Tokura, H. Eto, K. Komoda,and A. Ogawa. Learning suction graspability considering grasp quality and robot reachabilityfor bin-picking. Frontiers in Neurorobotics , 16, 2022.[23] A. Mousavian, C. Eppner, and D. Fox. 6-dof graspnet: Variational grasp generation for objectmanipulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision ,pages 2901–2910, 2019.[24] A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu,E. Romo, et al. Robotic pick-and-place of novel objects in clutter with multi-affordance grasp-ing and cross-domain image matching. The International Journal of Robotics Research , 41(7):690–705, 2022.[25] Q. Shao, J. Hu, W. Wang, Y . Fang, W. Liu, J. Qi, and J. Ma. Suction grasp region prediction us-ing self-supervised learning for object picking in dense clutter. In 2019 IEEE 5th InternationalConference on Mechatronics System and Robots (ICMSR) , pages 7–12. IEEE, 2019.[26] M. T. Mason. Mechanics and planning of manipulator pushing operations. The InternationalJournal of Robotics Research , 5(3):53–71, 1986.[27] K. M. Lynch and M. T. Mason. Stable pushing: Mechanics, controllability, and planning. 
Theinternational journal of robotics research , 15(6):533–556, 1996.10[28] M. Dogar and S. Srinivasa. A framework for push-grasping in clutter. Robotics: Science andsystems VII , 1, 2011.[29] A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser. Learning synergiesbetween pushing and grasping with self-supervised deep reinforcement learning. In IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , 2018.[30] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies.The Journal of Machine Learning Research , 17(1):1334–1373, 2016.[31] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, et al. Transporter networks: Rearranging the visual world for roboticmanipulation. In Conference on Robot Learning , pages 726–747. PMLR, 2021.[32] H. Wu, J. Ye, X. Meng, C. Paxton, and G. S. Chirikjian. Transporters with visual foresight forsolving unseen rearrangement tasks. In 2022 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 10756–10763. IEEE, 2022.[33] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In 2017 IEEE Inter-national Conference on Robotics and Automation (ICRA) , pages 2786–2793. IEEE, 2017.[34] B. Huang, S. D. Han, J. Yu, and A. Boularias. Visual foresight trees for object retrieval fromclutter with nonprehensile rearrangement. IEEE Robotics and Automation Letters , 7(1):231–238, 2022. doi:10.1109/LRA.2021.3123373.[35] R. Wang, Y . Miao, and K. E. Bekris. Efficient and high-quality prehensile rearrangement incluttered and confined spaces. In 2022 International Conference on Robotics and Automation(ICRA) , pages 1968–1975. IEEE, 2022.[36] C. Eppner, S. H ̈ofer, R. Jonschkowski, R. Mart ́ın-Mart ́ın, A. Sieverling, V . Wall, and O. Brock.Lessons from the amazon picking challenge: Four aspects of building robotic systems. InRobotics: science and systems , pages 4831–4835, 2016.[37] C. Hernandez, M. Bharatheesha, W. Ko, H. Gaiser, J. Tan, K. van Deurzen, M. de Vries,B. Van Mil, J. van Egmond, R. Burger, et al. Team delft’s robot winner of the amazon pickingchallenge 2016. In RoboCup 2016: Robot World Cup XX 20 , pages 613–624. Springer, 2017.[38] M. Grotz, S. Atar, Y . Li, P. Torrado, B. Yang, N. Walker, M. Murray, M. Cakmak, and J. R.Smith. Towards robustly picking unseen objects from densely packed shelves. In RSS Work-shop on Perception and Manipulation Challenges for Warehouse Automation , 2023.[39] K.-T. Yu, N. Fazeli, N. Chavan-Dafle, O. Taylor, E. Donlon, G. D. Lankenau, and A. Ro-driguez. A summary of team mit’s approach to the amazon picking challenge 2015. arXivpreprint arXiv:1604.03639 , 2016.[40] B. Yang, G. Habibi, P. Lancaster, B. Boots, and J. Smith. Motivating physical activity via com-petitive human-robot interaction. In Conference on Robot Learning , pages 839–849. PMLR,2022.[41] B. Yang, L. Zheng, L. J. Ratliff, B. Boots, and J. R. Smith. Stackelberg games for learningemergent behaviors during competitive autocurricula. arXiv preprint arXiv:2305.03735 , 2023.[42] B. Huang, Y . Chen, T. Wang, Y . Qin, Y . Yang, N. Atanasov, and X. Wang. Dynamic handover:Throw and catch with bimanual hands. arXiv preprint arXiv:2309.05655 , 2023.[43] T. Chen, J. Xu, and P. Agrawal. A system for general in-hand object re-orientation. Conferenceon Robot Learning , 2021.11[44] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. 
In 2012IEEE/RSJ international conference on intelligent robots and systems , pages 5026–5033. IEEE,2012.[45] E. Coumans and Y . Bai. Pybullet quickstart guide, 2021.[46] L. Beyer, X. Zhai, and A. Kolesnikov. Better plain vit baselines for imagenet-1k. arXiv preprintarXiv:2205.01580 , 2022. URL https://arxiv.org/abs/2205.01580 .[47] Y . Li, M. Zhang, M. Grotz, K. Mo, and D. Fox. Stow: Discrete-frame segmentation andtracking of unseen objects for warehouse picking robots. In 7th Annual Conference on RobotLearning , 2023.[48] M. Heo, S. Hwang, S. W. Oh, J.-Y . Lee, and S. J. Kim. Vita: Video instance segmentation viaobject token association, 2022.[49] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar. Masked-attention mask trans-former for universal image segmentation. 2022.[50] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al. A density-based algorithm for discoveringclusters in large spatial databases with noise. In kdd, volume 96, pages 226–231, 1996.12A AppendixA.1 Simulation DetailsA simulation that accurately replicates the targeted robotic tasks can significantly enhance the effi-ciency of various machine learning algorithms in learning these tasks[40, 41, 42, 43]. Most physi-cal simulators, such as Mujoco[44], PyBullet[45], and IsaacGym[3], which excel at simulating thephysical properties of object motion, lack the functionality to simulate the characteristics of suctioncups during suction grasping. Our development efforts focus on utilizing the sensing and physicalinformation in IsaacGym to create more realistic suction-picking properties.System Identification: Our system identification process aims to accurately model the force re-quired by the robot to deform the suction cup. This ensures the rim of the cup adheres to the object’ssurface, forming an air seal. We chose 18 everyday objects with varied surface geometric charac-teristics, aiming to cover a broad spectrum of deformation scores. For each object, we executed tensuction grasps using our UR16 robot. To minimize measurement noise, the objects were held firmlyto limit movement during the grasping process. The force required for the suction gripper to achievea suction seal was detected by a sudden decrease in suction airflow and the force torque sensor lo-cated on the robot’s wrist. We observed that the characteristics of our suction cup differ significantlybetween nearly flat object surfaces and those that are more curved or intricate. Consequently, wechose to represent the function F(Sdeform )→fgrasp using a hybrid linear function:F(Sdeform ) =7.66−0.06∗Sdeform ifSdeform ≤8022.2−0.18∗Sdeform otherwiseTypically, the size and firmness of a suction cup influence its working range for objects of varyingsizes and weights. However, this doesn’t profoundly alter the nature of this grasping problem. Forinstance, when using a small suction cup to manipulate lighter, smaller objects, these objects typi-cally have less friction with the container and reduced inertia, making them more prone to toppling.However, even though we anticipate a certain degree of generalization to unseen suction cups, werecommend carrying out the system identification process to achieve optimal performance.Figure 5: Force exerted on an object as a function of the suction deformation score. Solid linesrepresent system identification fits for cylindrical (blue-colored line) and cuboidal (violet-coloredline) objects. 
The dotted line demarcates the distribution of data points between the two object types.

Domain Randomization: We performed domain randomization to vary object weights, where each of the ten chosen objects had their original weights varied by −5 g, −10 g, 5 g, and 10 g, leading to five weight versions per object. These objects were cylindrical or cuboidal with varying dimensions, inertia, and weights. The cylindrical ones had radii from 26 mm to 46.3 mm, heights between 95 mm and 155 mm, and weights ranging from 118 g to 602 g. In contrast, the cuboidal objects had lengths from 112 mm to 165 mm, breadths between 55 mm and 102 mm, and widths from 55 mm to 95 mm. Notably, with every weight change, the inertia properties were appropriately modified. When placing objects in the simulator, their orientations on all three axes were uniformly picked from −180° to 180°. Although their initial placements followed predefined bin coordinates, potential collisions might displace some objects. As a preventive measure, we ensured that each object remained within the bin limits and verified the stability of each object setup by spawning it thrice and monitoring its movement at each simulation step for minimal displacement until the suction gripper comes into contact with the target object. Lastly, to closely mimic our physical robot setup with the Intel RealSense L515 camera, we added Gaussian noise (mean: 0 mm, standard deviation: 0.9 mm) to the depth images.

Labeling: For each configuration, each grasp point score serves as an indicator of grasp success. A grasp point that fails to achieve a secure suction grip is assigned a definitive zero score. Additionally, the label is designated as a 'failure' if the robotic arm does not align and pick the object at the computed angle of incidence derived from the surface normals, ensuring the grasp adheres to the pre-calculated optimal orientation. Another critical constraint is that the arm must avoid unintended contact with any other object before establishing contact with the target, as such collisions can compromise the grasp's integrity and lead to potential inaccuracies or damage. On the other hand, successful grasp points are scored using the equation s = 1 − p_{move}, where

p_{move} = \max(0, \min(obj_{movement}, 0.3)),
obj_{movement} = \sum_{t=0}^{T-1} \lVert tran_{t+1} - tran_{t} \rVert + (1 - \lvert quat_{t+1} \cdot quat_{t} \rvert).

p_{move} is a penalization term that discourages unnecessary movement of the target object. obj_{movement} calculates the total movement of the target object during the picking process. The picking horizon T is discretized by a fixed interval, and t represents a time step within T. tran and quat represent the translation and orientation of the target object at a given time step, respectively.

Dataset: We implemented specific data augmentation techniques on our dataset to enhance our model's resilience against variances in real-world scenarios. We added Gaussian noise to the point cloud data and flipped the input data along with their corresponding labels, strengthening the model's ability to recognize various object orientations and thereby improving its generalization capabilities. These augmentation strategies significantly expanded the diversity of our training dataset, ensuring the model's proficiency in managing diverse input perturbations.
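As a cross-check of the scoring rule above, here is a minimal NumPy sketch of the movement penalty. It assumes the object pose is logged as a translation and a unit quaternion at each discretized time step; the function name and the `cap` keyword are illustrative.

```python
import numpy as np

def movement_penalty_score(translations, quaternions, cap=0.3):
    """Score s = 1 - p_move for a successful grasp, where p_move penalizes
    how much the target object moved over the picking horizon.

    translations: (T+1, 3) array of object positions, one per time step
    quaternions:  (T+1, 4) array of unit quaternions, one per time step
    """
    trans = np.asarray(translations, dtype=float)
    quats = np.asarray(quaternions, dtype=float)
    # Translational displacement summed over consecutive time steps.
    trans_term = np.linalg.norm(np.diff(trans, axis=0), axis=1).sum()
    # Rotational displacement: 1 - |q_{t+1} . q_t| is 0 for identical orientations.
    dots = np.abs(np.einsum("ij,ij->i", quats[1:], quats[:-1]))
    rot_term = (1.0 - dots).sum()
    obj_movement = trans_term + rot_term
    p_move = max(0.0, min(obj_movement, cap))
    return 1.0 - p_move
```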
The complete dataset, includinglabels and augmented inputs, consisted of around 12000 configurations, including augmentations,enhancing the dataset’s diversity and depth, which occupy approximately 10 GB of storage space.A.2 Learning DetailsModel Architecture: Our model employs a variation of the Vision Transformer (ViT), adopting thearchitecture from Beyer et al.[46]. We chose ViT because it represents a state-of-the-art architecturewidely used in vision classification tasks. Utilizing this architecture demonstrates that a standardnetwork, when trained with our dataset, effectively addresses the challenge of suction grasping incomplex object clusters. This is achieved without the need for custom modifications to the modelarchitecture.Loss Fuction: We initially experimented with both the standard MSE loss and Cross-entropy lossbut observed only mediocre performance from the model. As highlighted in Section 4, the domainrandomization process introduced significant stochasticity to our dataset. Empirically, we found thepresented loss function to be more effective in this specific context. The use of Ymax is a simpletechnique designed to mitigate the adverse effects of high aleatoric uncertainty present in the trainingdata. It updates the model by only taking into account the grasp points that the model deems to have14Figure 6: Training and validation metrics over epochs: The top row displays the metrics related totraining, with the left graph showing the training accuracy (calculated using all grasp points) andthe right graph presenting the training loss (determined with 15 highest-scored grasp points). Thebottom row focuses on validation metrics, with the left graph illustrating the validation accuracy(using all grasp points) and the right graph depicting the validation loss (using the 15 highest-scoredgrasp points)a high confidence of success. This approach is aimed at penalizing false positive predictions madewith high confidence or encouraging true positive predictions made with high confidence whiledisregarding low confidence labels, which usually arise due to data noise. We discovered that thisloss function led to improved prediction accuracy and produced a smoother affordance map.Hyperparameters: We trained our ViT model using the Adam optimizer with a learning rate of5e−5and a batch size of 128 images. The model converged in about 500 epochs, and the trainingwas conducted on an NVIDIA RTX 3090. Our ViT model consists of eight heads, each with adimension of 64. Consequently, we set Q, K, and V to 128, 257, and 1536, respectively. The modelaccepts a 4×256×256tensor as input. The first, second, and third channels represent the x, y,and z values of the cropped point cloud observation for the container. The fourth channel providesa segmentation mask that localizes the target object. The model produces a 256×256affordancemap. Each pixel in this map provides a score ranging from 0to1. A higher score indicates a morefavorable grasp point for achieving a successful suction grasp.Segmentation Mask: Within the Isaac GYM simulator, we adhere to ground truth segmenta-tion masks. For real-robot experiments, we used a specialized method, called “STOW” [47], thatcombines VITA [48] and the Mask2Former [49], which is tailored for joint unseen object instancesegmentation and tracking. 
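Returning to the L_Ymax loss described above: the PyTorch sketch below keeps, for each sample, only the n grasp points the model itself scores highest and applies a mean-squared error on that subset. The default n = 15 follows the training curves reported in Figure 6; selecting the subset by predicted score and masking out non-object pixels are our assumptions about the implementation, not details confirmed by the released code.

```python
import torch

def y_max_loss(pred_map, target_map, object_mask, n=15):
    """MSE restricted to the n highest-scored grasp points per sample.

    pred_map, target_map: (B, H, W) affordance maps in [0, 1]
    object_mask:          (B, H, W) boolean mask of target-object pixels
    """
    b = pred_map.shape[0]
    pred = pred_map.reshape(b, -1)
    target = target_map.reshape(b, -1)
    # Exclude non-object pixels from the top-n selection.
    scores = pred.masked_fill(~object_mask.reshape(b, -1), float("-inf"))
    _, idx = scores.topk(n, dim=1)            # the model's most confident grasp points
    pred_top = torch.gather(pred, 1, idx)
    target_top = torch.gather(target, 1, idx)
    return torch.mean((target_top - pred_top) ** 2)
```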
The method uses transformer-based architectures and dynamic trackinganchors to handle real-world visuals characterized by dense clustering and substantial intra-frameobject displacements.Grasp Point Selection: After getting the affordance map, we first identify the pixels that representthe target object in the map using the segmentation mask. Subsequently, we use the DBSCANalgorithm [50] to cluster regions displaying high-affinity scores exceeding 0.9. During this clusteringphase, each cluster encompasses a minimum of five pixels. The final grasp point is defined by thecentroid of the cluster with the highest average affinity score.15Figure 7: Real-world adversarial evaluation with five grasp points for each configuration: DYNAMOGRASP (our method), DexNet, and Centroid. The color-coded points represent the suggested grasppoints success and failure from various algorithms. The successfully identified grasp points aremarked by the color along the label “success” and “failure”.A.3 Experiment Details:In the real-world environment setup, distinct from the simulator approach, objects were first stowedinto a designated bin. Following this, a segmentation algorithm was employed to generate a maskdelineating each object. The user then selects the target object based on its unique value in thegrayscale mask image, referred to as the ‘target object id’. With the object identified, the next phaseinvolves running the inference of a user-provided algorithm to determine the optimal strategy forpicking the selected object. The entire operation is orchestrated through a state machine, ensuringa seamless transition between stages. Each state is connected sequentially. In evaluating successand failure across various methods, a grasp point is deemed unsuccessful if motion planning failsconsecutively on two occasions. Additionally, if the system does not create a suction with theobject, it is also considered a failure. A successful grasp is solely determined by the creation of agood suction with the target object.A.3.1 Extra Experimental Result:The Sim2Real Gap: Our DYNAMO-GRASP model was exclusively trained using simulated data.In most of our experiments, this model exhibited outstanding real-world performance without requir-ing tuning using real-world data. This indicates the model’s strong ability to generalize in real-worldconditions, showcasing a minimal sim2real gap. To delve deeper into our pipeline’s constraints, wepinpointed situations where simulation deemed the target object “impossible” to pick up. In theseinstances, all three picking techniques registered a 0% success rate in simulation. Importantly, thesesituations are infrequent, accounting for just around 3% of the 260 simulated test scenarios. We thenmirrored these situations in an actual warehouse environment and ran a real robot experiment asdelineated in Sec.5.2. Despite the struggles faced by all three methods to secure high success rates(DYN: 16%, Dex: 8%, Cen: 40%), the real-world challenges weren’t as formidable as projectedby the simulation. However, this did highlight a sim2real gap in these rare scenarios. Our observa-tions also revealed that, in cases where our simulation wasn’t entirely accurate, the centroid methodsurpassed the performance of learning-based approaches. 
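Returning to the grasp-point selection procedure above, the sketch below clusters high-affinity pixels with scikit-learn's DBSCAN and returns the centroid of the cluster with the highest mean affinity. The 0.9 threshold and five-pixel minimum follow the text; the eps value, the target-object mask argument, and the function name are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def select_grasp_pixel(affordance_map, target_mask, score_thresh=0.9,
                       min_pixels=5, eps=2.0):
    """Pick the grasp pixel from an (H, W) affordance map.

    Pixels on the target object with affinity above score_thresh are clustered
    with DBSCAN; the grasp point is the centroid of the cluster whose mean
    affinity is highest. Returns (row, col) or None if no cluster qualifies.
    """
    candidates = np.argwhere((affordance_map > score_thresh) & target_mask)
    if len(candidates) < min_pixels:
        return None
    labels = DBSCAN(eps=eps, min_samples=min_pixels).fit(candidates).labels_
    best_centroid, best_score = None, -np.inf
    for cluster_id in set(labels) - {-1}:          # -1 marks DBSCAN noise points
        pts = candidates[labels == cluster_id]
        mean_score = affordance_map[pts[:, 0], pts[:, 1]].mean()
        if mean_score > best_score:
            best_score, best_centroid = mean_score, pts.mean(axis=0)
    return None if best_centroid is None else tuple(np.round(best_centroid).astype(int))
```

In the full pipeline, the selected pixel would then be mapped back to a 3D grasp point p via the corresponding point-cloud channels, with the approach direction v taken from the local surface normal.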
This observation—that the simple centroid heuristic can outperform the learning-based methods when the simulation is inaccurate—emphasizes the value of refining learning-based models using actual world data.

Common Set
                         Real-world experiments      Simulation experiments
                         DYN      Dex      Cen       DYN      Dex      Cen
Scenario 1               3        2        3         5        5        5
Scenario 2               5        4        5         5        5        5
Scenario 3               5        4        5         5        5        5
Scenario 4               5        5        5         5        5        5
Scenario 5               5        5        5         5        5        5
Scenario 6               5        5        5         5        5        5
Scenario 7               4        4        4         5        5        5
Scenario 8               5        4        4         0        5        5
Scenario 9               5        4        2         5        0        5
Scenario 10              5        5        4         5        5        5
Avg. Success Grasps      4.7      4.2      4.2       4.5      4.5      5
Std. Dev.                0.675    0.919    1.033     1.581    1.581    0
Total Success Rate       94%      84%      84%       90%      90%      100%

Table 2: Comparative evaluation of grasp success rates in common scenarios for three methodologies: DYNAMO-GRASP (DYN), DexNet (Dex), and Centroid (Cen). The table enumerates the average success rates, standard deviations, and total success rates for each method.

Challenging Set
                         Real-world experiments      Simulation experiments
                         DYN      Dex      Cen       DYN      Dex      Cen
Scenario 1               4        1        2         5        0        0
Scenario 2               2        2        3         0        3        3
Scenario 3               4        1        0         5        0        0
Scenario 4               0        1        3         0        0        2
Scenario 5               5        1        1         5        0        0
Avg. Success Grasps      3        1.2      1.8       3        0.6      1
Std. Dev.                2        0.447    1.304     2.739    1.342    1.414
Total Success Rate       60%      24%      36%       60%      12%      20%

Table 3: Comparative evaluation of grasp success rates in challenging scenarios for three methodologies: DYNAMO-GRASP (DYN), DexNet (Dex), and Centroid (Cen). The table enumerates the average success rates, standard deviations, and total success rates for each method.

Adversarial Set
                         Real-world experiments
                         DYN      Dex      Cen
Scenario 1               5        3        2
Scenario 2               3        0        0
Scenario 3               1        0        1
Scenario 4               5        3        1
Scenario 5               5        1        3
Avg. Success Grasps      3.8      1.4      1.4
Std. Dev.                1.789    1.517    1.14
Total Success Rate       76%      28%      28%

Table 4: Comparative evaluation of grasp success rates in adversarial scenarios for three methodologies: DYNAMO-GRASP (DYN), DexNet (Dex), and Centroid (Cen). The table enumerates the average success rates, standard deviations, and total success rates for each method.
2Qrd-Yw4YmF | Sequential Dexterity: Chaining Dexterous Policiesfor Long-Horizon ManipulationYuanpei Chen∗, Chen Wang∗, Li Fei-Fei, C. Karen LiuStanford UniversityAbstract: Many real-world manipulation tasks consist of a series of subtasks thatare significantly different from one another. Such long-horizon, complex taskshighlight the potential of dexterous hands, which possess adaptability and versatility,capable of seamlessly transitioning between different modes of functionalitywithout the need for re-grasping or external tools. However, the challenges arise dueto the high-dimensional action space of dexterous hand and complex compositionaldynamics of the long-horizon tasks. We present Sequential Dexterity, a generalsystem based on reinforcement learning (RL) that chains multiple dexterous policiesfor achieving long-horizon task goals. The core of the system is a transition fea-sibility function that progressively finetunes the sub-policies for enhancing chainingsuccess rate, while also enables autonomous policy-switching for recovery fromfailures and bypassing redundant stages. Despite being trained only in simulationwith a few task objects, our system demonstrates generalization capability to novelobject shapes and is able to zero-shot transfer to a real-world robot equipped witha dexterous hand. Code and videos are available at sequential-dexterity.github.io.Keywords: Dexterous Manipulation, Long-Horizon Manipulation, ReinforcementLearning6HDUFK2ULHQW*UDVS,QVHUW6HTXHQWLDOGH[WHURXVPDQLSXODWLRQ,QLWLDOVWDWH*RDOFigure 1: We present Sequential Dexterity , a system that learns to chain multiple versatile dexterousmanipulation motions for tackling long-horizon tasks (e.g., building a block structure from a pile ofblocks), which is able to zero-shot transfer to the real world.1 IntroductionMany real-world manipulation tasks consist of a sequence of smaller but drastically different subtasks.For example, in the task of Lego structure building (Fig.1), the task involves searching within a boxto locate a specific block piece. Once found, the piece is then oriented andgrasped firmly in hand,setting it up for the final insertion at the goal location. Such a task demands a flexible and versatile*Equal contribution. Correspondence to Chen Wang <chenwj@stanford.edu >7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.manipulator to adapt and switch between different modes of functionality seamlessly, avoidingre-grasping or use of external tools. Furthermore, it requires a long-horizon plan that considers thetemporal context and functional relationship between the subtasks in order to successfully executethe entire sequence of tasks. These requirements motivate the use of dexterous hand, which has thepotential to reach human-level dexterity by utilizing various hand configurations and their inherentcapabilities. However, fully utilizing dexterous hands to achieve long-horizon, versatile tasks remainsan outstanding challenge, calling for innovative solutions.Recent developments in dexterous manipulation have made significant strides in areas such asobject grasping [ 1–3] and in-hand manipulation [ 4–9]. However, these works primarily investigatesingle-stage skills, overlooking the potential of sequencing multiple dexterous policies for long-horizontasks. A naive way to chain multiple dexterous policies together is to simply execute a single-stage skillone after the other. 
While the simple strategy works in some scenarios [ 10–12], a subtask in general caneasily fail when encountering a starting state it has never seen during training. Regularizing the statespace between neighboring skills can mitigate this out-of-distribution issue [ 13,14], but long-horizondexterous manipulation requires a comprehensive optimization of the entire skill chain, due to thecomplex coordination between non-adjacent tasks. For instance, as depicted in Fig. 1, the robot needsto strategize in advance when orienting the block, aiming for an optimal object pose that facilitatesnot only the immediate subsequent grasping but also the insertion task in the later stage of the task.This paper proposes a new method to effectively chain multiple high-dimensional manipulation policiesvia a combination of regularization and optimization. We introduce a bi-directional process consistingof a forward initialization process and a backward fine-tuning process. The forward initializationprocess models the end-state distribution of each sub-policy, which defines the initial state distributionfor the subsequent policy. The forward initialization process associates the preceding policy withthe subsequent one by injecting a bias in the initial state distribution of the subsequent policy duringtraining. Conversely, we also introduce a backward fine-tuning mechanism to associate the subsequentpolicy with the preceding one. We define a Transition Feasibility Function which learns to identifyinitial states from which the subsequent policy can succeed its task. The transition feasibility functionis used to fine-tune the preceding policy, serving as an auxiliary reward signal. With the backwardfine-tuning process, the transition feasibility function effectively backpropagates the long-term goalsto influence the earlier policies via its learning objectives, thereby enabling global optimization acrossthe entire skill chain. Once the policies are trained and deployed, the transition feasibility functions canbe repurposed to serve as stage identifiers that determine the appropriate timing for policy switchingand which subsequent policy to switch to. The transition feasibility function substantially improvesthe robustness of task execution and increases the success rate. Our experimental results demonstratethat the bi-directional optimization process notably enhances the performance of chaining multipledexterous manipulation policies, which can further zero-shot transfer to a real-world robot armequipped with a dexterous hand to tackle challenging long-horizon dexterous manipulation tasks.In summary, the primary contributions of this work encompass:• The first to explore policy chaining for long-horizon dexterous manipulation.•A general bi-directional optimization framework that effectively chains multiple dexterousskills for long-horizon dexterous manipulation.•Our framework exhibits state-of-the-art results in multi-stage dexterous manipulation tasksand facilitates zero-shot transfer to a real-world dexterous robot system.2 Related WorkDexterous manipulation. Dexterous manipulation represents a long-standing area of researchin robotics [ 15–19]. With its high degree of freedom, a dexterous hand can execute a variety ofmanipulation skills [ 1–7,9,20–24]. Traditional algorithms have typically addressed these challengesby leveraging trajectory optimization founded on analytical dynamics modeling [ 17–19]. 
Thesetechniques pose simplification over the active contacts between the hand and objects, limiting theireffectiveness in more complex tasks. Conversely, deep reinforcement learning have exhibited thecapacity to learn dexterous skills without assumptions for simplification [ 7,8,23]. Despite their2D%XLOGLQJ%ORFNV6LPDQG5HDOE7RRO3RVLWLRQLQJ6WHS*UDVS,QLWLDOVWDWH6WHS2ULHQW2XUV%DVHOLQH:ULVWFDPHUD7RSGRZQFDPHUDFigure 2: Overview of the environment setups. (a) Workspace of Building Blocks task in simulationand real-world. (b) The setup of the Tool Positioning task. Initially, the tool is placed on the table in arandom pose, and the dexterous hand needs to grasp the tool and re-orient it to a ready-to-use pose. Thecomparison results illustrate how the way of grasping directly influences subsequent orientation.notable flexibility in learning dexterous primitives, these methods predominantly focus on singularmanipulation tasks such as object re-orientation [ 5,6,25–27] or isolated skills for reset-free learningsystem [ 28]. Our work prioritizes the chaining of multiple dexterous primitives, which incorporatesthe skill feasibility into a comprehensive learning framework for long-horizon dexterous manipulation.Long-horizon robot manipulation. Training a robot to perform long-horizon manipulation tasksfrom scratch is challenging, primarily due to the cumulative propagation of errors throughout thetask execution process. Established methods tackle these tasks by breaking them down into simpler,reusable subtasks [ 29]. Typically, these algorithms comprise a set of low-level sub-policies, whichcan be obtained through various means, such as unsupervised exploration [ 30–34], learning fromdemonstration [ 35–39] and pre-defined measures [ 11,40–44]. Despite their distinct merits, these worksdo not address the specific challenge of long-horizon manipulation in the context of dexterous hands.This challenge largely stems from the compounded complexity produced by the extensive state space ofa hand coupled with the extended scope of long-horizon tasks. Therefore, even when provided with high-level plans, ensuring a seamless transition between dexterous policies remains a formidable challenge.Skill-chaining. Prior policy-chaining methods focus on updating each sub-task policy to encompassthe terminal states of the preceding policy [ 10,13]. However, given the high degree of freedomcharacteristic of a hand, the terminal state space undergoes limitless expansion, thereby complicatingeffective training. Closely related to our work is Lee et al. [ 14], wherein a discriminator is learnedto regulate the expansion of the terminal state space. Nevertheless, its uni-directional trainingprocess restricts optimization to adjacent skills only, disregarding the influence of long-term goalson early non-adjacent policies. In contrast, our bi-directional training mechanism enables thebackpropagation of the long-term goal reward to optimize the entire policy chain. Our conceptof backward fine-tuning draws significant inspiration from goal-regression planning in classicalsymbolic planning literatures [ 45] (also known as pre-image backchaining [ 46–50]). However, theseworks assume access to a set of pre-defined motion primitives, which is hard to obtain in dexterousmanipulation setups. 
Our work focuses on learning and chaining a sequence of dexterous policies from scratch, targeting the accomplishment of long-horizon task objectives.

3 Problem Setups

We study the task of chaining a sequence of dexterous policies to accomplish long-horizon manipulation tasks, such as constructing a Lego-like block structure or picking up a tool and positioning it in a desired pose. Both tasks require multiple dexterous skills to complete, making them highly suitable for studying long-horizon dexterous manipulation with skill chaining.

Constructing a structure of blocks. This long-horizon task includes four different subtasks: searching for a block with the desired dimensions and color from a pile of cluttered blocks, orienting the block to a favorable position, grasping the block, and finally inserting the block into its designated position on the structure. This sequence of actions repeats until the structure is completed according to the given assembly instructions. The block set, initially arranged in a random configuration, comprises eight distinct types (different shapes, masses and colors), totaling 72 blocks. We operate under the assumption of having access to an assembly manual that outlines the sequence and desired positioning of each block piece on the board. The environment provides the robot with two RGB-D camera views—one from a top-down camera over the box and the other from the wrist-mounted camera (as shown in Fig. 2(a)). No other sensors are used in either simulation or the real world. More details on the sub-task definitions are introduced in Sec. 4.4 and Sec. 5.1.

Figure 3: Overview of Sequential Dexterity. (a) A bi-directional optimization scheme consists of a forward initialization process and a backward fine-tuning mechanism based on the transition feasibility function. (b) The learned system is able to zero-shot transfer to the real world. The transition feasibility function serves as a policy-switching identifier to select the most appropriate policy to execute.

Tool positioning. This task involves two subtasks: grasping a tool with a long handle (e.g., hammer, spatula) from a table and in-hand orienting it to a ready-to-use pose (e.g., making the flat side of the hammer head face the nail, as shown in Fig. 2(b)). The environment provides the robot with the 6D pose of the target tool. For more details on the task setups, please refer to Appendix F.

4 Sequential Dexterity

We propose a bi-directional optimization process to tackle long-horizon dexterous manipulation tasks. Our approach contains three main components: (1) training dexterous sub-policies (Sec. 4.1), (2) chaining sub-policies through fine-tuning (Sec. 4.2), and (3) improving system robustness through automatic policy-switching (Sec. 4.3).

4.1 Learning dexterous sub-policies

Training a dexterous manipulation policy from scratch to solve long-horizon tasks, like building a block tower (Fig. 1), is significantly challenging given the high degrees of freedom of a dexterous hand (evidenced by the results of RL-scratch in Tab. 1 and Tab. 2).
As such, we first decompose a long-horizon task into a K-step sequence of sub-tasks $G=(g_1,g_2,\dots,g_K)$ and train each sub-policy $\pi^i$ with the Proximal Policy Optimization (PPO) [51] algorithm. We formulate each sub-task as a Markov Decision Process (MDP) $\mathcal{M}=(\mathcal{S},\mathcal{A},\pi,\mathcal{T},R,\gamma,\rho)$, with state space $\mathcal{S}$, action space $\mathcal{A}$, policy of the agent $\pi$, transition distribution $\mathcal{T}(s_{t+1}\,|\,s_t,a_t)$ ($s_t\in\mathcal{S}$, $a_t\in\mathcal{A}$), reward function $R$, discount factor $\gamma\in(0,1)$, and initial state distribution $\rho$. The policy $\pi$ outputs a distribution of motor actions $a_t$ based on the current state inputs $s_t$. The goal is to train the policy $\pi$ to maximize the sum of rewards $\mathbb{E}_{\pi}\big[\sum_{t=0}^{T-1}\gamma^t r_t\big]$, where $r_t=R(s_t,a_t,s_{t+1})$, in an episode with $T$ time steps.

However, due to the large state space of a dexterous hand, it is difficult to accurately sample the potential initial states for training individual sub-policies. Take the insertion task as an example (Fig. 3, sub-policy $\pi^4$): randomly sampling the initial hand configuration and the object's in-hand pose does not assure a physically stable grasp. However, we make a critical observation: the successful end states of the prior sub-task $\pi^{i-1}$ inherently provide plausible initial states for $\pi^i$ to start from. A similar observation is found in [7]. Inspired by this, we propose a forward initialization training scheme (Fig. 3 (a)). Given a long-horizon task $G=(g_1,g_2,\dots,g_K)$, our framework sequentially trains the sub-policies according to the task's chronological order. After training each sub-policy $\pi^i$, we run policy rollouts and collect a set of successful terminal states $\{s^i_T\}$, which is later used as the initial state distribution $\rho^{i+1}$ for training the succeeding policy $\pi^{i+1}$. This forward training method ensures the validity of the initial states and makes the learning of dexterous policies effective. More details of training sub-policies can be found in Sec. 4.4.

4.2 Policy chaining with transition feasibility function

Chaining multiple policies using forward initialization alone may not guarantee success, since the previous policy $\pi^{i-1}$ might reach a termination state that its successor $\pi^i$ cannot solve. This issue arises because the preceding policy $\pi^{i-1}$ does not take into account whether its end states are feasible for the subsequent policy $\pi^i$ to succeed. To address this challenge, it is crucial to convey the feasibility of the following policy $\pi^i$ in reverse to its predecessor $\pi^{i-1}$, enabling the latter to optimize toward states that $\pi^i$ can handle. Based on this hypothesis, we propose a backward policy fine-tuning mechanism with a transition feasibility function (Fig. 3 (b)).

Learning transition feasibility function. The feasibility of a given state for a policy can be described as the policy's ability to succeed in the end when starting from that state. We formalize this concept by creating a function that maps the transition state $s^i_0\in\rho^i$ (which is equivalent to $s^{i-1}_T$) to the expected sum of reward within the sub-task execution, $\mathbb{E}_{\pi^i}\big[\sum_{t=0}^{T-1}r_t\big]$. We name this function, $F:\mathcal{S}\mapsto\mathbb{R}$, the Transition Feasibility Function. However, a single state $s^{i-1}_T$ is not sufficient to differentiate the performance of $\pi^i$. In particular, the velocity of the object at the end of the previous sub-task may be critical to the performance of $\pi^i$, and it cannot be captured by $s^{i-1}_T$ alone. As a result, the transition feasibility function takes a sequence of observation states $s^{i-1}_{[T-10:T]}$ (10 steps in our experiments) as input and employs a multi-head attention network [52] to extract suitable temporal information for learning $F$; a minimal sketch of one possible architecture is shown below.
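The paper only specifies a multi-head attention network over a 10-step state history trained to regress the expected sub-task return; the embedding size, number of heads, pooling scheme, and all names in the following sketch are our own assumptions rather than the authors' released implementation.

```python
# Hypothetical sketch of a transition feasibility function F^i over a 10-step state window.
import torch
import torch.nn as nn

class TransitionFeasibilityFunction(nn.Module):
    def __init__(self, state_dim: int, embed_dim: int = 256, num_heads: int = 4, history: int = 10):
        super().__init__()
        self.embed = nn.Linear(state_dim, embed_dim)              # per-step state encoder
        self.pos = nn.Parameter(torch.zeros(history, embed_dim))  # learned positional encoding
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, states):                       # states: (B, 10, state_dim)
        x = self.embed(states) + self.pos            # (B, 10, embed_dim)
        x, _ = self.attn(x, x, x)                    # self-attention over the temporal window
        return self.head(x.mean(dim=1)).squeeze(-1)  # (B,) predicted return of pi^i

def feasibility_loss(F_i, state_window, episode_return):
    # Regression target used in the learning objective given in Eq. (1) below:
    # the sum of rewards collected by pi^i when started from this transition state.
    return ((F_i(state_window) - episode_return) ** 2).mean()
```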
The final learning objective of the transition feasibility function $F^i$ for sub-policy $\pi^i$ is:

$$\mathcal{L}^i = \Big\| F^i\big(s_{[T-10:T]}\big) - \mathbb{E}_{\pi^i}\Big[\textstyle\sum_{t=0}^{T-1} r_t\Big] \Big\|^2 \qquad (1)$$

Backward policy fine-tuning. Once $F^i$ is trained, we can fine-tune the prior policy $\pi^{i-1}$ by incorporating $F^i$ as an auxiliary reward component. The fine-tuning starts with updating the second-to-last sub-task policy $\pi^{K-1}$ and sequentially moves backward, refining each preceding policy until the first one, $\pi^1$, is updated. In each fine-tuning step, we utilize $F^i$ as an additional reward, combined with the original sub-task reward $R^{i-1}$, to fine-tune policy $\pi^{i-1}$. The final policy fine-tuning reward function is:

$$R^{i-1\,\prime}(s_t,a_t,s_{t+1};F^i) = \lambda_1 R^{i-1}(s_t,a_t,s_{t+1}) + \lambda_2 F^i\big(s_{[T-10:T]}\big), \qquad (2)$$

where $\lambda_1$ and $\lambda_2$ are weighting factors. Once $\pi^{i-1}$ has been refined, we execute policy rollouts to gather data that maps the initial states $s^{i-2}_{[T-10:T]}$ to the accumulated reward $\mathbb{E}_{\pi^{i-1}}\big[\sum_{t=0}^{T-1}r_t\big]$ received by the policy $\pi^{i-1}$ at its terminal state $s^{i-1}_T$. This data is used to construct a new transition feasibility function $F^{i-1}$, which in turn is used to fine-tune the preceding policy $\pi^{i-2}$. The implementation pseudocode of the bi-directional forward and backward training process is given in Appendix A.

4.3 Policy switching with transition feasibility function

A key challenge in chaining multiple dexterous policies is to determine when to switch to the next policy and which policy to execute next. Prior works approach this issue by establishing a predetermined execution horizon for a policy, transitioning abruptly to the subsequent policy once the maximum step count is attained [10, 14, 53–55]. Such pre-scheduled policy transitions work in some scenarios, but they are not suitable for dexterous manipulation that involves non-prehensile maneuvers and in-hand manipulation. For instance, if a robot is reorienting an object in-hand, a premature policy switch before the object is stabilized could result in task failure. The key to tackling this issue is to automatically determine the appropriate switching time such that the transition state will lead to success of the next policy. The transition feasibility function provides exactly the information we need for identifying the switch timing. As such, during execution, we repurpose our trained transition feasibility functions as a policy-switching identifier. At each time step, the transition feasibility function of the next sub-policy outputs a feasibility score $c^{i+1}_t = F^{i+1}\big(s_{[t-10:t]}\big)/h^{i+1}$, where $h^{i+1}$ is a threshold hyperparameter defined based on the reward of successful task executions. The ideal time of policy switching can then be defined as the moment when $c^{i+1}_t > 1$.

Figure 4: Examples of policy-switching with the transition feasibility function. Each example contains an image from the wrist-mounted camera (left) and its corresponding feasibility score $c^i$ output by the transition feasibility function (right). We highlight the target block in the image for better visualization. The policy-switching process visits each sub-policy in reverse order. The first sub-policy with a feasibility score $c^i > 1.0$ is selected for execution.

Simply executing sub-policies sequentially may not guarantee successful task execution, since the robot sometimes needs to recover using a previous policy and sometimes needs to bypass a future policy if the sub-task has already been achieved. Thus, effective policy switching requires the robot to consider not only the current policy and its successor, but the entire skill chain. To this end, we group the learned transition feasibility functions $(F^2, F^3, \dots, F^K)$ into a stage estimator. At each policy-switching step, we calculate the feasibility score starting from the final transition feasibility function of the entire task, $c^K_t = F^K\big(s_{[t-10:t]}\big)/h^K$, and sequentially move backward. The first sub-policy whose feasibility score satisfies $c^i_t > 1$ is selected as the next policy to execute. If none of the feasibility scores satisfy this condition, the robot restarts from the beginning of the entire task. Leveraging the learned transition feasibility functions in this manner enhances the robot's robustness against unexpected failures during policy execution, while also allowing it to bypass redundant stages, thus promoting efficient task execution. A minimal sketch of this switching logic is shown below.
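The function and container names in the following sketch are ours; the scoring rule $c^i_t = F^i(s_{[t-10:t]})/h^i$, the backward scan from $F^K$, and the restart fallback follow the description above.

```python
# Hedged sketch of the stage-estimator / policy-switching loop described in Sec. 4.3.
def select_next_policy(feasibility_fns, thresholds, state_window):
    """feasibility_fns: dict {i: F^i} for sub-policies 2..K (trained as in Eq. (1)).
    thresholds: dict {i: h^i}, set from the rewards of successful task executions.
    state_window: the last 10 observation states s_[t-10:t].
    Returns the index of the sub-policy to execute next, or None to restart the task."""
    for i in sorted(feasibility_fns.keys(), reverse=True):   # scan from F^K backward
        score = feasibility_fns[i](state_window) / thresholds[i]
        if score > 1.0:                                       # c^i_t > 1: stage i is feasible now
            return i
    return None                                               # no stage is feasible; restart the task
```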
4.4 Implementation details

RL reward. Training sub-policies requires pre-defined sub-task rewards $\{R^i\}_{i=1}^K$. Establishing such rewards can be complex, as the sub-goals that would most contribute to the overall task accomplishment may not be readily apparent. However, we find that the backward fine-tuning mechanism can transmit the goal of the entire task to each sub-task. For instance, the transition feasibility function of the inserting policy informs the grasping policy about the in-hand object pose that would be most beneficial for the insertion. Furthermore, such backward transmission can influence all preceding sub-policies, enabling the entire policy chain to optimize for the overall task goal. This mechanism alleviates the burden of reward shaping and allows us to use standard sub-task rewards that are agnostic to the final task goal for training sub-policies. For instance, the sub-task reward of grasping is defined by whether the target object has been lifted; the specifics of how the object is held in hand are automatically managed during the backward fine-tuning process. Detailed descriptions of each sub-task reward are documented in Appendix D.

State-action space. The state space for the sub-policies is built around the perspective of the hand. It integrates proprioception and motor tactile [56, 57] information from the 16-degree-of-freedom Allegro Hand, as well as the target object's 6D pose in the reference frame of the wrist-mounted camera. During simulation-based sub-policy training, we augment this state space with additional information, such as the velocities of each hand joint and of the target object. In real-world deployments, these states are abstracted away via policy distillation [6, 7, 58]. The action space of our system includes the 16-dimensional hand joints and the 3D wrist translation of the robot arm. To concentrate the sub-policy learning on critical aspects of the manipulation task, when the target (either object or goal location) is detected to be more than 5 centimeters from the hand relative to the wrist camera, we employ a motion-planning-based operational space controller (OSC [59]) to move the end-effector to a position 5 centimeters above the target. More details of the state-action space are introduced in Appendix C; a rough sketch of how such an observation vector can be assembled is shown below.
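The exact index layout of the observation is listed in Appendix C; the helper name and argument grouping in the following sketch are our own, and the per-component comments are assumptions based on the description above.

```python
# Hedged sketch of assembling the per-step observation and action interfaces described above.
import numpy as np

def build_observation(joint_pos, joint_vel, fingertip_states, hand_base_state,
                      object_state, last_action, motor_tactile):
    """All inputs are numpy arrays; velocity-like terms are simulation-only and are
    abstracted away in the real world via teacher-student policy distillation (Appendix B)."""
    return np.concatenate([
        joint_pos,          # robot joint positions (arm + 16-DoF Allegro hand)
        joint_vel,          # joint velocities (simulation only)
        fingertip_states,   # per-fingertip pose and velocities
        hand_base_state,    # hand base pose and velocities
        object_state,       # target object 6D pose (wrist-camera frame; velocities in sim)
        last_action,        # action taken at the previous time step
        motor_tactile,      # motor-current-based tactile signal
    ])

# Action: 16 hand joint targets plus the 3D wrist translation of the arm.
ACTION_DIM = 16 + 3
```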
5 Experiments

5.1 Experiment setups

Figure 5: (a) Performance improvement of Ours given 0/1/2/3 maximum policy-switching times. (b) Visualization of object poses with a high feasibility score for the Grasp sub-policy in the Building Blocks task. The x, y, and z axes are the roll, yaw, and pitch of the object, respectively. In Ours, each point in the diagram represents a pose that is regarded as feasible by the transition feasibility function ($c^i > 1.0$). For T-STAR, we use the poses that are judged as successful by its discriminator.

Environment setups. The environment is initialized with a Franka Emika robot arm equipped with an Allegro Hand as the end-effector. In the Building Blocks task, we place a box of blocks (eight categories, 72 pieces in total) and a building board on the table. For the Tool Positioning task, a long-handled tool is placed on the table. The control frequency is 60 Hz for both the robot arm and the hand. The real-world hardware mirrors the simulation setup. We coordinate the use of the top-down and wrist-mounted cameras to access the 6D pose and segmentation mask of the object in the real world: we use the top-down camera once at the beginning, use the wrist-mounted camera once at the beginning of each maneuver in orienting, and use the wrist-mounted camera continuously in searching, grasping and inserting. More details of the environment setups are in Appendix F.

Baseline methods. We compare our approach with the following baselines: 1) RL-scratch: the vanilla PPO algorithm [51] learning the task from scratch. 2) Curriculum RL: a procedural training strategy that expands from the first skill to the entire task. 3) V-Chain [34]: combines skill-chaining with the value function from the PPO policy. 4) Policy-Seq [10]: focuses on the forward initialization process in skill-chaining. 5) T-STAR [14]: incorporates a discriminator to regularize the terminal states.

5.2 Results

Table 1: Results for the Building Blocks task (task success rate, mean ± std). Blocks 1–5 are seen during training; Blocks 6–8 are unseen.
Method | Block 1 | Block 2 | Block 3 | Block 4 | Block 5 | Block 6 | Block 7 | Block 8 | ALL
RL-scratch | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
Curriculum RL | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00 | 0.00±0.00
V-Chain [34] | 0.15±0.02 | 0.09±0.04 | 0.11±0.04 | 0.10±0.04 | 0.08±0.02 | 0.08±0.03 | 0.03±0.02 | 0.04±0.02 | 0.08±0.02
Policy-Seq [10] | 0.20±0.04 | 0.14±0.02 | 0.15±0.03 | 0.23±0.03 | 0.15±0.03 | 0.17±0.00 | 0.16±0.01 | 0.12±0.02 | 0.16±0.02
T-STAR [14] | 0.19±0.04 | 0.18±0.02 | 0.11±0.01 | 0.27±0.02 | 0.17±0.04 | 0.25±0.02 | 0.26±0.03 | 0.10±0.03 | 0.19±0.03
Ours w/o temporal | 0.47±0.06 | 0.44±0.07 | 0.43±0.00 | 0.49±0.04 | 0.40±0.04 | 0.51±0.04 | 0.18±0.01 | 0.16±0.03 | 0.38±0.04
Ours | 0.61±0.03 | 0.55±0.01 | 0.52±0.03 | 0.63±0.03 | 0.51±0.06 | 0.53±0.06 | 0.22±0.02 | 0.16±0.01 | 0.46±0.03

Bi-directional optimization framework is key for chaining multiple dexterous policies. In Tab. 1 and Tab. 2, our approach learned with bi-directional optimization (Ours and Ours w/o temporal) outperforms prior uni-directional skill-chaining methods (V-Chain, Policy-Seq and T-STAR) significantly, with more than a 20% improvement in task success rate on the two long-horizon tasks. We further analyze what really matters for successful policy chaining. We visualize the transition feasibility scores of the grasping sub-policy (T-STAR's result is calculated from its discriminator) in Fig. 5(b). We find that our approach with the backward fine-tuning scheme correctly transmits the goal of the succeeding inserting skill to the prior grasping skill and encourages the policy to grasp the block with its studs facing up, which facilitates the insertion. T-STAR, with its uni-directional learning process, instead suggests many states in which the block's studs face horizontally, thereby bringing challenges for the subsequent insertion.
Table 2: Results for the Tool Positioning task (task success rate, mean ± std). The hammer is seen during training; the spatula and spoon are unseen.
Method | Hammer | Spatula | Spoon | ALL
RL-scratch | 0.17±0.05 | 0.06±0.03 | 0.10±0.01 | 0.11±0.03
Curriculum RL | 0.29±0.02 | 0.17±0.01 | 0.16±0.08 | 0.21±0.04
Policy-Seq [10] | 0.43±0.01 | 0.29±0.06 | 0.24±0.04 | 0.32±0.02
T-STAR [14] | 0.47±0.01 | 0.40±0.03 | 0.26±0.04 | 0.37±0.03
Ours w/o temporal | 0.77±0.03 | 0.54±0.07 | 0.40±0.04 | 0.57±0.05
Ours | 0.81±0.01 | 0.57±0.04 | 0.43±0.08 | 0.60±0.04

Table 3: Real-world results in the Building Blocks task. Single/Double refers to building one single block or stacking two blocks.
Method | Single Block 1 | Single Block 4 | Double Block 1
RL-scratch | 0/10 | 0/10 | 0/10
Policy-Seq [34] | 0/10 | 2/10 | 0/10
T-STAR [14] | 3/15 | 5/18 | 0/13
Ours | 12/20 | 10/20 | 5/15

Transition feasibility function significantly improves performance of long-horizon dexterous manipulation. In Tab. 1 and Tab. 2, the models learned with the transition feasibility function (Ours and Ours w/o temporal) outperform the one using the PPO-trained value function (V-Chain) by more than 30% in task success rate. This result implies that the value function of a PPO policy fails to model the feasibility of the subsequent policy, which in turn hurts the policy-chaining results.

Temporal inputs facilitate handling high-dimensional state spaces, particularly for dexterous manipulation. In Tab. 1, by training the transition feasibility function to extract temporal information from a sequence of history states, Ours exceeds Ours w/o temporal by 8% in task success rate. This result highlights the importance of extracting velocity and temporal information for chaining dexterous policies that contain dynamic finger motions.

Ability to switch sub-policies autonomously is essential for succeeding at long-horizon tasks. Fig. 5(a) illustrates the performance improvement enabled by automatic policy-switching. With a maximum allowance of only three switches, Ours improves by more than 30% in task success rate. This result shows that it is crucial to have the capability of switching forward and backward by leveraging the transition feasibility function of each sub-policy. Such policy-switching ability further contributes to our real-world results in Tab. 3, allowing the policy to handle a challenging 8-step long-horizon task (Double Block 1) with more than a 30% task success rate (10 maximum switching times for all methods in the real-world experiments). Please refer to the website for more results.

Real-world results. In the real-world experiments in Tab. 3, our approach achieves more than a 30% success-rate improvement compared to prior methods. This result is consistent with the results in simulation. Ours has a 33% success rate in building a double-block structure, while the other baselines have a 0% success rate. This result highlights the ability of our model to tackle long-horizon dexterous manipulation tasks. For more details of the real-world setups, please refer to Appendix B.

6 Limitations

There are several limitations to our work. First, we encounter difficulties in simulating a contact-rich insertion process, which necessitates an additional manually designed pressing motion to completely insert the blocks during real-world deployment. Second, the motor tactile signal does not yield a significant improvement in performance, as observed in Appendix Tab. 8. Our future research could explore the potential of sensor-based tactile signals for contact-rich tasks, as proposed in [60, 61, 20, 24, 27].

7 Conclusion

We present Sequential Dexterity, a system developed for tackling long-horizon dexterous manipulation tasks. Our system leverages a bi-directional optimization process to chain multiple dexterous policies learned with deep reinforcement learning.
At the core of our system is the Transition FeasibilityFunction, a pivotal component facilitating a gradual fine-tuning of sub-policies and enabling dynamicpolicy switching, thereby significantly increasing the success rate of policy chaining. Our system hasthe capability to zero-shot transfer to a real-world dexterous robot, exhibiting generalization acrossnovel object shapes. Our bi-directional optimization framework also has the potential to be a generalskill chaining method beyond dexterous manipulation. Potential applications including chaining skillsfor bimanual robots.8AcknowledgmentsThis research was supported by National Science Foundation NSF-FRR-2153854, NSF-NRI-2024247,NSF-CCRI-2120095 and Stanford Institute for Human-Centered Artificial Intelligence, SUHAI. Inthe real-world experiment, the controller of the Franka Emika Panda arm is developed based onDeoxys [ 62] and the Allegro hand is controlled through zmq [ 2]. We would also like to thank RuochengWang, Wenlong Huang, Yunzhi Zhang and Albert Wu for providing feedback on the paper.References[1]Z. Q. Chen, K. Van Wyk, Y .-W. Chao, W. Yang, A. Mousavian, A. Gupta, and D. Fox. Learningrobust real-world dexterous grasping policies via implicit shape augmentation. arXiv preprintarXiv:2210.13638 , 2022.[2]A. Wu, M. Guo, and C. K. Liu. Learning diverse and physically feasible dexterous grasps withgenerative model and bilevel optimization. arXiv preprint arXiv:2207.00195 , 2022.[3]Y . Qin, B. Huang, Z.-H. Yin, H. Su, and X. Wang. Dexpoint: Generalizable point cloudreinforcement learning for sim-to-real dexterous manipulation. In Conference on Robot Learning ,pages 594–605. PMLR, 2023.[4]O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron,M. Plappert, G. Powell, A. Ray, et al. Learning dexterous in-hand manipulation. The InternationalJournal of Robotics Research , 39(1):3–20, 2020.[5]A. Handa, A. Allshire, V . Makoviychuk, A. Petrenko, R. Singh, J. Liu, D. Makoviichuk,K. Van Wyk, A. Zhurkevich, B. Sundaralingam, et al. Dextreme: Transfer of agile in-handmanipulation from simulation to reality. arXiv preprint arXiv:2210.13702 , 2022.[6]T. Chen, M. Tippur, S. Wu, V . Kumar, E. Adelson, and P. Agrawal. Visual dexterity: In-handdexterous manipulation from depth. arXiv preprint arXiv:2211.11744 , 2022.[7]T. Chen, J. Xu, and P. Agrawal. A system for general in-hand object re-orientation. Conferenceon Robot Learning , 2021.[8]OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino,M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder, L. Weng, Q. Yuan,W. Zaremba, and L. Zhang. Solving rubik’s cube with a robot hand. CoRR , abs/1910.07113,2019.[9]S. P. Arunachalam, S. Silwal, B. Evans, and L. Pinto. Dexterous imitation made easy: A learning-based framework for efficient dexterous manipulation. arXiv preprint arXiv:2203.13251 , 2022.[10] A. Clegg, W. Yu, J. Tan, C. K. Liu, and G. Turk. Learning to dress: Synthesizing human dressingmotion via deep reinforcement learning. ACM Transactions on Graphics (TOG) , 37(6):1–10,2018.[11] Y . Lee, S.-H. Sun, S. Somasundaram, E. S. Hu, and J. J. Lim. Composing complex skills bylearning transition policies. In International Conference on Learning Representations , 2019.[12] X. B. Peng, M. Chang, G. Zhang, P. Abbeel, and S. Levine. Mcp: Learning composablehierarchical control with multiplicative compositional policies. Advances in Neural InformationProcessing Systems , 32, 2019.[13] G. Konidaris and A. Barto. 
Skill discovery in continuous reinforcement learning domains usingskill chaining. Advances in neural information processing systems , 22, 2009.[14] Y . Lee, J. J. Lim, A. Anandkumar, and Y . Zhu. Adversarial skill chaining for long-horizon robotmanipulation via terminal state regularization. arXiv preprint arXiv:2111.07999 , 2021.9[15] J. K. Salisbury and J. J. Craig. Articulated hands: Force control and kinematic issues. TheInternational journal of Robotics research , 1(1):4–17, 1982.[16] M. T. Mason and J. K. Salisbury Jr. Robot hands and the mechanics of manipulation. 1985.[17] I. Mordatch, Z. Popovi ́c, and E. Todorov. Contact-invariant optimization for hand manipulation.InProceedings of the ACM SIGGRAPH/Eurographics symposium on computer animation , pages137–144, 2012.[18] Y . Bai and C. K. Liu. Dexterous manipulation using both palm and fingers. In 2014 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 1560–1565. IEEE, 2014.[19] V . Kumar, Y . Tassa, T. Erez, and E. Todorov. Real-time behaviour synthesis for dynamic hand-manipulation. In 2014 IEEE International Conference on Robotics and Automation (ICRA) ,pages 6808–6815. IEEE, 2014.[20] Z.-H. Yin, B. Huang, Y . Qin, Q. Chen, and X. Wang. Rotating without seeing: Towards in-handdexterity through touch. arXiv preprint arXiv:2303.10880 , 2023.[21] A. Sivakumar, K. Shaw, and D. Pathak. Robotic telekinesis: Learning a robotic hand imitator bywatching humans on youtube. arXiv preprint arXiv:2202.10448 , 2022.[22] K. Zakka, L. Smith, N. Gileadi, T. Howell, X. B. Peng, S. Singh, Y . Tassa, P. Florence, A. Zeng,and P. Abbeel. Robopianist: A benchmark for high-dimensional robot control. arXiv preprintarXiv:2304.04150 , 2023.[23] Y . Chen, T. Wu, S. Wang, X. Feng, J. Jiang, Z. Lu, S. McAleer, H. Dong, S.-C. Zhu, and Y . Yang.Towards human-level bimanual dexterous manipulation with reinforcement learning. Advancesin Neural Information Processing Systems , 35:5150–5163, 2022.[24] I. Guzey, B. Evans, S. Chintala, and L. Pinto. Dexterity from touch: Self-supervised pre-trainingof tactile representations with robotic play. arXiv preprint arXiv:2303.12076 , 2023.[25] H. Qi, A. Kumar, R. Calandra, Y . Ma, and J. Malik. In-Hand Object Rotation via Rapid MotorAdaptation. In Conference on Robot Learning (CoRL) , 2022.[26] W. Huang, I. Mordatch, P. Abbeel, and D. Pathak. Generalization in dexterous manipulation viageometry-aware multi-task learning. arXiv preprint arXiv:2111.03062 , 2021.[27] G. Khandate, S. Shang, E. T. Chang, T. L. Saidi, J. Adams, and M. Ciocarlie. Sampling-basedExploration for Reinforcement Learning of Dexterous Manipulation. In Proceedings of Robotics:Science and Systems , Daegu, Republic of Korea, July 2023. doi:10.15607/RSS.2023.XIX.020.[28] A. Gupta, J. Yu, T. Z. Zhao, V . Kumar, A. Rovinsky, K. Xu, T. Devlin, and S. Levine. Reset-freereinforcement learning via multi-task learning: Learning dexterous manipulation behaviorswithout human intervention. In ICRA , pages 6664–6671. IEEE, 2021.[29] R. S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework for temporalabstraction in reinforcement learning. Artificial intelligence , 112(1-2):181–211, 1999.[30] J. Schmidhuber. Towards compositional learning with dynamic neural networks . Inst. f ̈urInformatik, 1990.[31] P.-L. Bacon, J. Harb, and D. Precup. The option-critic architecture. In Proceedings of the AAAIconference on artificial intelligence , volume 31, 2017.[32] O. Nachum, S. S. Gu, H. Lee, and S. Levine. 
Data-efficient hierarchical reinforcement learning.Advances in neural information processing systems , 31, 2018.[33] A. Levy, G. Konidaris, R. Platt, and K. Saenko. Learning multi-level hierarchies with hindsight.arXiv preprint arXiv:1712.00948 , 2017.10[34] V . C. Kumar, S. Ha, and C. K. Liu. Expanding motor skills using relay networks. In Conferenceon Robot Learning , pages 744–756. PMLR, 2018.[35] G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto. Robot learning from demonstration byconstructing skill trees. The International Journal of Robotics Research , 31(3):360–375, 2012.[36] T. Kipf, Y . Li, H. Dai, V . Zambaldi, A. Sanchez-Gonzalez, E. Grefenstette, P. Kohli, andP. Battaglia. Compile: Compositional imitation learning and execution. In InternationalConference on Machine Learning , pages 3418–3428. PMLR, 2019.[37] Y . Lu, Y . Shen, S. Zhou, A. Courville, J. B. Tenenbaum, and C. Gan. Learning task decompositionwith ordered memory policy network. arXiv preprint arXiv:2103.10972 , 2021.[38] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances.arXiv preprint arXiv:2204.01691 , 2022.[39] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y . Zhu, and A. Anandkumar. Mimicplay:Long-horizon imitation learning by watching human play. arXiv preprint arXiv:2302.12422 ,2023.[40] T. D. Kulkarni, K. Narasimhan, A. Saeedi, and J. Tenenbaum. Hierarchical deep reinforce-ment learning: Integrating temporal abstraction and intrinsic motivation. Advances in neuralinformation processing systems , 29, 2016.[41] J. Oh, S. Singh, H. Lee, and P. Kohli. Zero-shot task generalization with multi-task deepreinforcement learning. In International Conference on Machine Learning , pages 2661–2670.PMLR, 2017.[42] J. Merel, A. Ahuja, V . Pham, S. Tunyasuvunakool, S. Liu, D. Tirumala, N. Heess, and G. Wayne.Hierarchical visuomotor control of humanoids. arXiv preprint arXiv:1811.09656 , 2018.[43] J. Wu, R. Antonova, A. Kan, M. Lepert, A. Zeng, S. Song, J. Bohg, S. Rusinkiewicz, andT. Funkhouser. Tidybot: Personalized robot assistance with large language models. arXivpreprint arXiv:2305.05658 , 2023.[44] C. Wang, D. Xu, and L. Fei-Fei. Generalizable task planning through representation pretraining.IEEE Robotics and Automation Letters , 7(3):8299–8306, 2022.[45] R. Waldinger. Achieving several goals simultaneously. In Readings in artificial intelligence ,pages 250–271. Elsevier, 1981.[46] T. Lozano-Perez, M. T. Mason, and R. H. Taylor. Automatic synthesis of fine-motion strategiesfor robots. The International Journal of Robotics Research , 3(1):3–24, 1984.[47] L. P. Kaelbling and T. Lozano-P ́erez. Hierarchical task and motion planning in the now. In 2011IEEE International Conference on Robotics and Automation , pages 1470–1477. IEEE, 2011.[48] L. P. Kaelbling and T. Lozano-P ́erez. Pre-image backchaining in belief space for mobile manipu-lation. In Robotics Research: The 15th International Symposium ISRR , pages 383–400. Springer,2017.[49] D. Xu, R. Mart ́ın-Mart ́ın, D.-A. Huang, Y . Zhu, S. Savarese, and L. F. Fei-Fei. Regressionplanning networks. Advances in Neural Information Processing Systems , 32, 2019.[50] C. Agia, T. Migimatsu, J. Wu, and J. Bohg. Stap: Sequencing task-agnostic policies. arXivpreprint arXiv:2210.12250 , 2022.[51] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. CoRR , abs/1707.06347, 2017.11[52] A. 
Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polo-sukhin. Attention is all you need. In NIPS , pages 5998–6008, 2017.[53] E. Rosete-Beas, O. Mees, G. Kalweit, J. Boedecker, and W. Burgard. Latent plans for task-agnostic offline reinforcement learning. In Conference on Robot Learning , pages 1838–1849.PMLR, 2023.[54] Z. Su, O. Kroemer, G. E. Loeb, G. S. Sukhatme, and S. Schaal. Learning to switch betweensensorimotor primitives using multimodal haptic signals. In SAB, volume 9825 of Lecture Notesin Computer Science , pages 170–182. Springer, 2016.[55] O. Kroemer, C. Daniel, G. Neumann, H. van Hoof, and J. Peters. Towards learning hierarchicalskills for multi-phase manipulation tasks. In ICRA , pages 1503–1510. IEEE, 2015.[56] L. Sievers, J. Pitz, and B. B ̈auml. Learning purely tactile in-hand manipulation with a torque-controlled hand. In 2022 International Conference on Robotics and Automation (ICRA) , pages2745–2751. IEEE, 2022.[57] J. Pitz, L. R ̈ostel, L. Sievers, and B. B ̈auml. Dextrous tactile in-hand manipulation using a modularreinforcement learning architecture. arXiv preprint arXiv:2303.04705 , 2023.[58] A. A. Rusu, S. G. Colmenarejo, C ̧. G ̈ulc ̧ehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V . Mnih,K. Kavukcuoglu, and R. Hadsell. Policy distillation. In ICLR (Poster) , 2016.[59] O. Khatib. A unified approach for motion and force control of robot manipulators: The operationalspace formulation. IEEE Journal on Robotics and Automation , 3(1):43–53, 1987. doi:10.1109/JRA.1987.1087068.[60] B. Romero, F. Veiga, and E. Adelson. Soft, round, high resolution tactile fingertip sensorsfor dexterous robotic manipulation. In 2020 IEEE International Conference on Robotics andAutomation (ICRA) , pages 4796–4802. IEEE, 2020.[61] W. K. Do and M. Kennedy. Densetact: Optical tactile sensor for dense shape reconstruction. In2022 International Conference on Robotics and Automation (ICRA) , pages 6188–6194. IEEE,2022.[62] Y . Zhu, A. Joshi, P. Stone, and Y . Zhu. Viola: Imitation learning for vision-based manipulationwith object proposal priors. 6th Annual Conference on Robot Learning , 2022.[63] H. K. Cheng and A. G. Schwing. XMem: Long-term video object segmentation with an atkinson-shiffrin memory model. In ECCV , 2022.[64] C. Wang, D. Xu, Y . Zhu, R. Mart ́ın-Mart ́ın, C. Lu, L. Fei-Fei, and S. Savarese. Densefusion: 6dobject pose estimation by iterative dense fusion. In Proceedings of the IEEE/CVF conference oncomputer vision and pattern recognition , pages 3343–3352, 2019.[65] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: Abenchmark and evaluation for multi-task and meta reinforcement learning. In Conference onrobot learning , pages 1094–1100. PMLR, 2020.[66] H. J. Charlesworth and G. Montana. Solving challenging dexterous manipulation tasks withtrajectory optimisation and reinforcement learning. In International Conference on MachineLearning , pages 1496–1506. 
PMLR, 2021.12A Training pseudocodeAlgorithm 1 SEQUENTIAL DEXTERITY : A bi-directional optimization framework for skill chainingRequire: sub-task MDPs M1,...,MK1:Initialize sub-policies π1θ,...,πKθ, transition feasibility function F1ω,...,FKω, ternimal state buffersB1I,...,BKI, the sum of reward buffers B1R,...,BKR, and the weighting factors of the backwardfine-tuning λ1andλ2.2:foriteration m=0,1,...,M do3: foreach sub-task i=1,...,K do4: while until convergence of πiθdo5: Rollout trajectories τ=(si0,ai0,ri0,...,siT)withπiθ6: Update πiθby maximizing Eπi[PT−1t=0γtrit]7: end while8: end for ▷Forward initialization9: foreach sub-task i=K,..., 2do10: while until convergence of πi−1θdo11: Rollout trajectories τi−1=(si−10,ai−10,ri−10,...,si−1T)withπi−1θ12: Sample si0from environment or Bi−1β13: Rollout trajectories τi=(si0,ai0,ri0,...,siT)withπiθ14: ifsub-task iis complete then15: BiT←BiT∪s[T−10:T],BiR←BiR∪[PT−1t=0rit]16: end if17: Update Fiwiths[T−10:T]∼Bi−1Tand[PT−1t=0rt]∼BiR18: Update πi−1θby maximizing Eπi−1[PT−1t=0γt(λ1Ri−1(si−1t,ai−1t,si−1t+1)+λ2Fiω(si[T−10:T]))]19: end while20: end for ▷Backward finetuning21:end forB Real-world system setupsDuring real-world deployment, some observations used in the simulation are hard to accuratelyestimate (e.g., joint velocity, object velocity, etc.). We use the teacher-student policy distillationframework [ 6,7,58] to abstract away these observation inputs from the policy model. In each policyrollout, our system first uses the top-down camera view to perform a color-based segmentation tolocalize the target block piece given by the manual. Then, the robot calls motion planning API tomove to the target location with OSC controller [ 59]. After that, our system uses the wrist cameraview to track the segmentation and 6D pose of the object with a combination of color-based initialsegmentation, Xmem segmentation tracker [ 63], and Densefusion pose estimator [ 64]. If the targetobject is deeply buried (as the case in the top left corner of Fig. 4), the transition feasibility functionwill inform the robot to execute the searching policy until the target appears. During the last insertionstage, the estimated 6D object pose will guide the robot policy to adjust its finger and wrist motionto align with the goal location as it learned in the simulation. Since simulating contact-rich insertionis still a research challenge in graphics, after the robot has placed the block to the target location, weperform a scripted pressing motion (spread out the entire hand and press down) on the target locationto ensure a firm insert. The output of the policy which controls the hand is low-pass filtered with anexponential moving average (EMA) smoothing factor [ 6], which can also effectively reduce jitteringmotions. Our results in the real-world were obtained with an EMA of 0.2, which provides a balancebetween agility and stability of the motions. 
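A minimal sketch of such an EMA action filter is shown below, assuming the common convention $a_t = \alpha\, a^{\text{raw}}_t + (1-\alpha)\, a_{t-1}$ with the reported smoothing factor of 0.2; the exact filtering convention and variable names used in the deployed system may differ.

```python
# Hedged sketch of the exponential-moving-average (EMA) low-pass filter on hand policy outputs.
def ema_filter(prev_action, raw_action, alpha=0.2):
    """Smooth the raw policy action to reduce jittering motions on the real hand.
    Smaller alpha gives heavier smoothing (more stability); larger alpha gives more agility."""
    return alpha * raw_action + (1.0 - alpha) * prev_action
```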
More details about real-world system setups and resultscan be found in the Supplementary video.13C State Space in SimulationC.1 Building BlocksSearching Table.4 gives the specific information of the state space of the searching task.Table 4: Observation space of Search task.Index Description0 - 23 dof position23 - 46 dof velocity46 - 98 fingertip pose, linear velocity, angle velocity (4 x 13)98 - 111 hand base pose, linear velocity, angle velocity111 - 124 object base pose, linear velocity, angle velocity124 - 143 the actions of the last timestep143 - 159 motor tactile159 - 160 the number of pixels occupied by the target object maskOrienting Table.5 gives the specific information of the state space of the orienting task.Table 5: Observation space of Orient and Grasp task.Index Description0 - 23 dof position23 - 46 dof velocity46 - 98 fingertip pose, linear velocity, angle velocity (4 x 13)98 - 111 hand base pose, linear velocity, angle velocity111 - 124 object base pose, linear velocity, angle velocity124 - 143 the actions of the last timestep143 - 159 motor tactileGrasping Table.5 gives the specific information of the state space of the grasping task.Inserting Table.6 gives the specific information of the state space of the inserting task.Table 6: Observation space of Insert task.Index Description0 - 23 dof position23 - 46 dof velocity46 - 98 fingertip pose, linear velocity, angle velocity (4 x 13)98 - 111 hand base pose, linear velocity, angle velocity111 - 124 object base pose, linear velocity, angle velocity124 - 143 the actions of the last timestep143 - 159 motor tactile159 - 166 goal pose166 - 169 goal position - object position169 - 173 goal rotation - object rotationC.2 Tool positioningGrasping Table.5 gives the specific information of the state space of the grasping task.In-hand Orientation Table.6 gives the specific information of the state space of the in-hand orienta-tion task.14Table 7: Domain randomization of all the sub-tasks.Parameter Type Distribution Initial RangeRobotMass Scaling uniform [0.5, 1.5]Friction Scaling uniform [0.7, 1.3]Joint Lower Limit Scaling loguniform [0.0, 0.01]Joint Upper Limit Scaling loguniform [0.0, 0.01]Joint Stiffness Scaling loguniform [0.0, 0.01]Joint Damping Scaling loguniform [0.0, 0.01]ObjectMass Scaling uniform [0.5, 1.5]Friction Scaling uniform [0.5, 1.5]Scale Scaling uniform [0.95, 1.05]Position Noise Additive gaussian [0.0, 0.02]Rotation Noise Additive gaussian [0.0, 0.2]ObservationObs Correlated. Noise Additive gaussian [0.0, 0.001]Obs Uncorrelated. Noise Additive gaussian [0.0, 0.002]ActionAction Correlated Noise Additive gaussian [0.0, 0.015]Action Uncorrelated Noise Additive gaussian [0.0, 0.05]EnvironmentGravity Additive normal [0, 0.4]D Reward functionsD.1 Building BlocksSearching Denote the τis the commanded torques at each timestep, the count of individual pixelswithin the target object’s segmentation mask in the top-down camera frame as P, The sum of thedistance between each fingertip and the object asP4i=0fi, the action penalty as ∥a∥22, and the torquepenalty as ∥τ∥22. Finally, the rewards are given by the following specific formula:r=λ1∗P+λ2∗min(e0−4Xi=0fi,0)+λ3∗∥a∥22+λ4∗∥τ∥22 (3)where λ1=5.0,λ2=1.0,λ3=−0.001,λ4=−0.003, ande0=0.2.Orienting Denote the τis the commanded torques at each timestep, the angular distance betweenthe current object pose and the initial pose as θ, the sum of the distance between each fingertip and theobject asP4i=0fi, the action penalty as ∥a∥22, and the torque penalty as ∥τ∥22. 
Finally, the rewards aregiven by the following specific formula:r=λ1∗θ+λ2∗min(e0−4Xi=0fi,0)+λ3∗∥a∥22+λ4∗∥τ∥22 (4)where λ1=1.0,λ2=1.0,λ3=−0.001,λ4=−0.003, ande0=0.6.Grasping Denote the τis the commanded torques at each timestep, the sum of the distance betweeneach fingertip and the object asP4i=0fi, the action penalty as ∥a∥22, and the torque penalty as ∥τ∥22.Finally, the rewards are given by the following specific formula:r=λ1∗exp[α0∗min(e0−4Xi=0fi,0)]+λ2∗∥a∥22+λ3∗∥τ∥22 (5)where λ1= 1.0,λ2=−0.001,λ3=−0.003,α0=−5.0, and e0= 0.1. It is worth noting that in thelatter half of our grasping training, we force the hand to lift, so if the grip is unstable, the object willdrop and the reward will decrease.15Block 1 Block 2 Block 3 Block 4 Block 5Block 6 Block 7 Block 8(a) Real world (b) SimulationFigure 6: The block model we use in simulation and real-world. (b) is the eight blocks used in ourbuilding blocks task. The upper Block 1-5 is the training block, and the lower Block 6-8 is the unseenblock for testing.Inserting Denote the τis the commanded torques at each timestep, the object and goal position as xoandxg, the angular position difference between the object and the goal as da, the sum of the distancebetween each fingertip and the object asP4i=0fi, the action penalty as ∥a∥22, and the torque penalty as∥τ∥22. Finally, the rewards are given by the following specific formula:r=λ1∗exp[−(α0∗∥xo−xg∥2+α1∗2∗arcsin( clamp (∥da∥2,0,1)))]+λ2∗min(e0−4Xi=0fi,0)+λ3∗∥a∥22+λ4∗∥τ∥22(6)where λ1=1.0,λ2=0.0,λ3=−0.001,λ4=−0.003,α0=20.0,α1=1.0, ande0=0.06.D.2 Tool positioningGrasping Denote the τis the commanded torques at each timestep, the sum of the distance betweeneach fingertip and the object asP4i=0fi, the action penalty as ∥a∥22, and the torque penalty as ∥τ∥22.Finally, the rewards are given by the following specific formula:r=λ1∗exp[α0∗min(e0−4Xi=0fi,0)]+λ2∗∥a∥22+λ3∗∥τ∥22 (7)where λ1= 1.0,λ2=−0.001,λ3=−0.003,α0=−5.0, and e0= 0.1. It is worth noting that in thelatter half of our grasping training, we force the hand to lift, so if the grip is unstable, the object willdrop and the reward will decrease.In-hand Orientation Denote the τis the commanded torques at each timestep, the object and goalposition as xoandxg, the angular position difference between the object and the goal as da, the sumof the distance between each fingertip and the object asP4i=0fi, the action penalty as ∥a∥22, and thetorque penalty as ∥τ∥22. Finally, the rewards are given by the following specific formula:r=λ1∗exp[−(α0∗∥xo−xg∥2+α1∗2∗arcsin( clamp (∥da∥2,0,1)))]+λ2∗min(e0−4Xi=0fi,0)+λ3∗∥a∥22+λ4∗∥τ∥22(8)where λ1=1.0,λ2=0.0,λ3=−0.001,λ4=−0.003,α0=20.0,α1=1.0, ande0=0.06.D.3 Reward ConstructionWe use an exponential map in the grasping reward function, which is an effective reward shapingtechnique used in the case to minimize the distance between fingers and object (e.g., grasping task),introduced by [ 65,66]. For the same term in the other two reward function, since the other two reward16functions mainly consider other objectives, we empirically find there is no need to use exponentialmap in these cases. To improve the calculation efficiency, we use quaternion to represent the objectorientation. The angular position difference is then computed through the dot product between thenormalized goal quaternion and the current object’s quaternion.E Domain RandomizationIsaac Gym provides lots of domain randomization functions for RL training. We add the randomizationfor all the sub-tasks as shown in Table. 7 for each environment. 
we generate new randomization every1000 simulation steps.F Task SetupsF.1 Sub-task definition.Here we introduce the functionalities of each sub-policy in the Building Block task: Search aims todig and retrieve the target block when it is buried by other blocks in the box. The initial task goal is tomake the target block’s visible surface in the wrist-view camera larger than a threshold. The transitionfeasibility function finetunes the policy to a reach a state that facilitates the succeeding orientation.Orient aims to rotate the target block. The initial task goal is to freely rotating the target block in-handwithout specific goal pose. In the backward step, the transition feasibility function finetunes the policyto rotate the block to a pose that facilitates the succeeding grasping and insertion. Grasp aims to liftup the target block and hold in-hand. The initial task goal is to lift up the target block for more than30 centimeters. The transition feasibility function further finetunes the policy to grasp the block in away that allows the succeeding insertion to in-hand adjust the block for 90 degrees depending on thegiven task goal. Insert aims to rotate the block in-hand for 90 degrees if the goal pose of the block isvertical and adjust the robot’s wrist position with 3D delta motions to align the block with the desiredinsertion location. In the real-world experiments, since the finger motor of the dexterous hand is notstrong enough to fully insert the block, we add an additional scripted pressing motion to complete thelast step of the insertion.F.2 Building BlocksBlock model. For the building blocks task, we use the same model as Mega Bloks1as our blocks. It isa range of large, stackable construction blocks designed specifically for the small hands of the children.We take eight different types of blocks (denoted as Block 1, Block 2,..., Block 8) as the models of ourblock, and carefully measured the dimensions to ensure that they were the same as in the real world.The block datasets is shown in Figure. 6. For all building block sub-tasks, we use Block 1-5 as thetraining object and Block 6-8 as the unseen object for testing.Physics in insertion between two blocks. It is difficult to simulate the realistic insertion in thesimulator, and it is easy to explode or model penetration when the two models are in frequent contact.Therefore, we want the plug and slot between the two blocks can be inserted without frequent friction.We reduced the diameter of all block plugs and convex decomposed them via VHACD method whenloaded into Isaac Gym. Finally, we made one block possible to insert another block through free fall toverify the final effect.Initialization. In simulation, we randomly sample the initial block placement above the box, allowingthem to fall and form the initial scene. In the real-world experiments, we manually shuffle the blocks’placement in the box, with the shuffling based on the criteria that none of the task-related blocks lieswithin the margin of 10 centimeters from the edges of the box. If the criteria are not satisfied, were-shuffle the blocks.1https://www.megabrands.com/en-us/mega-bloks.17Figure 7: The collision meshes in the simulation.Initial StateGoal StateHammer Spoon (unseen ) Spatula (unseen )Figure 8: Visualization of the three tools we use in Tool Positioning task. The Hammer is use fortraining and the Spoon and Spatula is only use for testing. We also show the goal pose of the tools.Success criteria. 
In the Building Block task, the task success is defined as whether the target blockhas been inserted on the desired pose on the LEGO board. We assume the access to a building manualthat specifies the shape and color of the target block and its desired goal pose on the board. In the ToolPositioning task, the task success is defined as whether the tool has been lifted and held in hand in aready-to-use pose (e.g., hammer head facing down).Task objects. In the Building Block task, we use the mesh model of Mega Blocks as our task objects.It is a range of large, stackable construction blocks designed specifically for the small hands of children.We take eight different types of blocks (denoted as Block 1, Block 2, . . . , Block 8). These blocks areillustrated in Appendix Figure. 6. In our experiment, Block 1-5 are used as the training objects andBlock 6-8 are unseen ones for testing policy generalization. In the Tool Positioning task, the tools weconsider consists of hammer, scoop and spatula, which have different thickness over the handle andvariation of the center of mass. The hammer is used as a training object and the other two are unseenones for testing policy generalization.Collision meshes. We visualize the collision mesh used in the simulation in Figure. 7. We do observethe drop of simulation speed when loading 72 blocks, but it’s still enough for training 1024 agentstogether at a speed of 5000 FPS with an NVIDIA RTX 3090 GPU. To optimize the speed, we reducethe resolution of the convex decomposition over blocks in Search, Orient and Grasp sub-tasks. Thehigh-resolution blocks are only used for the training of the Insert sub-task.18Figure 9: Snapshots of the searching task.Figure 10: Snapshots of the orienting task.Figure 11: Snapshots of the grasping task.Figure 12: Snapshots of the inserting task.Figure 13: Snapshots of the hammer positioning.F.3 Tool positioningFor the tool positioning task, we have a total of three tools: hammer, spatula, and spoon. We use thehammer for training and test both in the hammer, spatula, and spoon. This long-horizon task involvesgrasp a tool and re-orient it onto a pose suitable for its use. Fig.8 shows what they look like and theinitial and goal state of the each three tools.F.4 Typical frames of all sub-tasksFor the convenience of readers, we show some typical frames of all the sub-tasks in simulation.F.4.1 Building BlocksWe visualize the rollout of the Building Blocks task in Figure. 9, Figure. 10, Figure. 11, and Figure. 12.F.4.2 Tool PositioningWe visualize the rollout of the Tool Positioning task in Figure. 13, Figure. 14, and Figure. 
15.19Figure 14: Snapshots of the spoon positioning.Figure 15: Snapshots of the spatula positioning.Trained Unseen AllOurs w/o belief state 0.40±0.080.16±0.070.29±0.06Ours w/o tactile 0.43±0.040.33±0.000.37±0.02Ours w/o both 0.26±0.050.02±0.010.14±0.02Ours 0.43±0.040.36±0.040.38±0.04Table 8: Ablation study on the system choices in single-step Orient taskTrained UnseenBlock 1 Block 2 Block 3 Block 4 Block 5 Block 6 Block 7 Block 8 ALLOurs (0-step) 0.47±0.060.44±0.070.43±0.000.49±0.040.40±0.040.51±0.040.18±0.010.16±0.030.38±0.04Ours (5-step) 0.52±0.070.47±0.020.46±0.030.55±0.030.44±0.020.54±0.010.20±0.030.17±0.020.42±0.03Ours (10-step) 0.61±0.030.55±0.010.52±0.030.63±0.030.51±0.060.53±0.060.22±0.020.16±0.010.46±0.03Ours (15-step) 0.55±0.030.51±0.040.53±0.020.59±0.010.50±0.070.50±0.050.16±0.040.14±0.030.44±0.04Table 9: Ablation study in historical frame of the transition feasibility functionG Motor tactile and belief state.We found that motor tactile and belief state are beneficial for dexterous in-hand manipulation. Tab. 8 isthe ablation study of the design choices of our input state space. We modify the objective of the Orientsub-task in the Building Blocks task to a pre-defined goal orientation and train each ablation methodonly on this sub-policy. We find the belief state pose estimator has the highest improvement ( 9%in tasksuccess rate), which highlights its effects on in-hand manipulation.H Ablation study in historical frame of the transition feasibility functionWe add an ablation study by using 0-step, 5-step, 10-step and 15-step of history states as the inputs tothe transition feasibility model, as shown in Table. 9. The task success rate gradually increases whenmore history steps are used and becomes stable after 10 steps. This result indicates that 10 to 15 historysteps is ideal for the Building Block task.I Environmental speedTable. 10 shows the simulation FPS and wall-clock time cost of the training process for each sub-task.All of our experiments are run with Intel i7-9700K CPU and NVIDIA RTX 3090 GPU.J Hyperparameters of the PPOJ.1 Building BlocksJ.2 Tool Positioning20Building Blocks Tool PositioningSearch Orient Grasp Insert Grasp ReorientWall-clock time31111 ±3691 15458 ±1381 16397 ±1904 21851 ±2791 17282 ±2472 14500 ±1831(s/10000 episode)FPS (frame/s) 1298±154 5920±529 5504±639 14360 ±1834 19920 ±2849 20896 ±2638Table 10: Mean and standard deviation of FPS (frame per second) of the sub-tasks.Table 11: Hyperparameters of PPO in Building Blocks.Hyperparameters Searching Orienting Grasping & InsertingNum mini-batches 4 4 8Num opt-epochs 5 10 2Num episode-length 8 20 8Hidden size [1024, 1024, 512] [1024, 1024, 512] [1024, 1024, 512]Clip range 0.2 0.2 0.2Max grad norm 1 1 1Learning rate 3.e-4 3.e-4 3.e-4Discount ( γ) 0.96 0.96 0.9GAE lambda ( λ) 0.95 0.95 0.95Init noise std 0.8 0.8 0.8Desired kl 0.016 0.016 0.016Ent-coef 0 0 0Table 12: Hyperparameters of PPO in Tool Positioning.Hyperparameters Grasping In-hand OrientingNum mini-batches 4 4Num opt-epochs 5 10Num episode-length 8 20Hidden size [1024, 1024, 512] [1024, 1024, 512]Clip range 0.2 0.2Max grad norm 1 1Learning rate 3.e-4 3.e-4Discount ( γ) 0.96 0.96GAE lambda ( λ) 0.95 0.95Init noise std 0.8 0.8Desired kl 0.016 0.016Ent-coef 0 021 |
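To make the PPO settings easier to reuse, the hyperparameters in Tables 11 and 12 are transcribed below into a plain configuration sketch; the dictionary key names are our own, and per-sub-task values are listed in the same order as the table columns.

```python
# Hedged transcription of the PPO hyperparameters reported in Tables 11 and 12.
PPO_CONFIG_BUILDING_BLOCKS = {
    # per-sub-task values: (Searching, Orienting, Grasping & Inserting)
    "num_mini_batches": (4, 4, 8),
    "num_opt_epochs": (5, 10, 2),
    "episode_length": (8, 20, 8),
    "hidden_sizes": [1024, 1024, 512],
    "clip_range": 0.2,
    "max_grad_norm": 1.0,
    "learning_rate": 3e-4,
    "discount_gamma": (0.96, 0.96, 0.9),
    "gae_lambda": 0.95,
    "init_noise_std": 0.8,
    "desired_kl": 0.016,
    "entropy_coef": 0.0,
}

PPO_CONFIG_TOOL_POSITIONING = {
    # per-sub-task values: (Grasping, In-hand Orienting)
    "num_mini_batches": (4, 4),
    "num_opt_epochs": (5, 10),
    "episode_length": (8, 20),
    "hidden_sizes": [1024, 1024, 512],
    "clip_range": 0.2,
    "max_grad_norm": 1.0,
    "learning_rate": 3e-4,
    "discount_gamma": 0.96,
    "gae_lambda": 0.95,
    "init_noise_std": 0.8,
    "desired_kl": 0.016,
    "entropy_coef": 0.0,
}
```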
Rb0nGIt_kh5 | Distilled Feature Fields Enable Few-ShotLanguage-Guided ManipulationWilliam Shen∗1, Ge Yang∗1,2, Alan Yu1, Jansen Wong1,Leslie Pack Kaelbling1, Phillip Isola11MIT CSAIL,2Institute for Artificial Intelligence and Fundamental InteractionsAbstract: Self-supervised and language-supervised image models contain richknowledge of the world that is important for generalization. Many robotic tasks,however, require a detailed understanding of 3D geometry, which is often lackingin 2D image features. This work bridges this 2D-to-3D gap for robotic manip-ulation by leveraging distilled feature fields to combine accurate 3D geometrywith rich semantics from 2D foundation models. We present a few-shot learningmethod for 6-DOF grasping and placing that harnesses these strong spatial andsemantic priors to achieve in-the-wild generalization to unseen objects. Using fea-tures distilled from a vision-language model, CLIP, we present a way to designatenovel objects for manipulation via free-text natural language, and demonstrate itsability to generalize to unseen expressions and novel categories of objects. Projectwebsite: https://f3rm.csail.mit.edu1 IntroductionWhat form of scene representation would facilitate open-set generalization for robotic manipulationsystems? Consider a warehouse robot trying to fulfill an order by picking up an item from clutteredstorage bins filled with other objects. The robot is given a product manifest, which contains the textdescription it needs to identify the correct item. In scenarios like this, geometry plays an equallyimportant role as semantics, as the robot needs to comprehend which parts of the object geometryafford a stable grasp. Undertaking such tasks in unpredictable environments — where items froma diverse set can deviate markedly from the training data, and can be hidden or jumbled amidstclutter — underscores the critical need for robust priors in both spatial and semantic understanding.In this paper, we study few-shot and language-guided manipulation, where a robot is expected to pickup novel objects given a few grasping demonstrations or text descriptions without having previouslyseen a similar item. Toward this goal, we build our system around pre-trained image embeddings,which have emerged as a reliable way to learn commonsense priors from internet-scale data [1, 2, 3].Figure 1 illustrates how our system works. The robot first scans a tabletop scene by taking a sequenceof photos using an RGB camera mounted on a selfie stick (Figure 1, left). These photos are usedto construct a neural radiance field (NeRF) of the tabletop, which, crucially, is trained to rendernot just RGB colors but also image features from a pre-trained vision foundation model [4, 1]. Thisproduces a scene representation, called a Distilled Feature Field (DFF), that embeds knowledge from2D feature maps into a 3D volume (Figure 1, middle). The robot then references demonstrationsand language instructions to grasp objects specified by a user (Figure 1, right).Distilled Feature Fields (DFFs) were introduced in computer graphics for tasks such as decomposingand editing images [5, 6]. The main contribution of this work is to study the use of DFFs insteadfor robotic manipulation. We evaluate the robot’s ability to generalize using features sourced fromself-supervised vision transformers (DINO ViT, see [4]). These features have been shown to beeffective out-of-the-box visual descriptors for dense correspondence [7]. We also source features*Equal contribution. 
Correspondence to {willshen,geyang }@csail.mit.edu7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.1.Scan Scene 2. Distill FeaturesExtract Dense2D FeaturesCoffee MugBaymax3. Language-Guided Manipulation3D Feature FieldFigure 1: Distilled Feature Fields Enable Open-Ended Manipulation. (1) Robot uses a selfiestick to scan RGB images of the scene (camera frustums shown). (2) Extract patch-level densefeatures for the images from a 2D foundation model, and distill them into a feature field (PCAshown) along with modeling a NeRF. (3) We can query CLIP feature fields with language to generateheatmaps and infer 6-DOF grasps on novel objects given only ten demonstrations.from a vision-language model, CLIP [1], which is a strong zero-shot learner on various vision andvisual question-answering tasks.One challenge that makes distilled feature fields unwieldy for robotics is the long time it takes tomodel each scene. To address this, we build upon the latest NeRF techniques, and employ hierarchi-cal hashgrids to significantly reduce the modeling time [8, 9, 10]. When it comes to vision-languagefeatures, CLIP is trained to produce image-level features, whereas 3D feature distillation requiresdense 2D descriptors. Our solution is to use the MaskCLIP [11] reparameterization trick, whichextracts dense patch-level features from CLIP while preserving alignment with the language stream.We demonstrate that Distilled Feature Fields enable open-ended scene understanding and can beleveraged by robots for 6-DOF object manipulation. We call this approach Feature Fields for RoboticManipulation (F3RM). We present few-shot learning experiments on grasping and placing tasks,where our robot is able to handle open-set generalization to objects that differ significantly in shape,appearance, materials, and poses. We also present language-guided manipulation experiments whereour robot grasps or places objects in response to free-text natural language commands. By takingadvantage of the rich visual and language priors within 2D foundation models, our robot generalizesto new categories of objects that were not seen among the four categories used in the demonstrations.2 Problem FormulationWe consider the class of manipulation problems that can be parameterized via a single rigid-bodytransformation T∈SE(3), and focus on grasping and placing tasks. We parameterize a 6-DOFgrasp or place pose as T= (R,t)in the world frame (see Figure 2), where Ris the rotation matrix,andtis the translation vector. In each scene, the robot is given a set of RGB images {I}with theircorresponding camera poses.Few-Shot Manipulation. We aim to build robots that can manipulate objects given only a fewdemonstrations of a task, such as grasping a mug by its handle. During learning, each demonstrationDconsists of the tuple ⟨{I},T∗⟩,where {I}Ni=1areNRGB camera views of the scene and T∗is apose that accomplishes the desired task. During testing, the robot is given multiple images {I′}of anew scene which may contain distractor objects and clutter. The robot’s goal is to predict a pose Tthat achieves the task. We want to test for open-ended generalization: the new scene contains relatedbut previously unseen objects that differ from the demo objects in shape, size, pose, and material.2Open-Text Language-Guided Manipulation. We extend few-shot manipulation to include open-text conditioning via natural language. Given a text description of an object, the robot’s objectiveis to grasp the objects that match this description. 
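To make this interface concrete, each demonstration and test scene can be stored as a small record; the class and field names below are illustrative assumptions, not part of the paper's code.

from dataclasses import dataclass
import numpy as np

@dataclass
class Demonstration:
    """One demo D = <{I}, T*> for a task such as grasping a mug by its handle."""
    images: np.ndarray        # (N, H, W, 3) RGB views of the scene
    camera_poses: np.ndarray  # (N, 4, 4) camera-to-world transforms
    target_pose: np.ndarray   # (4, 4) demonstrated 6-DOF gripper pose T* = (R, t)

@dataclass
class TestScene:
    """A new scene; the robot must predict a pose T that achieves the same task."""
    images: np.ndarray        # (N', H, W, 3) RGB views, possibly with clutter and distractors
    camera_poses: np.ndarray  # (N', 4, 4)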
The robot has access to a few demonstrationsfor multiple object categories. During testing , the user provides the robot with a text query L+tospecify which object to manipulate and negative texts L−to reject distractors. In practice, L−canbe sampled automatically. The object category we care about at test time does not necessarily appearin the demonstrations. We explicitly seek open-ended generalization to new object categories, givenjust a few demonstrations limited to a small class of object categories.3 Feature Fields for Robotic Manipulation (F3RM)We present Feature Fields for Robotic Manipulation (F3RM), our approach for distilling pre-trainedrepresentations from vision and vision-language models into 3D feature fields for open-endedrobotic manipulation. Doing so involves solving three separate problems. First, how to producethe feature field of a scene automatically at a reasonable speed; second, how to represent and infer6-DOF grasping and placing poses; and finally, how to incorporate language guidance to enableopen-text commands. We include a formal summary of NeRFs [12] in Appendix A.1.3.1 Feature Field DistillationDistilled Feature Fields (DFFs) [5, 6] extend NeRFs by including an additional output to reconstructdense 2D features from a vision model fvis. The feature field fis a function that maps a 3D position xto a feature vector f(x). We assume that fdoes not depend on the viewing direction d.1Supervisionof this feature field fis provided through the rendered 2D feature maps, where each feature vectoris given by the feature rendering integral between the near and far plane ( tnandtf):F(r) =ZtftnT(t)σ(rt)f(rt) dtwith T(t) = exp−Zttnσ(rs) ds, (1)where ris the camera ray corresponding to a particular pixel, tis the distance along the ray, σis thedensity field from NeRF, and T(t)represents the accumulated transmission from the near plane tntot. Observe this is similar to the volume rendering integral (Eq. 5).Feature Distillation. We begin with a set of N2D feature maps {Ifi}Ni=1, where If=fvis(I)foreach RGB image I. We optimize fby minimizing the quadratic loss Lfeat=Pr∈RˆF(r)−If(r)22,where If(r)is the target feature vector from If, andˆF(r)is estimated by a discrete approximationof the feature rendering integral in Eq. 1.Extracting Dense Visual Features from CLIP. The common practice for extracting dense fea-tures from ViTs is to use the key/value embeddings from the last layer before pooling [4]. Whilethis approach has yielded great results for vision-only models on image segmentation and image-to-image correspondence tasks [7, 15], it removes the transformations needed to project the visualfeatures into the shared feature space with the language stream in vision-language models such asCLIP [1, 16]. Re-aligning the dense features typically requires additional training, which negativelyaffects the model’s open-text generalization.Rather than using CLIP as an image-level feature (Alg. 1, App. A.2), we extract dense features fromCLIP using the MaskCLIP reparameterization trick (Alg. 2) [11]. These features retain a sufficientalignment with the language embedding to support zero-shot language guidance in our experiments(see Fig.7). We present pseudocode for this technique in Appendix A.2. A second modification weapply is to interpolate the position encoding (see 4) to accommodate larger images with arbitraryaspect ratios. This is needed because CLIP uses a small, fixed number of input patches from asquare crop. 
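As a concrete illustration of this second modification, resizing the ViT position embeddings amounts to a bicubic interpolation of the patch tokens. The sketch below assumes a (1, 1 + H*W, dim) embedding layout with the CLS token first; it is illustrative and not the released F3RM code.

import torch
import torch.nn.functional as F

def interpolate_pos_embedding(pos_embed, new_h, new_w):
    """Resize ViT position embeddings so CLIP can ingest larger, non-square images.

    pos_embed: (1, 1 + H*W, dim) original embeddings, CLS token first (assumed layout).
    Returns:   (1, 1 + new_h*new_w, dim)
    """
    cls_tok, patch_tok = pos_embed[:, :1], pos_embed[:, 1:]
    old = int(patch_tok.shape[1] ** 0.5)                                   # original grid is square
    grid = patch_tok.reshape(1, old, old, -1).permute(0, 3, 1, 2)          # (1, dim, old, old)
    grid = F.interpolate(grid, size=(new_h, new_w), mode="bicubic", align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, new_h * new_w, -1)
    return torch.cat([cls_tok, grid], dim=1)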
These two techniques combined enable us to extract dense, high-resolution patch-level2D features from RGB images at about 25frames per second and does not require fine-tuning CLIP.1Prior work that uses the association of image pixels in 3D to supervise learning 2D image features makesthe same assumption [13, 14].3Task EmbeddingDemo Embeddings(c) Average Over n DemosFeatures at Query Points(b) Sample Feature VectorsQuery Points6 DOF Gripper Pose (a) Collect Demonstrations in VRconcat =Example ObjectFeature Fieldz1z2ZMFigure 2: Representing 6-DOF Poses. (a) Recording the gripper pose T∗in virtual reality (VR) onan example mug. (b) We approximate the continuous local field via a fixed set of query points inthe gripper’s canonical frame. (c) We concatenate feature vectors at these query points, then averageovern(we use n= 2) demonstrations. This gives a task embedding ZMfor the task M.3.2 Representing 6-DOF Poses with Feature FieldsWe wish to represent the pose of the gripper in a demonstration by the local 3D feature field in thegripper’s coordinate frame. We approximate this local context via a discrete set of query points andthe feature vectors measured at each point. We sample a fixed set of Nqquery points X={x∈R3}Nqin the canonical gripper frame for each task Mfrom a 3D Gaussian. We adjust its mean andvariance manually to cover parts of the object we intend to target, as well as important context cues(e.g., body of the mug when grasping the handle) and free-space (Fig.2b). For a 6-DOF gripper poseT, we sample the feature field fat each point in the query point cloud, transformed by T(Fig.2b).To account for the occupancy given by the local geometry, we weigh the features by their corre-sponding alpha values from the density field σof the NeRF model, integrated over the voxel. At apointxin the world frame, this produces the α-weighted featuresfα(x) =α(x)·f(x), where α(x) = 1−exp(−σ(x)·δ)∈(0,1), (2)andδis the distance between adjacent samples. We sample a set of features {fα(x)|x∈TX}using the transformed query points TX, and concatenate along the feature-dimension into a vector,zT∈RNq·|f|. The query points Xand demo embedding zTthus jointly encode the demo pose T.We specify each manipulation task Mby a set of demonstrations {D}. We average zTover thedemos for the same task to obtain a task embedding ZM∈RNq·|f|(Fig. 2c). This allows us toreject spurious features and focus on relevant parts of the feature space. This representation schemeis similar to the one used in Neural Descriptor Fields [17]. The main distinction is that NDF istrained from scratch on object point clouds, whereas our feature field is sourced from 2D foundationmodels that are trained over internet-scale datasets. The capabilities that emerge at this scale hold thepotential for open-ended generalization beyond the few examples that appear in the demonstrations.Inferring 6-DOF Poses. Our inference procedure involves a coarse pre-filtering step for the trans-lational DOFs, and an optimization-based fine-tuning step for the rotational DOFs. First, we samplea dense voxel grid over the workspace, where each voxel vhas a grid-size δ. We remove free spaceby rejecting voxels with alphas α(v)< ε free. We then remove voxels that are irrelevant to the task,using the cosine similarity between the voxel feature fα(v)and the task embedding ZM. To get thecomplete 6-DOF poses T={T}, we uniformly sample Nrrotations for each remaining voxel v.Pose Optimization. 
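The pose optimization described next operates on the demo embedding zT and the task embedding ZM. A minimal sketch of how they can be assembled from Eq. 2 is shown below; feature_field and density_field stand in for the distilled field and the NeRF density, and the function names are illustrative rather than the authors' implementation.

import torch

def alpha_weighted_features(feature_field, density_field, points, delta):
    """f_alpha(x) = alpha(x) * f(x) at world-frame points (Eq. 2)."""
    sigma = density_field(points)                        # (num_points,)
    alpha = 1.0 - torch.exp(-sigma * delta)              # occupancy weight in (0, 1)
    return alpha.unsqueeze(-1) * feature_field(points)   # (num_points, feat_dim)

def demo_embedding(feature_field, density_field, query_points, R, t, delta):
    """Embed a 6-DOF pose T = (R, t) via the query points transformed into the world frame."""
    world_points = query_points @ R.T + t                # (N_q, 3)
    feats = alpha_weighted_features(feature_field, density_field, world_points, delta)
    return feats.reshape(-1)                             # z_T, size N_q * |f|

def task_embedding(demo_embeddings):
    """Average z_T over the demonstrations of a task to obtain Z_M."""
    return torch.stack(demo_embeddings).mean(dim=0)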
We optimize the initial poses with the following cost functionJpose(T) =−cos(zT,ZM) (3)using the Adam optimizer [18] to search for poses that have the highest similarity to the task embed-dingZM. After each optimization step, we prune poses that have the highest costs. We also rejectposes that are in collision by thresholding the number of overlapping voxels between a voxelizedgripper model and the scene geometry. This leaves us with a ranked list of poses that we feed intoa motion planner in PyBullet [19, 20]. We execute the highest-ranked grasp or place pose that has a4Pick up the BowlUserText Features from /gid00036/gid00045/gid00042/gid00049arg maxd=Selected Demo(a) Retrieving DemonstrationsFeatures at Query Pointsminimize SelectedDemoText Featuresfrom CLIPPick up the BowlUserAverage over (b) Language-Guided Pose OptimizationFigure 3: Pipeline for Language-Guided Manipulation. (a) Encode the language query with CLIP,and compare its similarity to the average query point features over a set of demos. The mug lipdemos have the highest similarity to “Pick up the Bowl”. (b) Generate and optimize grasp proposalsusing the CLIP feature field by minimizing Jlang. We use the selected demo from (a) in Jpose, andcompute the language-guidance weight with the text features and average query point features.valid motion plan. Observe that our scheme operates over the entire feature field, and does not relyon assumptions about objectness such as segmentation masks or object poses.3.3 Open-Text Language-Guided ManipulationNatural language offers a way to extend robotic manipulation to an open-set of objects, servingas an attractive alternative when photos of the target object are inaccurate or unavailable. In ourlanguage-guided few-shot manipulation pipeline, the learning procedure and the representation forthe demonstrations remain consistent with Section 3.2. At test time, the robot receives open-textlanguage queries from the user that specify the object of interest to manipulate. Our language-guidedpose inference procedure comprises three steps (see Fig.3): (i) retrieving relevant demonstrations,(ii) initializing coarse grasps, and (iii) language-guided grasp pose optimization.Retrieving Relevant Demonstrations. We select the two demonstrations whose average featureFd(averaged among the query points of each demo pose T∗) is closest to the text embeddingq=emb CLIP(L+)(Fig.3a). We found that using the positive query text ( L+) alone is sufficient.This means finding the demonstration that maximizes the cosine similarity cos(q,Fd). Note thatthe objects used in the demonstrations do not have to come from the same category as the targetobject. For instance, asking the robot to pick up the “measuring beaker” or “bowl” leads to the robotchoosing the demonstration of picking up a mug by its lip (Fig.4).Initializing Grasp Proposals. We speed up grasp pose inference by first running a coarse proposalstep where we filter out regions in the feature field that are irrelevant to the text query. We start bysampling a dense voxel grid among the occupied regions by masking out free space (see Sec.3.2).Afterward, we prune down the number of voxels by keeping those more similar to the positive queryL+than any one of the negative queries L−. Formally, let q−i=emb CLIP(L−i)|i∈ {1, . . . , n }be the text embeddings of the negative queries. We compute the softmax over the pair-wise cosinesimilarity between the voxel’s feature fα(v)and the ensemble [q,q−1,q−2, . . . ,q−n], and identify theclosest negative query q−. 
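A minimal sketch of this text-based voxel filtering is given below; the exact thresholding is spelled out in the next sentences. The tensor shapes and the use of cosine similarity follow the description above, and this is not the released implementation.

import torch
import torch.nn.functional as F

def filter_voxels_by_text(voxel_feats, q_pos, q_negs):
    """Keep voxels more likely associated with the positive query than with their closest negative.

    voxel_feats: (V, dim) alpha-weighted voxel features f_alpha(v)
    q_pos:       (dim,)   CLIP embedding of the positive query L+
    q_negs:      (n, dim) CLIP embeddings of the negative queries L-
    """
    queries = torch.cat([q_pos[None], q_negs], dim=0)                        # (1 + n, dim)
    sims = F.cosine_similarity(voxel_feats[:, None], queries[None], dim=-1)  # (V, 1 + n)
    probs = sims.softmax(dim=-1)
    closest_neg = probs[:, 1:].max(dim=-1).values        # probability mass of the closest negative
    return probs[:, 0] > closest_neg                     # boolean mask of voxels to keep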
We remove voxels that are closer to q−than the positive query q. Thecosine similarity between the voxel embedding and [q,q−]pair forms a binomial distribution thatallows us to reject voxels that have <50% probability of being associated with the negative query.Finally, to get the set of initial poses T={T}, we sample Nrrotations for each remaining voxel.Language-Guided Grasp Pose Optimization. To incorporate language guidance, we first com-puteJposefrom Eq.3 using the two demonstrations retrieved in the first step. We then assign a lowercost to regions that are more similar to the language query qby computing a language-guidanceweight Cq=meanx∈TX[q⊗fα(x)], and multiply it with Jpose(Fig.3b)Jlang(T) =meanx∈TXhq⊗fα(x)i· Jpose(T). (4)5(a) Grasp mug (lip) (b) Grasp screwdriver(c) Grasp caterpillar (d) Place cup on rackFigure 4: Five Grasping and Place Tasks. (a)grasping a mug by its lip or handle (Fig.2); (b) ascrewdriver by the handle; (c) the caterpillar byits ears; and (d) placing a cup onto a drying rack.Gripper poses indicate one of two demonstrations.(a) Top 10 Grasps (b) Robot Execution(c) Top 10 Grasps (d) Robot Execution.Figure 5: Generalizing to Novel Objects.(Top Row) Mug is much bigger than the onesused for demonstration. (Bottom Row) Thisrack has shorter pegs with a square cross-section. Demo rack is cylindrical (cf. Fig.4d).The first term, Cq, is the normalized cosine similarity between the text embedding qand the averageα-weighted query point feature for a pose T. We iteratively update the pose Tvia gradient descentwhile pruning using the procedure from Section 3.2 till convergence.4 Results4.1 Learning to Grasp from DemonstrationsWe consider five 6-DOF grasping and placing tasks and provide two demonstrations per task (Fig.4).To label the demonstrations, we load a NeRF-reconstructed point cloud into virtual reality, and usea hand controller to move the gripper to the desired pose (Fig.2a). We compare the performance ofthree types of distilled features: (1) DINO ViT, (2) CLIP ViT, and (3) CLIP ResNet. We considerthree baselines, including (1) using density σfrom the NeRF, (2) the intermediate NeRF features,and (3) the RGB color value as features, and compare against MIRA [21], a recent work which usesNeRFs to render orthographic viewpoints for pixel-wise affordance predictions. For each task, weevaluate in ten scenes that contain novel objects in arbitrary poses and distractor objects. The novelobjects belong to the same or related object category as the demo objects, but differ in shape, size,material and appearance. We reset the scenes to about the same configuration for each comparedmethod. We include the full details on the experimental setup in Appendix A.4.We present the success rates in Table 1 and examples of robot executions in Figure 5. While thebaselines using density, RGB color values, or intermediate features from NeRF achieve respectableperformance, they struggle to identify the semantic category of the objects we care about, especiallyin complex scenes with more distractors. We find that DINO and CLIP feature fields exhibit im-pressive generalization capabilities and have complementary advantages. The DINO ViT has a goodpart-level understanding of object geometry with 7/19failure cases caused by inaccuracies in thegrasp rotations and occasionally, the translations. In comparison, 21/27failures for CLIP ViT andResNet combined may be attributed to this issue. 
We find that CLIP favors semantic and categoricalinformation over geometric features, which are essential for grasping and placing objects. DINO, onthe other hand, struggles with distinguishing target objects from distractor objects that contain sim-ilar visual appearance to the objects used in the demonstrations. CLIP struggles less in this regard.The fusion between semantic features and detailed 3D geometry offers a way to model multipleobjects piled tightly together: in Figure 6b, for instance, a caterpillar toy is buried under other toys.Figure 6c shows our robot grasping the caterpillar, and dragging it from the bottom of the pile.6(a) Demonstration (1 of 2) (b) Feature Field of Cluttered Scene (c) Robot ExecutionFigure 6: Grasping in a Cluttered Scene. (a) Demonstration for grasping the caterpillar in itsDINO feature field (color is PCA, red dots show query points). (b) A cluttered scene with severaltoys on top of each other. Inset shows the top 10 inferred grasps. Observe the caterpillar’s ears sharethe same features with the demo. (c) Robot successfully grasps the caterpillar.Mug Mug Caterpillar Screwdriver Cup Totallip handle ear handle on rackMIRA [21] 1/10 2 /10 6 /10 3 /10 3 /10 15 /50Density 5/10 5 /10 10/10 2/10 5 /10 27 /50Intermediate 2/10 2 /10 1 /10 3 /10 1 /10 9 /50RGB 4/10 3 /10 9 /10 1 /10 4 /10 21 /50DINO ViT 5/10 4 /10 8 /10 6 /10 8/10 31/50CLIP ViT 7/10 7/10 8/10 6 /10 6 /10 34 /50CLIP ResNet 9/10 6/10 9 /10 8/10 7/10 39/50Table 1: Success rates on grasping and placing tasks. We comparethe success rates over ten evaluation scenes given two demonstrationsfor each task. We consider a run successful if the robot grasps orplaces the correct corresponding object part for the task.Color 7/10Material 7/10Relational 4/10General 4/10OOD 9/10Total 31/50Table 2: Success ratesof Language-Guided Ma-nipulation. Languagequery success rates acrosssemantic categories.4.2 Language-Guided Object ManipulationWe set up 13table-top scenes to study the feasibility of using open-text language and CLIP featurefields for designating objects to manipulate. We reuse the ten demonstrations from the previoussection (Sec. 4.1), which span four object categories (see Fig.4). We include three types of objectsin our test scenes: (1) novel objects from the same categories as the demonstrations, (2) out-of-distribution (OOD) objects from new categories that share similar geometry as the demonstrateditems (e.g., bowls, measuring beakers, utensils), and (3) distractor items that we desire the systemignore. Success metric : we consider a language query successful if the robot stably grasps the targetobject and places it in a plastic bin at a known location. We include more details in Appendix A.7.We break down the success rates by category in Table 2, and show the robot’s execution sequence foran example scene in Figure 7 (video). This scene contained eleven objects, four were sourced fromthe YCB object dataset (the apple, black marker, mango, and a can of SPAM), the rest collected fromthe lab and bought online. We present five successful grasps (Figure 7). The robot failed to graspthe stainless steel jug by its handle due to a small error in the grasp rotation. This is a typical failurecase — six out of 19failures stem from these poor grasp predictions with rotational or translationalerrors. 
The remaining 13/19failed grasps are due to CLIP features behaving like a bag-of-wordsand struggling to capture relationships, attributes, and ordinal information within sentences [22].For instance, in queries such as “black screwdriver” and “mug on a can of spam,” CLIP paid moreattention to other black objects and the can of spam, respectively. We found that retrieving demosvia text occasionally (particularly in the rack scene) benefits from prompt engineering.In total, our robot succeeds in 31out of 50language queries, including both fairly general queries(e.g., mug, drying rack) and ones that specify properties, such as color, material, and spatial rela-tions (e.g., screwdriver on the block). Notably, our robot generalizes to out-of-distribution objectcategories including bowls, rolls of tape, whiteboard markers and utensils using demonstrationsonly on mugs and screwdrivers. Although this success rate is far from practical for industrial use,our overall strategy of using 2D visual priors for 3D scene understanding can leverage the rapidadvancements in VLMs, which hold significant potential for improving performance.7Figure 7: Language-Guided Manipulation Execution. (Top Row) Heatmaps given the languagequeries. (Bottom Row) Robot executing grasps sequentially without rescanning. CLIP can behavelike a bag-of-words, as shown by the bleed to the blue bowl for “blue screwdriver.”5 Related WorkOpen-Ended Generalization via Language. A number of prior work use natural language fortask-specification and long-horizon planning in applications such as tabletop and mobile manipula-tion [23, 24, 25, 26] and navigation [27, 28]. A recent line of work seeks to replicate the success ofvision and language models in robotics, by jointly pre-training large foundation models on behaviordata [29, 30]. The goal of our work is different. F3RM seeks to incorporate geometric informationwith anypre-trained features by lifting imagery and language prior knowledge into 3D.Dense 2D Visual Descriptors. [31, 32] use dynamic 3D reconstruction from RGB-D videos toprovide association labels between pixels from different video frames. Dense Object Nets eliminatethe need for dynamic reconstruction, using multi-view RGB-D images of static scenes in the contextof robotics [13, 33]. NeRF-Supervision extends descriptor learning to thin structures and reflectivematerials via NeRFs for 3D reconstruction [14]. In contrast, recent work indicates that excellentdense correspondence can emerge at a larger scale without the need for explicit supervision [4, 34].Geometric Aware Representations for Robotics. Geometric understanding is an essential partof mapping [35, 36, 37], grasping [38, 39, 17, 40], and legged locomotion [41]. These work eitherrequire direct supervision from 3D data such as point clouds, or try to learn representations fromposed 2D images or videos [13, 14]. [21, 42, 43] leverage neural scene representations to takeadvantage of their ability to handle reflective or transparent objects and fine geometry. Our workincorporates pre-trained vision foundation models to augment geometry with semantics.3D Feature Fields. A number of recent work integrate 2D foundation models with 3D neural fieldsin contexts other than robotic manipulation [44, 45, 46, 47, 48]. See Appendix A.8 for a comprehen-sive overview. Our work shares many similarities with LERF [49]. However, unlike the multi-scaleimage-level features used in LERF, we extract dense patch-level features from CLIP. 
Additionally,we take a step further by exploring the utilization of feature fields for robotic manipulation.6 ConclusionWe have illustrated a way to combine 2D visual priors with 3D geometry to achieve open-endedscene understanding for few-shot and language-guided robot manipulation. Without fine-tuning,Distilled Feature Fields enable out-of-the-box generalization over variations in object categories,material, and poses. When the features are sourced from vision-language models, distilled featurefields offer language-guidance at various levels of semantic granularity.Limitations. Our system takes 1m40s to collect 50images of the scene, and 90s to model theNeRF and feature field. This highlights the need to develop generalizable NeRFs that can recovergeometry quickly with just a few views [9, 43], opening the possibility for closed-loop dynamic ma-nipulation. More generally, novel view synthesis is a generative process not too different from imagegeneration with GANs [50] and diffusion models [51]. These alternatives, to which our philosophyequally applies, hold promise for solving general-purpose visual and geometric understanding.8AcknowledgementWe gratefully acknowledge support from Amazon.com Service LLC, Award #2D-06310236; fromthe National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI In-stitute for Artificial Intelligence and Fundamental Interactions, http://iaifi.org/); from NSF grant2214177; from AFOSR grant FA9550-22-1-0249; from ONR MURI grant N00014-22-1-2740; fromARO grant W911NF-23-1-0034; from the MIT-IBM Watson Lab; and from the MIT Quest for In-telligence. The authors also thank Tom ́as Lozano-P ́erez, Lin Yen-Chen and Anthony Simeonovfor their advice; Boyuan Chen for initial discussions; Rachel Holladay for her extensive help withsetting up the robot; and Tom Silver for providing feedback on an earlier draft.References[1] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In International Conference on Machine Learning . PMLR, 2021.[2] G. Ilharco, M. Wortsman, R. Wightman, C. Gordon, N. Carlini, R. Taori, A. Dave, V . Shankar,H. Namkoong, J. Miller, H. Hajishirzi, A. Farhadi, and L. Schmidt. OpenCLIP, July 2021.URL https://doi.org/10.5281/zenodo.5143773 .[3] C. Jia, Y . Yang, Y . Xia, Y .-T. Chen, Z. Parekh, H. Pham, Q. Le, Y .-H. Sung, Z. Li, and T. Duerig.Scaling Up Visual and Vision-Language Representation Learning with Noisy Text Supervision.InInternational Conference on Machine Learning . PMLR, 2021.[4] M. Caron, H. Touvron, I. Misra, H. J ́egou, J. Mairal, P. Bojanowski, and A. Joulin. Emergingproperties in self-supervised vision transformers. In Proceedings of the IEEE/CVF Interna-tional Conference on Computer Vision , 2021.[5] V . Tschernezki, I. Laina, D. Larlus, and A. Vedaldi. Neural Feature Fusion Fields: 3D Dis-tillation of Self-Supervised 2D Image Representations. In Proceedings of the InternationalConference on 3D Vision (3DV) , 2022.[6] S. Kobayashi, E. Matsumoto, and V . Sitzmann. Decomposing NeRF for Editing via FeatureField Distillation. In Advances in Neural Information Processing Systems , volume 35, 2022.[7] S. Amir, Y . Gandelsman, S. Bagon, and T. Dekel. Deep ViT Features as Dense Visual Descrip-tors. ECCVW What is Motion For? , 2022.[8] T. M ̈uller, A. Evans, C. Schied, and A. Keller. Instant Neural Graphics Primitives with aMultiresolution Hash Encoding. 
ACM Transactions on Graphics (ToG) , 2022.[9] A. Yu, V . Ye, M. Tancik, and A. Kanazawa. pixelNeRF: Neural radiance fields from one orfew images. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , 2021.[10] M. Tancik, E. Weber, E. Ng, R. Li, B. Yi, J. Kerr, T. Wang, A. Kristoffersen, J. Austin,K. Salahi, A. Ahuja, D. McAllister, and A. Kanazawa. Nerfstudio: A Modular Frameworkfor Neural Radiance Field Development. In ACM SIGGRAPH 2023 Conference Proceedings ,SIGGRAPH ’23, 2023.[11] C. Zhou, C. C. Loy, and B. Dai. Extract free dense labels from clip. In European Conferenceon Computer Vision (ECCV) , 2022.[12] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. NeRF:Representing Scenes as Neural Radiance Fields for View Synthesis. In European Conferenceon Computer Vision (ECCV) , 2020.9[13] P. R. Florence, L. Manuelli, and R. Tedrake. Dense Object Nets: Learning Dense Visual ObjectDescriptors By and For Robotic Manipulation. In Proceedings of the 2nd Conference on RobotLearning (CoRL) , June 2018.[14] L. Yen-Chen, P. Florence, J. T. Barron, T.-Y . Lin, A. Rodriguez, and P. Isola. NeRF-Supervision: Learning dense object descriptors from neural radiance fields. In IEEE Con-ference on Robotics and Automation (ICRA) , 2022.[15] M. Hamilton, Z. Zhang, B. Hariharan, N. Snavely, and W. T. Freeman. Unsupervised SemanticSegmentation by Distilling Feature Correspondences. In International Conference on LearningRepresentations , 2022. URL https://openreview.net/forum?id=SaKO6z6Hl0c .[16] Y . Rao, W. Zhao, G. Chen, Y . Tang, Z. Zhu, G. Huang, J. Zhou, and J. Lu. Denseclip:Language-guided dense prediction with context-aware prompting. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , 2022.[17] A. Simeonov, Y . Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V . Sitz-mann. Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation.InIEEE Conference on Robotics and Automation (ICRA) , 2022.[18] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In InternationalConference on Learning Representations , 2015.[19] E. Coumans and Y . Bai. Pybullet, a python module for physics simulation for games, roboticsand machine learning. http://pybullet.org , 2016.[20] C. Garrett. PyBullet Planning. https://github.com/caelan/pybullet-planning , 2018.[21] L. Yen-Chen, P. Florence, A. Zeng, J. T. Barron, Y . Du, W.-C. Ma, A. Simeonov, A. R. Garcia,and P. Isola. MIRA: Mental imagery for robotic affordances. In Conference on Robot Learning(CoRL) , 2022.[22] M. Yuksekgonul, F. Bianchi, P. Kalluri, D. Jurafsky, and J. Zou. When and Why Vision-Language Models Behave like Bags-Of-Words, and What to Do About It? In InternationalConference on Learning Representations , 2023. URL https://openreview.net/forum?id=KRLUvxh8uaX .[23] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipu-lation. In Proceedings of the 5th Conference on Robot Learning (CoRL) , 2021.[24] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In Proceedings of the 6th Conference on Robot Learning (CoRL) , 2022.[25] S. Karamcheti, S. Nair, A. S. Chen, T. Kollar, C. Finn, D. Sadigh, and P. Liang. Language-driven representation learning for robotics. In Robotics: Science and Systems (RSS) , 2023.[26] S. Stepputtis, J. Campbell, M. Phielipp, S. Lee, C. Baral, and H. Ben Amor. 
Language-Conditioned Imitation Learning for Robot Manipulation Tasks. Advances in Neural Infor-mation Processing Systems , 33, 2020.[27] A. Stone, T. Xiao, Y . Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich,F. Xia, C. Finn, et al. Open-world object manipulation using pre-trained vision-language mod-els.arXiv preprint arXiv:2303.00905 , 2023.[28] D. Shah, B. Osinski, B. Ichter, and S. Levine. LM-nav: Robotic navigation with large pre-trained models of language, vision, and action. In 6th Annual Conference on Robot Learning ,2022. URL https://openreview.net/forum?id=UW5A3SweAH .10[29] D. Driess, F. Xia, M. S. M. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson,Q. Vuong, T. Yu, W. Huang, Y . Chebotar, P. Sermanet, D. Duckworth, S. Levine, V . Vanhoucke,K. Hausman, M. Toussaint, K. Greff, A. Zeng, I. Mordatch, and P. Florence. PaLM-E: AnEmbodied Multimodal Language Model. arXiv preprint arXiv:2303.03378 , Mar. 2023.[30] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. J. Joshi,R. Julian, D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manju-nath, I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao,M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran,V . Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich. RT-1:Robotics Transformer for real-world control at scale. arXiv preprint arXiv:2212.06817 , Dec.2022.[31] C. B. Choy, J. Gwak, S. Savarese, and M. Chandraker. Universal Correspondence Network. InAdvances in Neural Information Processing Systems , June 2016.[32] T. Schmidt, R. Newcombe, and D. Fox. Self-supervised visual descriptor learning for densecorrespondence. IEEE Robotics and Automation Letters (RA-L) , Apr. 2017.[33] P. Florence, L. Manuelli, and R. Tedrake. Self-Supervised Correspondence in VisuomotorPolicy Learning. IEEE Robotics and Automation Letters (RA-L) , 5(2):492–499, Apr. 2020.[34] M. Oquab, T. Darcet, T. Moutakanni, H. V . V o, M. Szafraniec, V . Khalidov, P. Fernandez,D. Haziza, F. Massa, A. El-Nouby, R. Howes, P.-Y . Huang, H. Xu, V . Sharma, S.-W. Li,W. Galuba, M. Rabbat, M. Assran, N. Ballas, G. Synnaeve, I. Misra, H. Jegou, J. Mairal,P. Labatut, A. Joulin, and P. Bojanowski. DINOv2: Learning robust visual features withoutsupervision. arXiv:2304.07193 , 2023.[35] J. J. Leonard and H. F. Durrant-Whyte. Simultaneous map building and localization for anautonomous mobile robot. In IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , volume 3, pages 1442–1447, 1991.[36] H. Durrant-Whyte, D. Rye, and E. Nebot. Localization of Autonomous Guided Vehicles. InRobotics Research , pages 613–625. Springer London, 1996.[37] H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping: part i. IEEERobotics and Automation Magazine , 13(2):99–110, 2006. doi:10.1109/MRA.2006.1638022.[38] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic graspmetrics. In Robotics: Science and Systems (RSS) , 2017.[39] X. Yan, J. Hsu, M. Khansari, Y . Bai, A. Pathak, A. Gupta, J. Davidson, and H. Lee. Learning6-dof grasping interaction via deep geometry-aware 3d representations. 
In IEEE InternationalConference on Robotics and Automation (ICRA) , pages 3766–3773. IEEE, 2018.[40] J. Urain, N. Funk, J. Peters, and G. Chalvatzaki. SE(3)-DiffusionFields: Learning smoothcost functions for joint grasp and motion optimization through diffusion. IEEE InternationalConference on Robotics and Automation (ICRA) , 2023.[41] R. Yang, G. Yang, and X. Wang. Neural volumetric memory for visual locomotion control.InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR) , 2023.[42] J. Kerr, L. Fu, H. Huang, Y . Avigal, M. Tancik, J. Ichnowski, A. Kanazawa, and K. Goldberg.Evo-NeRF: Evolving NeRF for Sequential Robot Grasping of Transparent Objects. In 6thAnnual Conference on Robot Learning , 2022. URL https://openreview.net/forum?id=Bxr45keYrf .11[43] Q. Dai, Y . Zhu, Y . Geng, C. Ruan, J. Zhang, and H. Wang. GraspNeRF: Multiview-based6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF. InIEEE International Conference on Robotics and Automation (ICRA) , 2023.[44] S. Peng, K. Genova, C. M. Jiang, A. Tagliasacchi, M. Pollefeys, and T. Funkhouser. Open-Scene: 3D Scene Understanding with Open V ocabularies. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition (CVPR) , 2023.[45] N. M. M. Shafiullah, C. Paxton, L. Pinto, S. Chintala, and A. Szlam. Clip-fields: Weaklysupervised semantic fields for robotic memory. In Robotics: Science and Systems (RSS) , 2023.[46] B. Bolte, A. S. Wang, J. Yang, M. Mukadam, M. Kalakrishnan, and C. Paxton. USA-Net:Unified Semantic and Affordance Representations for Robot Memory. ArXiv , abs/2304.12164,2023.[47] C. Huang, O. Mees, A. Zeng, and W. Burgard. Visual Language Maps for Robot Navigation.InIEEE International Conference on Robotics and Automation (ICRA) , London, UK, 2023.[48] K. Jatavallabhula, A. Kuwajerwala, Q. Gu, M. Omama, T. Chen, S. Li, G. Iyer, S. Saryazdi,N. Keetha, A. Tewari, J. Tenenbaum, C. de Melo, M. Krishna, L. Paull, F. Shkurti, and A. Tor-ralba. Conceptfusion: Open-set multimodal 3d mapping. In Robotics: Science and Systems(RSS) , 2023.[49] J. Kerr, C. M. Kim, K. Goldberg, A. Kanazawa, and M. Tancik. Lerf: Language embeddedradiance fields. In International Conference on Computer Vision (ICCV) , 2023.[50] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,and Y . Bengio. Generative adversarial nets. In Advances in Neural Information ProcessingSystems , volume 27, 2014.[51] Y . Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole. Score-Based Generative Modeling through Stochastic Differential Equations. In International Con-ference on Learning Representations , 2021. URL https://openreview.net/forum?id=PxTIG12RRHS .[52] J. T. Kajiya and B. P. V on Herzen. Ray tracing volume densities. ACM SIGGRAPH computergraphics , 18(3):165–174, 1984.[53] N. Max. Optical models for direct volume rendering. IEEE Trans. Vis. Comput. Graph. , 1(2):99–108, June 1995.[54] J. L. Sch ̈onberger and J.-M. Frahm. Structure-from-Motion Revisited. In Conference on Com-puter Vision and Pattern Recognition (CVPR) , 2016.[55] J. L. Sch ̈onberger, E. Zheng, M. Pollefeys, and J.-M. Frahm. Pixelwise View Selection forUnstructured Multi-View Stereo. In European Conference on Computer Vision (ECCV) , 2016.[56] C.-H. Lin, W.-C. Ma, A. Torralba, and S. Lucey. Barf: Bundle-adjusting neural radiance fields.InProceedings of the IEEE/CVF International Conference on Computer Vision , pages 5741–5751, 2021.[57] Z. Wang, S. 
Wu, W. Xie, M. Chen, and V . A. Prisacariu. NeRF −−: Neural Radiance FieldsWithout Known Camera Parameters. arXiv preprint arXiv:2102.07064 , 2021.[58] A. Simeonov, Y . Du, L. Yen-Chen, , A. Rodriguez, L. P. Kaelbling, T. L. Perez, and P. Agrawal.SE(3)-Equivariant Relational Rearrangement with Neural Descriptor Fields. In Proceedings ofthe 6th Conference on Robot Learning (CoRL) . PMLR, 2022.12[59] V . Zhong, T. Rockt ̈aschel, and E. Grefenstette. RTFM: Generalising to New EnvironmentDynamics via Reading. In International Conference on Learning Representations , 2020. URLhttps://arxiv.org/abs/1910.08210 .[60] D. Bahdanau, F. Hill, J. Leike, E. Hughes, A. Hosseini, P. Kohli, and E. Grefenstette. Learningto understand goal specifications by modelling reward. In International Conference on Learn-ing Representations , 2018. URL https://openreview.net/forum?id=H1xsSjC9Ym .[61] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision . Cambridge Uni-versity Press, Mar. 2004.[62] L. Li, Z. Shen, Z. Wang, L. Shen, and L. Bo. Compressing volumetric radiance fields to 1 mb.InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 4222–4231, 2023.13AppendixA.1 Neural Radiance Fields (NeRFs)Neural radiance fields [12] model a scene as a 6D, vector-valued continuous function that maps froma position x= (x, y, z )and a normalized viewing direction d= (dx, dy, dz), to the differentialdensity σand emitted color (r, g, b ). In practice, this is achieved via two neural networks whichpartially share parameters: 1) the density network σ(x)which depends only on the position x; and2) the color network c(x,d)which depends on both the position xand viewing direction d.Novel-View Synthesis. NeRF synthesizes an image by casting a ray rfrom the camera origin othrough the center of each pixel. Points along the ray are parameterized as rt=o+td, where tisthe distance of the point to the camera origin o. The color C(r)of the ray rbetween the near andfar scene bounds tnandtfis given by the volume rendering integral [52]C(r) =ZtftnT(t)σ(rt)c(rt,d) dt, T (t) = exp−Zttnσ(rs) ds, (5)where T(t)is the accumulated transmittance along the ray from rtntort.Modeling a Scene with NeRFs. For a scene, we are given a dataset of NRGB images {I}Ni=1with camera poses {T}Ni=1. At each iteration, we sample a batch of rays R ∼ { T}Ni=1and optimizeσandcby minimizing the photometric loss Lrgb=Pr∈R∥ˆC(r)−I(r)∥22,where I(r)is the RGBvalue of the pixel corresponding to ray r∈ R, andˆC(r)is the color estimated by the model using adiscrete approximation of Equation 5 [12, 53].A.2 Dense 2D Feature Extraction via MaskCLIPWe provide pseudo code for the MaskCLIP method [11] for extracting dense, patch-level featuresfrom the CLIP model [1] below. Algorithm 1 is the computation graph of the last layer of vanillaCLIP. Algorithm 2 is MaskCLIP’s modified graph. Note that the two linear transformations via WvandWoutcan be fused into a single convolution operation. We provide our feature extraction codein our GitHub repository ( https://github.com/f3rm/f3rm ).Algorithm 1 Image Feature (Original)1def forward(x):2q, k, v = W_qkv @ self.ln_1(x)3v = (q[:1] * k).softmax(dim=-1) * v4x = x + W_out @ v5x = x + self.mlp(self.ln_2(x))6return x[:1] # the CLS tokenAlgorithm 2 Dense Features (MaskCLIP 11)1def forward(x):2v = W_v @ self.ln_1(x)3z = W_out @ v4return z[1:] # all but the CLS tokenA.3 Feature FieldsImplementation Details. 
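One detail worth making concrete is the discrete quadrature used to render features along a ray (Eq. 1) together with the distillation loss Lfeat. The sketch below uses illustrative tensor shapes and is not the authors' implementation.

import torch

def render_features(sigmas, feats, deltas):
    """Discrete approximation of the feature rendering integral (Eq. 1).

    sigmas: (num_rays, num_samples)           densities along each ray
    feats:  (num_rays, num_samples, feat_dim) feature vectors f(r_t) at the samples
    deltas: (num_rays, num_samples)           spacing between adjacent samples
    Returns per-ray features F(r): (num_rays, feat_dim).
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)            # per-segment opacity
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)   # prod over j <= i of (1 - alpha_j)
    trans = torch.roll(trans, shifts=1, dims=-1)
    trans[:, 0] = 1.0                                     # T(t_n) = 1 at the near plane
    weights = (alphas * trans).unsqueeze(-1)              # (num_rays, num_samples, 1)
    return (weights * feats).sum(dim=-2)

def feature_distillation_loss(rendered, target):
    """Quadratic loss between rendered features and the target 2D features I_f(r)."""
    return ((rendered - target) ** 2).sum(dim=-1).mean()  # averaged over rays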
Memory for caching the 2D feature map is a significant system bottle-neck that does not appear with RGB reconstruction because high-dimensional features, up-scaled tothe RGB image resolution, can grow to more than 40GB for a standard NeRF dataset. We solvethis issue by reconstructing patch-level feature maps without up-scaling them to pixel resolution.We speed up our feature distillation by building off newer NeRF implementations using hierarchicalhash grids [8] based on Nerfacto [10].Feature Field Quality. F3RM benefits from neural feature fields’ ability to reconstruct detailed3D geometry. We offer such an example in Figure A8. Notice the difference in resolution, betweenthe source 2D feature map (middle), and the final feature field.14(a) RGB Image (b) Raw DINO ViT Feat. (c) Distilled Feat.Figure A8: Level of Detail. (a) Mesh strainer and whisk. (b) Raw feature map from DINO ViT,very low in resolution. Colors correspond to PCA of the features. (c) 3D feature fields recover ahigher level of detail than the source 2D feature maps. Inset corresponds to (b) in its original sizefor comparison.(a) CLIP Feature Mean Squared Error0 2000 4000 6000 8000 10000Step0.240.250.260.27MSEMLP HeadHash Grid (b) DINO Feature Mean Squared Error0 2000 4000 6000 8000 10000Step0.150.160.170.18MSEMLP HeadHash GridFigure A9: Feature Error During Feature Distillation. The mean squared error on a held-outset of feature maps for (a) CLIP and (b) DINO using the MLP head and hash grid architecturesdescribed in Section A.3.1. The hash grid architecture consistently achieves a lower error.A.3.1 Ablation on Feature Field ArchitectureWe implement our feature field as a hierarchical hash grid [8] that takes a 3D position xas input, andoutputs the feature vector. We compare this against a MLP head that takes the intermediate featuresoutput by NeRF as input, which is similar to the architectures in [5, 6]. We first train a NeRF onimages collected by the robot of a tabletop scene, then distill features for 10000 steps over 3seeds.Figure A9 shows that the hash grid architecture achieves a lower mean squared error (MSE), becauseit is able to capture higher-frequency signals and finer details. While the difference in the MSE seemsmarginal between these two architectures, the hash grid-based architecture qualitatively results insignificantly more well-defined semantic boundaries between objects as shown in Figure A10.A.4 Experimental SetupWe provide details about our experimental setup used across our experiments for learning to graspfrom demonstrations and language-guided object manipulation.Physical Setup. We collect RGB images with a RealSense D415 camera (the depth sensor is notused) mounted on a selfie stick. The selfie stick is used to increase the coverage of the workspace, asa wrist-mounted camera can only capture a small area of the workspace due to kinematic limitations.We program a Franka Panda arm to pick up the selfie stick from a magnetic mount, scan 50×1280×720RGB images of the scene following a fixed trajectory of three helical passes at different heights,and place the selfie stick back on the mount.15(a) MLP Head (b) Hash GridFigure A10: Comparing Feature Field Architectures. We show the similarity heatmaps for thelanguage query “metal mug” on the scene shown in Fig.7. (a) When using the MLP head, regionsunrelated to the metal mug exhibit high similarity, as shown by the red bleed onto objects includingthe red screwdriver and tip of the whiteboard marker. 
(b) In contrast, the hash grid architectureresults in significantly less bleed and more well-defined semantic boundaries between objects.To calibrate the camera poses, we run COLMAP [54, 55] on a scan of a dedicated calibration scenewith calibration markers placed by the robot at known poses. We use these objects to solve for thetransformation from the COLMAP coordinate system to the world coordinate system. These cameraposes are reused on subsequent scans. Given that the true camera poses vary due to small differencesin how the robot grasps the selfie-stick, we optimize them as part of NeRF modeling to minimizeany errors and improve reconstruction quality [10, 56, 57].Labeling Demonstrations. We label demonstrations in virtual reality (VR) using a web-based 3Dviewer based on Three.js that we developed which supports the real-time rendering of NeRFs, pointclouds, and meshes. Given a NeRF of the demonstration scene, we sample a point cloud and exportit into the viewer. We open the viewer in a Meta Quest 2 headset to visualize the scene, and move agripper to the desired 6-DOF pose using the hand controllers (see Fig.2a).Feature Type ResolutionDINO ViT 98×55CLIP ViT 42×24CLIP ResNet 24×14Table 3: Feature Map Resolu-tions. Resolutions of the fea-tures output by the vision modelsgiven a 1280×720RGB image.NeRF and Feature Field Modeling. We downscale the im-ages to 640×480to speed up modeling of the RGB NeRF, anduse the original 1280×720images as input to the vision modelfor dense feature extraction. We optimize the NeRF and featurefield sequentially for 2000 steps each, which takes at most 90s(average is 80s) on a NVIDIA RTX 3090, including the time toload the vision model into memory and extract features.In our experiments, we distill the features at their original fea-ture map resolution which is significantly smaller than the RGBimages (see Table 3). We achieve this by transforming the cam-era intrinsics to match the feature map resolutions, and sampling rays based on this updated cameramodel. The specific models we used were dino vits8 for DINO ViT, ViT-L/14@336px for CLIPViT, and RN50x64 for CLIP ResNet.A.5 Ablation on Number of Training ViewsAlthough our robot scans 50images per scene in our experiments, we demonstrate that it is possi-ble to use a significantly smaller number of views for NeRF and feature field modeling without asignificant loss in quality. To investigate this, we ablate the number of training images by evenlysubsampling from the 50scanned images and modeling a NeRF and feature field.Figure A11 qualitatively compares the RGB, depth, and segmentation heatmaps. We observe anincrease in floaters as we reduce the number of training images, with approximately 20imagesbeing the lower bound before a drastic decline in quality.1649 Training Images 30 Training Images 20 Training Images 18 Training ImagesFigure A11: Ablating Number of Training Views. We qualitatively compare feature fields trainedon different numbers of views. (Top Row) Segmentation heatmap for “Baymax” from a CLIP featurefield overlaid on the RGB image from NeRF. (Bottom Row) Depth map rendered from NeRF.A.6 Learning to Grasp from DemonstrationsSampling Query Points. We use Nq= 100 query points across all the tasks shown in Fig.4.As other works have observed [17, 58], the downstream performance can vary significantly acrossdifferent samples of the query points. 
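Because performance is sensitive to this draw, several candidate sets are evaluated, as described next; each set is simply a Gaussian sample in the gripper frame. A sketch with illustrative, hand-tuned mean and covariance values (not those used in the paper) is:

import numpy as np

def sample_query_points(mean, cov, num_points=100, seed=0):
    """Sample N_q query points in the gripper's canonical frame from a 3D Gaussian."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=num_points)   # (num_points, 3)

# Example: points concentrated a few centimeters in front of the gripper (hypothetical values).
query_points = sample_query_points(mean=[0.0, 0.0, 0.05], cov=np.diag([0.02, 0.02, 0.03]) ** 2)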
To address this issue, we sample five sets of query points overdifferent seeds for each task, and run the grasp optimization procedure across a set of test scenesused for method development. We select the query points that achieved the highest success rate onthe test scenes. The covariance of the Gaussian is manually tuned to fit the task.Grasp Pose Optimization. We first discuss how we initialize the grasp poses. We consider atabletop workspace of size 0.7×0.8×0.35meters, and sample a dense voxel grid over the workspacewith voxels of size δ= 0.0075 m (we use 0.005m for the cup on racks experiment), where each voxelv= (x, y, z )represents the translation for a grasp pose.Next, we compute the alpha value α(v)for each voxel using the NeRF density network σ, and filterout voxels with α(v)< ε free= 0.1. This removes approximately 98% of voxels by ignoring freespace. The cosine similarity of the voxel features fα(v)is thresholded with the task embeddingZMto further filter out voxels. This threshold is adjusted depending on the task and type of featuredistilled, and typically cuts down 80% of the remaining voxels. Finally, we uniformly sample Nr=8rotations for each voxel to get the initial grasp proposals T.We minimize Equation 3 to find the grasp pose that best matches the demonstrations using the Adamoptimizer [18] for 50steps with a learning rate of 5e-3. This entire procedure takes 15s on average,but could easily be sped up.Grasp Execution. We reject grasp poses which cause collisions by checking the overlap betweena voxelized mesh of the Panda gripper and NeRF geometry by querying the density field σ. Weinput the ranked list of grasp poses into an inverse kinematics solver and BiRRT motion planner inPyBullet [19, 20], and execute the highest-ranked grasp with a feasible motion plan.A.6.1 BaselinesWe provide implementation details of the four baselines used in our few-shot imitation learningexperiments. The first three baselines use NeRF-based outputs as features for the query point-basedpose optimization:1. Density: we use the alpha α∈(0,1)values for NeRF density to ensure the values are scaledconsistently through different scenes, as the density values output by the density field σareunbounded.17(a) Test Scene with a Screwdriver (b) Affordance Prediction (c) Predicted GraspFigure A12: MIRA Failure Case. (a) Test scene with a screwdriver and other distractors for thescrewdriver task (Fig.4b). (b) The orthographic render of the view selected by MIRA, we showthe RGB (top) and depth (bottom) renders. The pixel circled in cyan indicates the action with thehighest pixel-wise affordance across all views. (c) The predicted 6-DOF grasp incorrectly targetsthe silicone brush, as it shares resemblance to a screwdriver from a top-down perspective.2. Intermediate Features: we use the features output by the intermediate density embedding MLPin Nerfacto [10], which have a dimensionality of 15.3. RGB: we use [r, g, b, α ]as the feature for this baseline. αis used to ensure that this baselinepays attention to both the color and geometry, as we found that using RGB only with the alpha-weighted feature field fα(Eq.2) collapsed RGB values to (0,0,0)for free space, which corre-sponds to the color black.MIRA Baseline. The fourth baseline we consider is Mental Imagery for Robotic Affordances(MIRA) [21], a NeRF-based framework for 6-DOF pick-and-place from demonstrations that rendersorthographic views for pixel-wise affordance prediction. 
Orthographic rendering ensures that anobject has the same size regardless of its distance to the camera, and is used to complement thetranslation equivariance of the Fully Convolutional Network (FCN) for predicting affordances.MIRA formulates each pixel in a rendered orthographic view as a 6-DOF action T= (R,t), withthe orientation of the view defining the rotation Rand the estimated depth from NeRF definingthe translation t. The FCN is trained to predict the pixels in the rendered views corresponding tothe demonstrated 6-DOF actions, and reject pixels sampled from a set of negative views. Duringinference, MIRA renders several orthographic views of the scene and selects the pixel that has themaximum affordance across all views.In our experiments, we train a separate FCN for 20000 steps for each task in Fig.4 specified by twodemonstrations, and sample negative pixels from datasets consisting solely of distractor objects. Weuse data augmentation following Yen-Chen et al. [21]’s provided implementation and apply randomSE(2) transforms to the training views. Given a test scene, we scan 50RGB images as describedin Appendix A.4, render 360orthographic viewpoints randomly sampled over an upper hemispherelooking towards the center of the workspace, and infer a 6-DOF action.MIRA was designed for suction cup grippers and does not predict end-effector rotations. We at-tempted to learn this rotation, but found that the policy failed to generalize. To address this issueand give MIRA the best chance of success, we manually select the best end-effector rotation toachieve the task. We additionally find that MIRA often selects floater artifacts from NeRF, andmanually filter these predictions out along with other unreasonable grasps (e.g., grasping the tableitself).Given that MIRA is trained from scratch given just two demonstrations, we find that it struggles togeneralize and is easily confused by floaters and distractors despite data augmentations and negativesamples. MIRA additionally reasons over 2.5D by using the rendered RGB and depth from NeRFas inputs to the FCN, while our query point-based formulation reasons explicitly over 3D. Becauseof this, we observe that MIRA can fail when there are occlusions or distractor objects that look18(a) Novel Scene (b) DINO ViT Heatmap (c) CLIP ViT HeatmapFigure A13: Comparing DINO and CLIP feature fields. We depict the cosine similarity for thetask of grasping a mug by the handle. Two demos are provided on a red and a white mug (cf. Fig.3b).(b) DINO overfits to the red color of the apple, while (c) CLIP captures higher-level semantics, andidentifies the metal mug.like the demonstration objects from certain viewpoints. For example, one of the demonstrations forthe screwdriver task was a top-down grasp on a screwdriver standing vertically in a rack (Fig.4b).Figure A12 depicts a test scene for the screwdriver grasping task where MIRA incorrectly selectsthe silicone brush as it looks similar to a screwdriver from a top-down 2.5D view. DINO and CLIPResNet feature fields successfully grasp the screwdriver in this scene, highlighting the benefits ofusing pretrained features and reasoning explicitly over 3D geometry.A.6.2 DINO Failure CasesOur experiments show that DINO struggles with distractor objects which have high feature similarityto the demonstrations, despite not representing the objects and their parts we care about (Fig.A13b).We observe that DINO has the tendency to overfit to color. 
On the other hand, CLIP struggles farless with distractors due to its stronger semantic understanding (Fig.A13c).A.7 Language-Guided ManipulationFor the language-guided experiments, we distilled CLIP ViT features from ViT-L/14@336px . Wereuse the 10demonstrations from the learning to grasp from demonstrations section and their asso-ciated Nq= 100 query points, and sample Nr= 8 rotations for each voxel when initializing thegrasp proposals. We minimize the language-guided cost function in Equation 4 for 200steps withAdam using a learning rate of 2×10−3.Retrieving Demonstrations via Text In practice, we compare the user’s embedded text queryqwith the task embedding ZMfor each task Mwhich is specified by two demonstrations. Werandomly sample object names as our negatives ( L−).Inference Time. The inference time to optimize for a grasp pose given a language query is 6.9seconds on average. We did not make any substantial attempts to speed this up, but note that reducingthe number of optimization steps (we use 200steps but observe convergence usually within 50-100steps), pruning more aggressively, and improving the implementation will significantly reduceinference time.A.8 Additional Related WorkWe provide a more comprehensive overview of the related work discussed in Section 5.Open-Ended Generalization via Language. A number of prior work use natural language fortask-specification and long-horizon planning in applications such as tabletop and mobile manipula-tion [23, 26], navigation [27, 28], and more generally, sequential decision-making in games [59, 60].A recent line of work seeks to replicate the success of vision and language models in robotics, byjointly pre-training large foundation models on behavior data [29, 30]. One can refer to [25] for a19more comprehensive coverage. The goal of our work is different. We seek to find a way to incorpo-rate geometric information with anypre-trained features. F3RM is a model-agnostic approach forlifting imagery and language prior knowledge into 3D. In this regard, we are more closely connectedto CLIPort [23], which focuses on top-down grasping, and PerAct [24], which trains a voxel gridconditioned behavior transformer from scratch given waypoints in motion trajectories.Semantic Understanding and Dense 2D Visual Descriptors. Learning visual descriptors fordense correspondence estimation is a fundamental problem in computer vision. Among the earliestworks, Choy et al. [31] and Schmidt et al. [32] used dynamic 3D reconstruction from RGB-D videosto provide labels of association between pixels from different video frames. Dense Object Netstackle this problem in the context of robotics [13], and eliminate the need for dynamic reconstructionusing multi-view RGB-D images of static scenes [33]. NeRF-Supervision [14] leverages NeRFs for3D reconstruction, extending descriptor learning to thin structures and reflective materials that posechallenges for depth sensors. Unlike these prior works, recent work in vision foundation modelsshows that self-supervised vision transformers offer excellent dense correspondence [4]. Whenscaled to a large, curated dataset, such models offer phenomenal few-shot performance on variousdense prediction tasks [34].Geometric Aware Representations for Robotics. Geometric understanding has been a long-standing problem in computer vision [61] and is an essential and mission-critical part of mappingand navigation [35, 36, 37], grasping [38, 39, 17, 40], and legged locomotion [41]. 
These workeither require direct supervision from 3D data such as Lidar point clouds, or try to learn represen-tations from posed 2D images or videos [14]. More recently, the robot grasping community hasexperimented with neural scene representations to take advantage of their ability to handle reflectiveor transparent objects and fine geometry [21, 42, 43]. Our work incorporates pre-trained vision andvision-language foundation models to augment geometry with semantics.3D Feature Fields in Vision and Robotics A number of recent work integrate 2D foundationmodels with 3D neural fields in contexts other than robotic manipulation. For example, Open-Scene [44] and CLIP-Fields [45] distill 2D features into a neural field by sampling points fromexisting 3D point clouds or meshes. USA-Net [46] and VLMap [47] build 3D feature maps fromRGB-D images and VLMs for navigation. Our approach uses RGB images and integrates geomet-ric modeling via NeRF with feature fusion in a single pipeline. ConceptFusion [48] uses surfelsto represent the features, which requires 50−100sGB per scene. Distilled feature fields offer asignificant reduction in space. Each of our scenes requires between 60−120MBs and could befurther compressed using lower-rank approximations [62]. Our work shares many similarities withLERF [49]. However, unlike the image-level features used in LERF, which are derived from a hier-archy of image crops, we extract dense, patch-level features from CLIP. Additionally, we take a stepfurther by exploring the utilization of these feature fields for robotic manipulation.20 |
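As a concrete companion to the retrieval step in Appendix A.7, the sketch below scores each demonstrated task by comparing an embedded text query against the task embedding Z_M, with randomly sampled object names as the negatives L-, and returns the best-scoring task. It is an illustrative reconstruction rather than the released code: the stub text encoder, the softmax-over-negatives scoring rule, and the toy task embeddings are all assumptions.

```python
import numpy as np

def encode_text(text: str, dim: int = 512) -> np.ndarray:
    """Stand-in for a CLIP text encoder: deterministic unit-norm embedding."""
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve_task(query: str, task_embeddings: dict, negatives: list) -> str:
    """Pick the task whose embedding Z_M best matches the embedded query q.

    Each task is scored by the softmax probability of the query-vs-Z_M
    similarity against similarities to embedded negative object names (L-).
    """
    q = encode_text(query)
    neg = np.stack([encode_text(n) for n in negatives])        # (K, D)
    scores = {}
    for name, z_m in task_embeddings.items():
        logits = np.concatenate(([q @ z_m], neg @ q))          # positive first
        p = np.exp(logits - logits.max())
        scores[name] = p[0] / p.sum()
    return max(scores, key=scores.get)

# Toy usage; in practice Z_M is built from the demonstrations' distilled
# features rather than from text, and q comes from the real CLIP text encoder.
tasks = {name: encode_text(name) for name in ["grasp mug lip", "pick screwdriver"]}
print(retrieve_task("hand me the screwdriver", tasks,
                    negatives=["bowl", "apple", "measuring beaker"]))
```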
xgrZkRHliXR | Learning to Design and Use Toolsfor Robotic ManipulationZiang Liu∗, Stephen Tian∗, Michelle Guo, C. Karen Liu, Jiajun WuDepartment of Computer ScienceStanford University, United Statesziangliu,tians@stanford.eduAbstract: When limited by their own morphologies, humans and some speciesof animals have the remarkable ability to use objects from the environment to-ward accomplishing otherwise impossible tasks. Robots might similarly unlocka range of additional capabilities through tool use. Recent techniques for jointlyoptimizing morphology and control via deep learning are effective at designing lo-comotion agents. But while outputting a single morphology makes sense for loco-motion, manipulation involves a variety of strategies depending on the task goalsat hand. A manipulation agent must be capable of rapidly prototyping specializedtools for different goals. Therefore, we propose learning a designer policy , ratherthan a single design. A designer policy is conditioned on task information andoutputs a tool design that helps solve the task. A design-conditioned controllerpolicy can then perform manipulation using these tools. In this work, we takea step towards this goal by introducing a reinforcement learning framework forjointly learning these policies. Through simulated manipulation tasks, we showthat this framework is more sample efficient than prior methods in multi-goal ormulti-variant settings, can perform zero-shot interpolation or fine-tuning to tacklepreviously unseen goals, and allows tradeoffs between the complexity of designand control policies under practical constraints. Finally, we deploy our learnedpolicies onto a real robot. Please see our supplementary video and website athttps://robotic-tool-design.github.io/ for visualizations.Keywords: tool use, manipulation, design1 IntroductionHumans and some species of animals are able to make use of tools to solve manipulation taskswhen they are constrained by their own morphologies. Chimpanzees have been observed usingtools to access food and hold water [1], and cockatoos are able to create stick-like tools by cuttingshapes from wood [2] with their beaks. To flexibly and resourcefully accomplish a range of taskscomparable to humans, embodied agents should also be able to leverage tools.But critically, while any object in a human or robot’s environment is a potential tool, not everyobject is directly a useful aid for the task goal at hand. How can a robotic system also acquire theextraordinary ability of animals to create an appropriate tool to help solve a task after reasoningabout a scene’s physics and its own goals? In this work, we investigate not only how agents canperform control using tools, but also how they can learn to design appropriate tools when presentedwith a particular task goal, such as a target position or object location.Prior works have studied joint learning of agent morphologies and control policies for locomotiontasks [3, 4, 5, 6, 7]. However, these approaches optimize designs for a single, predefined genericgoal, such as maintaining balance or forward speed. In this work, we take a step towards agents thatcan learn to rapidly prototype specialized tools for different goals or initial configurations.∗indicates equal contribution.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: A robot may need to use different toolsto fetch an out-of-reach book (blue) or push it intothe bookshelf (pink). 
It should rapidly prototypethe tool it needs.We propose tackling this challenge by learning de-signer andcontroller policies, that are conditionedon task information, solely from task rewards viareinforcement learning (RL). We find that whentrained with a multi-stage Markov decision process(MDP) formulation, these policies can be efficientlylearned together in a high-dimensional combinedspace. We train agents on multiple instantiations ofeach task so that they can learn to produce and ma-nipulate designs best suited to each situation.We argue that there are three important properties ofan embodied agent that designs and uses tools in re-alistic settings. Firstly, it must be able to learn designand control policies without explicit guidance, using signals specified only on task progress. Fur-thermore, it should form specialized tools based on the task goal at hand, as motivated by Figure 1.Finally, it should adjust to real-world constraints, rather than creating infeasible designs.Our main contribution is a learning framework for embodied agents to design and use tools basedon the task at hand . We demonstrate that our approach can jointly learn these policies in a sample-efficient manner from only downstream task rewards for a variety of simulated manipulation tasks,outperforming existing stochastic optimization approaches. We empirically analyze the generaliza-tion and few-shot finetuning capabilities of the learned policies. By introducing a tradeoff parameterbetween the complexity of design and control components, our approach can adapt to fit constraintssuch as material availability or energy costs. Finally, we demonstrate the real-world effectiveness ofthe learned policies by deploying them on a real Franka Panda robot.2 Related WorkComputational approaches to agent and tool design. Many works have studied the problem ofoptimizing the design of robotic agents, end-effectors, and tools via model-based optimization [8,9], generative modeling [10, 11], evolutionary strategies [3, 5], stochastic optimization [12], orreinforcement learning [13]. Li et al. [14] use differentiable simulation to find parameters of atool that are robust to task variations. These methods provide feedback to the design procedure byexecuting pre-defined trajectories or policies, or performing motion planning. In contrast, we aim tojointly learn control policies along with designing tool structures.In settings where the desired design is known but must be assembled from subcomponents, geome-try [15] and reinforcement learning [16] have been used to compose objects into tools. In this work,we address the fabrication stage of the pipeline using rapid prototyping tools (e.g. 3D printing).Learning robotic tool use. Several approaches have been proposed for empowering robots to learnto use tools. Learning affordances of objects, or how they can be used, is one common paradigm[17, 18, 19, 20]. Noguchi et al. [21] integrate tool and gripper action spaces in a Transporter-styleframework. Learned or simulated dynamics models [22, 23, 24, 25] have also been used for model-based optimization of tool-aided control. These methods assume that a helpful tool is already presentin the scene, whereas we focus on optimizing tool design in conjunction with learning manipulation,which is a more likely scenario for a generalist robot operating for example in a household.Joint optimization of morphology and control. 
One approach for jointly solving tool design andmanipulation problems is formulating and solving nonlinear programs, which have been shown tobe especially effective at longer horizon sequential manipulation tasks [26, 27]. In this work, weaim to apply our framework to arbitrary environments, and so we select a purely learning-basedapproach at the cost of increasing the complexity of the search space.Reinforcement learning and Bayesian optimization (BO) [28] based approaches have also been ap-plied to jointly learn morphology and control. These include policy gradient methods with eitherseparate [29] or weight-sharing [30] design and control policies or methods that consider the agent2design as a computational graph [31]. Evolutionary or BO algorithms have been combined withcontrol policies learned via RL [32] to solve locomotion and manipulation tasks. Luck et al. [4] usean actor-critic RL formulation with a graph neural network (GNN) value function for improved sam-ple efficiency. Pathak et al. [33] learn to design modular agents by adding morphology-modifyingactions to an MDP. Yuan et al. [7] provide a generalized formulation with a multi-stage MDP andGNN policy and value networks. These methods have demonstrated promising performance on lo-comotion tasks. In this work, we focus on developing tools for manipulation, where the challengeis learning designer and controller policies that can create and operate different tools depending onthe task variation at hand , and it is less feasible to use actuated joints as design components.3 Problem SettingOur framework tackles learning tool design and use for agents to solve manipulation problems,without any supervision except for task progress. We represent the agent’s environment as a two-phase MDP consisting of the “design phase” and “control phase”. We use environment interactionsfrom both phases to jointly train a designer policy and controller policy.Designer Policy Initial state , goal observationTool designControl Policy Control actionObservation , Reward EnvironmentFigure 2: Solving a task using learned designer andcontroller policies. During the design phase, the de-signer policy observes the task at hand and outputs theparameters for a tool. In the control phase, the con-troller policy outputs motor commands given the toolstructure, task specification, and observation.At the start of each episode, the environmentbegins in the design phase , visualized at the topof Figure 2. During the design phase, the actionad∈ A Dspecifies the parameters of the toolthat will be used for the rest of the episode. Theenvironment state s0∈ Staskconsists of a vec-tor of task observations: the positions and ve-locities of objects in the scene and the Cartesianend-effector position of the robot if present.After a single transition, the MDP switches tothecontrol phase , illustrated in the lower halfof Figure 2. During the control phase, the ac-tions ac∈ ACrepresent inputs to a position orvelocity controller actuating the base of the pre-viously designed tool. The control phase stateis the concatenation of the task observation st∈ Stask and the previously taken design action ad.That is, the control state space SC=Stask× AD. The agent receives rewards rtat each timestep tbased on task progress (e.g., the distance of an object being manipulated to the target position). 
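To make the two-phase structure concrete, here is a minimal gym-style skeleton of such an environment. It is an illustrative sketch rather than the paper's code: the class name, observation layout, simple dynamics, and reward stub are assumptions, but the phase logic (a single design transition whose action is appended to every subsequent control state) follows the description above.

```python
import numpy as np

class TwoPhaseToolEnv:
    """Sketch of a design-then-control episode.

    Step 1 (design phase): the action a_d in A_D sets the tool parameters.
    Steps 2..T (control phase): actions a_c in A_C actuate the tool base;
    the control state is the task observation concatenated with a_d.
    """

    def __init__(self, design_dim=5, control_dim=2, horizon=150):
        self.design_dim, self.control_dim, self.horizon = design_dim, control_dim, horizon

    def reset(self, goal):
        self.goal = np.asarray(goal, dtype=float)
        self.t, self.phase, self.design = 0, "design", None
        self.task_obs = np.zeros(4)          # e.g., object position/velocity stub
        return self.task_obs.copy()

    def step(self, action):
        if self.phase == "design":           # single design transition
            self.design = np.asarray(action, dtype=float)
            self.phase = "control"
            return np.concatenate([self.task_obs, self.design]), 0.0, False, {}
        # Control phase: apply a_c, then score task progress (stubbed dynamics).
        self.t += 1
        self.task_obs[:2] += 0.01 * np.asarray(action, dtype=float)[:2]
        reward = -np.linalg.norm(self.task_obs[:2] - self.goal)
        done = self.t >= self.horizon or -reward < 0.05
        return np.concatenate([self.task_obs, self.design]), reward, done, {}

env = TwoPhaseToolEnv()
obs = env.reset(goal=[0.3, 0.1])
obs, r, done, _ = env.step(np.array([0.1, 0.1, 0.1, 0.3, -0.3]))   # design action
obs, r, done, _ = env.step(np.array([1.0, 0.5]))                    # control action
print(obs.shape, round(float(r), 3), done)
```

Because the design action is carried inside the control state, a single design-conditioned controller can be trained directly on rollouts collected through this interface.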
Thecontrol phase continues until the task is solved or a time limit is reached, and the episode then ends.To learn design and control policies for multiple goals, we condition our policies on a suppliedparameter gfrom a goal or task variation space G. In this paper, we choose goals that representthe final position of an object to be manipulated or the number of objects that must be trans-ported. The objective is then to find the optimal goal-conditioned designer and controller policiesπ⋆D(a|s, g),π⋆C(a|s, g)that maximize the expected discounted return of a goal-dependent rewardfunction R(s, a, g ).4 Instantiating Our FrameworkNext, we concretely implement our framework toward solving a series of manipulation tasks. Specif-ically, we select a tool design space ,policy learning procedure , and auxiliary reward function .Tool design space. The design space significantly impacts the difficulty of the joint design andcontrol optimization problem. When the set of possible designs is large but many of them areunhelpful for anytask, the reward signal for optimization is sparse. Thus, we would like to selecta design parameterization that is low-dimensional but can also enable many manipulation tasks.Furthermore, we prefer designs that are easy to deploy in the real world.Toward these goals, we consider tools composed of rigid links. While simple, we find that this pa-rameter space includes tools that are sufficient to help solve a variety of manipulation tasks. They3can also be easily deployed in the real world on soft robots [12] or through rapid fabrication tech-niques like 3D printing. However, we note that our framework is not limited to this design space.For example, the parameterization used in our 2D environments consists of three links attached end-to-end as shown in Figure 2, where a tool is represented by a vector [l1, l2, l3, θ1, θ2]∈R5=AD,where lrepresents each link length and θrepresents the relative angle between the links.Policy learning. Similarly to Yuan et al. [7], we interactively collect experience in the environmentusing the design and control policies, where each trajectory spans the design and control phase.We then train the policies jointly using proximal policy optimization (PPO) [34], a popular policygradient method. We adopt the graph neural network (GNN) policy and value function architecturefrom Yuan et al. [7]. When training in goal-conditioned environments, we supply the policies withrandomized goals sampled from the environment for each interaction episode.Auxiliary reward. An embodied agent that creates and uses tools in the real world must alsoconsider resource constraints. Many prior co-optimization procedures assume that actuated joints orbody links can be arbitrarily added to the agent morphology. However, as an example, when an agentsolves manipulation tasks in a household environment, it may not have access to additional motorsor building materials. 
On the other hand, when possible, constructing a larger tool may reduce theamount of power expended for motor control, especially if a task must be completed many times.We enable our framework to accommodate preferences in this trade-off between design material costand control energy consumption, proxied by end-effector velocity, using a parameter αthat adjustsan auxiliary reward that is added to the task reward at each environment step:rtradeoff =K1−α·duseddmax+(1−α)·cusedcmax, (1)where Kis a scaling hyperparameter, α∈[0,1]controls the emphasis on either the control or designcomponent, dusedanddmaxrepresent the utilized and maximum possible combined length of the toolcomponents in the design respectively, and cusedandcmaxrepresent the control velocity at the currentstep and the maximum single-step control velocity allowed by the environment. Intuitively the agentfavors using less material for tool construction when αis large, and less energy for the control policywhen αis small. Except in Section 5.3, we use K= 0to isolate this reward’s effects.5 Experiments(a)Push (b)Catch balls (c)Scoop(d)Fetch cube (e)Lift cup (f)Scoop (3D)Figure 3: Simulated manipulation environments.For our experiments, we introduce three 2Dmanipulation environments in the Box2Dsimulator [35] and three 3D environments inPyBullet [36]. These tasks showcase the ad-vantages of using different tools when theredoes not exist a single tool that can solve allinstances. The six tasks are shown in Fig-ure 3. For each task, we initialize the de-signed tool by matching a fixed point on thetool to a fixed starting position regardless ofthe goal. During the control phase, we sim-ulate a scenario in which a robot has graspedthe tool and manipulates it, via end-effectorvelocity control in 2D or position control in3D. All 2D tasks use the 3-link chain parameterization described in Section 4. A short descriptionof each task is as follows, with additional details in the Appendix:•Push (2D) : Push a round puck using the tool such that it stops at the specified 2D goal location.•Catch balls (2D) : Use the tool to catch three balls that fall from the sky. The agent’s goal isto catch all three balls, which start from random locations on the x-yplane.•Scoop (2D) : Use the tool to scoop balls out of a reservoir containing 40total balls. Here wespecify goals of scooping n∈ {1,2, ...,7}balls.4Figure 4: Learning curves for our framework, prior methods, and baselines. Across all tasks, our frameworkachieves improved performance and sample efficiency. Shaded areas indicate standard error across 6randomseeds on all methods, except the scoop (3D) task where we use 3seeds due to computational constraints.•Fetch cube (3D) : Use the tool to retrieve an object randomly positioned beneath an overhang.This task is challenging because the end effector is restricted to a 0.8m×0.2m region of the x-yplane to avoid collision with the overhang. The tool is a three-link chain where each link is a boxparameterized by its width, length, height, and relative angle to its parent link.•Lift cup (3D) : Use the tool to lift a cup of randomized geometry from rest to a certain height.This task requires careful tool design to match cup geometry. 
The tool is a four-link fork with twoprongs symmetrically parameterized by their separation, tilt angle, width, length, and height.•Scoop (3D) : An analog of the 2D scoop task (with the same goal space), but the tool in 3D isa six-link scoop composed of a rectangular base plate parameterized by its length and width, andfour rectangular side plates attached to each side of the base plate, parameterized by their heightand relative angle to the bottom plate. A fixed-dimension handle is attached to one side plate.In our experiments, we analyze whether the instantiation of our framework on these manipulationtasks has the following four desirable properties:• Can our framework jointly learn designer and controller policies in a sample-efficient manner,using just rewards based on task progress?• Do learned designer and controller policies generalize or enable fine-tuning for unseen goals?• Can the adjustable parameter αenable agents to trade off design and control complexity?• Can designer and controller policies learned by our framework be deployed on a real robot?5.1 Evaluating sample efficiencySample efficiency is critical for a performant joint tool design and control learning pipeline, as manysampled designs will be unhelpful for anygoal or initialization. As prior joint optimization worksdo not handle situations where diverse designs may be produced depending on the particular taskvariation, we compare to the following prior methods and baselines (details in Appendix B.2):•Bi-level optimization (CMA-RL): This follows the common bi-level paradigm for joint opti-mization [4]. We use CMA-ES [37] to perform outer-loop stochastic design optimization andlearn design-conditioned policies with PPO in the inner loop, as the reward signal for design.•Hardware-as-Policy Minimal [31]/Ha [30]: The variant of HWasP without differentiablephysics assumptions (HWasP-minimal) and the method from Ha [30] both perform policy-gradientoptimization of a single set of design parameters together with the control policy jointly using RL.5SuccessCutoutFailureTrain(a) Initialization ranges and zero-shot per-formance when cutting out 60% of the areaof the entire possible training region.(b) Returns for policiestrained with varying relativecutout region area.(c) Fine-tuning performancecompared to learning fromscratch across 4target goals.Figure 6: We find that our policies can solve instances of the fetch cube task unseen at training time eitherzero-shot or by rapid finetuning. In (a), we plot the goal regions along with zero-shot policy performance. Areaswithin the dotted yellow borders and outside the teal region are unseen during training. The region within theteal border (but outside cutout regions) is the training region. Training curves are averaged over 3seeds; shadedregions show standard error.•Single-trajectory CMA-ES baseline: We optimize a set of design and control actions for anepisode independent of the goal or starting task state, demonstrating the required policy reactivity.•Ours (shared arch.): An ablation of our framework that uses a single policy network with designand control heads, demonstrating the importance of separate designer and controller architectures.Figure 5: Tool designs outputted by a singlelearned designer policy for the push task as thegoal position, in green, varies. The range of out-putted designs enable the agent to push the ball inthe desired direction.In Figure 4, we compare the learning curves of ourmethod and demonstrate results for all six tasks. 
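For orientation, the bi-level CMA-RL baseline listed above wraps an outer CMA-ES search over the design vector around an inner loop that trains a design-conditioned control policy and reports its reward as the design's fitness. A minimal sketch of that outer loop with the pycma package follows; the inner evaluate_design function is a stand-in for PPO training, and the population size and dummy fitness are assumptions.

```python
import numpy as np
import cma  # pip install cma

def evaluate_design(design: np.ndarray) -> float:
    """Stand-in for the inner loop: train/run a design-conditioned control
    policy with this fixed design and return its average task reward (here a
    dummy score that prefers one particular link-length/angle pattern)."""
    target = np.array([2.0, 1.0, 1.0, 0.5, -0.5])
    return -float(np.sum((design - target) ** 2))

# Outer loop: CMA-ES searches the 5-dim tool design space [l1, l2, l3, th1, th2].
es = cma.CMAEvolutionStrategy(x0=5 * [0.0], sigma0=1.0, inopts={"popsize": 10})
for generation in range(20):
    candidates = es.ask()                                  # sample candidate designs
    losses = [-evaluate_design(np.asarray(c)) for c in candidates]
    es.tell(candidates, losses)                            # CMA-ES minimizes the loss
print("best design found:", np.round(es.result.xbest, 2))
```

As in the comparison above, such a search yields one design shared across all task variations, which is the main handicap relative to a goal-conditioned designer policy.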
Ourmethod strongly outperforms the prior methods andbaselines, achieving superior final performance infewer samples. It does so by producing specializedtools to solve each task instance, while the othermethods optimize for a single design across all taskvariations. The ablation shows the importance oflearning separate designer and controller policies.We present qualitative examples of tool designs out-putted for different goals on the push task in Fig-ure 5. The designer policy outputs a range of toolsdepending on the goal location.5.2 Generalization to unseen task variationsIn simulation, agents can experience millions of trials for a range of task variations. However, whendeployed to the real world, designer and controller policies cannot be pre-trained on all possiblefuture manipulation scenarios. In this section, we test the ability of our policies to generalize totask variations unseen during training. For these experiments, we focus on the fetch cube taskbecause it has an initial pose space that can be manipulated in a semantically meaningful way. Wetrain policies using our framework on a subset of initial poses from the entire initial pose spaceby removing a region of the space, which we call the “cutout” region. (see Figure 6a). Then,we evaluate the generalization performance of learned policies on the initial poses from the cutoutregion and outside the training region in two scenarios.Zero-shot performance. In the first scenario, we test the ability of the designer and controllerpolicy to tackle a previously unseen initial pose directly. Using a policy trained on the initial posespace with a cutout region removed, we evaluate the zero-shot performance on unseen initial poses.In Figure 6a, we visualize the zero-shot performance of our design and control policies on initialposes across the entire environment plane, finding that our policies are able to solve even task varia-tions outside the training region boundaries. We also analyze how decreasing the number of possibletraining poses affects generalization performance. In Figure 6b, we show the returns over the train-ing region, cutout region, and the regions outside of the training region, as the size of the cutoutregion for training poses varies.6Figure 8: Real world rollouts for the fetch cube (top) and lift cup (bottom) tasks. For fetch cube , weshow the task success threshold in green and the volume covered by the overhang in yellow. The design policygenerates specialized tools, and the control policy adjusts its strategy to use the tool to complete each task.When the cutout region is very small, the performance of our learned policies on seen and unseenposes is similar. As the area of the cutout region increases, the performance on unseen goals degradesgracefully and can still solve a significant portion of unseen tasks.(a) Control/Design ratiowith different α./uni03B1=0.3/uni03B1=0.7/uni03B1=1.0/uni03B1=0.0α = 1.0α = 0.7α = 0.3α = 0.0(b) Tools produced at dif-ferent αvalues.Figure 7: Examples of tools generated by varying thetradeoff parameter α. Asαincreases, the created toolshave shorter links at the sides to decrease material us-age. With lower α, large tools reduce the control pol-icy’s required movement.Fine-tuning performance. Sometimes, newtask variations cannot be achieved zero-shot bydesigner and controller policies. In this section,we test whether our design policies and con-troller policies can still serve as good instantia-tions for achieving these variations. 
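For reference, the cutout protocol behind these zero-shot and fine-tuning experiments can be made concrete with the fetch cube rectangles reported in Appendix B.4: a rectangular range of initial cube positions is used for training, two rectangular patches inside it are held out, and everything beyond the range counts as outside. The helper below is an illustrative sketch, and the specific patches correspond to one particular cutout fraction.

```python
def in_rect(x, y, x_range, y_range):
    return x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]

# Training range and the two cutout patches for fetch cube (Appendix B.4).
TRAIN = ((-0.395, 0.395), (0.4, 0.7))
CUTOUTS = [((-0.350, -0.045), (0.434, 0.666)),
           ((0.045, 0.350), (0.434, 0.666))]

def region_of(x, y):
    """Classify an initial cube position as 'cutout', 'train', or 'outside'."""
    if any(in_rect(x, y, xr, yr) for xr, yr in CUTOUTS):
        return "cutout"          # inside the training range but held out
    if in_rect(x, y, *TRAIN):
        return "train"           # seen during training
    return "outside"             # never sampled during training

for pose in [(-0.2, 0.5), (0.0, 0.55), (0.415, 0.61)]:
    print(pose, "->", region_of(*pose))
```

Under this partition, the fine-tuning poses listed in Appendix B.4, such as (0.415, 0.610), indeed fall outside the training range.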
We test thisby pre-training policies with our framework onthe entire training region and fine-tuning themto solve initial poses outside that region.In Figure 6c, we show the results of the fine-tuning experiment. We find that even for posesthat are far away from the initial training region,our policies are able to learn to solve the taskwithin a handful of gradient steps, and is muchmore effective than learning from scratch.5.3 Trading off design and control complexityIn this section, we aim to determine whether our introduced tradeoff parameter α(defined in Equa-tion 1) can actualize preferences in the tradeoff between design material cost and control energyconsumption. For this experiment, we focus on the catch balls task, because the tradeoff has anintuitive interpretation in this setting: a larger tool can allow the agent to catch objects with mini-mal movement, while a smaller tool can reduce material cost but requires a longer trajectory withadditional energy costs. We train four agents on catch balls with different values of α.In Figure 7a, we plot the ratiodused/dmaxcused/cmax, where drepresents the combined length of all tool links andcis the per-step control velocity. dmaxandcmaxindicate the maximum tool size and control velocityallowed by the environment. We find that this ratio indeed correlates with α, which indicates thatagents that are directed to prefer saving either material or energy are doing so, at the cost of theother. We also visualize the outputted tool designs in Figure 7b.5.4 Evaluation on a real robotNext, we provide a demonstration of our pipeline on a real robot by transferring learned policiesin two of the 3D environments, fetch cube andlift cup , directly to the real world. We use aFranka Panda 7-DoF robot arm equipped with its standard parallel jaw gripper (Figure 11). FiveRealSense D435 RGBD cameras perform object tracking using AprilTags [38].7Fabricating tools. In order to evaluate our policies in the real world, we fabricate the tools that areoutputted by the design policy via 3D printing. Specifically, based on an initial state observationand/or goal, we convert tool parameters from design policy outputs into meshes and construct themfrom PLA using consumer Ender 3 and Ender 5 fused deposition modeling (FDM) 3D printers. SeeFigure 9 for examples of printed tools generated by our policy.Real robot evaluation. For the fetch cube andlift cup tasks, we evaluate policies on fourinitial cube positions and four cup geometries respectively. For each task instance, we fabricate thedesigned tool and evaluate control over five trials. We report success rates for each tool in Table 1,finding that our policies successfully transfer their performance to the real world.Figure 9: 3D-printed tools for fetchcube (top) and lift cup (bottom).Figure 8 shows qualitative examples of real rollouts. In the toprow, the robot designs an ‘L’-shaped hook. Because the end-effector has a constrained workspace due to the risk of colli-sion with the overhang, the robot employs a two-stage strategy– first hooking the block from under the overhang, and thenusing the backside of the tool to drag it closer to itself. In thesecond row, the robot uses a ‘U’-shaped tool to quickly retrievethe nearby block and finally pushes it to the goal with the grip-per fingers when it is within reach. This exemplifies that ourframework flexibly allows an agent to use its original morphol-ogy in combination with designed tools when needed. 
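For completeness, the auxiliary reward of Equation 1 that drives the α study in Section 5.3 can be written out as a small function. The sketch reads the equation as r = K * (1 - (alpha * d_used / d_max + (1 - alpha) * c_used / c_max)), a reconstruction chosen to match the stated intuition (large α favors using less material, small α favors using less control effort) and should be checked against the original equation.

```python
def tradeoff_reward(d_used, d_max, c_used, c_max, alpha, K=1.0):
    """Auxiliary reward trading off material use against control effort.

    alpha near 1 penalizes tool material (d_used / d_max) more strongly;
    alpha near 0 penalizes per-step control velocity (c_used / c_max).
    K = 0 disables the term, as in all experiments outside Section 5.3.
    """
    material = d_used / d_max          # fraction of allowed link length used
    effort = c_used / c_max            # fraction of allowed control velocity used
    return K * (1.0 - (alpha * material + (1.0 - alpha) * effort))

# A material-saving preference (alpha = 1.0) rewards small tools regardless of effort,
# while alpha = 0.0 rewards low control velocity regardless of tool size.
print(tradeoff_reward(d_used=2.0, d_max=6.0, c_used=0.8, c_max=1.0, alpha=1.0))
print(tradeoff_reward(d_used=2.0, d_max=6.0, c_used=0.8, c_max=1.0, alpha=0.0))
```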
Finally,in the lift cup task, the design policy selects appropriate dis-tances between the tool arms and an angle of approach to holdthe cup securely when elevated.Tool number 1 2 3 4Fetch cube (single init.) 5/5 5/5 5/5 5/5Fetch cube (grid inits.) 10/12 4/12 6/12 5/12Lift cup 5/5 5/5 5/5 5/5Table 1: Real world evaluation performance. “Sin-gle init” denotes the evaluation of a tool on the initialenvironment state it was designed for. “Grid of inits”evaluates the same tool on a range of initializations.Forfetch cube , we further analyze theperformance of the control policy whenusing tools for initial positions that theywere not directly designed for . Specifi-cally, we take the four generated tools andevaluate how well the control policy canuse them to solve 12tasks on a 12cm×85.6cm grid of initial cube positions.The results are presented in Table 1. While the control policy can reuse tools to solve new tasks,no tool can solve all the tasks. Tool 1solves the most tasks, but tool 4solves both tasks that tool1cannot. We also find that the policy takes a greater number of timesteps to solve each task withtool1compared to tools specialized for those initializations. These experiments demonstrate that adiverse set of tools is indeed important for different variations of this task.6 ConclusionWe have introduced a framework for agents to jointly learn design and control policies, as a steptowards generalist embodied manipulation agents that are unconstrained by their own morphologies.Because the best type of tool and control strategy can vary depending on the goal, we propose tolearn designer and controller policies to generate useful tools based on the task at hand and thenperform manipulation with them. Our work is a step towards building embodied agents that canreason about novel tasks and settings and then equip themselves with the required tools to solvethem, without any human supervision. Such systems may lead the way towards autonomous robotsthat can perform continuous learning and exploration in real-world settings.Limitations. In this work, we focus on rigid, non-articulated tools composed of linked primitiveshapes. A promising direction for future work is to explore other parameterizations. In addition,we do not address fabrication: we use 3D-printing for prototyping and do not consider the prob-lem of constructing tools from a set of available objects. Finally, we focus on tool geometry, butconsideration of tool stability and applied forces could lead to improved real-world performance.8AcknowledgmentsWe thank Josiah Wong as well as anonymous reviewers for helpful feedback on earlier versions ofthe manuscript, Samuel Clarke for recording the video voiceover, and the Stanford Product Real-ization Lab for 3D printing resources and advice. The work is in part supported by ONR MURIN00014-22-1-2740, the Stanford Institute for Human-Centered AI (HAI), Amazon, Autodesk, andJPMC. ST and MG are supported by NSF GRFP Grant No. DGE-1656518.References[1] J. Goodall. Tool-using and aimed throwing in a community of free-living chimpanzees. Nature ,201, 1964.[2] A. M. Auersperg, S. Borasinski, I. Laumer, and A. Kacelnik. Goffin’s cockatoos make thesame tool type from different materials. Biology Letters , 12, 2016.[3] K. Sims. Evolving virtual creatures. In Proceedings of the 21st Annual Conference on Com-puter Graphics and Interactive Techniques , SIGGRAPH ’94, page 15–22, New York, NY ,USA, 1994. Association for Computing Machinery. ISBN 0897916670.[4] K. S. Luck, H. B. Amor, and R. Calandra. 
Data-efficient co-adaptation of morphology andbehaviour with deep reinforcement learning. In 3rd Annual Conference on Robot Learning,CoRL 2019, Osaka, Japan, October 30 - November 1, 2019, Proceedings , volume 100 ofProceedings of Machine Learning Research , pages 854–869, 2019.[5] D. Hejna, III, P. Abbeel, and L. Pinto. Task-agnostic morphology evolution. In InternationalConference on Learning Representations , 2021.[6] A. Gupta, S. Savarese, S. Ganguli, and L. Fei-Fei. Embodied intelligence via learning andevolution. Nature Communications , 2021.[7] Y . Yuan, Y . Song, Z. Luo, W. Sun, and K. M. Kitani. Transform2act: Learning a transform-and-control policy for efficient agent design. In International Conference on Learning Repre-sentations , 2022.[8] K. Kawaharazuka, T. Ogawa, and C. Nabeshima. Tool shape optimization through backprop-agation of neural network. In 2020 IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , pages 8387–8393, 2020.[9] K. R. Allen, T. Lopez-Guevara, K. Stachenfeld, A. Sanchez-Gonzalez, P. Battaglia, J. Hamrick,and T. Pfaff. Physical design using differentiable learned simulators. arXiv preprint arXiv:Arxiv-2202.00728 , 2022.[10] Y . Wu, S. Kasewa, O. Groth, S. Salter, L. Sun, O. P. Jones, and I. Posner. Imagine That! lever-aging emergent affordances for 3D tool synthesis. arXiv preprint arXiv: Arxiv-1909.13561 ,2019.[11] H. Ha, S. Agrawal, and S. Song. Fit2form: 3D generative model for robot gripper form design.In4th Conference on Robot Learning, CoRL 2020, 16-18 November 2020, Virtual Event /Cambridge, MA, USA , volume 155 of Proceedings of Machine Learning Research , pages 176–187. PMLR, 2020.[12] I. Exarchos, K. Wang, B. H. Do, F. Stroppa, M. M. Coad, A. M. Okamura, and C. K. Liu.Task-specific design optimization and fabrication for inflated-beam soft robots with growablediscrete joints. In 2022 International Conference on Robotics and Automation, ICRA 2022,Philadelphia, PA, USA, May 23-27, 2022 , pages 7145–7151. IEEE, 2022.[13] Y . Li, T. Kong, L. Li, Y . Li, and Y . Wu. Learning to design and construct bridge withoutblueprint. In IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS2021, Prague, Czech Republic, September 27 - Oct. 1, 2021 , pages 2398–2405. IEEE, 2021.9[14] M. Li, R. Antonova, D. Sadigh, and J. Bohg. Learning tool morphology for contact-richmanipulation tasks with differentiable simulation. arXiv preprint arXiv:2211.02201 , 2022.[15] L. Nair, N. Shrivatsav, and S. Chernova. Tool macgyvering: A novel framework for combiningtool substitution and construction. arXiv preprint arXiv: Arxiv-2008.10638 , 2020.[16] S. K. S. Ghasemipour, S. Kataoka, B. David, D. Freeman, S. S. Gu, and I. Mordatch. Blocksassemble! learning to assemble with large-scale structured reinforcement learning. In Interna-tional Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland,USA, volume 162 of Proceedings of Machine Learning Research , pages 7435–7469. PMLR,2022.[17] K. Fang, Y . Zhu, A. Garg, A. Kurenkov, V . Mehta, L. Fei-Fei, and S. Savarese. Learning task-oriented grasping for tool manipulation from simulated self-supervision. In Robotics: Scienceand Systems XIV , Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, June 26-30,2018 , 2018.[18] Z. Qin, K. Fang, Y . Zhu, L. Fei-Fei, and S. Savarese. Keto: Learning keypoint representationsfor tool manipulation. In 2020 IEEE International Conference on Robotics and Automation(ICRA) , pages 7278–7285. IEEE, 2020.[19] J. Brawer, M. Qin, and B. 
Scassellati. A causal approach to tool affordance learning. In 2020IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages 8394–8399, 2020.[20] D. Xu, A. Mandlekar, R. Mart ́ın-Mart ́ın, Y . Zhu, S. Savarese, and L. Fei-Fei. Deep affordanceforesight: Planning through what can be done in the future. In IEEE International Conferenceon Robotics and Automation (ICRA) , pages 6206–6213. IEEE, 2021.[21] Y . Noguchi, T. Matsushima, Y . Matsuo, and S. S. Gu. Tool as embodiment for recursivemanipulation. arXiv preprint arXiv: Arxiv-2112.00359 , 2021.[22] K. R. Allen, K. A. Smith, and J. Tenenbaum. Rapid trial-and-error learning with simulationsupports flexible tool use and physical reasoning. Proceedings of the National Academy ofSciences , 2019.[23] A. Xie, F. Ebert, S. Levine, and C. Finn. Improvisation through physical understanding: Usingnovel objects as tools with visual foresight. In Robotics: Science and Systems XV , Universityof Freiburg, Freiburg im Breisgau, Germany, June 22-26, 2019 , 2019.[24] R. Girdhar, L. Gustafson, A. Adcock, and L. van der Maaten. Forward prediction for physicalreasoning. arXiv preprint arXiv: Arxiv-2006.10734 , 2020.[25] X. Lin, Z. Huang, Y . Li, J. B. Tenenbaum, D. Held, and C. Gan. DiffSkill: Skill abstractionfrom differentiable physics for deformable object manipulations with tools. In InternationalConference on Learning Representation (ICLR) , 2022.[26] M. Toussaint, K. R. Allen, K. A. Smith, and J. B. Tenenbaum. Differentiable physics and stablemodes for tool-use and manipulation planning. In Proc. of Robotics: Science and Systems(R:SS) , 2018.[27] M. Toussaint, J.-S. Ha, and O. S. Oguz. Co-optimizing robot, environment, and tool designvia joint manipulation planning. In Proc. of the IEEE Int. Conf. on Robotics and Automation(ICRA) , 2021.[28] T. Liao, G. Wang, B. Yang, R. Lee, K. Pister, S. Levine, and R. Calandra. Data-efficientlearning of morphology and controller for a microrobot. In 2019 International Conference onRobotics and Automation (ICRA) , pages 2488–2494. IEEE, 2019.10[29] C. B. Schaff, D. Yunis, A. Chakrabarti, and M. R. Walter. Jointly learning to construct andcontrol agents using deep reinforcement learning. In International Conference on Roboticsand Automation, ICRA 2019, Montreal, QC, Canada, May 20-24, 2019 , pages 9798–9805.IEEE, 2019.[30] D. Ha. Reinforcement learning for improving agent design. arXiv:1810.03779 , 2018.[31] T. Chen, Z. He, and M. Ciocarlie. Hardware as policy: Mechanical and computational co-optimization using deep reinforcement learning. Corl, 2020.[32] J. Bhatia, H. Jackson, Y . Tian, J. Xu, and W. Matusik. Evolution gym: A large-scale benchmarkfor evolving soft robots. Advances in Neural Information Processing Systems , 34, 2021.[33] D. Pathak, C. Lu, T. Darrell, P. Isola, and A. A. Efros. Learning to control self-assembling mor-phologies: A study of generalization via modularity. arXiv preprint arXiv: Arxiv-1902.05546 ,2019.[34] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv: Arxiv-1707.06347 , 2017.[35] E. Catto. Box2D. URL https://box2d.org/ .[36] E. Coumans and Y . Bai. PyBullet, a Python module for physics simulation for games, roboticsand machine learning. http://pybullet.org , 2016–2019.[37] N. Hansen and A. Ostermeier. Adapting arbitrary normal mutation distributions in evolutionstrategies: the covariance matrix adaptation. 
In Proceedings of IEEE International Conferenceon Evolutionary Computation , pages 312–317, 1996. doi:10.1109/ICEC.1996.542381.[38] E. Olson. AprilTag: A robust and flexible visual fiducial system. In 2011 IEEE InternationalConference on Robotics and Automation , pages 3400–3407, 2011.[39] A. Raffin, A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann. Stable-Baselines3:Reliable reinforcement learning implementations. Journal of Machine Learning Research , 22(268):1–8, 2021. URL http://jmlr.org/papers/v22/20-1364.html .[40] Y . Lin, A. S. Wang, G. Sutanto, A. Rai, and F. Meier. Polymetis. https://facebookresearch.github.io/fairo/polymetis/ , 2021.11A Environment DetailsHere we provide additional details for our simulation environments. An unabridged version of thedescription from Section 5 is as follows:•Push (2D) : Push a round puck using the tool such that it stops at the specified goal loca-tion. The goal space is a subset of 2D final puck locations G ⊂R2, and the control actionspaceAC∈R2specifies the xandytool velocities.•Catch balls (2D) : Use the tool to catch three balls that fall from the sky. The agent’sgoal is to catch all three balls, which start from varying locations on the x-yplane. We usea1-dimensional control action space that specifies the xvelocity of the tool at each step.•Scoop (2D) : Use the tool to scoop balls out of a reservoir containing 40total balls. Herewe specify goals of scooping x∈ {1,2, ...,7}balls. The control action space AC∈R3specifies the velocity of the rigid tool in xandydirections, along with its angular velocity.•Fetch cube (3D) : Use the tool to retrieve an object randomly positioned beneath a ver-tical overhang. This task is additionally challenging because the position of the robot endeffector is restricted to a rectangular region in the x-yplane of dimensions 0.8m×0.2m toavoid collision with the overhang. The tool is a three-link chain where each link is a boxparameterized by its width, length, and height. The design space also includes the relativeangle between two connected links, with a total of n= 11 parameters. The control actionspaceAC∈R3represents a change in end-effector position.•Lift cup (3D) : Use the tool to lift a cup of randomized geometry from the ground intothe air. This task requires careful design of the tool to match cup geometry. The tool is afour-link fork with two prongs parameterized by the separation, tilt angle, width, length,and height of the prongs. The same parameters are applied to both prongs to maintainsymmetry. The handle dimensions are fixed. The design space has n= 5 parameters.AC∈R3represents a change in end-effector position.•Scoop (3D) : A 3D analog of the 2D scoop task. This task has the same goal space as the2Dscoop task, but the tool in 3D is a six-link scoop composed of a rectangular bottomplate parameterized by its width and length, and four rectangular side plates attached toeach side of the bottom plate. Each side plate is parameterized by its height and relativeangle to the bottom plate. A handle with fixed dimensions is attached to one of the sideplates. There are n= 10 total design parameters. AC∈R6represents a change in end-effector pose.Our task selection is motivated by real-world tasks that a robot may need to perform for example ina home robot setting. Examples include:• Fetching objects (fetch cube): retrieving fallen or misplaced objects e.g. 
underneath sofas,tables, or chairs in homes, or tight spaces like inside cars, enabling scene “resets” fromstates where objects are lost for continuous robotic learning settings• Scooping (scoop 2D, 3D): manipulating granular materials such as rice, beans, cereals ormeasuring and transferring liquids for cooking• Pushing (push): home robotics applications when robots are impeded by obstacles suchas countertops, tables, beds; industrial applications to aligning and grouping objects forrobotic packing• Lifting objects (lift cup): Transporting objects that are too large for a parallel jaw gripperor for a suction gripper to stably grasp, for example, pots and pans, garbage cans, kitchenappliances; or risky to grasp directly, for example, in high temperature industrial weldingand forging12B Experimental DetailsB.1 Training Hyperparameters & Architecture Details for Our FrameworkIn Tables 5, 6, 7, 8, 9, and 10, we provide detailed hyperparameters for our framework for eachenvironment. Unless otherwise specified, we use the neural network architectures for the designpolicy, control policy, and value function from [7].Robot arm base(a) Tool 1Robot arm base(b) Tool 2Robot arm base(c) Tool 3Robot arm base(d) Tool 4Figure 10: Heatmaps of success and failures for real world fetch cube trials where fixed toolsare used for a range of initializations. Recall that we test the policies on a 3×4grid of initialpositions that span a total dimension of 12cm×85.6cm, for a total of 12trials per tool. This meansthat evaluations were performed every 6cm along one dimension and 28.53cm along the other. Thegrid here is a top-down view of the robot workspace and directly maps to the set of 2D initial cubelocations tested for each tool, where the base of the robot is at the bottom of each diagram. Heregreen indicates a success, red indicates failure, and orange indicates a failure where the final cubeposition is within 5cm of success.B.2 Training Hyperparameters & Architecture Details for BaselinesFor the CMA-ES baseline, we perform hyperparameter sweeps for a fair comparison with our frame-work. For the CMA-RL baseline, we use the same set of best performing hyperparameters for theouter CMA-ES loop. The tested hyperparameter configurations for each baseline are listed in Ta-ble 2. Except model architecture differences, we use the same optimization hyperparameters forOurs, Ours(shared arch.), and HWasP-minimal. Note that we control for the number of networkparameters in the “shared arch” ablation – notably, we used MLP policies implemented in StableBaselines 3 [39] and ensure that the number of trainable network parameters is either the same orstrictly larger than in our method across all tasks.B.3 Computational ResourcesWe train each of our policies using a single GPU (NVIDIA RTX 2080Ti or TITAN RTX)and 32 CPU cores. The total wall clock training time varies per environment from 213Method Hyperparameters ValuesCMA-ESPopulation Size 10, 24, 100, 1000Initial Stdev 0.1, 1.0, 10.0Center Learning Rate 0.01, 0.1, 1.0Covariance Learning Rate 0.01, 0.1, 1.0Rank μLearning Rate 0.01, 0.1, 1.0Rank One Learning Rate 0.01, 0.1, 1.0CMA-RLPoicy Net (256, 256, 256, 256, 256)Value Net (256, 256, 256, 256, 256)Learning Rate 3e-4Batch Size (50000, 20000(3D scoop))Minibatch Size 2000Ours(sharedarch.)Poicy Net (256, 256, 256, 256, 256)Value Net (256, 256, 256, 256, 256)Learning Rate 1e-4Batch Size (50000, 20000(3D scoop))Minibatch Size 2000Table 2: We tune over these values for hyperparameters of baseline methods. 
Bolded values indicatethe best performing settings for CMA-ES, which we use in our comparisons.hours for the Catch Balls environment to 24 hours for the Scoop(3D) environment. Wedetail the training/inference time for our model and baselines on the fetch cube task:Ours Ours (shared arch) CMA-RL HWasP-minimal/Ha 2018 CMA-ESTraining time 2.12e+2 7.29e+2 1.18e+3 2.98e+2 3.46e+2Inference time 1.68e-3 5.24e-4 5.32e-4 1.75e-3 N/AFigure 11: Real world setup. We use a FrankaPanda arm and five RealSense D435 cameras fortracking. The cube pictured, used for the fetchcube task has a side length of 5cm.The training time represents the wall clock time inminutes needed to train the model for 107environ-ment steps. The inference time denotes the wallclock time in seconds needed to perform one forwardpass through the model. These results are generatedusing a NVIDIA Titan RTX GPU and an Intel XeonGold 5220 CPU.B.4 Generalizationto Unseen Goals Experiment DetailsForFetch cube , the rectangular region of initialposes is defined by x∈[−0.395,0.395] andy∈[0.4,0.7]. The cutout region corresponds to twodisconnected rectangular patches contained in thetraining region defined by x1∈[−0.350,−0.045],y1∈[0.434,0.666] andx2∈[0.045,0.350],y2∈[0.434,0.666] respectively.Zero-shot performance. We train six policies using our framework where the cutout region re-moves a fraction of the total training area equal to 0.1,0.2,0.4,0.6,0.8, and0.9respectively.Fine-tuning performance. For the fine-tuning experiment, we specifically selectfour initializations that we find our policies do not complete successfully zero-shot:{(−0.167,0.367),(−0.129,0.357),(0.430,0.493),(0.415,0.610)}.B.5 Trading off Design and Control Complexity Experiment DetailsWe train four agents independently on catch balls , setting the value of α, the tradeoff rewardparameter defined in Equation 1, to 0,0.3,0.7, and1.0respectively.14B.6 Real Robot Experiment DetailsFor our real-world experiments, we use a Franka Emika Panda arm. We control the robot using animpedance controller from the Polymetis [40] library.Tools are 3D printed using polylactic acid (PLA) on commercially available Ender 3 and Ender 5printers with nozzle diameter 0.4mm. We print using a layer height of 0.3mm and 10% infill. Weperform slicing using the Ultimaker CURA software.We roll out each policy for 100environment steps or until a success is detected.For the fetch cube task, we measure the success based on whether the center of mass of the 5cmcube is closer than 0.5m from the base of the robot. Please see Table 3 for per-tool details. The toolimages are shown in Figure 9, from left to right: Tools 4, 2, 3, 1 respectively.Tool Initial cube position (x, y, z)Tool 1 (-0.110, -0.803, 0.025)Tool 2 (-0.339, -0.588, 0.025)Tool 3 (0.155, -0.731, 0.025)Tool 4 (0.211, -0.633, 0.025)Table 3: Initial cube positions corresponding to tools fabricated in real experiments.For the lift cup task, we measure success based on whether the cup has been lifted higher than0.4m off of the plane of the workspace. Please see Table 4 for per-tool details. The tool images areshown in Figure 9, from left to right: Tools 1, 2, 3, 4.Tool Cup geometry parameters(length/width, height)Tool 1 (0.3, 0.6)Tool 2 (0.3, 0.9)Tool 3 (0.5, 0.8)Tool 4 (0.9, 0.6)Table 4: Cup geometry parameters corresponding to tools fabricated in real experiments. 
Note thatthe length and width parameters share a single value.We also present detailed results for the fetch cube experiments using tools generated for a specificinitial position for a range of initializations. Recall that we test the policies on a 3×4grid of initialpositions that span a range of 12cm×85.6cm, for a total of 12trials per tool. We plot the successesand failures for each tool according to geometric position in Figure 10. We can see that the controlpolicy is able to use each tool to solve the task for several initializations, but each tool is specializedfor particular regions.15Hyperparameter ValueTool Position Init. (20, 10)Control Steps Per Action 1Max Episode Steps 150Slack Reward -0.001Tool Length Ratio (-0.5, 0.5)Tool Length Init. (2.0, 2.0, 2.0)Tool Angle Init. (0.0, 0.0, 0.0)Tool Angle Ratio (-1.0, 1.0)Tool Angle Scale 90.0Control GNN (64, 64, 64)Control Index MLP (128, 128)Design GNN (64, 64, 64)Design Index MLP (128, 128)Control Log Std. -1.0Design Log Std. -2.3Fix Design & Control Std. TruePolicy Learning Rate 2e-5Entropy β 0.01Value Learning Rate 1e-4KL Divergence Threshold 0.005Batch Size 50000Minibatch Size 2000PPO Steps Per Batch 10Table 5: Hyperparameters used for our framework on the push task.Hyperparameter ValueTool Position Init. (20, 10)Control Steps Per Action 1Max Episode Steps 150Slack Reward -0.001Tool Length Ratio (-0.5, 2.0)Tool Length Init. (2.0, 1.0, 1.0)Tool Angle Init. (0.0, 0.0, 0.0)Tool Angle Ratio (-1.0, 1.0)Tool Angle Scale 60.0Control GNN (64, 64, 64)Control Index MLP (128, 128)Design GNN (64, 64, 64)Design Index MLP (128, 128)Control Log Std. 0.0Design Log Std. 0.0Fix Design & Control Std. TruePolicy Learning Rate 2e-5Entropy β 0.01Value Learning Rate 1e-4KL Divergence Threshold 0.002Batch Size 50000Minibatch Size 2000PPO Steps Per Batch 10Table 6: Hyperparameters used for our framework on the catch balls task.16Hyperparameter ValueTool Position Init. (15, 10)Control Steps Per Action 5Max Episode Steps 30Slack Reward -0.001Tool Length Ratio (-0.7, 0.2)Tool Length Init. (6.0, 3.0, 3.0)Tool Angle Init. (0.0, 0.0, 0.0)Tool Angle Ratio (-0.1, 0.7)Tool Angle Scale 90.0Control GNN (64, 64, 64)Control Index MLP (128, 128)Design GNN (64, 64, 64)Design Index MLP (128, 128)Control Log Std. 0.0Design Log Std. 0.0Fix Design & Control Std. TruePolicy Learning Rate 2e-5Entropy β 0.01Value Learning Rate 3e-4KL Divergence Threshold 0.1Batch Size 50000Minibatch Size 2000PPO Steps Per Batch 10Table 7: Hyperparameters used for our framework on the scoop task.Hyperparameter ValueTool Position Init. (0.0, 0.5, 0.02)Control Steps Per Action 10Max Episode Steps 100Slack Reward -0.001Success Reward 10.0Box Dimensions Min (0.005, 0.05, 0.005)Box Dimensions Max (0.015, 0.1, 0.02)Tool Angle Min (-90.0, -90.0, -90.0)Tool Angle Max (90.0, 90.0, 90.0)Control Action Min (-1.0, -1.0, -1.0, -0.2, -0.2, -0.2)Control Action Max (1.0, 1.0, 1.0, 0.2, 0.2, 0.2)Control Action Scale 0.1Control GNN (128, 128, 128)Control Index MLP (128, 128)Design GNN (128, 128, 128)Design Index MLP (128, 128)Control Log Std. 0.0Design Log Std. 0.0Fix Design & Control Std. FalsePolicy Learning Rate 1e-4Entropy β 0.0Value Learning Rate 3e-4KL Divergence Threshold 0.5Batch Size 50000Minibatch Size 2000PPO Steps Per Batch 10Table 8: Hyperparameters used for our framework on the fetch cube task.17Hyperparameter ValueTool Position Init. 
(0.0, 1.2, 0.05)Control Steps Per Action 10Max Episode Steps 150Slack Reward -0.001Success Reward 10.0Box Dimensions Min (0.005, 0.02, 0.01)Box Dimensions Max (0.01, 0.1, 0.03)Tool Angle Min (-30.0, -30.0, -30.0)Tool Angle Max (30.0, 30.0, 30.0)Control Action Min (-1.0, -1.0, -1.0, -1.57, -1.57, -1.57)Control Action Max (1.0, 1.0, 1.0, 1.57, 1.57, 1.57)Control Actioin Scale 0.1Control GNN (128, 128, 128)Control Index MLP (128, 128)Design GNN (128, 128, 128)Design Index MLP (128, 128)Control Log Std. 0.0Design Log Std. -1.0Fix Design & Control Std. TruePolicy Learning Rate 2e-5Entropy β 0.01Value Learning Rate 3e-4KL Divergence Threshold 0.5Batch Size 50000Minibatch Size 2000PPO Steps Per Batch 5Table 9: Hyperparameters used for our framework on the lift cup task.18Hyperparameter ValueTool Position Init. (0.0, 0.05, 0.1)Control Steps Per Action 10Max Episode Steps 100Slack Reward -0.001Success Reward 10.0Box Dimensions Min (0.04, 0.005, 0.02)Box Dimensions Max (0.08, 0.005, 0.05)Tool Angle Min (-15.0, -15.0, -15.0)Tool Angle Max (15.0, 15.0, 15.0)Control Action Min (-1.0, -1.0, -1.0, -1.57, -1.57, -1.57)Control Action Max (1.0, 1.0, 1.0, 1.57, 1.57, 1.57)Control Action Scale 0.05Control GNN (128, 128, 128)Control Index MLP (128, 128)Design GNN (128, 128, 128)Design Index MLP (128, 128)Control Log Std. 0.0Design Log Std. 0.0Fix Design & Control Std. FalsePolicy Learning Rate 1e-4Entropy β 0.01Value Learning Rate 3e-4KL Divergence Threshold 0.5Batch Size 20000Minibatch Size 2000PPO Steps Per Batch 5Table 10: Hyperparameters used for our framework on the 3D scoop task.19 |
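Appendix B.6 states that tool parameters produced by the design policy are converted into meshes and printed on Ender-series FDM printers. As a rough illustration of such a conversion for a simple three-link chain of boxes (in the spirit of the [l1, l2, l3, θ1, θ2] parameterization), the sketch below assembles box primitives with the trimesh library and exports an STL; the library choice, box cross-section, and output path are assumptions, since the paper does not describe its conversion pipeline at this level of detail.

```python
import numpy as np
import trimesh  # pip install trimesh

def chain_tool_mesh(lengths, rel_angles, thickness=0.01, depth=0.02):
    """Assemble a three-link chain tool [l1, l2, l3, th1, th2] from box primitives.

    The first link points along +x; each subsequent link is rotated about z by
    its relative angle with respect to its parent, and links are laid
    end-to-end in the xy plane.
    """
    meshes, origin, heading = [], np.zeros(3), 0.0
    for i, length in enumerate(lengths):
        if i > 0:
            heading += rel_angles[i - 1]
        direction = np.array([np.cos(heading), np.sin(heading), 0.0])
        box = trimesh.creation.box(extents=[length, thickness, depth])
        pose = trimesh.transformations.rotation_matrix(heading, [0, 0, 1])
        pose[:3, 3] = origin + 0.5 * length * direction   # center of this link
        box.apply_transform(pose)
        meshes.append(box)
        origin = origin + length * direction              # joint for the next link
    return trimesh.util.concatenate(meshes)

tool = chain_tool_mesh(lengths=[0.06, 0.03, 0.03], rel_angles=[np.pi / 3, np.pi / 3])
tool.export("tool.stl")   # hand off to a slicer for FDM printing
print(tool.bounds)
```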
Id4b5SY1Y8 | PairwiseNet: Pairwise Collision Distance Learningfor High-dof Robot SystemsJihwan KimSeoul National Universityjihwankim@robotics.snu.ac.krFrank Chongwoo ParkSeoul National Universityfcp@snu.ac.krAbstract: Motion planning for robot manipulation systems operating in complexenvironments remains a challenging problem. It requires the evaluation of boththe collision distance and its derivative. Owing to its computational complexity,recent studies have attempted to utilize data-driven approaches to learn the colli-sion distance. However, their performance degrades significantly for complicatedhigh-dof systems, such as multi-arm robots. Additionally, the model must be re-trained every time the environment undergoes even slight changes. In this paper,we propose PairwiseNet, a model that estimates the minimum distance betweentwo geometric shapes and overcomes many of the limitations of current models.By dividing the problem of global collision distance learning into smaller pairwisesub-problems, PairwiseNet can be used to efficiently calculate the global collisiondistance. PairwiseNet can be deployed without further modifications or trainingfor any system comprised of the same shape elements (as those in the trainingdataset). Experiments with multi-arm manipulation systems of various dof indi-cate that our model achieves significant performance improvements concerningseveral performance metrics, especially the false positive rate with the collision-free guaranteed threshold. Results further demonstrate that our single trained Pair-wiseNet model is applicable to all multi-arm systems used in the evaluation. Thecode is available at https://github.com/kjh6526/PairwiseNet .Keywords: Robot Collision, Collision Distance, Machine Learning1 IntroductionMotion planning algorithms such as RRT [1, 2, 3] and its many variants [4, 5, 6, 7] all require thecollision distance - the minimum distance between the robot and its nearest obstacle (including otherlinks for self-collision avoidance). Among these, some even require the derivatives of the collisiondistances. It is well-known that calculating this distance involves finding the minimum distancebetween each robot link and the obstacles, which can be computationally intensive, especially forhigh-dof robots with complex geometries.To alleviate the computational burden, one possible solution is to train a collision distance functionusing data. By collecting sufficient data consisting of robot configurations and their correspondingcollision distances, machine learning models such as kernel perceptron models [8], support vectormachines (SVM) [9], and neural networks [10, 11, 12, 13, 14], can be used to learn the collisiondistance function. This learned function can then be used to quickly determine if a given configura-tion is collision-free. While these data-driven approaches have demonstrated satisfactory results forlow-dof robot systems, often they perform poorly for higher-dof robots. The challenge lies in thefact that the collision distance function for higher-dof robots is complex and highly non-convex.Another challenge faced by existing data-driven methods is their sensitivity to small environmentalchanges. For example, the addition of new obstacles or a change of the robot’s base position can leadto a completely different collision distance function; for many of these methods, the entire training7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Collision Distancedcol(a) Robot Env. (b) Pairwise col. 
distancePairwiseNetElement PairsdijNeuralnetwork(c) Global col. distancemin(⋅)Figure 1: An illustration of the global collision distance estimation through PairwiseNet. (a) Robotenvironment at a given joint configuration. (b) Pairwise collision distances for all element pairs aredetermined through PairwiseNet. (c) The smallest of these distances becomes the global collisiondistance.procedure must be repeated, from data collection to model training. (One possible exception is[8], which proposes an efficient model update strategy for dynamic environment updates, but theirmethod is limited to low-dof robot systems and still requires an additional training procedure.)We present PairwiseNet, a collision distance estimation method that provides a promising alterna-tive to existing data-driven approaches used for predicting the global collision distance. Instead ofdirectly estimating the global collision distance, PairwiseNet focuses on estimating the pairwise col-lision distance: the minimum distance between two elements in the robot system. The PairwiseNetmodel takes as input the point cloud data of two geometric shapes and their relative transformationand outputs the minimum distance between these two shapes. To estimate the global collision dis-tance, PairwiseNet first predicts the minimum distances for every possible pair of elements in thesystem. It then selects the minimum of these pairwise distances as the estimate for the global colli-sion distance (see Figure 1). The efficient parallel batch computation of the neural network enablesthe rapid prediction of minimum distances between pairs of elements.Compared to the complex and highly non-convex function of the global collision distance, the min-imum distance function between a pair of elements is simpler and easier to train. By breaking downthe challenging task of learning the global collision distance into smaller sub-problems of the pair-wise collision distance learning, our PairwiseNet achieves significant performance improvementsfor high-dof robot systems.Another advantage of PairwiseNet is its applicability to any system composed of known shape ele-ments (shape elements that are sufficiently trained for estimating pairwise collision distance). Thetrained PairwiseNet model can be used without the need for additional training or modificationsin such systems. For example, consider a scenario in which a sufficiently large dataset containingpairwise collision distances between the links of a Panda robot is available. In this case, the Pair-wiseNet model trained using this dataset can be applied to any system consisting of multiple Pandarobots, regardless of the number of robots or their respective positions, as this is possible because thecollision distance estimation for such systems can be broken down into pairwise collision distanceestimations for each element pair, and these pairwise distances are already known by the trainedPairwiseNet model. As long as the system is exclusively comprised of shape elements that havebeen learned during training, the trained PairwiseNet model is applicable to any such system. 
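The decomposition described above reduces global collision distance estimation to a minimum over per-pair predictions. A minimal sketch of that reduction follows; pairwise_model stands in for the trained PairwiseNet and is an assumed callable, not code from the paper.
```python
def estimate_global_collision_distance(pairwise_model, element_pairs):
    """Global collision distance = minimum over per-pair predictions.

    pairwise_model : callable (P_i, P_j, T_ij) -> estimated minimum distance between two shapes
    element_pairs  : iterable of (P_i, P_j, T_ij) covering every element pair in the system
    """
    return min(pairwise_model(P_i, P_j, T_ij) for P_i, P_j, T_ij in element_pairs)
```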
Evenin cases where the system undergoes changes, such as changing the robot’s base position or addinganother robot arm, if the geometric shape elements of the system remain unchanged, the trainedPairwiseNet model remains applicable to the changed system without any modifications.2Our approach has been evaluated in high-dof multi-arm robot manipulation systems, ranging fromthe two-arm (14-dof) to four-arm (28-dof) systems, as well as a single-arm robot with obstacles.The results demonstrate that our approach outperforms existing learning-based methods in terms ofcollision distance regression error, collision checking accuracy, and notably the False Positive Ratewith the collision-free guaranteed threshold (Safe-FPR). Moreover, our approach performs bettereven when using a single trained PairwiseNet model for all multi-arm systems.2 Related WorksSeveral machine learning-based methods for collision distance estimation have been proposed dueto their computationally efficient inference procedures for collision distances and derivatives. [6]used SVM classifiers to identify whether each pair of parts of a humanoid robot was in a safe ordangerous self-collision status given a specific joint configuration. Only the minimum distances ofthe dangerous pairs of parts were estimated using a capsule-based BV algorithm, simplifying thecalculation of collision distances and derivatives. [9] also employed an SVM classifier for a 14-dofdual-arm robot manipulation system. The SVM classifier inputs a vector consisting of the positionsof all joints in the system and outputs a collision label of either <1 for a collision or >1 for acollision-free state. In [13], SVM and neural network models were trained to predict the collisionlabel of a humanoid manipulation system at a given joint configuration. Separate collision classifiermodels were trained for every sub-part pairs, such as the left arm and right leg, resulting in a totalof 10 sub-models used for collision label predictions.Similar to [9], [10] utilized joint positions as inputs for their multi-layer perceptron neural networkmodel. Meanwhile, [11] employed a positional encoding vector of the joint configuration as inputfor their neural network model. [12] trained a neural network model to estimate the collision distanceusing an extended configuration containing both joint and workspace configurations as input, withthe model outputting the collision distance of the system. DiffCo [8] is a collision classifier modelbased on kernel perceptron that generates both the collision score and its derivative. DiffCo alsoutilizes an efficient active learning strategy that adjusts the trained collision score function for dy-namic updates in the environment. Similarly, CollisionGP [15], a Gaussian process-based collisionclassifier model, has been proposed. CollisionGP determines the collision query for a given jointconfiguration and also measures the uncertainty of the model in its prediction. Recently, GraphDis-tNet [14] was proposed as a Graph Neural Network (GNN) model for collision distance estimation.The model inputs the information on geometric shapes, which are represented as graphs for both themanipulator links and obstacles. GraphDistNet then utilizes the geometric relationship between thetwo graphs to predict the collision distance.Similar to our method, some works [16, 17] approached the challenge by decomposing a complexproblem into several simpler sub-problems. [16] proposed a novel configuration space decomposi-tion method. 
This method separates the robot into disjoint components and trains a classifier for the collision-free configuration space of each component. Since the components near the base link have a relatively low-dimensional configuration space, training classifiers for these components is easier than training a single classifier for the whole system. [17] trained a collision predictor for generating collision-free human poses. They focused on the fact that collisions only affect local regions of the human body. Therefore, they designed a set of local body parts, and the collision prediction was accordingly decomposed into these local parts.
The effectiveness of these existing methods has been experimentally demonstrated only for low-dof robot systems; their performance degrades substantially for high-dof systems operating in complex environments. In particular, most learning-based collision distance estimation methods establish a collision-free guaranteed threshold to ensure that no collisions occur during actual manipulation. However, existing methods often suffer from a high false positive rate, resulting in overly cautious collision detection when utilizing the collision-free guaranteed threshold. In comparison, our method demonstrates effective collision distance estimation performance even in high-dof robot systems and maintains low false positive rates when using the collision-free guaranteed threshold.
[Figure 2 — panels: (a) Data Preprocessing, (b) Encoder, (c) Regressor.] Figure 2: An illustration of estimating the global collision distance via PairwiseNet, our pairwise collision distance learning method.
3 Learning Pairwise Collision Distance
3.1 Problem Formulation
We assume the availability of a simulator environment of the target system, which includes the robot kinematics and geometric shapes of links and obstacles. We aim to determine the optimal model parameter ψ for the pairwise collision distance estimation model f_ψ, which can predict the collision distance between any pair of geometric shapes. The model takes the point cloud data of two geometric shapes P_i, P_j (expressed in each corresponding object coordinates) and the relative transformation T_ij ∈ SE(3) as input, and outputs the estimated pairwise collision distance d̂_ij between the two shapes.
\hat{d}_{ij} = f_\psi(\mathcal{P}_i, \mathcal{P}_j, T_{ij})  (1)
After training the model, the global collision distance can be determined by the procedure as shown in Figure 2. First, a set of element pairs and corresponding transformations S(q) = {(P_i, P_j, T_ij(q))}_{i,j} in the given joint configuration q is extracted from the target robot system. Next, PairwiseNet determines the pairwise collision distance between each element pair in S(q), and the minimum distance found among these is taken as the global collision distance of the robot system. The global collision distance estimator function F_ψ can be expressed in the form of
\hat{d}_{\mathrm{col}}(q) = F_\psi(q; f_\psi, \mathcal{S})  (2)
            = \min_{(\mathcal{P}_i, \mathcal{P}_j, T_{ij}(q)) \in \mathcal{S}(q)} f_\psi(\mathcal{P}_i, \mathcal{P}_j, T_{ij}(q))  (3)
where d̂_col(q) is the estimated global collision distance in the joint configuration q.
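As a concrete illustration of how the pair set S(q) in Eqs. (2)–(3) could be assembled, the sketch below pairs every element's point cloud with the relative transform obtained from forward kinematics. The fk_link_poses helper and the 4×4 homogeneous-matrix convention are assumptions made for the example, not part of the paper's released code.
```python
import numpy as np
from itertools import combinations

def build_pair_set(q, fk_link_poses, element_clouds):
    """Assemble S(q) = {(P_i, P_j, T_ij(q))} for all element pairs.

    q              : joint configuration (array)
    fk_link_poses  : callable returning {elem_id: 4x4 world pose} for configuration q (assumed helper)
    element_clouds : dict {elem_id: (N, 3) point cloud expressed in the element's own frame}
    """
    poses = fk_link_poses(q)                      # forward kinematics for every element
    pairs = []
    for i, j in combinations(sorted(element_clouds.keys()), 2):
        # Relative transform expressing element j in element i's frame.
        T_ij = np.linalg.inv(poses[i]) @ poses[j]
        pairs.append((element_clouds[i], element_clouds[j], T_ij))
    return pairs
```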
Using the batch computation of the neural network model, we can efficiently estimate the minimum distances of element pairs.
3.2 Network Architecture
PairwiseNet consists of two main components: an encoder that creates a shape feature vector from the point cloud data of a geometric shape (Figure 2b), and a regressor that predicts the minimum distance between two shape feature vectors and a transformation (Figure 2c). The encoder employs two EdgeConv layers from Dynamic Graph Convolutional Neural Network [18] to extract 32-dimensional shape feature vectors from the point cloud data. The regressor then combines the two shape feature vectors and the transformation into a single vector and uses four fully connected layers with hidden state dimensions of (128, 128, 128) to output the minimum distance (Figure 2c).
The training of PairwiseNet uses the mean-squared error (MSE) between the estimated and actual collision distances as the loss function
\mathcal{L} = \frac{1}{|\mathcal{D}_{\mathrm{train}}|} \sum_{(\mathcal{P}_i, \mathcal{P}_j, T_{ij}, d_{ij}) \in \mathcal{D}_{\mathrm{train}}} \| f_\psi(\mathcal{P}_i, \mathcal{P}_j, T_{ij}) - d_{ij} \|^2  (4)
where d_ij ∈ R is the ground-truth pairwise collision distance, and D_train denotes the training dataset.
3.3 Efficient Inference Strategy of PairwiseNet
Our approach includes an efficient inference strategy for the global collision distance calculation by eliminating the need to run the encoder, a deep neural network that transforms the point cloud data into feature vectors. Since the point cloud data of element pairs remains unchanged regardless of the joint configuration, shape feature vectors of element pairs can be calculated and saved once for each robot system before calculating the collision distance. Using these pre-calculated shape feature vectors, PairwiseNet is able to estimate the collision distance only using the regressor, a simple neural network composed of fully-connected layers. Implemented in PyTorch [19], PairwiseNet is capable of performing collision distance estimation for the joint configuration in less than 0.5 ms. Details on the inference time for PairwiseNet can be found in Appendix B.1.
[Figure 3 — panels (a)–(c), with annotations d = 0.3 m, θ = 120°, d = 0.5 m.] Figure 3: Test environments for the collision distance learning performance evaluation. We selected (a) two-arm, (b) three-arm, and (c) four-arm robot systems.
4 Experiments
4.1 Collision Distance Learning for Multi-arm Robot Systems
Target Systems. For the test environments, we selected three high-dof multi-arm robot systems as illustrated in Figure 3. We employed 7-dof Franka Emika Panda robot arms for our test environments. For the test dataset of each target environment, we sampled one million joint configurations from a uniform distribution within the joint limits.
[Figure 4 — two dual-arm layouts labeled Robot 1 / Robot 2 with parameters R, θ, φ.] Figure 4: An illustration of multi-arm robot systems for generating the training dataset. Training data points are generated from dual-arm robot environments with various relative positions between two arms.
Training Dataset for PairwiseNet. We collected pairwise collision distance data from dual-arm robot manipulation systems. For the diversity of the dataset, we utilize dual-arm robot systems with various relative positions between the two arms as illustrated in Figure 4. We sample θ and φ with eight equally spaced values within the range [0, 2π), and sample R with five equally spaced values within the range [0.1 m, 1.0 m].
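Returning to the inference strategy of Sec. 3.3 above, a short sketch of the two-stage computation follows: the encoder runs once per robot system, and each query only stacks the cached pair features with the current relative transforms and runs the small regressor, taking the minimum of the outputs. The encoder/regressor callables, tensor shapes, and helper names are assumptions for illustration, not the released implementation.
```python
import torch

@torch.no_grad()
def precompute_pair_features(encoder, pair_set):
    """Run the (expensive) encoder once per robot system.

    pair_set: list of (P_i, P_j, T_ij); only the point clouds matter here,
    since they do not change with the joint configuration.
    Returns a (B, 64) tensor of concatenated per-pair shape features.
    """
    feats = []
    for P_i, P_j, _ in pair_set:
        z_i = encoder(torch.as_tensor(P_i, dtype=torch.float32)[None])  # assumed (1, 32) output
        z_j = encoder(torch.as_tensor(P_j, dtype=torch.float32)[None])  # assumed (1, 32) output
        feats.append(torch.cat([z_i, z_j], dim=-1))
    return torch.cat(feats, dim=0)                                      # (B, 64)

@torch.no_grad()
def global_collision_distance(regressor, pair_feats, T_batch):
    """Query-time path: only the lightweight regressor runs on the whole batch.

    pair_feats : (B, 64) precomputed shape features
    T_batch    : (B, 4, 4) relative transforms T_ij(q) for the current configuration q
    """
    T_vec = torch.as_tensor(T_batch, dtype=torch.float32)[:, :3, :].reshape(-1, 12)  # B x 12
    d_hat = regressor(torch.cat([pair_feats, T_vec], dim=-1)).squeeze(-1)            # (B,)
    return d_hat.min()                        # estimated global collision distance
```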
In to-tal, we use 320 different combinations of (R, θ, φ ), resulting in 320 different dual-arm robot systems.5For each system, we sampled joint configurations uniformly within the joint limits and extracted aset of element pairs S(q)at each joint configuration q(Figure 2a). To obtain the ground-truth pair-wise collision distance dij∈Rbetween the element pair, we use the collision distance estimationalgorithm implemented in PyBullet [20]. If the two elements collide, the collision distance is thenegative of their penetration depth (the distance by which one convex object enters into the inte-rior of another during a collision [21]). The resulting training dataset Dtraincontains 3 million datapoints.Baselines We trained our method and other existing collision distance estimation methods.•Capsule : A bounding volume method with capsule-shape collision primitives used in [22, 6].•JointNN : A fully-connected neural network model that directly uses joint configurations as in-puts (the input representation used in [6, 13]).•PosNN : A fully-connected neural network model that uses joint positions as inputs (the inputrepresentation used in [10, 9]).•jointNERF [11]: A fully-connected neural network model that uses positional embedding vec-tors of joint configurations as inputs.•ClearanceNet [12]: A neural network model that takes joint configurations as inputs and utilizestwo fully-connected layers, each followed by a dropout layer.•DiffCo [8]: A kernel perceptron model that takes joint configurations as inputs and outputs thecollision score.Existing collision distance learning methods were trained on one million uniformly sampled datapoints within the joint limits for each target system. For the DiffCo model, we were limited toa dataset size of 50,000 data points, as this was the maximum feasible size for kernel perceptrontraining on our hardware (AMD Ryzen Threadripper 3960X, 256GB RAM, and NVIDIA GeForceRTX4090 with 24GB VRAM).Performance Evaluation We evaluate the performance of collision distance learning using fourmetrics: MSE, AUROC, Accuracy, and Safe-FPR. These metrics target both the collision distanceregression and collision classification, with a robot configuration being classified as a collision if thecollision distance is below the threshold ε(for DiffCo, if the collision score is above the threshold).MSE represents the mean squared error between the ground truth and estimated global collisiondistance. AUROC is the area under the receiver operating characteristic curve for the collisionclassification. Accuracy is the classification accuracy of collisions with the threshold ε= 0. Lastly,in order for the trained collision distance estimation model to be used in actual path planning tasks,a sufficiently conservative threshold must be used to ensure that collisions cannot occur. However,the more conservative the threshold used, the more false alarms will occur where non-collision robotconfigurations are incorrectly classified as collisions. Safe-FPR is used to evaluate performance inthese situations, representing the false alarm rate when using the least yet sufficient conservativethreshold that can classify all collision configurations in the test dataset as collisions.Table 1 presents the evaluation results of PairwiseNet and other existing methods. PairwiseNetoutperforms the existing methods in all performance metrics in all three multi-arm robot systemscompared to the existing methods, with the top-performing metrics highlighted in bold. 
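For concreteness, the Safe-FPR metric defined above can be computed directly from test-set predictions. The sketch below uses illustrative array names, and the boundary convention (classifying a configuration as a collision when the estimate falls at or below the threshold) is an assumption the text does not spell out.
```python
import numpy as np

def safe_fpr(d_hat, in_collision):
    """Safe-FPR from test-set predictions (a sketch; array names are illustrative).

    d_hat        : (N,) estimated collision distances on the test set
    in_collision : (N,) boolean ground-truth collision labels
    A configuration is classified as a collision when d_hat <= eps.
    """
    # Least conservative threshold that still flags every true collision:
    eps_safe = d_hat[in_collision].max()
    # False alarms: collision-free configurations that fall at or below the threshold.
    free = ~in_collision
    return float(np.mean(d_hat[free] <= eps_safe)), float(eps_safe)
```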
Notably, PairwiseNet even shows better performance metrics than Capsule, a computationally expensive BV-based collision distance estimation method.
Figure 5 shows the collision distance estimation results for the four-arm robot system during a collision-free trajectory. The robot configurations at the start and end points of the path are such that they involve all four robot arms intricately intertwined, so the robot arms move through the near-collision region. The ground-truth collision distance and the estimated collision distances by PairwiseNet and other baselines during the trajectory are represented in the bottom-right plot. The performance of PairwiseNet can be confirmed, as it is the only method that accurately estimates the complex ground-truth collision distance of the four-arm robot system.
Table 1: Collision Distance Estimation Performances
Env.          Methods            MSE      AUROC   Accuracy (ε = 0)  safe-FPR
Fig. 3 (a)    Capsule            5.47e-4  0.9995  0.9776            0.0247
(Two arms)    JointNN            3.63e-4  0.9955  0.9794            0.3200
              PosNN              2.71e-4  0.9970  0.9823            0.1476
              jointNERF          2.98e-4  0.9962  0.9808            0.2371
              ClearanceNet       1.11e-3  0.9853  0.9621            0.4570
              DiffCo*            -        0.9824  0.9818            0.3141
              PairwiseNet (our)  0.24e-4  0.9998  0.9941            0.0200
Fig. 3 (b)    Capsule            5.96e-4  0.9993  0.9775            0.0241
(Three arms)  JointNN            9.69e-4  0.9902  0.9721            0.2679
              PosNN              5.41e-4  0.9951  0.9801            0.1336
              jointNERF          8.22e-4  0.9920  0.9747            0.2213
              ClearanceNet       4.63e-3  0.9499  0.9395            0.6067
              DiffCo*            -        0.9603  0.9453            0.5858
              PairwiseNet (our)  0.24e-4  0.9997  0.9944            0.0189
Fig. 3 (c)    Capsule            6.66e-4  0.9986  0.9468            0.0694
(Four arms)   JointNN            1.59e-3  0.9718  0.9183            0.6988
              PosNN              7.18e-4  0.9885  0.9478            0.5371
              jointNERF          1.30e-3  0.9778  0.9280            0.6260
              ClearanceNet       6.67e-3  0.8738  0.8202            0.9965
              DiffCo*            -        0.8811  0.8306            0.9874
              PairwiseNet (our)  0.46e-4  0.9994  0.9858            0.0650
*DiffCo outputs the collision score from -1 (collision-free) to 1 (collision).
[Figure 5 — panels (a)–(i).] Figure 5: Collision distance estimation for the four-arm robot system: Images depict the four-arm robots following a trajectory, and the plot illustrates ground truth and estimated collision distances of PairwiseNet and other baselines during the trajectory.
Generalizability. Existing learning methods have used individual models trained from each system's individual dataset to estimate collision distance in the three target robot systems. In contrast, PairwiseNet estimates collision distance with only one model for all three target robot systems, and its performance is even better than that of existing methods that use individual models for each system. Therefore, PairwiseNet, trained with sufficient datasets, can apply the same model without additional training, even if the robot base positions or number of robot arms change.
4.2 Collision Detection in a Real-world Environment
We perform experiments in real-world environments with a 7-dof Panda robot arm (Figure 6). This workspace is populated with obstacles such as shelves and tables that add complexity to the robot arm's operational landscape. To validate the performance, we provide a human-guided demonstration with the robot arm occasionally sweeping close to, and sometimes colliding with, tables and shelves.
[Figure 6 — panels (a)–(d); legend: Ground Truth, PairwiseNet, ClearanceNet, JointNN.] Figure 6: Collision distance estimation for the 7-dof Panda robot arm environment with obstacles (tables and shelves).
The top images display a human-guided robot arm, while the correspondingplots at the bottom illustrate the ground truth and estimated collision distances from PairwiseNetand other baselines at time t=(a) 22.9s, (b) 39.6s, (c) 53.7s, and (d) 61.6s, respectively.The training procedure of PairwiseNet and other baselines, as well as the dataset and model archi-tectures used, are identical to those employed in the experiments for multi-arm robot systems. Inthe case of PairwiseNet, the dataset consists of element pairs representing tables and shelves, whichare further divided into sub-parts. Specifically, a table is divided into a tabletop and four legs, whilea shelf is divided into six plates. The pairwise collision distances between these sub-parts and therobot links are calculated by PyBullet for the training dataset.The collision distance estimation results are presented in Figure 6. Note that Figure 6 displays onlyfour snapshots from the demonstration; the complete video can be accessed in the supplementary1video. Compared to the evaluated baselines, PairwiseNet consistently exhibits the highest accuracyin estimating the actual collision distance. It reliably detects collisions between the robot arm andobstacles in the majority of cases. In contrast, the other baselines either fail to trigger collisionalarms when a collision occurs (Figure 6 (b)) or produce false collision alarms when no collisionis present (Figure 6 (c), (d)). PairwiseNet stands out by consistently and accurately identifyingcollisions between the robot and obstacles.5 ConclusionsIn this paper, we present PairwiseNet, a novel collision distance estimation method that estimatesthe minimum distance between a pair of elements instead of directly predicting the global collisiondistance of the robot system. By simplifying the problem into smaller sub-problems, our approachachieves significant performance improvements for high-dof robot systems compared to methodsthat directly predict the global collision distance. Additionally, PairwiseNet is capable of handlingenvironmental changes such as robot base repositioning without requiring additional training or fine-tuning. We evaluate and compare the collision distance estimation performance of PairwiseNet forboth high-dof multi-arm robot systems and single-arm systems in the presence of obstacles, andvalidate its accurate collision distance estimation and generalization to environmental changes.Limitations and Future works The generalizability of PairwiseNet is currently limited to sys-tems that exclusively consist of known shape elements. While PairwiseNet demonstrates robustperformance in estimating collision distances within this scope, its applicability to environmentswith unknown (untrained) shape elements has not been fully investigated. Future work is aimed atenhancing the generalizability of PairwiseNet to accommodate systems that include previously un-seen shape elements. 
This could involve, e.g., expanding the training dataset to incorporate a widerrange of environmental variations, and incorporating techniques to handle unknown or novel shapeelements [23].1https://youtu.be/N5Q8ZXbB6Uc8AcknowledgmentsThis work was supported in part by IITP-MSIT grant 2021-0-02068 (SNU AI Innovation Hub), IITP-MSIT grant 2022-0-00480 (Training and Inference Methods for Goal-Oriented AI Agents), KIATgrant P0020536 (HRD Program for Industrial Innovation), ATC+ MOTIE Technology InnovationProgram grant 20008547, SRRC NRF grant RS-2023-00208052, SNU-AIIS, SNU-IAMD, SNUBK21+ Program in Mechanical Engineering, and SNU Institute for Engineering Research.References[1] S. M. LaValle et al. Rapidly-exploring random trees: A new tool for path planning. 1998.[2] J. J. Kuffner and S. M. LaValle. Rrt-connect: An efficient approach to single-query pathplanning. In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conferenceon Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065) , volume 2, pages995–1001. IEEE, 2000.[3] J. Huh and D. D. Lee. Learning high-dimensional mixture models for fast collision detectionin rapidly-exploring random trees. In 2016 IEEE International Conference on Robotics andAutomation (ICRA) , pages 63–69. IEEE, 2016.[4] O. Stasse, A. Escande, N. Mansard, S. Miossec, P. Evrard, and A. Kheddar. Real-time (self)-collision avoidance task on a hrp-2 humanoid robot. In 2008 ieee international conference onrobotics and automation , pages 3200–3205. IEEE, 2008.[5] A. Dietrich, T. Wimbock, A. Albu-Schaffer, and G. Hirzinger. Integration of reactive, torque-based self-collision avoidance into a task hierarchy. IEEE Transactions on Robotics , 28(6):1278–1293, 2012.[6] C. Fang, A. Rocchi, E. M. Hoffman, N. G. Tsagarakis, and D. G. Caldwell. Efficient self-collision avoidance based on focus of interest for humanoid robots. In 2015 IEEE-RAS 15thInternational Conference on Humanoid Robots (Humanoids) , pages 1060–1066. IEEE, 2015.[7] J. J. Quiroz-Oma ̃na and B. V . Adorno. Whole-body control with (self) collision avoidanceusing vector field inequalities. IEEE Robotics and Automation Letters , 4(4):4048–4053, 2019.[8] Y . Zhi, N. Das, and M. Yip. Diffco: Autodifferentiable proxy collision detection with multi-class labels for safety-aware trajectory optimization. IEEE Transactions on Robotics , 2022.[9] N. B. Figueroa Fernandez, S. S. Mirrazavi Salehian, and A. Billard. Multi-arm self-collisionavoidance: A sparse solution for a big data problem. In Proceedings of the Third MachineLearning in Planning and Control of Robot Motion (MLPC) Workshop. , number CONF, 2018.[10] D. Rakita, B. Mutlu, and M. Gleicher. Relaxedik: Real-time synthesis of accurate and feasiblerobot arm motion. In Robotics: Science and Systems , pages 26–30. Pittsburgh, PA, 2018.[11] M. Bhardwaj, B. Sundaralingam, A. Mousavian, N. D. Ratliff, D. Fox, F. Ramos, and B. Boots.Storm: An integrated framework for fast joint-space model-predictive control for reactive ma-nipulation. In Conference on Robot Learning , pages 750–759. PMLR, 2022.[12] J. Chase Kew, B. Ichter, M. Bandari, T.-W. E. Lee, and A. Faust. Neural collision clearanceestimator for batched motion planning. In International Workshop on the Algorithmic Founda-tions of Robotics , pages 73–89. Springer, 2020.[13] M. Koptev, N. Figueroa, and A. Billard. Real-time self-collision avoidance in joint space forhumanoid robots. IEEE Robotics and Automation Letters , 6(2):1240–1247, 2021.[14] Y . Kim, J. Kim, and D. Park. 
Graphdistnet: A graph-based collision-distance estimator forgradient-based trajectory optimization. IEEE Robotics and Automation Letters , 7(4):11118–11125, 2022.9[15] J. Mu ̃noz, P. Lehner, L. E. Moreno, A. Albu-Sch ̈affer, and M. A. Roa. Collisiongp: Gaussianprocess-based collision checking for robot motion planning. IEEE Robotics and AutomationLetters , 2023.[16] Y . Han, W. Zhao, J. Pan, and Y .-J. Liu. Configuration space decomposition for learning-based collision checking in high-dof robots. In 2020 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 5678–5684. IEEE, 2020.[17] Q. Tan, Z. Pan, and D. Manocha. Lcollision: Fast generation of collision-free human posesusing learned non-penetration constraints. In Proceedings of the AAAI Conference on ArtificialIntelligence , volume 35, pages 3913–3921, 2021.[18] Y . Wang, Y . Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graphcnn for learning on point clouds. Acm Transactions On Graphics (tog) , 38(5):1–12, 2019.[19] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,N. Gimelshein, L. Antiga, et al. Pytorch: An imperative style, high-performance deep learninglibrary. Advances in neural information processing systems , 32, 2019.[20] E. Coumans and Y . Bai. Pybullet, a python module for physics simulation for games, roboticsand machine learning. http://pybullet.org , 2016–2021.[21] G. Van Den Bergen. Proximity queries and penetration depth computation on 3d game objects.InGame developers conference , volume 170, 2001.[22] A. El Khoury, F. Lamiraux, and M. Taix. Optimal motion planning for humanoid robots. In2013 IEEE international conference on robotics and automation , pages 3136–3141. IEEE,2013.[23] S. Kim, T. Ahn, Y . Lee, J. Kim, M. Y . Wang, and F. C. Park. Dsqnet: A deformable model-based supervised learning algorithm for grasping unknown occluded objects. IEEE Transac-tions on Automation Science and Engineering , 2022.10AppendixA Further Experimental DetailsA.1 Strategy for Decomposing System ElementsPairwiseNet achieves robust collision distance estimation performance by dividing the robot sys-tem into multiple elements and calculating the pairwise collision distance between these elements.Therefore, to apply PairwiseNet, the robot system must be divided into these elements as a prelimi-nary step. Each element must be a rigid body with an unchanging shape, and in the case of the Pandarobot arm used in our experiments, each link was treated as a separate element. Since PairwiseNetdoes not require each element to be convex, non-convex links of the Panda robot arm can be usedwithout additional decomposition.However, PairwiseNet’s superior performance is attributed to the fact that the pairwise collisiondistance functions between elements are much easier to learn than the global collision distancefunction. Therefore, if an individual element’s shape is complex and highly non-convex, learningthe corresponding pairwise collision distance may become difficult, potentially diminishing Pair-wiseNet’s effectiveness. Hence, a well-considered balance must be found between the complexityof decomposing the system and the performance of PairwiseNet.In our real robot experiments, we decomposed tables into a tabletop and four legs, and shelvesinto six plates. 
This strategy was employed not only to simplify complex obstacles and make the pairwise collision distance more tractable but also to leverage the symmetry of the obstacle structure. For example, the bottom, middle, and top plates of the shelf have the same shape, so dividing them into separate elements allows for more efficient use of training data. Furthermore, the fact that the decomposed elements of the tables and shelves take the form of flat rectangular shapes can be beneficial to the learning process.
A.2 Capsule-based Bounding Volume Method for a Panda Robot Arm
We construct capsule-shaped collision primitives for a Panda robot arm as a baseline collision distance estimation method (see Figure 7). Our approach follows a similar methodology of [22], which formulates an optimization problem as follows:
\min_{a_i, b_i, r_i} \; \|a_i - b_i\| \, \pi r_i^2 + \frac{4}{3} \pi r_i^3  (5)
\text{s.t.} \;\; \mathrm{dist}(p, \overline{a_i b_i}) \le r_i, \;\; \text{for all } p \in \mathcal{M}_i  (6)
Here, i denotes the link index of the Panda robot arm, M_i represents the vertices of the i-th link mesh, and a_i, b_i, and r_i refer to the two endpoints and the radius of a capsule, respectively, and a_i b_i represents the line segment connecting the two endpoints. This formulation results in the creation of minimal volume capsules that encapsulate all vertices of the link meshes. The collision distance of the multi-arm robot systems can be estimated through the minimum distance calculation between capsules.
A.3 The Collision-free Guaranteed Threshold
The collision-free guaranteed threshold ε_safe refers to a predefined distance value that is established in collision distance estimation methods. This threshold is set to ensure that during testing or actual operation, the estimated collision distance remains above this threshold for all valid configurations or movements of the robot system. In other words, if the estimated collision distance between the robot and any obstacles remains above the collision-free guaranteed threshold (d̂_col(q) > ε_safe), it is considered safe and collision-free. In our experiments, we set the collision-free guaranteed threshold to the least conservative value that allows us to classify all the collision configurations in the test dataset as collisions. These thresholds are then utilized for measuring the Safe-FPR.
Figure 7: Illustrations of the Panda robot arm with capsule-shape collision primitives.
Table 2: The Collision-free Guaranteed Thresholds
Methods        Two arms   Three arms   Four arms
Capsule        0.0        0.0          0.0
JointNN        0.2111     0.2015       0.2231
PosNN          0.1141     0.1189       0.1756
jointNERF      0.1661     0.1734       0.2001
ClearanceNet   0.2840     0.3713       0.4944
DiffCo         -1.2789    -1.4672      -0.9535
PairwiseNet    0.0150     0.0152       0.0184
A.4 Hyperparameters
Table 3 shows hyperparameters employed in our experiments.
Table 3: Hyperparameters
Hyperparameter                                             Value
batch size, learning rate, epoch for PairwiseNet           1000, 1e-3, 2000
batch size, learning rate, epoch for ClearanceNet          191, 1.75e-4, 400
batch size, learning rate, epoch for other NN baselines    10000, 1e-3, 10000
k for k-nearest neighbor of EdgeConv layers                5
# of points in the point cloud data of a shape element     100
hidden nodes of EdgeConv layers                            64
B Additional Experimental Results
B.1 Inference Time Comparison between PairwiseNet and a Standard Collision Distance Estimation Method
PairwiseNet incorporates an efficient inference strategy – using only the regressor network during the inference process, and computing multiple element pairs as a single batch. Thus, despite calculating collision distances pairwise like traditional non-data-driven methods, it demonstrates an inference speed as fast as other existing data-driven approaches.
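Equations (5)–(6) above amount to a small constrained optimization per link. A sketch of one way to solve it with SciPy's SLSQP solver follows; the solver choice, the initial guess, and the vertex array are illustrative assumptions rather than the procedure used in the paper.
```python
import numpy as np
from scipy.optimize import minimize

def point_segment_dist(p, a, b):
    """Distance from points p (N, 3) to the segment connecting a and b."""
    ab = b - a
    t = np.clip((p - a) @ ab / (ab @ ab + 1e-12), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(p - closest, axis=1)

def fit_capsule(vertices, x0):
    """Fit a minimal-volume capsule (a, b, r) enclosing all mesh vertices (Eqs. 5-6).

    vertices : (N, 3) vertices of one link mesh
    x0       : initial guess [a (3), b (3), r (1)], e.g. derived from the bounding box
    """
    def volume(x):                 # cylinder part + spherical end caps, Eq. (5)
        a, b, r = x[:3], x[3:6], x[6]
        return np.linalg.norm(a - b) * np.pi * r**2 + (4.0 / 3.0) * np.pi * r**3

    def containment(x):            # must be >= 0: r - dist(p, segment) for every vertex, Eq. (6)
        a, b, r = x[:3], x[3:6], x[6]
        return r - point_segment_dist(vertices, a, b)

    res = minimize(volume, x0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": containment}],
                   bounds=[(None, None)] * 6 + [(1e-4, None)])
    return res.x[:3], res.x[3:6], res.x[6]
```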
We have compared the inferencespeed of PairwiseNet with that of standard non-data-driven methods and displayed the results inTable 4. The standard collision distance calculation algorithms used for comparison are from theFlexible Collision Library (FCL) and its extended implementation (HPP-FCL).The test environments were the same three multi-arm robot systems used to evaluate the collisiondistance estimation performance. The Panda robot arm provides a simplified convex mesh, rep-resenting the original link mesh with only 1/60 of the number of vertices, allowing for efficientcollision distance estimation. In our experiments, we measured the inference time using both theoriginal complex mesh and the simplified mesh. The inference time was measured as the time takento estimate the collision distance for a total of 1,000 joint configurations.12Table 4: Inference Time ComparisonMethodsInference time for 1000 joint poses (s)Two arms(64 pairs)Three arms(192 pairs)Four arms(384 pairs)FCL w/ original mesh 13.87 43.97 92.63FCL w/ simplified mesh 2.479 9.079 17.25HPP-FCL w/ simplified mesh 0.3920 0.8560 1.9490CPU GPU CPU GPU CPU GPUClearanceNet 0.1450 0.0919 0.1484 0.0818 0.1493 0.0825ClearanceNet (Batch) 0.0299 0.0001 0.0250 0.0001 0.0222 0.0001PairwiseNet 0.1054 0.0988 0.1590 0.0998 0.2025 0.1045PairwiseNet (Batch) 0.0362 0.0207 0.1070 0.0224 0.2121 0.0253Additionally, PairwiseNet is capable of receiving multiple joint configurations as a single batch in-put; it can simultaneously compute the collision distances for all the joint configurations. This is anefficient feature for tasks that require simultaneous collision checks for multiple joint configurations,such as sampling-based path planning methods like RRT or control techniques such as Model Pre-dictive Control (MPC). In our experiments, we also measured the time taken to calculate the collisiondistances for 1,000 joint configurations at once through PairwiseNet, and this is represented underthe PairwiseNet (Batch) entry in Table 4. PairwiseNet was implemented using PyTorch [19]. FCLand HPP-FCL were implemented using the python-fcl andhpp-fcl libraries, respectively.Thus, all the time measurements are conducted within Python code. The experiments were carriedout in an environment equipped with an AMD Ryzen 9 7950X (16 cores, 32 threads), NVIDIA RTX4090, and 125GB of RAM.PairwiseNet demonstrates an inference speed that is at least 20 times and up to more than 150 timesfaster than FCL. As the number of element pairs requiring collision distance calculation increases,the gap between the two widens. This is because in the case of FCL, the time taken for inferenceincreases as the number of element pairs grows, whereas, with PairwiseNet, there is little changein inference time even as the number of pairs increases. HPP-FCL is shown to be 6 ∼9 times fasterthan the original FCL. However, PairwiseNet still demonstrates an inference speed that is 4 ∼18times faster than HPP-FCL. Moreover, the inference time of HPP-FCL increases as the number ofpairs increases, which is not the case with PairwiseNet. Even when performing inference througha CPU, the inference time does not increase significantly. 
For batch calculation, where 1,000 jointconfigurations are computed at once, PairwiseNet can estimate 1,000 collision distances in as littleas 25 milliseconds.B.2 Training Complexity of PairwiseNetWe utilized 3 million data points (3 million shape element pairs with their corresponding distances)to train PairwiseNet for collision distance estimation in multi-arm robot systems and for single-armsystems with obstacles. For a more detailed description of the complexity of PairwiseNet’s trainingprocess, additional information is presented in Table 5.•Unique element pairs refers to the count of element pairs in the system, excluding those withduplicate shapes (for example, the pair of the robot arm’s seventh link with the top shelf plate andthe pair with the middle shelf plate are considered the same since the shapes of the top and middleplates are identical). The more unique element pairs, the greater the number of pairwise collisiondistances PairwiseNet must learn, resulting in higher training complexity.•Gradient step refers to the number of times the learnable parameters were updated to minimizethe loss during the training process.•Training time elapsed refers to the total time taken to complete the training.• The table also includes Validation loss to represent each training result.13The training time was measured on an environment with AMD Ryzen 9 7950X (16 cores, 32threads), NVIDIA RTX 4090, and 125GB of RAM environment.Table 5: Training Complexity of PairwiseNetTraining Env.Uniqueelement pairsTrainingdata pointsGradientstepsTrainingtime elapsed (h)Validationloss (MSE)Multi-arm 36 1,000,000 2,860,000 24.8 1.46e-5Multi-arm 36 3,000,000 4,286,000 35.6 1.43e-5Single arm w/ obstacles 70 3,000,000 4,286,000 36.1 8.03e-6Two arms w/ obstacles 64 1,000,000 2,860,000 23.3 9.82e-6We conducted additional experiments to analyze the training complexity of PairwiseNet. Initially,for the existing multi-arm robot systems, while we began with 3 million data points for trainingPairwiseNet, we also conducted experiments using fewer data points (1 million) and fewer gradientsteps. While the training time decreased, the learning results were comparable to those with theoriginal 3 million data points.Next, we examined the previously mentioned single-arm system with obstacles. Although there weremany unique element pairs (70), since all obstacle elements were rectangular, they were relativelyeasier to learn. Additionally, we trained PairwiseNet with a two-arm robot system, adding non-rectangular household objects as obstacles (as shown in Figure 8). Despite using merely a total of 1million data points and fewer gradient steps in this scenario compared to the original PairwiseNet,the validation loss was successfully reduced to 9.82e-6, confirming successful learning.CanMugBowl 1Bowl 21mFigure 8: A two-arm robot system with four household objectsB.3 Evaluating PairwiseNet in Multi-arm Robot Systems with Various Base PositionsWe extended our validation of PairwiseNet to include robot arm systems arranged asymmetricallyand in an irregular manner, in addition to the three multi-arm robot systems used in our experiments.These additional systems are illustrated in Figure 9. 
In each systems, the robot arms are arrangedin a more irregular and complex manner than in the previously used systems, resulting in a greatervariety of relative positional relationships between the arms.We used the trained PairwiseNet model that was originally used for our multi-arm robot systems.Although these new robot systems have complex arrangements of robot arms, they still consist ofPanda robot arms, which have been sufficiently trained. Therefore, PairwiseNet can be applieddirectly to these systems without the need for additional training.We have presented the results of PairwiseNet’s collision distance estimation performance in Table 6.Upon examining these results, we observed that PairwiseNet performed well across all robot sys-tems and metrics, regardless of the complexity of the arrangement of robot arms. The consistencyin performance across these varying configurations demonstrates the robustness of PairwiseNet, af-14d,θ=0.5m,45°(a)d(b)d1(c) (d)θθd2d1θdd2d=0.5md1,d2,θ=0.5m,0.7m,30°d1,d2,θ=0.5m,0.3m,45°Figure 9: Illustrations of the top views of various base positions within multi-arm robot systems.firming that there is no significant difference in the quality of collision distance estimation betweenthese different scenarios.Table 6: Collision distance estimation performances of PairwiseNet for various multi-arm systemsEnv. MSE AUROCAccuracy(ε= 0)safe-FPRFig. 9 (a) (Three arms) 0.24e-4 0.9997 0.9943 0.0341Fig. 9 (b) (Three arms) 0.17e-4 0.9999 0.9987 0.0048Fig. 9 (c) (Four arms) 0.43e-4 0.9991 0.9859 0.0611Fig. 9 (d) (Four arms) 0.27e-4 0.9995 0.9931 0.0292B.4 Comparing the Scalability to High-dof Systems of PairwiseNet with GraphDistNetWithout directly comparing the collision distance estimation performance, we can highlight thenovelty of PairwiseNet relative to GraphDistNet [14]. A key difference lies in scalability to high-dofsystems. GraphDistNet’s performance was only verified in systems with a maximum of 7-dof, andthat was within the synthetic planar robot systems where simple graph representations were possible.Additionally, experiments with real robots were only conducted with 3-dof. Since GraphDistNetemploys a complex GNN structure that takes the graph itself as the input, the model’s inferencespeed is inevitably sensitive to the complexity of the graph. In examining the experimental results,we observed that the inference speed slows down as the graph becomes more complex, being upto 120 times slower than ClearanceNet [12]. In contrast, PairwiseNet maintains inference speedscomparable to ClearanceNet, even in complex systems with 28 DOF across 4 robot arms.15 |
_xFJuqBId8c | Shelving, Stacking, Hanging: Relational PoseDiffusion for Multi-modal RearrangementAnthony Simeonov1;3, Ankit Goyal2;, Lucas Manuelli2;, Lin Yen-Chen1,Alina Sarmiento1;3,Alberto Rodriguez1,Pulkit Agrawal1;3;y,Dieter Fox2;y1Massachusetts Institute of Technology2NVIDIA3Improbable AI LabAbstract: We propose a system for rearranging objects in a scene to achieve adesired object-scene placing relationship, such as a book inserted in an open slot ofa bookshelf. The pipeline generalizes to novel geometries, poses, and layouts ofboth scenes and objects, and is trained from demonstrations to operate directly on3D point clouds. Our system overcomes challenges associated with the existenceof many geometrically-similar rearrangement solutions for a given scene. Byleveraging an iterative pose de-noising training procedure, we can fit multi-modaldemonstration data and produce multi-modal outputs while remaining precise andaccurate. We also show the advantages of conditioning on relevant local geometricfeatures while ignoring irrelevant global structure that harms both generalizationand precision. We demonstrate our approach on three distinct rearrangement tasksthat require handling multi-modality and generalization over object shape andpose in both simulation and the real world. Project website, code, and videos:https://anthonysimeonov.github.io/rpdiff-multi-modalKeywords: Object Rearrangement, Multi-modality, Manipulation, Point Clouds1 IntroductionConsider Figure 1, which illustrates (1) placing a book on a partially-filled shelf and (2) hanginga mug on one of the multiple racks on a table. These tasks involve reasoning about geometricinteractions between an object and the scene to achieve a goal, which is a key requirement in manycleanup and de-cluttering tasks of interest to the robotics community [ 1]. In this work, we enablea robotic system to perform one important family of such tasks: 6-DoF rearrangement of rigidobjects [ 2]. Our system uses point clouds obtained from depth cameras, allowing real-world operationwith unknown 3D geometries. The rearrangement behavior is learned from a dataset of examples thatshow the desired object-scene relationship – a scene and (segmented) object point cloud are observedand a demonstrator transforms the object into a final configuration. For example, from a datasetshowing books placed on shelves, our model learns how to transform new books into open shelf slots.Real-world scenes are often composed of objects whose shapes and poses can vary independently.Such composition creates scenes that (i) present combinatorial variation in geometric appearance andlayout (e.g., individual racks may be placed anywhere on a table) and (ii) offer many locations andgeometric features for object-scene interaction (e.g., multiple slots for placing the book and multipleracks for hanging the mug). These features of real-world scenes bring about two key challenges forlearning that go hand-in-hand: multi-modal placements and generalization to diverse scene layouts.•Multi-modality appears in the rearrangement outputs . There may be many scene locations to placean object, and these multiple possibilities create difficulties during both learning and deployment.Namely, a well-known challenge in learning from demonstrations is fitting a dataset containingsimilar inputs that have different associated targets (modes). 
Moreover, during deployment,predicting multiple candidate rearrangements can help the robot choose the ones that also satisfyany additional constraints, such as workspace limits and collision avoidance. Therefore, the systemmust predict multi-modal outputs that span as many different rearrangement solutions as possible.•Generalization must be addressed when processing the inputs to the system. A scene is composedof many elements that vary in both shape and layout. For example, a shelf can be located anywhereWork done in part during NVIDIA internship,Equal contribution,yEqual advising7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.(A)(B)Figure 1: By learning from a set of demonstrations of a rearrangement task, such as place the book in the shelf(A) and hang the mug on the rack (B), Relational Pose Diffusion (RPDiff) can produce multiple transformationsthat achieve the same object-scene relationship for new object/scene pairs.in the environment, and there are many possible arrangements of existing books within a shelf.The point clouds that are presented to the model reflect this diversity. Generalizing to such inputvariability is harder than generalizing to shape and pose variations for a single object, due to thecombinatorially many arrangements and layouts of scenes.Given a dataset of final object-scene point clouds (obtained by transforming the observed objectpoint cloud into its resultant configuration at the end of the demo), we can synthesize many initialobject configurations as perturbations of the final point clouds. Using this data, we can naturallycast rearrangement prediction as point cloud pose de-noising . From a final object-scene point cloud,we create a “noised” point cloud by randomly transforming the object and train a neural networkto predict how to transform the noised point cloud back into the original configuration (using theknown perturbation for ground truth supervision, see Fig. 2(b)). During deployment, we similarlypredict a de-noising object transformation that satisfies the learned relation with the scene and use thispredicted transformation as the rearrangement action. The robot executes the predicted rearrangementusing a combination of grasp sampling, inverse kinematics, and motion planning.Unfortunately, learning to de-noise from large perturbations (e.g., the “fully-noised” red point cloudin Fig. 2(b)) in one step can be ineffective when considering multi-modality [ 3] – creating similar-looking noised point clouds with prediction targets that differ can lead the model to learn an averagesolution that fits the data poorly. We overcome this difficulty by training the predictor as a diffusionmodel [ 4,5] to perform iterative de-noising. By creating a multi-step noising process, diffusionmodels are trained to incrementally reverse the process one step at a time. Intuitively, early stepsin this reverse process are closer to the ground truth and the associated prediction targets are morelikely to be unique across samples – the prediction “looks more unimodal” to the model. The modelsimilarly generates the test-time output in an iterative fashion. By starting this inference procedurefrom a diverse set of initial guesses, the predictions can converge to a diverse set of final solutions.While iterative de-noising helps with multi-modality, we must consider how to support generalizationto novel scene layouts. To achieve this, we propose to locally encode the scene point cloud by croppinga region near the object (e.g., see Fig. 2(c)). 
Locally cropping the input helps the model generalizeby focusing on details in a local neighborhood and ignoring irrelevant and distant distractors. Thefeatures for representing smaller-scale patches can also be re-used across different spatial regionsand scene instances [ 6–9]. We use a larger crop size on the initial iterations because the inferenceprocedure starts from random guesses that may be far from a good solution. As the solution convergesover multiple iterations, we gradually reduce the crop size to emphasize a more local scene context.In summary, we present Relational Pose Diffusion (RPDiff), a method that performs 6-DoF relationalrearrangement conditioned on an object and scene point cloud, that (1) generalizes across shapes,poses, and scene layouts, and (2) gracefully handles scenarios with multi-modality. We evaluate ourapproach in simulation and the real world on three tasks, (i) comparing to existing methods that eitherstruggle with multi-modality and complex scenes or fail to achieve precise rearrangement, and (ii)ablating the various components of our overall pipeline.2Network predicts SE(3) transform from object and cropped scene(C)Training: Iterative pose de-noising for object-scene point cloud(B)Ground truthDe-noising(predictions)...Noising(data gen)Evaluation: Starting from diverse initial poses, pose diffusion outputs a diverse set of rearrangement solutions(A)Output:Rearrangement transformsInput: Diverse initial posesFigure 2: Method Overview. (A) Starting from an object and scene point cloud POandPS, we transformPOto a diverse set of initial poses. RPDiff takes the initial object-scene point clouds as input, iteratively updatesthe object pose, and outputs a setof object configurations that satisfy a desired relationship with the scene. Thisenables integrating RPDiff with a planner to search for a placement to execute while satisfying additional systemconstraints. (B) The model is trained to perform iterative pose de-noising . Starting from object-scene pointclouds that satisfy the desired task, we apply a sequence of perturbations to the object and train the model topredict SE(3)transforms that remove the noise one step at a time. (C) To facilitate generalization to novel scenelayouts, we crop the scene point cloud to the region near the object point cloud.2 Problem SetupOur goal is to predict a set of SE(3)transformationsfTkgKk=1that accomplish an object rearrange-ment task given the scene Sand the object O, represented as 3D point clouds ( PS2RM3andPO2RN3, respectively). By selecting (i.e., via a learned scoring function) and applying onetransformation from this set, we can place the object in a manner that fulfills the desired geometricrelationship with the scene. We assume the object point cloud is segmented from the whole scene,which does not have any additional segmented objects (e.g., we cannot segment any individualbooks on the shelf). We also assume a training dataset D=f(PO;PS)gLl=1where each data pointrepresents an object placed at the desired configuration. For example, Dcould include point clouds ofbooks and bookshelves (with different shapes, poses, and configurations of books on the shelf), andSE(3)transformations that place the books in one of the available slots. 
These demonstrations could come from a human or a scripted algorithm with access to ground truth object states in simulation. Critically, depending on constraints imposed by other system components (e.g., available grasps, robot reachability, collision obstacles), the system must be capable of producing multi-modal output transformations. Predicting diverse outputs enables searching for a placement that can be feasibly executed. For execution on a robot, the robot has access to a grasp sampler [10], inverse kinematics (IK) solver, and motion planner to support generating and following a pick-and-place trajectory.
3 Method
The main idea is to iteratively de-noise the 6-DoF pose of the object until it satisfies the desired geometric relationship with the scene point cloud. An overview of our framework is given in Fig. 2.
3.1 Object-Scene Point Cloud Diffusion via Iterative Pose De-noising
We represent a rearrangement action T as the output of a multi-step de-noising process for a combined object-scene point cloud, indexed by discrete time variable t = 0, ..., T. This process reflects a transformation of the object point cloud in its initial noisy configuration P_O(T) to a final configuration P_O(0) that satisfies a desired relationship with the scene point cloud P_S, i.e., P_O(0) = T P_O(T). To achieve this, we train a neural network f: R^{N×3} × R^{M×3} → SE(3) to predict an SE(3) transformation from the combined object-scene point cloud at each step. The network is trained as a diffusion model [4, 5] to incrementally reverse a manually constructed noising process that gradually perturbs the object point clouds until they match a distribution P_O(T) ~ p_O^{(T)}(· | P_S), which we can efficiently sample from during deployment to begin de-noising at test time.
Test-time Evaluation. Starting with P_O and P_S, we sample K initial transforms {T̂_k^{(I)}}_{k=1}^{K}* and apply these to P_O to create initial object point clouds {P̂_{O,k}^{(I)}}_{k=1}^{K} where P̂_{O,k}^{(I)} = T̂_k^{(I)} P_O. For each of the K initial transforms, we then perform the following update for I steps.† At each iteration i:
T^{(i)} = T_{\mathrm{Rand}} \, f\big(\hat{P}_O^{(i)}, P_S, \mathrm{posemb}(t)\big)\big|_{t = \mathrm{i\_to\_t}(i)}  (1)
\hat{T}^{(i-1)} = T^{(i)} \hat{T}^{(i)}, \qquad \hat{P}_O^{(i-1)} = T^{(i)} \hat{P}_O^{(i)}  (2)
The update T^{(i)} is formed by multiplying the denoising transform predicted by our model f with a perturbation transform T_Rand that is sampled from an iteration-conditioned normal distribution which converges toward deterministically producing an identity transform as i tends toward 0. In the de-noising process, T_Rand helps each of the K samples converge to different multi-modal pose basins (analogously to the perturbation term in Stochastic Langevin Dynamics [11]). The function posemb represents a sinusoidal position embedding. Since f is only trained on a finite set of t values (i.e., t = 1, ..., 5) but we might want to perform the update in Eq. 2 for a larger number of steps, we use the function i_to_t to map the iteration i to a timestep value t that the model has been trained on. Details on external noise scheduling and mapping i_to_t can be found in Appendix A3.
Generally, we search through K solutions {T̂_k^{(0)}}_{k=1}^{K} for one that can be executed while satisfying all other constraints (e.g., collision-free trajectory). However, we also want a way to select a single output to execute assuming there are no other constraints to satisfy. We may also want to reject "locally optimal" solutions that fail to complete the desired task. To achieve this, we use a separate classifier h to score the predicted poses (i.e., s_k = h(P_{O,k}(0), P_S) where s ∈ [0, 1]), such that the sample indexed with k_exec = argmax {s_k}_{k=1}^{K} can be selected for execution‡.
Training.
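The test-time update in Eqs. (1)–(2) can be summarized as a simple loop over 4×4 pose matrices. The model call, the perturbation sampler, and the i_to_t mapping below are placeholders for components defined in the paper's appendix, so this is a sketch of the control flow rather than the exact implementation.
```python
import numpy as np

def iterative_pose_denoising(f_model, sample_T_rand, i_to_t, posemb,
                             P_O, P_S, T_init, num_iters):
    """One de-noising chain: refine an initial guess T_init over num_iters steps.

    f_model(P_O_i, P_S, emb) -> 4x4 de-noising transform (placeholder signature)
    sample_T_rand(i)         -> 4x4 random perturbation, shrinking to identity as i -> 0
    """
    T_hat = T_init.copy()
    P_O_i = (T_hat[:3, :3] @ P_O.T).T + T_hat[:3, 3]          # object points at the current guess
    for i in range(num_iters, 0, -1):
        emb = posemb(i_to_t(i))                                # timestep embedding, Eq. (1)
        T_step = sample_T_rand(i) @ f_model(P_O_i, P_S, emb)   # T^(i) = T_Rand * f(...)
        T_hat = T_step @ T_hat                                 # Eq. (2): compose with the running pose
        P_O_i = (T_step[:3, :3] @ P_O_i.T).T + T_step[:3, 3]   # move the object point cloud
    return T_hat                                               # candidate rearrangement transform

# Running K such chains from diverse initial poses yields a set of multi-modal solutions,
# which can then be ranked with the success classifier h.
```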
Given a dataset sample (P_O, P_S), we start with the final "placed" object point cloud P_O(0) = P_O and a randomly sampled timestep t ∈ [1, T]. We then obtain a perturbation transform T_noise^{(t)} from a timestep-conditioned distribution with appropriately scaled variance and create a noised point cloud P_O(t) = T_noise^{(t)} P_O. The task is to predict a transformation that takes one de-noising step as T̂^{(t)} = f(P_O^{(t)}, P_S, posemb(t)). Network parameters are trained to minimize a loss between the prediction T̂^{(t)} and a ground truth target T^{(t),GT}. We use the Chamfer distance between the point cloud obtained by applying the predicted transform and the ground-truth next point cloud as the loss to minimize.
A natural target for f to predict is the inverse of the perturbation, i.e., T^{(t),GT} = T_noise,inv^{(t)} = (T_noise^{(t)})^{-1}, to encourage recovering the original sample. However, as the perturbation magnitude varies across timesteps, this requires output predictions of different scales for different timesteps. In supervised learning with neural networks, it is advisable to keep the magnitudes of both input and output signals consistent in order to minimize large fluctuations in gradient magnitudes between samples [12]. For this reason, an alternative approach is to encourage the network to take shorter "unit steps" in the direction of the original sample. We achieve this by uniformly interpolating the full inverse perturbation as {T_interp^{(s)}}_{s=1}^{t} = interp(T_noise,inv^{(t)}, t) and training the network to predict one interval in this interpolated set, i.e., T^{(t),GT} = [T_interp^{(t-1)}]^{-1} T_interp^{(t)} (details in Appendix A2 and A7).
For the success classifier, we generate positive and negative rearrangement examples, where positives use the final demonstration point cloud, P_O(0), and negatives are obtained by sampling diverse perturbations of P_O(0). The classifier weights (separate from the weights of f) are trained to minimize a binary cross-entropy loss between the predicted likelihood and the ground truth success labels.
* Initial rotations are drawn from a uniform grid over SO(3), and we uniformly sample translations that position the object within the bounding box of the scene point cloud.
† We denote application of an SE(3) transform T = (R, t) to a 3D point x as Tx = Rx + t.
‡ See Appendix A7 for results showing that scoring with h performs better than, e.g., uniform output sampling.
3.2 Architecture
We use a Transformer [13] for processing point clouds and making pose predictions. A Transformer is a natural architecture for both (i) identifying important geometric parts within the object and the scene and (ii) capturing relationships that occur between the important parts of the object and the scene. Starting with P_O and P_S, we tokenize the point clouds to obtain input features. This can be performed by passing through a point cloud encoder [14, 15], but we simply downsample the point clouds and use the downsampled 3D point features as input. We then pass these input tokens through a Transformer encoder and decoder, which performs self-attention on the scene point cloud, and cross-attention between the scene and the object. This produces output features for each point, which are mean-pooled to obtain a global feature vector. The global feature is passed to a set of MLPs which predict the rotation R ∈ SO(3) and a translation t ∈ R^3. As in [10, 16], we represent the rotation by predicting vectors a ∈ R^3 and b ∈ R^3, finding the component of b that is orthogonal to a, and normalizing to obtain â and b̂. We then take a cross product to obtain ĉ = â × b̂, and construct R as [â b̂ ĉ].
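Referring back to the interpolated "unit step" training target described above (rather than the architecture details that continue below), a sketch of how such a target could be constructed from the full inverse perturbation follows. The SE(3) interpolation scheme used here (slerp for rotation, linear for translation) is an assumption; the paper's exact parameterization is given in its appendices.
```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interp_unit_step_target(T_noise_inv, t):
    """Build the one-step training target T^(t),GT from the full inverse perturbation.

    The full correction T_noise_inv (4x4) is split into t equal SE(3) "unit steps",
    and the target is the step between fractions (t-1)/t and t/t, i.e.
    [T_interp^(t-1)]^{-1} T_interp^(t).
    """
    p_full = T_noise_inv[:3, 3]
    key_rots = Rotation.from_matrix(np.stack([np.eye(3), T_noise_inv[:3, :3]]))
    slerp = Slerp([0.0, 1.0], key_rots)

    def T_at(frac):                       # partial correction: slerped rotation, scaled translation
        T = np.eye(4)
        T[:3, :3] = slerp([frac]).as_matrix()[0]
        T[:3, 3] = frac * p_full
        return T

    T_prev, T_curr = T_at((t - 1) / t), T_at(t / t)
    return np.linalg.inv(T_prev) @ T_curr  # the "unit step" supervision target
```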
We incorporate the iteration t by passing posemb(t) as a global token in the decoder and adding it to the global output feature. To predict the success likelihood, we process point clouds with the same Transformer but output a single scalar followed by a sigmoid.

3.3 Local Conditioning

The approach described above conditions the transform regression on both the object and the scene. However, distant global information can act as a distraction and hamper both precision and generalization. Prior work has also observed this and suggested hard attention mechanisms on the input observation, like cropping task-relevant regions, to improve generalization by ignoring irrelevant distractors [8, 9]. Building on this intuition, we modify the scene point cloud by cropping P_S to only include points that are near the current object point cloud P_O^(i). Our modified pose prediction thus becomes T̂^(i) = f(P̂_O^(i), P_S^(i), posemb(i_to_t(i))), where P_S^(i) = crop(P̂_O^(i), P_S). The function crop returns the points in P_S that are within an axis-aligned box centered at the mean of P̂_O^(i). We try one variant of the crop function that returns a fixed-size crop, and another that adjusts the crop size depending on the iteration variable i (the size starts large and gradually decreases for later iterations).

4 Experiments: Design and Setup

Our quantitative experiments in simulation are designed to answer the following questions:
1. How well does RPDiff achieve the desired tasks compared to other methods for rearrangement?
2. How successful is RPDiff in producing a diverse set of transformations compared to baselines?
3. How does our performance change with different components modified or removed?
We also demonstrate RPDiff within a pick-and-place pipeline in the real world to further highlight the benefits of multi-modal generation and our ability to transfer from simulation to the real world.

4.1 Task Descriptions and Training Data Generation

We evaluate our method on three tasks that emphasize multiple available object placements: (1) placing a book on a partially filled bookshelf, (2) stacking a can on a stack of cans or an open shelf region, and (3) hanging a mug on one of many racks with many hooks. As a sanity check for our baseline implementations, we also include two easier versions of the "mug on rack" task that are "less multi-modal". These consist of (i) hanging a mug on one rack with a single hook and (ii) hanging a mug on one rack with two hooks. We programmatically generate 1k-3k demonstrations of each task in simulation with a diverse set of procedurally generated shapes (details in Appendix A2). We use each respective dataset to train both RPDiff and each baseline (one model for each task). For our real-world experiments, we directly transfer and deploy the models trained on simulated data.

4.2 Evaluation Environment Setup

Simulation. We conduct quantitative experiments in the PyBullet [17] simulation engine. The predicted transform is applied to the object by simulating an insertion controller which directly actuates the object's center of mass (i.e., there is no robot in the simulator). The insertion is executed from a "pre-placement" pose that is offset from the predicted placement. This offset is obtained using prior knowledge about the task and the objects and is not predicted (see Appendix A6 for details). To quantify performance, we report the success rate over 100 trials, using the final simulator state to compute success.
We also quantify coverage by comparing the set of predictions to a ground-truth set of feasible solutions and computing the corresponding precision and recall. Details on the insertion controller, the computation of T_pre-place, and the task success criteria can be found in the Appendix.

Method             Mug/EasyRack   Mug/MedRack   Book/Shelf   Mug/Multi-MedRack   Can/Cabinet
C2F Q-attn         0.31           0.31          0.57         0.26                0.51
R-NDF-base         0.75           0.29          0.00         0.00                0.14
NSM-base           0.83           0.17          0.02         0.01                0.08
NSM-base + CVAE    –              0.39          0.17         0.27                0.19
RPDiff (ours)      0.92           0.83          0.94         0.86                0.85

Table 1: Rearrangement success rates in simulation. On tasks with a unimodal solution space and simpler scene geometry, each method performs well (see the Mug/EasyRack task). However, on tasks involving more significant shape variation and multi-modality, RPDiff works better than all other approaches.

Real World. We also apply RPDiff to object rearrangement in the real world using a Franka Panda robotic arm with a Robotiq 2F140 parallel-jaw gripper. We use four calibrated depth cameras to observe the tabletop environment. From the cameras, we obtain point clouds P_O and P_S of the object O and scene S and apply our method to predict a transformation T. T is applied to O by transforming an initial grasp pose T_grasp (obtained using a separate grasp predictor [10]) by T to obtain a placing pose T_place = T T_grasp, and inverse kinematics and motion planning are used to reach T_grasp and T_place.

4.3 Baselines

Coarse-to-Fine Q-attention (C2F-QA). This method adapts the classification-based approach proposed in [8] to relational rearrangement. We train a fully convolutional network to predict a distribution of scores over a voxelized representation of the scene, denoting a heatmap over candidate translations of the object centroid. The model runs in a "coarse-to-fine" fashion by performing this operation multiple times over a smaller volume at higher resolutions. On the last step, we pool the voxel features and predict a distribution over a discrete set of rotations to apply to the object. We use our success classifier to rank the predicted transforms and execute the output with the top score.

Relational Neural Descriptor Fields (R-NDF). R-NDF [18] uses a neural field shape representation trained on category-level 3D models as a feature space wherein local coordinate frames can be matched via nearest-neighbor search. R-NDFs have been used to perform relational rearrangement tasks via the process of encoding and localizing task-relevant coordinate frames near the object parts that must align to achieve the desired rearrangement. We call this method "R-NDF-base" because it does not feature the additional energy-based model for refinement proposed in the original work.

Neural Shape Mating (NSM) + CVAE. Neural Shape Mating (NSM) [3] uses a Transformer to process a pair of point clouds and predict how to align them. Architecturally, NSM is the same as our relative pose regression model, with the key differences of (i) being trained on arbitrarily large perturbations of the demonstration point clouds, (ii) not using local cropping, and (iii) only making a single prediction. We call this baseline "NSM-base" because we do not consider the auxiliary signed-distance prediction and learned discriminator proposed in the original approach [3]. While the method performs well on unimodal tasks, it is not designed to handle multi-modality. Therefore, we modify NSM to act as a conditional variational autoencoder (CVAE) [19] to better enable learning from multi-modal data.
We use NSM+CVAE to predict multiple transforms and execute the output with the top score produced by our success classifier.

5 Results

5.1 Simulation: Success Rate Evaluation

Table 1 shows the success rates achieved by each method on each task and highlights that our method performs best across the board. The primary failure mode for C2F-QA is low precision in the rotation prediction. Qualitatively, the C2F-QA failures are often close to a successful placement but still cause the insertion to fail. In contrast, our refinement procedure outputs very small rotations that can precisely align the object relative to the scene.

Similarly, we find R-NDF performs poorly on more complex scenes with many available placements. We hypothesize this is because R-NDF encodes scene point clouds into a global latent representation. Since the single set of latent variables must capture all possible configurations of the individual scene components, global encodings fail to represent larger-scale scenes with significant geometric variability [6, 7]. For instance, R-NDF can perform well with individual racks that all have a single hook, but fails when presented with multiple racks.

(a) [Plot: precision vs. recall for different numbers of initial poses, K = 4 to 256, comparing RPDiff (ours) and C2F-QA.]

(b) Crop Method   Mug/Rack   Book/Shelf   Can/Cabinet
    None          0.58       0.62         0.42
    Fixed         0.76       0.92         0.75
    Varying       0.86       0.94         0.85

Figure 3: (a) Coverage evaluation in simulation. Both RPDiff and C2F-QA achieve high placement coverage, but the prediction quality of C2F-QA reduces with an increase in coverage, while RPDiff produces outputs that remain precise while achieving high coverage. (b) Cropping ablations. Success rate of RPDiff with different kinds of scene point cloud conditioning. The increased success rate achieved when using local scene cropping highlights the generalization and precision benefits of focusing on a local spatial region.

Figure 4: Real-world multi-modal rearrangement. Executing Can/Cabinet (A), Book/Shelf (B), and Mug/Rack (C) in the real world. For each task, the initial object-scene configuration is shown in the top-left image, and examples of executing multiple inferred placements are shown in the main image sequence.

Finally, while NSM+CVAE improves upon the unimodal version of NSM, we find the generated transforms vary too smoothly between the discrete modes (e.g., book poses that lie in between the available shelf slots), an effect analogous to the typical limitation of VAE-based generators producing blurry outputs in image generation. We hypothesize this over-smoothing is caused by trying to make the approximate posterior match the unimodal Gaussian prior. This contrasts with RPDiff's ability to "snap on" to the available placing locations in a given scene. More discussion on the performance obtained by the baseline methods and how they are implemented can be found in Appendix A6.

5.2 Simulation: Coverage Evaluation

Next, we evaluate the ability to produce multi-modal outputs that cover the space of rearrangement solutions and examine the tradeoff between prediction quality and coverage. Since coverage is affected by the number of parallel runs we perform, we compute average recall and average precision for different values of K (the number of initial poses that are refined). Precision and recall are computed with respect to a set of ground-truth rearrangement solutions for a given object-scene instance.
We consider positive predictions as those that are within a 3.5cm position and 5-degreerotation threshold of a ground truth solution.Fig. 3a shows the results for our approach along with C2F-QA, the best-performing baseline. Weobserve a trend of better coverage (higher recall) with more outputs for both approaches. For a modestvalue ofK= 32 , we observe RPDiff is able to cover over half of the available placement solutionson average, with C2F-QA achieving slightly lower coverage. However, we see a stark differencebetween the methods in terms of precision as the number of outputs is increased. C2F-QA suffersfrom more outputs being far away from any ground truth solution, while our approach maintainsconsistently high generation quality even when outputting upwards of 200 rearrangement poses.5.3 Simulation: Local Cropping Ablations and ModificationsFinally, we evaluate the benefits of introducing local scene conditioning into our relative poseregression model. Table 3b shows the performance variation of our method with different kindsof scene point cloud conditioning. We achieve the best performance with the version of localconditioning that varies the crop sizes on a per-iteration basis. Using a fixed crop size marginally7reduces performance, while conditioning on the whole uncropped scene point cloud performs muchworse. This highlights the generalization and precision benefits of focusing on a local spatial regionnear the object in its imagined configuration. It also suggests an advantage of using a coarse-to-fineapproach that considers a larger region on earlier iterations. Additional results examining the effect ofthe success classifier, external noise, and parameterization of itotcan be found in Appendix A7.5.4 Real World: Object rearrangement via pick-and-placeFinally, we use RPDiff to perform relational rearrangement via pick-and-place on real-world objectsand scenes. Fig. 1 and Fig. 4 show the robot executing multiple inferred placements on our threetasks. We relied on our approach’s ability to output multiple solutions, as some geometrically validplacements were not kinematically feasible for the robot based on its workspace limits and thesurrounding collision geometry. Please see the supplemental video for real-world execution.6 Related WorkObject Rearrangement from Perception . Object rearrangement that operates with unknown objectsin the real world by operating from perceptual input has been an area of growing interest [ 2,3,8,18,20–46]. One straightforward method is end-to-end training to directly regress the relativetransformation, as in Neural Shape Mating (NSM) [ 3]. Others have explored identifying task-relevantobject parts and then solving for the desired alignment, as in TAX-Pose and R-NDF [ 18,37,45].However, many of these approaches in their naive form struggle when there is multi-modality (NSMand TAX-Pose can only output a single solution). There has been success addressing multi-modalityby performing classification over a discretized version of the search space [ 8,39,41,43,44,47], butthese methods are typically less precise.Denoising Diffusion and Iterative Regression . Diffusion models [ 4,48] use an iterative de-noisingprocess to perform generative modeling. While they were originally designed for generating images,they have been extended to other domains including waveforms [ 49,50], 3D shapes [ 51,52], anddecision-making[ 53–55]. 
Several approaches have applied diffusion models (and related energy-based models) to a variety of robotics domains, including policy learning [ 56,57], motion plan-ning/trajectory optimization [58–60], grasping [54], and object rearrangement [18, 38, 61]. The useof iterative regression has also been successful in other domains such as pose estimation [62–65].SE(3)-DiffusionFields [ 54] integrate learned 6-DoF grasp distributions within a trajectory optimiza-tion framework, and LEGO-Net [ 55] employs iterative de-noising to generate realistic-looking roomlayouts. Our work differs in that we do not assume known object states or 3D models. Most similarto our work, StructDiffusion [ 38] uses a diffusion model to perform language-conditioned objectrearrangement with point clouds. While the focus in [ 38] is to rearrange multiple objects intoabstract structures (e.g., circles, lines) specified via natural language, we emphasize covering allrearrangement modes and integrating with sampling-based planners.7 Limitations and ConclusionLimitations . The amount of demonstration data we use may be difficult to obtain in the real world,thus we rely on scripted policies that use privileged information in simulation for demo collection.Furthermore, sim2real distribution shifts reduce our real-world performance, we lack a closed-loopcontrol policy for placement execution that is robust to perturbations, and we do not show anytransfer to new tasks. Finally, a subtle yet important limitation is our use of manually-computedpre-placement offset poses. Predicting the final desired object configuration is an important steptoward general-purpose rearrangement, but it would be even better to also predict additional waypointtransforms that help obtain a feasible path to the final pose.Conclusion . This work presents an approach for rearranging objects in a scene to achieve a desiredplacing relationship, while operating with novel geometries, poses, and scene layouts. Our systemcan produce multi-modal distributions of object transformations for rearrangement, overcoming thedifficulty of fitting multi-modal demonstration datasets and facilitating integration with planningalgorithms that require diverse actions to search through. Our results illustrate the capabilities of ourframework across a diverse range of rearrangement tasks involving objects and scenes that present alarge number of feasible rearrangement solutions.88 AcknowledgementThe authors would like to thank NVIDIA Seattle Robotics Lab members and the MIT ImprobableAI Lab for their valuable feedback and support in developing this project. In particular, we wouldlike to acknowledge Idan Shenfeld, Anurag Ajay, and Antonia Bronars for helpful suggestions onimproving the clarity of the draft. This work was partly supported by Sony Research Awards andAmazon Research Awards. 
Anthony Simeonov is supported in part by the NSF Graduate ResearchFellowship.Author ContributionsAnthony Simeonov conceived the overall project goals, investigated several approaches for address-ing multi-modality in rearrangement prediction, implemented the pose diffusion framework, wroteall the code, ran simulation and real-world experiments, and was the primary author of the paper.Ankit Goyal advised the project, made technical suggestions on clarifying the method and improvingthe experimental evaluation, supported iteration on obtaining real robot results, and helped withwriting the paper.Lucas Manuelli engaged in research discussions about rearrangement prediction, suggested initialideas for addressing multi-modality, advised the project in its early stages, and provided valuablefeedback on the paper.Lin Yen-Chen supported early project brainstorming, helped develop direct connections with diffu-sion models, gave feedback on evaluation tasks, and helped edit the paper.Alina Sarmiento helped implement the framework on the real robot and implemented the graspgeneration model that enabled the pick-and-place demos on the Franka Panda.Alberto Rodriguez engaged in technical discussions on the connections to iterative optimizationmethods and integrating the framework in the context of a sampling-based planner.Pulkit Agrawal suggested connections to work on iterative regression that came before diffusionmodels, helped clarify key technical insights on the benefits of iterative prediction, suggested ablations,helped with paper writing/editing, and co-advised the project.Dieter Fox was involved in technical discussions on relational tasks involving object part interactions,proposed some of the evaluation tasks, helped formalize connections to other related work, andadvised and supported the overall project.References[1]C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Mart ́ın-Mart ́ın, C. Wang, G. Levine,M. Lingelbach, J. Sun, et al. Behavior-1k: A benchmark for embodied ai with 1,000 everydayactivities and realistic simulation. In Conference on Robot Learning , pages 80–93. PMLR,2023.[2]D. Batra, A. X. Chang, S. Chernova, A. J. Davison, J. Deng, V . Koltun, S. Levine, J. Malik,I. Mordatch, R. Mottaghi, et al. Rearrangement: A challenge for embodied ai. arXiv preprintarXiv:2011.01975 , 2020.[3]Y .-C. Chen, H. Li, D. Turpin, A. Jacobson, and A. Garg. Neural shape mating: Self-supervisedobject assembly with adversarial shape priors. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 12724–12733, 2022.[4]J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. arXiv preprintarxiv:2006.11239 , 2020.[5]Y . Song and S. Ermon. Generative modeling by estimating gradients of the data distribution.Advances in neural information processing systems , 32, 2019.[6]Y . Xie, T. Takikawa, S. Saito, O. Litany, S. Yan, N. Khan, F. Tombari, J. Tompkin, V . Sitzmann,and S. Sridhar. Neural fields in visual computing and beyond. Computer Graphics Forum , 2022.ISSN 1467-8659. doi:10.1111/cgf.14505.9[7]S. Peng, M. Niemeyer, L. Mescheder, M. Pollefeys, and A. Geiger. Convolutional occupancynetworks. In Proc. ECCV , 2020.[8]S. James, K. Wada, T. Laidlow, and A. J. Davison. Coarse-to-fine q-attention: Efficient learningfor visual robotic manipulation via discretisation. In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition , pages 13739–13748, 2022.[9]S. James and A. J. Davison. 
Q-attention: Enabling efficient learning for vision-based roboticmanipulation. IEEE Robotics and Automation Letters , 7(2):1612–1619, 2022.[10] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-graspnet: Efficient 6-dofgrasp generation in cluttered scenes. In 2021 IEEE International Conference on Robotics andAutomation (ICRA) , pages 13438–13444. IEEE, 2021.[11] M. Welling and Y . W. Teh. Bayesian learning via stochastic gradient langevin dynamics.InProceedings of the 28th international conference on machine learning (ICML-11) , pages681–688, 2011.[12] L. Huang, J. Qin, Y . Zhou, F. Zhu, L. Liu, and L. Shao. Normalization techniques in trainingdnns: Methodology, analysis and application. IEEE Transactions on Pattern Analysis andMachine Intelligence , 2023.[13] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, andI. Polosukhin. Attention is all you need. Advances in neural information processing systems ,30, 2017.[14] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning on pointsets in a metric space. In Advances in neural information processing systems , pages 5099–5108,2017.[15] Y . Wang, Y . Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graphcnn for learning on point clouds. Acm Transactions On Graphics (tog) , 38(5):1–12, 2019.[16] Y . Zhou, C. Barnes, J. Lu, J. Yang, and H. Li. On the continuity of rotation representations inneural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 5745–5753, 2019.[17] E. Coumans and Y . Bai. Pybullet, a python module for physics simulation for games, roboticsand machine learning. GitHub repository , 2016.[18] A. Simeonov, Y . Du, L. Yen-Chen, , A. Rodriguez, , L. P. Kaelbling, T. L. Perez, and P. Agrawal.Se(3)-equivariant relational rearrangement with neural descriptor fields. In Conference on RobotLearning (CoRL) . PMLR, 2022.[19] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditionalgenerative models. In Advances in neural information processing systems , pages 3483–3491,2015.[20] S. Cheng, K. Mo, and L. Shao. Learning to regrasp by learning to place. In 5th Annual Confer-ence on Robot Learning , 2021. URL https://openreview.net/forum?id=Qdb1ODTQTnL .[21] R. Li, C. Esteves, A. Makadia, and P. Agrawal. Stable object reorientation using contact planeregistration. In 2022 International Conference on Robotics and Automation (ICRA) , pages6379–6385. IEEE, 2022.[22] S. Thompson, L. P. Kaelbling, and T. Lozano-Perez. Shape-based transfer of generic skills. In2021 IEEE International Conference on Robotics and Automation (ICRA) , pages 5996–6002.IEEE, 2021.[23] A. Simeonov, Y . Du, B. Kim, F. R. Hogan, J. Tenenbaum, P. Agrawal, and A. Rodriguez.A long horizon planning framework for manipulating rigid pointcloud objects. In Confer-ence on Robot Learning (CoRL) , 2020. URL https://anthonysimeonov.github.io/rpo-planning-framework/ .10[24] S. Lu, R. Wang, Y . Miao, C. Mitash, and K. Bekris. Online object model reconstruction andreuse for lifelong improvement of robot manipulation. In 2022 International Conference onRobotics and Automation (ICRA) , pages 1540–1546. IEEE, 2022.[25] M. Gualtieri and R. Platt. Robotic pick-and-place with uncertain object instance segmentationand shape completion. IEEE robotics and automation letters , 6(2):1753–1760, 2021.[26] P. Florence, L. Manuelli, and R. Tedrake. Self-supervised correspondence in visuomotor policylearning. 
IEEE Robotics and Automation Letters , 2019.[27] A. Curtis, X. Fang, L. P. Kaelbling, T. Lozano-P ́erez, and C. R. Garrett. Long-horizon manipu-lation of unknown objects via task and motion planning with estimated affordances. In 2022International Conference on Robotics and Automation (ICRA) , pages 1940–1946. IEEE, 2022.[28] C. Paxton, C. Xie, T. Hermans, and D. Fox. Predicting stable configurations for semanticplacement of novel objects. In Conference on Robot Learning , pages 806–815. PMLR, 2022.[29] W. Yuan, C. Paxton, K. Desingh, and D. Fox. Sornet: Spatial object-centric representations forsequential manipulation. In 5th Annual Conference on Robot Learning , pages 148–157. PMLR,2021.[30] A. Goyal, A. Mousavian, C. Paxton, Y .-W. Chao, B. Okorn, J. Deng, and D. Fox. Ifor:Iterative flow minimization for robotic object rearrangement. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 14787–14797, 2022.[31] A. H. Qureshi, A. Mousavian, C. Paxton, M. Yip, and D. Fox. NeRP: Neural RearrangementPlanning for Unknown Objects. In Proceedings of Robotics: Science and Systems , Virtual, July2021. doi:10.15607/RSS.2021.XVII.072.[32] D. Driess, J.-S. Ha, and M. Toussaint. Learning to solve sequential physical reasoning problemsfrom a scene image. The International Journal of Robotics Research , 40(12-14):1435–1466,2021.[33] D. Driess, J.-S. Ha, and M. Toussaint. Deep visual reasoning: Learning to predict actionsequences for task and motion planning from an initial scene image. In Robotics: Science andSystems 2020 (RSS 2020) . RSS Foundation, 2020.[34] W. Liu, C. Paxton, T. Hermans, and D. Fox. Structformer: Learning spatial structure forlanguage-guided semantic rearrangement of novel objects. In 2022 International Conference onRobotics and Automation (ICRA) , pages 6322–6329. IEEE, 2022.[35] W. Goodwin, S. Vaze, I. Havoutis, and I. Posner. Semantically grounded object matchingfor robust robotic scene rearrangement. In 2022 International Conference on Robotics andAutomation (ICRA) , pages 11138–11144. IEEE, 2022.[36] M. Danielczuk, A. Mousavian, C. Eppner, and D. Fox. Object rearrangement using learnedimplicit collision functions. In 2021 IEEE International Conference on Robotics and Automation(ICRA) , pages 6010–6017. IEEE, 2021.[37] C. Pan, B. Okorn, H. Zhang, B. Eisner, and D. Held. Tax-pose: Task-specific cross-poseestimation for robot manipulation. In 6th Annual Conference on Robot Learning .[38] W. Liu, T. Hermans, S. Chernova, and C. Paxton. Structdiffusion: Object-centric diffusion forsemantic rearrangement of novel objects. arXiv preprint arXiv:2211.04604 , 2022.[39] L. Yen-Chen, P. Florence, A. Zeng, J. T. Barron, Y . Du, W.-C. Ma, A. Simeonov, A. R. Garcia,and P. Isola. MIRA: Mental imagery for robotic affordances. In Conference on Robot Learning(CoRL) , 2022.[40] K. Wada, S. James, and A. J. Davison. ReorientBot: Learning object reorientation for specific-posed placement. In IEEE International Conference on Robotics and Automation (ICRA) ,2022.11[41] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, and J. Lee. Transporter networks: Rearranging the visual world forrobotic manipulation. Conference on Robot Learning (CoRL) , 2020.[42] H. Huang, D. Wang, R. Walters, and R. Platt. Equivariant Transporter Network. In Proceedingsof Robotics: Science and Systems , New York City, NY , USA, June 2022. doi:10.15607/RSS.2022.XVIII.007.[43] K. Mo, Y . Qin, F. Xiang, H. Su, and L. Guibas. 
O2O-Afford: Annotation-free large-scaleobject-object affordance learning. In Conference on Robot Learning (CoRL) , 2021.[44] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In Proceedings of the 6th Conference on Robot Learning (CoRL) , 2022.[45] A. Simeonov, Y . Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V . Sitz-mann. Neural descriptor fields: Se (3)-equivariant object representations for manipulation. In2022 International Conference on Robotics and Automation (ICRA) , pages 6394–6400. IEEE,2022.[46] Y . You, L. Shao, T. Migimatsu, and J. Bohg. Omnihang: Learning to hang arbitrary objectsusing contact point correspondences and neural collision estimation. In 2021 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 5921–5927. IEEE, 2021.[47] N. M. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodes with one stone. In Thirty-Sixth Conference on Neural Information Processing Systems ,2022. URL https://openreview.net/forum?id=agTr-vRQsa .[48] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learningusing nonequilibrium thermodynamics. In International Conference on Machine Learning ,pages 2256–2265. PMLR, 2015.[49] Z. Kong, W. Ping, J. Huang, K. Zhao, and B. Catanzaro. Diffwave: A versatile diffusion modelfor audio synthesis. arXiv preprint arXiv:2009.09761 , 2020.[50] N. Chen, Y . Zhang, H. Zen, R. J. Weiss, M. Norouzi, and W. Chan. Wavegrad: Estimatinggradients for waveform generation. arXiv preprint arXiv:2009.00713 , 2020.[51] S. Luo and W. Hu. Diffusion probabilistic models for 3d point cloud generation. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , June 2021.[52] A. Nichol, H. Jun, P. Dhariwal, P. Mishkin, and M. Chen. Point-e: A system for generating 3dpoint clouds from complex prompts. arXiv preprint arXiv:2212.08751 , 2022.[53] M. Janner, Y . Du, J. Tenenbaum, and S. Levine. Planning with diffusion for flexible behaviorsynthesis. In International Conference on Machine Learning , 2022.[54] J. Urain, N. Funk, J. Peters, and G. Chalvatzaki. Se(3)-diffusionfields: Learning smoothcost functions for joint grasp and motion optimization through diffusion. IEEE InternationalConference on Robotics and Automation (ICRA) , 2023.[55] Q. A. Wei, S. Ding, J. J. Park, R. Sajnani, A. Poulenard, S. Sridhar, and L. Guibas. Lego-net:Learning regular rearrangements of objects in rooms. arXiv preprint arXiv:2301.09629 , 2023.[56] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy:Visuomotor policy learning via action diffusion, 2023.[57] P. Florence, C. Lynch, A. Zeng, O. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch,and J. Tompson. Implicit behavioral cloning. Conference on Robot Learning (CoRL) , 2021.[58] M. Janner, Y . Du, J. B. Tenenbaum, and S. Levine. Planning with diffusion for flexible behaviorsynthesis, 2022.[59] A. Ajay, Y . Du, A. Gupta, J. Tenenbaum, T. Jaakkola, and P. Agrawal. Is conditional generativemodeling all you need for decision-making?, 2022.12[60] Y . Du, T. Lin, and I. Mordatch. Model based planning with energy based models, 2021.[61] M. Wu, F. Zhong, Y . Xia, and H. Dong. Targf: Learning target gradient field for objectrearrangement. arXiv preprint arXiv:2209.00853 , 2022.[62] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative errorfeedback. 
In Proceedings of the IEEE conference on computer vision and pattern recognition ,pages 4733–4742, 2016.[63] Y . Li, G. Wang, X. Ji, Y . Xiang, and D. Fox. Deepim: Deep iterative matching for 6d poseestimation. In Proceedings of the European Conference on Computer Vision (ECCV) , pages683–698, 2018.[64] Y . Labb ́e, J. Carpentier, M. Aubry, and J. Sivic. Cosypose: Consistent multi-view multi-object6d pose estimation. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow,UK, August 23–28, 2020, Proceedings, Part XVII 16 , pages 574–591. Springer, 2020.[65] Y . Labb ́e, L. Manuelli, A. Mousavian, S. Tyree, S. Birchfield, J. Tremblay, J. Carpentier,M. Aubry, D. Fox, and J. Sivic. Megapose: 6d pose estimation of novel objects via render &compare. arXiv preprint arXiv:2212.06870 , 2022.[66] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva,S. Song, H. Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprintarXiv:1512.03012 , 2015.[67] T. Karras, M. Aittala, T. Aila, and S. Laine. Elucidating the design space of diffusion-basedgenerative models. arXiv preprint arXiv:2206.00364 , 2022.[68] T. Chen. On the importance of noise scheduling for diffusion models. arXiv preprintarXiv:2301.10972 , 2023.[69] J. J. Kuffner. Effective sampling and distance metrics for 3d rigid body path planning. In IEEEInternational Conference on Robotics and Automation, 2004. Proceedings. ICRA’04. 2004 ,volume 4, pages 3993–3998. IEEE, 2004.[70] D. Q. Huynh. Metrics for 3d rotations: Comparison and analysis. Journal of MathematicalImaging and Vision , 35(2):155–164, 2009.[71] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprintarXiv:1711.05101 , 2017.[72] J. Sola, J. Deray, and D. Atchuthan. A micro lie theory for state estimation in robotics. arXivpreprint arXiv:1812.01537 , 2018.[73] T. Chen, A. Simeonov, and P. Agrawal. AIRobot. https://github.com/Improbable-AI/airobot , 2019.[74] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al. A density-based algorithm for discoveringclusters in large spatial databases with noise. In kdd, volume 96, pages 226–231, 1996.[75] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3dclassification and segmentation. In Proceedings of the IEEE conference on computer vision andpattern recognition , pages 652–660, 2017.[76] K. Murphy, C. Esteves, V . Jampani, S. Ramalingam, and A. Makadia. Implicit-pdf: Non-parametric representation of probability distributions on the rotation manifold. arXiv preprintarXiv:2106.05965 , 2021.[77] A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu,E. Romo, et al. Robotic pick-and-place of novel objects in clutter with multi-affordance graspingand cross-domain image matching. In 2018 IEEE international conference on robotics andautomation , pages 1–8. IEEE, 2018.13[78] C. Deng, O. Litany, Y . Duan, A. Poulenard, A. Tagliasacchi, and L. J. Guibas. Vector neurons:A general framework for so(3)-equivariant networks. In ICCV , 2021. URL https://arxiv.org/abs/2104.12229 .[79] A. Fishman, A. Murali, C. Eppner, B. Peele, B. Boots, and D. Fox. Motion policy networks. InConference on Robot Learning , pages 967–977. PMLR, 2023.[80] T. Chen, J. Xu, and P. Agrawal. A system for general in-hand object re-orientation. InConference on Robot Learning , pages 297–307. PMLR, 2022.[81] A. Kumar, Z. Fu, D. Pathak, and J. Malik. 
Rma: Rapid motor adaptation for legged robots.arXiv preprint arXiv:2107.04034 , 2021.[82] A. Mousavian, C. Eppner, and D. Fox. 6-dof graspnet: Variational grasp generation for objectmanipulation. In Proceedings of the IEEE International Conference on Computer Vision , pages2901–2910, 2019.[83] J.-S. Ha, D. Driess, and M. Toussaint. Deep visual constraints: Neural implicit models formanipulation planning from visual input. IEEE Robotics and Automation Letters , 7(4):10857–10864, 2022.[84] Y . LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang. A tutorial on energy-basedlearning. Predicting structured data , 1(0), 2006.[85] Y . Du and I. Mordatch. Implicit generation and generalization in energy-based models. arXivpreprint arXiv:1903.08689 , 2019.[86] S. Singh, S. Tu, and V . Sindhwani. Revisiting energy based models as policies: Ranking noisecontrastive estimation and interpolating energy models. arXiv preprint arXiv:2309.05803 , 2023.[87] Y . Du, C. Durkan, R. Strudel, J. B. Tenenbaum, S. Dieleman, R. Fergus, J. Sohl-Dickstein,A. Doucet, and W. S. Grathwohl. Reduce, reuse, recycle: Compositional generation withenergy-based diffusion models and mcmc. In International Conference on Machine Learning ,pages 8489–8510. PMLR, 2023.[88] Y . Song and D. P. Kingma. How to train your energy-based models. arXiv preprintarXiv:2101.03288 , 2021.[89] N. Gkanatsios, A. Jain, Z. Xian, Y . Zhang, C. G. Atkeson, and K. Fragkiadaki. Energy-basedModels are Zero-Shot Planners for Compositional Scene Rearrangement. In Proceedings ofRobotics: Science and Systems , 2023.14Shelving, Stacking, Hanging: Relational Pose Diffusion forMulti-modal Rearrangement – Supplementary MaterialSection A1 includes additional visualizations of iterative test-time evaluation on simulated shapesand examples of object-scene point clouds that were used as training data. In Section A2, we presentdetails on data generation, model architecture, and training for RPDiff. In Section A3 we elaboratein more detail on the multi-step iterative regression inference procedure which predicts the set ofrearrangement transforms. Section A4 describes more details about how the success classifier istrained and used in conjunction with our transform predictor as a simple mechanism for selectingwhich among multiple candidate transforms to execute. In Section A5, we describe more detailsabout our experimental setup, and Section A6 discusses more details on the evaluation tasks, robotexecution pipelines, and methods used for computing pre-placement offset poses. In Section A7 wepresent an additional set of ablations to highlight the impact of other hyperparameters and designdecisions. Section A8 describes additional implementation details for the real-world executionsalong with an expanded discussion on limitations and avenues for future work. Section A9 includespreliminary results on training a multi-task model for iterative pose de-noising and using RPDiff toperform multi-step manipulation, and Section A10 includes additional discussion on demo collection(and the manually-designed heuristics it uses), performance analysis and sim2real considerations,system engineering details, expanded related works. Finally, Section A11 shows model architecturediagrams a summarized set of relevant hyperparameters that were used in training and evaluation.A1 Additional Test-time and Training Data VisualizationsHere, we show additional visualizations of the tasks used in our simulation experiments and the noisedpoint clouds used to train our pose regression model. 
Figure A1 shows snapshots of performing theiterative de-noising at evaluation time with simulated objects, and Figure A2 shows examples ofthe combined object-scene point clouds and their corresponding noised versions that were used fortraining to perform iterative de-noising.A2 Iterative Pose Regression Training and Data GenerationThis section describes the data used for training our pose diffusion model, the network architecturewe used for processing point clouds and predicting SE (3)transforms, and details on training.A2.1 Training Data GenerationObjects used in simulated rearrangement demonstrations . We create the rearrangement demon-strations in simulation with a set of synthetic 3D objects. The three tasks we consider include objectsfrom five categories: mugs, racks, cans, books, “bookshelves” (shelves partially filled with books),and “cabinets” (shelves partially-filled with stacks of cans). We use ShapeNet [ 66] for the mugs andprocedurally generate our own dataset of .obj files for the racks, books, shelves, and cabinets. SeeFigure A3 for representative samples of the 3D models from each category.Procedurally generated rearrangement demonstrations in simulation . The core regressionmodelfin RPDiff is trained to process a combined object-scene point cloud and predict anSE(3)transformation updates the pose of the object point cloud. To train the model to make theserelative pose predictions, we use a dataset of demonstrations showing object and scene point cloudsin final configurations that satisfy the desired rearrangement tasks. Here we describe how we obtainthese “final point cloud” demonstrationsWe begin by initializing the objects on a table in PyBullet [ 17] in random positions and orientationsand render depth images with the object segmented from the background using multiple simulatedcameras. These depth maps are converted to 3D point clouds and fused into the world coordinateframe using known camera poses. To obtain a diverse set of point clouds, we randomize the numberof cameras (1-4), camera viewing angles, distances between the cameras and objects, object scales,and object poses. Rendering point clouds in this way allows the model to see some of the occlusionpatterns that occur when the objects are in different orientations and cannot be viewed from belowthe table. To see enough of the shelf/cabinet region, we use the known state of the shelf/cabinet toposition two cameras that roughly point toward the open side of the shelf/cabinet.15(a) Mug/Rack(b) Book/Shelf(c) Can/CabinetFigure A1: Visualizations of multiple steps of iterative de-noising on simulated objects. Starting from the leftside, each object is initialized in a random SE(3)pose in the vicinity of the scene. Over multiple iterations,RPDiff updates the object pose. The right side shows the final set of converged solutions.16Can/CabinetFinal combinedobject/scene Interpolated noise stepsSample w/cropped sceneBook/ShelfFinal combinedobject/scene Interpolated noise stepsSample w/cropped sceneMug/Rack-MultiFinal combinedobject/scene Interpolated noise stepsSample w/cropped sceneFigure A2: Example point clouds from the demonstrations for each task of Can/Cabinet (top), Book/Shelf(middle) and Mug/RackMed-Multi (bottom). For each task, the top row shows the ground truth combinedobject-scene point cloud. Scene point clouds are in black and object point clouds are in dark blue. 
The middlerow in each task shows an example of creating multiple steps of noising perturbations by uniformly interpolatinga single randomly sampled perturbation transform (with a combination of linear interpolation for the translationand SLERP for the rotation). Different colors show the point clouds at different interpolated poses. The bottomrow shows a sampled step among these interpolated poses, with the corresponding “noised” object point cloud(dark blue), ground truth target point cloud (light blue), and cropped scene point cloud (red).17mugsracksbooks +shelvescans +cabinetsFigure A3: Example 3D models used to train RPDiff and deploy RPDiff on our rearrangement tasks. Mugsare from ShapeNet [ 66] while we procedurally generated our own synthetic racks, books, cans, shelves, andcabinets.After obtaining the initial object and scene point clouds, we obtain an SE(3)transform to apply tothe object, such that transforming into a “final” objct pose using this transform results in the desiredplacement. This transform is used to translate and rotate the initial object point cloud, such thatthe combined “final object” and scene point cloud can be used for generating training examples.Figure A2 shows example visualizations of the final point clouds in the demonstrations for each task.We obtain the final configuration that satisfies these tasks using a combination of privileged knowledgeabout the objects in the simulator (e.g., ground truth state, approximate locations of task-relevantobject parts, 3D mesh models for each object, known placing locations that are available) and humanintuition about the task. To create mug configurations that satisfy “hanging” on one of the pegs of arack, we first approximately locate one of the pegs on one of the racks (we select one uniformly atrandom) and the handle on the mug (which is straightforward because all the ShapeNet mugs arealigned with the handle pointing in the +y axis of the body frame). We then transform the mug so thatthe handle is approximately “on” the selected hook. Finally, we sample small perturbations about thisnominal pose until we find one that does not lead to any collision/penetration between the two shapes.We perform an analogous process for the other tasks, where the ground truth available slots in thebookshelf and positions that work for placing the mug (e.g., on top of a stack, or on a flat shelf regionin between existing stacks) are recorded when the 3D models for the shelves/cabinets are created.The exact methods for generating these shapes and their corresponding rearrangement poses can befound in our code.A2.2 Pose Prediction ArchitectureTransformer point cloud processing and pose regression . We follow the Transformer [ 13] ar-chitecture proposed in Neural Shape Mating [ 3] for processing point clouds and computing shapefeatures that are fed to the output MLPs for pose prediction.18We first downsample the observed point clouds PO2RN03andPS2RM03using farthestpoint sampling into PO2RN3andPS2RM3. 
We then normalize these to create P_O^norm ∈ R^{N×3} and P_S^norm ∈ R^{M×3}, based on the centroid of the scene point cloud and a scaling factor that approximately scales the combined point cloud to have extents similar to a unit bounding box. Writing the scene and object point clouds row-wise as P_S = [p_1^S; p_2^S; ...; p_M^S] and P_O = [p_1^O; p_2^O; ...; p_N^O], we compute

    p_cent^S = (1/M) Σ_{i=1}^{M} p_i^S,    a = 1 / (max{p_i^S} − min{p_i^S})
    p_i^{S,norm} = a (p_i^S − p_cent^S)  for all i = 1, ..., M
    p_j^{O,norm} = a (p_j^O − p_cent^S)  for all j = 1, ..., N

Next, we "tokenize" the normalized object/scene point clouds into d-dimensional input features Φ_O ∈ R^{N×d} and Φ_S ∈ R^{M×d}. We directly use the 3D coordinate features from the downsampled and normalized point clouds as input tokens, and project each input to a d-dimensional vector with a linear layer W_in ∈ R^{d×3}:

    Φ_S = [W_in p_1^{S,norm}; ...; W_in p_M^{S,norm}],    Φ_O = [W_in p_1^{O,norm}; ...; W_in p_N^{O,norm}]

Note we could also pass the point cloud through a point cloud encoder to pool local features together, as performed in NSM via DGCNN [15]. We did not experiment with this as we obtained satisfactory results by directly operating on the individual point features, but it would likely perform similarly or even better if we first passed through a point cloud encoder. We also incorporate the timestep t that the current prediction corresponds to by including the position-encoded timestep as an additional input token together with the object point tokens, i.e., Φ̄_O = Φ_O ⊕ posemb(t) ∈ R^{(N+1)×d}.

We then use a Transformer encoder and decoder to process the combined tokenized point cloud (see Figure A4 for a visual depiction). This consists of performing multiple rounds of self-attention on the scene features (encoder) and then performing a combination of self-attention on the object point cloud together with cross-attention between the object point cloud and the output features of the scene point cloud (decoder):

    q_S = Q_E(Φ_S),  k_S = K_E(Φ_S),  v_S = V_E(Φ_S)
    s_S = Attention(q_S, k_S, v_S) = softmax(q_S k_S^T / √d) v_S
    q_O = Q_D(Φ̄_O),  k_O = K_D(Φ̄_O),  v_O = V_D(Φ̄_O)
    s_O = Attention(q_O, k_O, v_O) = softmax(q_O k_O^T / √d) v_O
    h_O = Attention(q = s_O, k = s_S, v = s_S) = softmax(s_O s_S^T / √d) s_S

This gives a set of output features h_O ∈ R^{(N+1)×d}, where d is the dimension of the embedding space. We compute a global feature by mean-pooling the output point features and averaging with the timestep embedding as a residual connection, and then use a set of output MLPs to predict the translation and rotation (the rotation is obtained by converting a pair of 3D vectors into an orthonormal basis and then stacking into a rotation matrix [10, 16]):

    h̄_O = (1/2) [ (1/(N+1)) Σ_{i=1}^{N+1} h_{O,i} + posemb(t) ],    h̄_O ∈ R^d
    t = MLP_trans(h̄_O),  t ∈ R^3
    a, b = MLP_rot(h̄_O),  a ∈ R^3, b ∈ R^3
    â = a / ||a||,    b̂ = (b − ⟨â, b⟩ â) / ||b − ⟨â, b⟩ â||,    ĉ = â × b̂
    R = [â  b̂  ĉ]

Figure A4: Architecture diagram showing a combination of self-attention and cross-attention among the object and scene point clouds for SE(3) transform prediction. The scene point cloud is processed via multiple rounds of self-attention, while the object features are combined via a combination of self-attention and cross-attention with the scene point cloud. The timestep embedding is incorporated as both an input token and via a residual connection with the pooled output feature. The global output feature is used to predict the translation and rotation that are applied to the object point cloud.
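The following PyTorch sketch summarizes the architecture just described (point tokenization, scene self-attention, object-to-scene cross-attention, pooled global feature with a timestep residual, and separate translation/rotation heads). The layer sizes, the use of nn.TransformerEncoder/TransformerDecoder, and the learned timestep embedding are our assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseRegressionTransformer(nn.Module):
    """Sketch of the object-scene pose regression model described above."""

    def __init__(self, d_model=256, nhead=4, num_layers=4, max_timesteps=64):
        super().__init__()
        self.w_in = nn.Linear(3, d_model)                  # tokenize 3D points
        self.t_emb = nn.Embedding(max_timesteps, d_model)  # timestep embedding (learned here)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)  # scene self-attention
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)  # object self- + cross-attention
        self.mlp_trans = nn.Sequential(nn.Linear(d_model, 128), nn.ReLU(), nn.Linear(128, 3))
        self.mlp_rot = nn.Sequential(nn.Linear(d_model, 128), nn.ReLU(), nn.Linear(128, 6))

    def forward(self, p_obj, p_scene, t):
        # p_obj: (B, N, 3), p_scene: (B, M, 3), t: (B,) integer timesteps
        emb_t = self.t_emb(t).unsqueeze(1)                     # (B, 1, d)
        tok_obj = torch.cat([self.w_in(p_obj), emb_t], dim=1)  # timestep as extra object token
        mem = self.encoder(self.w_in(p_scene))                 # scene features
        h = self.decoder(tok_obj, mem)                         # object attends to scene
        g = 0.5 * (h.mean(dim=1) + emb_t.squeeze(1))           # pooled feature + timestep residual
        trans = self.mlp_trans(g)
        a, b = self.mlp_rot(g).chunk(2, dim=-1)
        a_hat = F.normalize(a, dim=-1)                         # orthonormalize the two vectors
        b_hat = F.normalize(b - (b * a_hat).sum(-1, keepdim=True) * a_hat, dim=-1)
        c_hat = torch.cross(a_hat, b_hat, dim=-1)
        rot = torch.stack([a_hat, b_hat, c_hat], dim=-1)       # (B, 3, 3) rotation matrix
        return rot, trans
```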
Local scene point cloud cropping. As shown in the experimental results, local cropping helps improve performance by increasing precision while generalizing well to unseen layouts of the scene. Our "Fixed" cropping method uses a box with a fixed side length L_box = L_min, centered at the current object point cloud iterate across all timesteps, and selects the scene point cloud points that lie within this box. Our "Varying" cropping method adjusts the length of the box based on the timestep, with larger timesteps using a larger crop and smaller timesteps using a smaller crop. We parameterize this as a function of the timestep t via the following linear decay function:

    L_box = L_min + (L_max − L_min) · (t / T)

where L_min and L_max are hyperparameters.

Applying Predicted Transforms to the Object Point Cloud. We apply the predicted rotation and translation by first mean-centering the object point cloud, applying the rotation, translating back to the original world-frame position, and finally translating by the predicted translation. This helps reduce sensitivity to the rotation prediction, whereas if we rotated about the world-frame coordinate axes, a small rotation could cause a large configuration change in the object.

A2.3 Training Details

Here we elaborate on details regarding training the RPDiff pose diffusion model using the demonstration data and model architecture described in the sections above. A dataset sample consists of a tuple (P_O, P_S). From this tuple, we want to construct a perturbed object point cloud P_O(t) for a particular timestep t ∈ 1, ..., T, where lower values of t correspond to noised point clouds that are more similar to the ground truth, and larger values of t are more perturbed. At the limit, the distribution of point clouds corresponding to t = T should approximately match the distribution we will sample from when initializing the iterative refinement procedure at test time.

Skill Type           Number of samples
Mug/EasyRack         3190
Mug/MedRack          950
Mug/Multi-MedRack    3240
Book/Shelf           1720
Can/Cabinet          2790

Table 2: Number of demonstrations used in each task. The same set of demonstrations is used to train both our method and each baseline method.

Noising schedules and perturbation schemes are an active area of research in the diffusion modeling literature [67, 68], and there are many options available for applying noise to the data samples. We apply a simple method that makes use of uniformly interpolated SE(3) transforms. First, we sample one "large" transform from the same distribution we use to initialize the test-time evaluation procedure: rotations are sampled uniformly from SO(3) and translations are sampled uniformly within a bounding box around the scene point cloud. We then use a combination of linear interpolation on the translations and spherical linear interpolation (SLERP) on the rotations to obtain a sequence of T uniformly spaced transforms (see Fig. A2 for example visualizations). Based on the sampled timestep t, we select the transform corresponding to timestep t in this sequence as the noising perturbation T_noise^(t), and use the transform corresponding to timestep t−1 to compute the "incremental"/"interval" transform to use as a prediction target. As discussed in Section 3.1, using the incremental transform as a prediction target helps maintain a more uniform output scale among the predictions across samples, which is beneficial for neural network optimization as it minimizes gradient fluctuations [12]. We also provide quantitative evidence that predicting only the increment instead of the full inverse perturbation benefits overall performance; see Section A7 for details. The main hyperparameter for this procedure is the number of steps T.
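A SciPy-based sketch of this interpolation scheme is given below. For simplicity it interpolates the forward perturbation and forms the per-step increment directly, a slight variant of the inverse-interpolation target defined in Section 3.1; the function and argument names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def make_training_sample(P_O, bbox_min, bbox_max, T=5):
    """Sketch: perturb a demonstration object point cloud P_O (N, 3) with one step
    of an interpolated SE(3) perturbation and return the incremental SE(3) target."""
    # 1) sample one "large" perturbation: uniform rotation, translation inside the scene bbox
    R_full = Rotation.random()
    t_full = np.random.uniform(bbox_min, bbox_max)

    # 2) interpolate it into T uniformly spaced steps (identity at step 0)
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([Rotation.identity(), R_full]))
    steps = []
    for s in np.linspace(0.0, 1.0, T + 1):
        T_s = np.eye(4)
        T_s[:3, :3] = slerp([s])[0].as_matrix()   # SLERP for rotation
        T_s[:3, 3] = s * t_full                   # linear interpolation for translation
        steps.append(T_s)

    # 3) sample a timestep and create the noised point cloud P_O(t)
    t = np.random.randint(1, T + 1)
    P_O_t = P_O @ steps[t][:3, :3].T + steps[t][:3, 3]

    # 4) prediction target: the increment that moves the step-t pose to the step-(t-1) pose
    T_target = steps[t - 1] @ np.linalg.inv(steps[t])
    return P_O_t, T_target, t
```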
In our experiments, we observed it is important to find an appropriate value for T. When T is too large, the magnitude of the transforms between consecutive timesteps is very small, and the iterative predictions at evaluation time make tiny updates to the point cloud pose, oftentimes failing to converge. When T is too small, most of the noised point clouds will be very far from the ground truth and might look similar across training samples but require conflicting prediction targets, which causes the model to fit the data poorly. We found that values in the vicinity of T = 5 work well across our tasks (T = 2 and T = 50 both did not work well). This corresponds to an average perturbation magnitude of 2.5 cm for the translation and 18 degrees for the rotation.

After obtaining the ground-truth prediction target, we compute the gradient with respect to the loss between the prediction and the ground truth, which is composed of the Chamfer distance between the point cloud obtained by applying the predicted transform and the ground-truth next point cloud. We also found the model to work well using a combined translation mean-squared error and geodesic rotation distance [69, 70] loss.

We trained a separate model for each task, with each model training for 500 thousand iterations on a single NVIDIA V100 GPU with a batch size of 16. We used a learning rate schedule of linear warmup and cosine decay, with a maximum learning rate of 1e-4. Training takes about three days. We train the models using the AdamW [71] optimizer. Table 2 includes the number of demonstrations we used for each task.

A3 Test-time Evaluation

Here, we elaborate in more detail on the iterative de-noising procedure performed at test time. Starting with P_O and P_S, we sample K initial transforms {T̂_k^(I)}, k = 1, ..., K, where initial rotations are drawn from a uniform grid over SO(3), and we uniformly sample translations that position the object within the bounding box of the scene point cloud. We create K copies of P_O and apply each corresponding transform to create initial object point clouds {P̂_{O,k}^(I)}, where P̂_{O,k}^(I) = T̂_k^(I) P_O. We then perform the following update for I steps for each of the K initial transforms:

    T̂^(i-1) = (T_Rand T_Δ) T̂^(i),    P̂_O^(i-1) = (T_Rand T_Δ) P̂_O^(i)

where the predicted transform T_Δ is obtained as T_Δ = f(P̂_O^(i), P_S, posemb(i_to_t(i))). The transform T_Rand is sampled from an iteration-conditioned distribution that converges toward deterministically producing an identity transform as i tends toward 0. We obtain the random noise by sampling from a Gaussian distribution for both translation and rotation. For the translation, we directly sample a random 3D vector t' ∈ R^3. For the rotation, we represent the random noise via an axis-angle 3D rotation R'_aa ∈ R^3 and convert it to a rotation matrix using the SO(3) exponential map [72]. We exponentially decay the variance of these noise distributions so that they produce nearly zero effect as the iterations tend toward 0. We perform the updates in a batch. The full iterative inference procedure can be found in Alg. 1.

Evaluation timestep scheduling and prediction behavior for different timestep values. The function i_to_t is used to map the iteration number i to a timestep value t that the model has been trained on. This allows the number of steps during evaluation (I) to differ from the number of steps during training (T). For example, we found values of T = 5 to work well during training but used a default value of I = 50 for evaluation.
We observed this benefits performance, since running the iterative evaluation procedure for many steps helps convergence and enables "bouncing out" of "locally optimal" solutions. However, we found that if we provide values for i that go beyond the support of what the model is trained on (i.e., for i > T), the predictions perform poorly. Thus, the function i_to_t ensures all values i ∈ 1, ..., I are mapped to an appropriate value t ∈ 1, ..., T that the model has seen previously.

There are many ways to obtain this mapping, and different implementations produce different kinds of behavior. This is because different i_to_t schedules emphasize using the model in different ways, since the model learns qualitatively different behavior for different values of t. Specifically, for smaller values of t, the model has only been trained on "small basins of attraction", and thus the predictions are more precise and local, which allows the model to "snap on" to any solution in the immediate vicinity of the current object iterate. Figure A5 shows this in a set of artificially constrained evaluation runs where the model is constrained to use the same timestep for every step i = 1, ..., I. However, this can also lead the model to get stuck near regions that are far from any solution. On the other hand, for larger perturbations, the data starts to look more multi-modal, and the model averages out toward either a biased solution in the direction of a biased region or just an identity transform that does not move the object at all.

We find the pipeline performs best when primarily using predictions corresponding to smaller timesteps, but still incorporating predictions from higher timesteps. We thus parameterize the timestep schedule i_to_t such that it exponentially increases the number of predictions used for smaller values of t.
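The exact array-based construction we use is described in the next paragraph; the sketch below implements a schedule of this kind in a few lines, collapsing the two normalization rounds into a single normalization plus an exact-sum adjustment. The function name and default values are illustrative.

```python
import numpy as np

def build_timestep_schedule(I=50, T=5, A=2.0):
    """Map evaluation iterations to training timesteps: return an array D of length I
    with values in {1, ..., T}, where smaller timesteps receive exponentially more
    iterations (A = 1 gives a uniform schedule). Sketch only."""
    weights = A ** np.arange(T, 0, -1)                      # [A^T, ..., A^1]: most weight on t = 1
    counts = np.ceil(weights * I / weights.sum()).astype(int)
    counts = np.maximum(counts, 1)                          # every timestep appears at least once
    counts[0] -= counts.sum() - I                           # force the counts to sum to exactly I
    return np.concatenate([np.full(c, t) for t, c in enumerate(counts, start=1)])

# usage: D = build_timestep_schedule(); at evaluation iteration i, use t = D[i - 1]
```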
While there are many ways this can be implemented, we use the followingprocedure: we construct an array Dof lengthIwhere each element lies between 1 and T, and definethe mapping itotast=itot(i) =Di subscriptidenotes thei-th element of DThe arrayDis parameterized by a constant value A(where higher value of Acorresponds to usingmore predictions with smaller timesteps, while A= 1corresponds to using each timestep an equalnumber of times) and ensures that predictions for each timestep are made at least once:22Algorithm 1 Rearrangement Transform Inference via Iterative Point Cloud De-noising1:Input: Scene point cloud PS, object point cloud PO, number of parallel runs K, number ofiterations to use in evaluation I, number of iterations used in training T, pose regression model f,success classifier h, function to map from evaluation iteration values to training iteration valuesitot, parameters for controlling what fraction of evaluation iterations correspond to smallertraining timestep values A, local cropping function crop , distribution for sampling external posenoisepAnnealedRandSE(3)#Init transforms, transformed object, and cropped scene2:forkin 1,...,Kdo3: R(H)kpUnifSO(3) ()4: t(H)kpUnifBoundingBox (jPO;PS)5: ^T(H)k=R t016: ^P(H)O;k=^T(H)kPO7: P(H)S;k=crop (^P(H)O;k;PS)8:end for#Init set of transform and final point cloud solutions and classifier scores9:initS=;10:initT=;11:initP=;#Iterative pose regression12:foriinI,...,1 do#Map evaluation timestep to in-distribution training timestep13:t=itot(i;A)14: forkin 1,...,Kdo15: ^T;k=f(P(t)O;k;P(t)S;k;posemb(t))16: ifi>(0:2I)then#Apply random external noise, with noise magnitude annealed as iapproaches 017: TRand;kpAnnealedRandSE(3) (ji)18: else#Remove all external noise for the last 20% of the iterations19: TRand;k=I420: end if21: ^T(i1)k=TRand;kT;k^T(i)k22: ^P(i1)O;k=TRand;kT;k^P(i)O;k23: P(i1)S;k=crop (^P(i1)O;k;PS;t;T)24: ifi== 1 then#Predict success probabilities from final objects25: sk=h(P(0)O;k;PS)#Save final rearrangement solutions and predicted scores26:S=S[fskg27:T=T[f ^T(0)kg28:P=T[f ^P(0)O;kg29: end if30: end for31:end for#Decision rule (e.g., argmax) for output32:kout=argmax (S)33:Tout=T[kout]#Return top-scoring transform and full set of solutions for potential downstream planning/search34:return Tout;T;P;S23Figure A5: Examples of running our full iterative evaluation procedure for Isteps with the model constrainedto use a fixed value for ton each iteration. This highlights the different behavior the model has learned fordifferent timesteps in the de-noising process. For timesteps near 1, the model has learned to make very localupdates that “snap on” to whatever features are in the immediate vicinity of the object. As the timesteps getlarger, the model considers a more global context and makes predictions that reach solutions that are fartheraway from the initial object pose. However, these end up more biased to a single solution in a region where theremay be many nearby solutions (see the top row of shelves where there are four slots that the model finds whenusing timestep 1, but the model only reaches two of them with timestep t= 2and one of them with t= 3). 
Figure A5: Examples of running our full iterative evaluation procedure for I steps with the model constrained to use a fixed value for t on each iteration. This highlights the different behavior the model has learned for different timesteps in the de-noising process. For timesteps near 1, the model has learned to make very local updates that “snap on” to whatever features are in the immediate vicinity of the object. As the timesteps get larger, the model considers a more global context and makes predictions that reach solutions that are farther away from the initial object pose. However, these end up more biased to a single solution in a region where there may be many nearby solutions (see the top row of shelves where there are four slots that the model finds when using timestep 1, but the model only reaches two of them with timestep t = 2 and one of them with t = 3). For even larger values of t, the model has learned a much more biased and “averaged out” solution that fails to rotate the object and only approximately reaches the scene regions corresponding to valid placements.

A4 Success Classifier Details
In this section, we present details on training and applying the success classifier h that we use for ranking and filtering the set of multiple predicted SE(3) transforms produced by RPDiff.
Training Data. To train the success classifier, we use the demonstrations to generate positive and negative examples, where the positives are labeled with success likelihood 1.0 and the negatives have success likelihood 0.0. The positives are simply the unperturbed final point clouds and the negatives are perturbations of the final object point clouds. We use the same sampling scheme of sampling a rotation from a uniform distribution over SO(3) and sampling a translation uniformly from within a bounding box around the scene point cloud.
Model Architecture. We use an identical Transformer architecture as described in Section A2, except that we use a single output MLP followed by a sigmoid to output the predicted success likelihood, we do not condition on the timestep, and we provide the uncropped scene point cloud.
Training Details. We supervise the success classifier predictions with a binary cross-entropy loss between the predicted and ground truth success likelihood. We train for 500k iterations with batch size 64 on a V100 GPU, which takes 5 days. We augment the data by rotating the combined object-scene point cloud by random 3D rotations to increase dataset diversity.

A5 Experimental Setup
This section describes the details of our experimental setup in simulation and the real world.
A5.1 Simulated Experimental Setup
We use PyBullet [17] and the AIRobot [73] library to set up the tasks in the simulation and quantitatively evaluate our method along with the baselines. The environment consists of a table with the shapes that make up the object and the scene, and the multiple simulated cameras that are used to obtain the fused 3D point cloud. We obtain segmentation masks of the object and the scene using PyBullet’s built-in segmentation abilities.
A5.2 Real World Experimental Setup
In the real world, we use a Franka robot arm with a Robotiq 2F140 parallel-jaw gripper attached for executing the predicted rearrangements. We also use four Realsense D415 RGB-D cameras with known extrinsic parameters. Two of these cameras are mounted to provide a clear, close-up view of the object, and the other two are positioned to provide a view of the scene objects.
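The fused point clouds used above are obtained from the calibrated depth views in a standard way; a minimal sketch of one such fusion step is shown below. This is not our exact perception code; the function name, argument layout, and pinhole-camera assumptions are illustrative.

```python
import numpy as np

def fuse_depth_views(depths, intrinsics, extrinsics):
    """Sketch of fusing calibrated depth views into one world-frame point cloud:
    deproject each HxW depth map with its 3x3 pinhole intrinsics, then apply the
    corresponding 4x4 camera-to-world extrinsic transform."""
    clouds = []
    for depth, K, T_wc in zip(depths, intrinsics, extrinsics):
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth.reshape(-1)
        valid = z > 0                                     # drop missing depth readings
        x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
        y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
        pts_cam = np.stack([x, y, z], axis=1)[valid]
        pts_world = pts_cam @ T_wc[:3, :3].T + T_wc[:3, 3]
        clouds.append(pts_world)
    return np.concatenate(clouds, axis=0)
```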
Weuse a combination of Mask-RCNN, density-based Euclidean clustering [ 74], and manual keypointannotation to segment the object, and use simple cropping heuristics to segment the overall scenefrom the rest of the background/observation (e.g., remove the table and the robot from the observationso we just see the bookshelf with the books on it).A6 Evaluation DetailsThis section presents further details on the tasks we used in our experiments, the baseline methodswe compared RPDiff against, and the mechanisms we used to apply the predicted rearrangement tothe object in simulation and the real world.A6.1 Tasks and Evaluation CriteriaTask Descriptions . We consider three relational rearrangement tasks for evaluation: (1) hanging amug on the hook of a rack, where there might be multiple racks on the table, and each rack mighthave multiple hooks, (2) inserting a book into one of the multiple open slots on a randomly posedbookshelf that is partially filled with existing books, and (3) placing a cylindrical can upright eitheron an existing stack of cans or on a flat open region of a shelf where there are no cans there. Eachof these tasks features many placing solutions that achieve the desired relationship between theobject and the scene (e.g., multiple slots and multiple orientations can be used for placing, multipleracks/hooks and multiple orientations about the hook can be used for hanging, multiple stacks and/ormultiple regions in the cabinet can be used for placing the can, which itself can be placed with eitherflat side down and with any orientation about its cylindrical axis).25Evaluation Metrics and Success Criteria . To quantify performance, we report the average successrate over 100 trials, where we use the ground truth simulator state to compute success. For a trial tobe successful, the object OandSmust be in contact and the object must have the correct orientationrelative to the scene (for instance, the books must be onthe shelf, and must not be oriented with thelong side facing into the shelf). For the can/cabinet task, we also ensure that the object Odid not runinto any existing stacks in the cabinet, to simulate the requirement of avoiding hitting the stacks andknocking them over.We also quantify coverage via recall between the full set of predicted solutions and the precomputedset of solutions that are available for a given task instance. This is computed by finding the closestprediction to each of the precomputed solutions and checking whether the translation and rotationerror between the prediction and the solution is within a threshold (we use 3.5cm for the translationand 5 degrees for the rotation). If the error is within this threshold, we count the solution as “detected”.We compute recall for a trial as the total number of “detected solutions” divided by the total number ofsolutions available and report overall recall as the average over the 100 trials. Precision is computedin an analogous fashion but instead checks whether each prediction is within the threshold for at leastone of the ground truth available solutions.A6.2 Baseline Implementation and DiscussionIn this section, we elaborate on the implementation of each baseline approach in more detail andinclude further discussion on the observed behavior and failure modes of each approach.A6.2.1 Coarse-to-Fine Q-attention (C2F-QA).C2F-QA adapts the classification-based approach proposed in [ 8], originally designed for pick-and-place with a fixed robotic gripper, to the problem of relational object rearrangement. 
We voxelize thescene and use a local PointNet [ 75] that operates on the points in each voxel to compute per-voxelinput features. We then pass this voxel feature grid through a set of 3D convolution layers to computean output voxel feature grid. Finally, the per-voxel output features are each passed through a sharedMLP which predicts per-voxel scores. These scores are normalized with a softmax across the grid torepresent a distribution of “action values” representing the “quality” of moving the centroid of theobject to the center of each respective voxel. This architecture is based on the convolutional pointcloud encoder used in Convolutional Occupany Networks [7].To run in a coarse-to-fine fashion, we take the top-scoring voxel position (or the top- kvoxels ifmaking multiple predictions), translate the object point cloud to this position, and crop the scenepoint cloud to a box around the object centroid position. From this cropped scene and the translatedobject, we form a combined object-scene input point cloud and re-voxelize just this local portion ofthe point cloud at a higher resolution. We then compute a new set of voxel features with a separatehigh-resolution convolutional point cloud encoder. Finally, we pool the output voxel features fromthis step and predict a distribution over a discrete set of rotations to apply to the object. We founddifficulty in using the discretized Euler angle method that was applied in [ 8], and instead directlyclassify in a binned version of SO(3)by using an approximate uniform rotation discretization methodthat was used in [76].We train the model to minimize the cross entropy loss for both the translation and the rotation (i.e.,between the ground truth voxel coordinate containing the object centroid in the demonstrations andthe ground truth discrete rotation bin). We use the same object point cloud perturbation scheme tocreate initial “noised” point clouds for the model to de-noise but have the model directly predict howto invert the perturbation transform in one step.Output coverage evaluation . Since C2F-QA performs the best in terms of task success among allthe baselines and is naturally suited for handling multi-modality by selecting more than just theargmax among the binned output solutions, we evaluate the ability of our method and C2F-QA toachieve high coverage among the available placing solutions while still achieving good precision(see Section 5.2). To obtain multiple output predictions from C2F-QA, we first select multiple voxelpositions using the top-k voxel scores output by the PointNet !3D CNN!MLP pipeline. We thencopy the object point cloud and translate it to each of the selected voxel positions. For each selectedposition, we pool the local combined object-scene point cloud features and use the pooled featuresto predict a distribution of scores over the discrete space of rotations. Similar to selecting multiple26voxel positions, we select the top-k scoring rotations and use this full set of multiple translations +multiple rotations-per-translation as the set of output transforms to use for computing coverage.Relationship to other “discretize-then-classify” methods . C2F-QA computes per-voxel featuresfrom the scene and uses these to output a normalized distribution of scores representing the quality ofa “translation” action executed at each voxel coordinate. 
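To illustrate this “score each discrete location” pattern concretely, here is a small, non-authoritative sketch of the coarse translation-selection step; the per-voxel scoring network is abstracted away and all names are our own.

```python
import numpy as np

def topk_translation_candidates(voxel_scores, voxel_centers, object_points, k=5):
    """Normalize per-voxel scores with a softmax over the grid and treat the
    top-k voxel centers as candidate translations for the object centroid.
    voxel_scores: (V,) raw scores, voxel_centers: (V, 3) voxel center positions."""
    probs = np.exp(voxel_scores - voxel_scores.max())
    probs /= probs.sum()                                   # softmax over the voxel grid
    top_idx = np.argsort(probs)[-k:][::-1]                 # k highest-scoring voxels, best first
    centroid = object_points.mean(axis=0)
    # Each candidate translation moves the object centroid to the selected voxel center
    return [voxel_centers[i] - centroid for i in top_idx]
```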
This idea of discretizing the scene and usingeach discrete location as a representation of a translational action has been successfully applied by anumber of works in both 2D and 3D [ 41,44,77]. In most of these pipelines, the translations typicallyrepresent gripper positions, i.e., for grasping. In our case, the voxel coordinates represent a locationto move the object for rearrangement.However, techniques used by “discreteize-then-classify” methods for rotation prediction somewhatdiverge. C2F-QA and the recently proposed PerceiverActor [ 44] directly classify the best discreterotation based on pooled network features. On the other hand, TransporterNets [ 41] and O2O-Afford [ 43] exhaustively evaluate the quality of different rotation actions by “convolving” somerepresentation of the object being rearranged (e.g., a local image patch or a segmented object pointcloud) in allpossible object orientations, with respect to each position in the entire discretized scene(e.g., each pixel in the overall image or each point in the full scene point cloud). The benefit is theability to help the model more explicitly consider the “interaction affordance” between the object andthe scene at various locations and object orientations and potentially make a more accurate predictionof the quality of each candidate rearrangement action. However, the downside of this “exhaustivesearch” approach is the computational and memory requirements are much greater, hence thesemethods have remained limited to lower dimensions.A6.2.2 Relational Neural Descriptor Fields (R-NDF).R-NDF [ 18] uses a neural field shape representation trained on category-level 3D models of the objectsused in the task. This consists of a PointNet encoder with SO(3)-equivariant Vector Neuron [ 78]layers and an MLP decoder. The decoder takes as input a 3D query point and the output of the pointcloud encoder, and predicts either the occupancy or signed distance of the 3D query point relative tothe shape. After training, a point or a rigid set of points in the vicinity of the shape can be encoded byrecording their feature activations of the MLP decoder. The corresponding point/point set relative to anew shape can then be found by locating the point/point set with the most similar decoder activations.These point sets can be used to parameterize the pose of local oriented coordinate frames, which canrepresent the pose of a secondary object or a gripper that must interact with the encoded object.R-NDFs have been used to perform relational rearrangement tasks via the process of encoding task-relevant coordinate frames near the object parts that must align to achieve the desired rearrangement,and then localizing the corresponding parts on test-time objects so a relative transform that aligns themcan be computed. We use the point clouds from the demonstrations to record a set of task-relevantcoordinate frames that must be localized at test time to perform each of the tasks in our experiments.The main downside of R-NDF is if the neural field representation fails to faithfully represent the shapecategory, the downstream corresponding matching also tends to fail. Indeed, owing to the globalpoint cloud encoding used by R-NDF, the reconstruction quality on our multi-rack/bookshelf/cabinetscenes is quite poor, so the subsequent correspondence matching does not perform well on any of thetasks we consider.A6.2.3 Neural Shape Mating (NSM) + CV AE.Neural Shape Mating (NSM) [ 3] uses a Transformer to process a pair of point clouds and predict howto align them. 
The method was originally deployed on the task of “mating” two parts of an object thathas been broken but can be easily repurposed for the analogous task of relational rearrangement givena point cloud of a manipulated object and a point cloud of a scene/“parent object”. Architecturally,NSM is the same as our relative pose regression model, with the key differences of (i) being trainedon arbitrarily large perturbations of the demonstration point clouds, (ii) not using local cropping,and (iii) only making a single prediction. We call this baseline “NSM-base” because we do notconsider the auxiliary signed-distance prediction and learned discriminator proposed in the originalapproach [ 3]. As shown in Table 1, the standard version of NSM fails to perform well on any ofthe tasks that feature multi-modality in the solution space (nor can the model successfully fit thedemonstration data). Therefore, we adapted it into a conditional variational autoencoder (CV AE) thatat least has the capacity to learn from multi-modal data and output a distribution of transformations.27We use the same Transformer architecture for both the CV AE encoder and decoder with some smallmodifications to the inputs and outputs to accommodate (i) the encoder also encoding the groundtruth de-noising transforms and predicting a latent variable z, and (ii) the decoder conditioning on zin addition to the combined object-scene point cloud to reconstruct the transform. We implementthis with the same method that was used to incorporate the timestep information in our architecture –for the encoder, we include the ground truth transform as both an additional input token and via aresidual connection with the global output feature, and for the decoder, we include the latent variablein the same fashion. We also experimented with concatenating the residually connected features anddid not find any benefit. We experimented with different latent variable dimensions and weightingcoefficients for the reconstruction and the KL divergence loss terms, since the CV AE models stillstruggled to fit the data well when the KL loss weight was too high relative to the reconstruction.However, despite this tuning to enable the CV AE to fit the training data well, we found it struggled toperform well at test time on unseen objects and scenes.A6.3 Common failure modesThis section discusses some of the common failure modes for each method on our three tasks.ForBook/Shelf , our method occasionally outputs a solution that ignores an existing book alreadyplaced in the shelf. We also sometimes face slight imprecision in either the translation or rotationprevents the book from being able to be inserted. Similarly, the main failure modes on this taskfrom the baselines are more severe imprecision. C2F-QA is very good at predicting voxel positionsaccurately (i.e., detecting voxels near open slots of the shelf) and the rotation predictions are regularlyclose to something that would work for book placement, but the predicted book orientations areregularly too misaligned with the shelf to allow the insertion to be completed.ForMug/Rack , a scenario where our predictions sometimes fail is when there is a tight fit betweenthe nearby peg and the handle of the mug. For C2F-QA, the predictions appear to regularly ignore thelocation of the handle when orienting the mug – the positions are typically reasonable (e.g., right nextto one of the pegs on a rack) but the orientation oftentimes appears arbitrary. 
We also find C2F-QAachieves the highest training loss on this task (and hypothesize this occurs for the same reason).Finally, for Can/Cabinet , a common failure mode across the board is predicting a can position thatcauses a collision between the can being placed and an existing stack of cans, which we don’t allowto simulate the requirement of avoiding knocking over an existing stack.A6.4 Task ExecutionThis section describes additional details about the pipelines used for executing the inferred relationsin simulation and the real world.A6.4.1 Simulated Execution PipelineThe evaluation pipeline mirrors the demonstration setup. Objects from the 3D model dataset for therespective categories are loaded into the scene with randomly sampled position and orientation. Wesample a rotation matrix uniformly from SO(3), load the object with this orientation, and constrainthe object in the world frame to be fixed in this orientation. We do not allow it to fall on the tableunder gravity, as this would bias the distribution of orientations covered to be those that are stable ona horizontal surface, whereas we want to evaluate the ability of each method to generalize over allofSO(3). In both cases, we randomly sample a position on/above the table that are in view for thesimulated cameras.After loading object and the scene, we obtain point clouds POandPSand use RPDiff to obtain arearrangement transform to execute. The predicted transformation is applied by resetting the objectstate to a “pre-placement” pose and directly actuating the object with a position controller to follow astraight-line path. Task success is then checked based on the criteria described in the section above.Pre-placement Offset and Insertion Controller . Complications with automatic success evaluationcan arise when directly resetting the object state based on the predicted transform. To avoid suchcomplications, we simulate a process that mimics a closed-loop controller executing the last fewinches of the predicted rearrangement from a “pre-placement” pose that is a pure translational offsetfrom the final predicted placement. For our quantitative evaluations, we use the ground truth state of28the objects in the simulator together with prior knowledge about the task to determine the direction ofthis translational offset. For the mug/rack task, we determine the axis that goes through the handleand offset by a fixed distance in the direction of this axis (taking care to ensure it does not go in theopposite direction that would cause an approach from the wrong side of the rack). For the can/cabinettask and the book/bookshelf task, we use the known top-down yaw component of the shelf/cabinetworld frame orientation to obtain a direction that offsets along the opening of the shelf/cabinet.To execute the final insertion, we reset to the computed pre-placement pose and directly actuate theobject with a position controller to follow a straight line path from the pre-placement pose to thefinal predicted placement. To simulate some amount of reactivity that such an insertion controllerwould likely possess in a full-stack rearrangement system, we use the simulator to query contactforces that are detected between the object and the scene. If the object pose is not close to the finalpredicted pose when contacts are detected, we back off and sample a small “delta” translation andbody-frame rotation to apply to the object before attempting another straight-line insertion. 
Thesesmall adjustments are attempted up to a maximum of 10 times before the execution is counted as afailure. If, upon detecting contact between the object and the scene, the object is within a threshold ofits predicted place pose, the controller is stopped and the object is dropped and allowed to fall undergravity (which either allows it to settle stably in its final placement among the scene object, or causesit to fall away from the scene). We use this same procedure across all methods that we evaluated inour experiments.We use this combination of a heuristically-computed pre-placement pose and “trial-and-error” inser-tion controller because (i) it removes the need for a full object-path planning component that searchesfor a feasible path the object should follow to the predicted placement pose (as this planning problemwould be very challenging to solve to due all the nearby collisions between the object and the scene),(ii) it helps avoid other artificial execution failures that can arise when we perform the insertion fromthe pre-placement pose in a purely open-loop fashion, and (iii) it enables us to avoid complicationsthat can arise from directly resetting the object state based on the predicted rearrangement transform.However, we also observe some failure modes and brittleness that arises from our use of manualcomputation and heuristics to compute these pre-placement poses, and in the future, we would like toexplore predicting additional feasible waypoint poses that help construct a full path from start to goalfor the object. Below, we include further details and discussion on the heuristics used for computingthe pre-placement offsets in simulation.A6.4.2 Computing pre-placement offset poses with task-specific heuristics in simulationFuture versions ought to introduce predictions of more intermediate waypoints (note diffusion hasshown to be useful in this context as well, e.g,. for motion planning/trajectory modeling [1, 2, 3]).Book/bookshelf and Can/cabinet . Since the simulator pose of each object is available, we usethe top-down orientation of the shelf/cabinet to obtain the offset vector. The [x;y]world-framecomponents of the vector are computed such that, from a top-down perspective, the 2D vector isperpendicular to the front opening of the shelf/cabinet. The zcomponent of the vector is set to 0.This allows the books/cans to be moved to the vicinity of the final placement, with a pure 2D offsetsuch that moving along this offset in a straight line can achieve successful insertion/stacking. If theorientation or the position of the predicted pose is wrong following the 2D vector from the offsetversion of these poses can cause the insertion/placement to fail. Example reasons for this failureinclude the book not fitting, due to an incorrect orientation, and the can colliding with one of theexisting stacks (which we check for and count as a failure).Mug/rack . We use simulated ShapeNet mugs that have a canonical orientation. Based on thiscanonical orientation, we know the 3D vector direction corresponding to a vector that points throughthe opening of the handle on the mug. This vector can point in two different directions; we selectthe one with the larger +zcomponent, based on the knowledge that the mug should approach therack from above (since the hooks are angled slightly upward, to avoid the mugs falling when theyare hung). 
Using this vector, we translate the mug from its predicted hanging pose along a direction that goes through the handle, so that when we actuate it from this offset (assuming the prediction is accurate), the hook ends up going through the handle. If the position or orientation of the prediction is incorrect, then the offset pose will be computed so that the mug either cannot be placed on the rack (due to collisions occurring between the handle and the hook) or the placement will miss the hook entirely (so the mug falls away and fails to be hung) – both of these cases are treated as failures.

A6.4.3 Real World Execution Pipeline
Here, we repeat the description of how we execute the inferred transformation using a robot arm, with additional details. At test time, we are given point clouds P_O and P_S of the object and scene, and we obtain T, the SE(3) transform to apply to the object, from RPDiff. T is applied to O by transforming an initial grasp pose T_grasp, which is obtained using a separate grasp predictor [10], by T to obtain a placing pose T_place = T · T_grasp. As in the simulation setup, we use a set of task-dependent heuristics to compute an additional “pre-placement” pose T_pre-place, from which we follow a straight-line end-effector path to reach T_place. We then use off-the-shelf inverse kinematics and motion planning to move the end-effector to T_grasp and T_place.
To ease the burden of collision-free planning with a grasped object whose 3D geometry is unknown, we also compute an additional set of pre-grasp and post-grasp waypoints which are likely to avoid causing collisions between the gripper and the object during the execution to the grasp pose, and collisions between the object and the table or the rest of the scene when moving the object to the pre-placement pose. Each phase of the overall path is executed by following the joint trajectory in position control mode and opening/closing the fingers at the correct respective steps. The whole pipeline can be run multiple times in case the planner returns infeasibility, as the inference methods for both grasp and placement generation have the capacity to produce multiple solutions.

A6.4.4 Computing pre-placement offset poses with task-specific heuristics in the real world
Here, we describe the details of the heuristics we used to compute the pre-placement poses. We acknowledge that noise in the computation of these pre-placement poses was a common source of execution failure in our real-world qualitative trials, and future work that also learns to robustly predict additional feasible waypoint poses that help reach the final placement is likely to support improved rearrangement performance in the real world.
Book/bookshelf. In the real world, we again use the knowledge that the placement offset should primarily consist of a 2D [x, y] translation from the predicted pose on the shelf. We fit an oriented bounding box to the predicted book point cloud and select the 2D vector corresponding to the longest edge on the bottom face of the bounding box (with a small +z component, to help the placement avoid clipping the shelf with the bottom of the book by approaching from slightly above). Again, this 2D vector can have two potential directions, and we select the one that points toward the center of the table (assuming we are not placing on a shelf from the far edges of the table).
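As a rough illustration of this bookshelf heuristic, the sketch below approximates the oriented box fit with a PCA of the book point cloud and resolves the direction ambiguity toward the table center. It is not the exact implementation; the function name, the offset distance, and the z-lift value are illustrative choices rather than values from our pipeline.

```python
import numpy as np

def book_preplace_offset(book_points, table_center_xy, dist=0.10, z_lift=0.02):
    """Compute an approximate pre-placement offset for a book: take the dominant
    horizontal principal axis of the book point cloud as the approach direction,
    orient it toward the table center, and add a small +z lift. Returns a 4x4
    translation-only transform to compose with the predicted placing pose."""
    centered = book_points - book_points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    axis = Vt[0]                                   # direction of largest variance (long side of the book)
    approach = np.array([axis[0], axis[1], 0.0])   # keep only the horizontal component
    approach /= np.linalg.norm(approach) + 1e-9
    # Resolve the sign ambiguity: choose the direction that points toward the table center
    to_center = table_center_xy - book_points.mean(axis=0)[:2]
    if np.dot(approach[:2], to_center) < 0:
        approach = -approach
    offset = dist * approach + np.array([0.0, 0.0, z_lift])
    T_offset = np.eye(4)
    T_offset[:3, 3] = offset
    return T_offset
```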
Can/cabinet. For the real-world can stacking task, we cropped a portion of the cabinet point cloud (to avoid any outlier points from the table), fit a bounding box to it, and selected the 2D vector corresponding to the corner on the bottom face of the bounding box that pointed from the center of the cabinet point cloud most closely to the center of the table (again, assuming we were approaching from near the center of the table, rather than the table edges).
Mug/rack. In the real world, we attempt to approximate the 3D offset vector based on fitting a 3D line to the part of the point cloud corresponding to the nearby hook. Due to noise in the point cloud and an imperfect ability to solely segment out the hook from the body of the rack, this offset computation was the least robust and introduced some failed execution attempts.

A7 Extra Ablations
In this section, we perform additional experiments wherein different system components are modified and/or ablated.
With vs. Without Success Classifier. We use the neural network h to act as a success classifier and support selecting a “best” output among the K predictions made by our iterative de-noising procedure. Another simple mechanism for selecting an output index k_exec for execution would be to uniformly sample among the K outputs. However, due to the local nature of the predictions at small values of t and the random guess initializations used to begin the inference procedure, some final solutions end in configurations that do not satisfy the task (see the book poses that converge to a region where there is no available slot for placement in Figure A5 for A = 10).
Therefore, a secondary benefit of incorporating h is to filter out predictions that may have converged to these “locally optimal” solutions, as these resemble some of the negatives that the classifier has seen during training. Indeed, we find the average success rate across tasks with RPDiff when using the success classifier is 0.88, while the average success when uniformly sampling the output predictions is 0.83. This difference is relatively marginal, indicating that the majority of the predictions made by the pose de-noising procedure in RPDiff are precise enough to achieve the task. However, the performance gap indicates that there is an additional benefit of using a final success classifier to rank and filter the outputs based on predicted success.

Figure A6 (panels, left to right: no external noise; external noise with a small noise scale; external noise with a medium noise scale): Examples of running our full iterative evaluation procedure for I steps with different values of A (and subsequently, D) in our i_to_t function (which maps from test-time iteration values i = 1, ..., I to the timestep values t = 1, ..., T that were used in training), and with different amounts of external noise T_Rand added from the annealed external noise distribution p_AnnealedRandSE(3)(·). We observe that with large values of A, the model makes more predictions with smaller values of t. These predictions are more local and the overall solutions converge to a more broad set of rearrangement transforms. This sometimes leads to “locally optimal” solutions that fail at the desired task (see top right corner with A = 10). With small A, the early iterations are more biased toward the average of a general region, so the set of transforms tends to collapse on a single solution within a region. By incorporating external noise, a better balance of coverage for smaller values of A and “local optima” avoidance for larger values of A can be obtained.
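Since both Figure A6 and the ablation below refer to the external noise T_Rand, a minimal, non-authoritative sketch of such an annealed SE(3) noise sampler is shown here. It follows the σ(i) = a · exp(−b · i / I) form and the a, b values reported in Table 5, under the assumption that i in this expression counts elapsed evaluation iterations (so the noise starts near a and decays toward zero); drawing a random axis and Gaussian angle/translation magnitudes is our own modeling choice.

```python
import numpy as np

def sample_annealed_noise(i, I, a_rot_deg=20.0, a_trans=0.03, b=6.0, rng=np.random):
    """Sample an SE(3) perturbation T_Rand whose rotation (axis-angle) and
    translation magnitudes are drawn from zero-mean Gaussians with scale
    sigma(i) = a * exp(-b * i / I), where i is the number of elapsed iterations."""
    sigma_rot = np.deg2rad(a_rot_deg) * np.exp(-b * i / I)
    sigma_trans = a_trans * np.exp(-b * i / I)
    axis = rng.standard_normal(3)
    axis /= np.linalg.norm(axis) + 1e-9                    # random unit rotation axis
    angle = rng.normal(0.0, sigma_rot)                     # Gaussian rotation magnitude
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)  # Rodrigues' formula
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.normal(0.0, sigma_trans, size=3)        # Gaussian translation noise
    return T
```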
Noise vs. No Noise. In each update of the iterative evaluation procedure, we update the overall predicted pose and the object point cloud by a combination of a transform predicted by f and a randomly sampled “external noise” transform T_Rand. The distribution that T_Rand is sampled from is parameterized by the iteration number i to converge toward producing an identity transform, so the final pose updates are purely a function of the network f.
The benefit of incorporating the external noise is to better balance between precision and coverage. First, external noise helps the pose/point cloud at each iteration “bounce out” of any locally optimal regions and end up near regions where a high quality solution exists. Furthermore, if there are many high-quality solutions close together, the external noise on later iterations helps maintain some variation in the pose so that more overall diversity is obtained in the final set of transform solutions. For instance, see the qualitative comparisons in Figure A6 that include iterative predictions both with and without external noise. For a value of A = 1 in i_to_t, only two of the available shelf slots are found when no noise is included. With noise, however, the method finds placements that cover four of the available slots. Quantitatively, we also find that incorporating external noise helps in terms of overall success rate and coverage achieved across tasks. The average (Success Rate, Recall) across our three tasks with and without noise was found to be (0.88, 0.44) and (0.83, 0.36), respectively.
Number of diffusion steps T during training. The total number of steps T and the noise distribution for obtaining a perturbation transform T^(t)_noise affect the magnitude of the translation and rotation predictions that must be made by the model f. While we did not exhaustively search over these hyperparameters, early in our experiments we found that very small values of T (e.g., T = 2) cause the predictions to be much more imprecise. This is due to the averaging that occurs between training samples when they are too far away from the ground truth. In this regime, the examples almost always “look multi-modal” to the model. On the other hand, for large values of T (e.g., T = 50), the incremental transforms that are used to de-noise become very small and close to the identity transform. When deployed, models trained on this data end up failing to move the object from its initial configuration because the network has only learned to make extremely small pose updates. We found a moderate value of T = 5 works well across each of our tasks, though other similar values in this range can likely also provide good performance. This approximately leads the average output scale of the model to be near 2.5 cm translation and 18-degree rotation. We also observe a benefit in biasing sampling for the timesteps t = 1, ..., T to focus on smaller values. This causes the model to see more examples close to the ground truth and make more precise predictions on later iterations during deployment. We achieve this biased sampling by sampling t from an exponentially decaying categorical probability distribution over discrete values 1, 2, ..., T.
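A minimal sketch of one way to implement this biased timestep sampling is shown below; the geometric weighting and the decay rate are our own assumptions, since the exact distribution parameters are not specified above.

```python
import numpy as np

def sample_training_timestep(T=5, decay=0.5, rng=np.random):
    """Draw t from an exponentially decaying categorical distribution over
    {1, ..., T}, so small t (point clouds close to the ground truth) is sampled
    more often than large t during training."""
    weights = decay ** np.arange(T)            # [1, decay, decay^2, ...] for t = 1, ..., T
    probs = weights / weights.sum()
    return int(rng.choice(np.arange(1, T + 1), p=probs))
```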
Incremental targets vs. full targets. As discussed in Section 3.1, encouraging the network f to predict values with roughly equal magnitude is beneficial. To confirm this observation from the literature, we quantitatively evaluate a version of the de-noising model f trained to predict the full de-noising transform (T^(t)_noise)^(−1). The quantitative (Success Rate, Recall) results averaged across our three tasks with the incremental de-noising targets are (0.88, 0.44), while those for the model trained on full de-noising targets are (0.76, 0.34). These results indicate a net benefit in using the incremental transforms as de-noising prediction targets during training.
Value of A in i_to_t. In this section, we discuss the effect of the value A in the i_to_t function used during the iterative evaluation procedure. The function i_to_t maps evaluation iteration values i to timestep values t that were seen during training. For instance, we may run the evaluation procedure for 50 iterations, while the model may have only been trained to take values up to t = 5 as input. Our i_to_t function is parameterized by A such that larger values of A lead to more evaluation iterations with small values of t. As A approaches 1, the number of iterations for each value of t becomes equal (i.e., for A = 1, the number of predictions made for each value of t is equal to I/T).
Figure A6 shows qualitative visualizations of de-noising the pose of a book relative to a shelf with multiple available slots with different values of A in the i_to_t function. This example shows that the solutions are more biased to converge toward a single solution for smaller values of A. This is because more of the predictions use larger values of t, which correspond to perturbed point clouds that are farther from the ground truth in training. For these perturbed point clouds, their association with the correct target pose compared to other nearby placement regions is more ambiguous. Thus, for large t, the model learns an averaged-out solution that is biased toward a region near the average of multiple placement regions that may be close together. On the other hand, for large A, more predictions correspond to small values of t like t = 1 and t = 0. For these timesteps, the model has learned to precisely snap onto whatever solutions may exist nearby. Hence, the pose updates are more local and the overall coverage across the K parallel runs is higher. The tradeoff is that these predictions are more likely to remain stuck near a “locally optimal” region where a valid placement pose may not exist. Table 3 shows the quantitative performance variation on the Book/Shelf task for different values of A in the i_to_t function. These results reflect the trend toward higher coverage and marginally lower success rate for larger values of A.

Metric                 Value of A in i_to_t
                       1       2       5       10      20
Success Rate           1.00    0.95    0.96    0.94    0.90
Recall (coverage)      0.37    0.41    0.48    0.48    0.52
Table 3: Performance for different values of A in i_to_t. Smaller values of A obtain marginally better precision (success rate) at the expense of worse coverage (lower recall).

A8 Further Discussion on Real-world System Engineering and Limitations
This section provides more details on executing rearrangement via pick-and-place on the real robot (to obtain the results shown in Figures 1 and 4) and discusses additional limitations of our approach.
A8.1 Executing multiple predicted transforms in sequence in real-world experiments
The output of the pose diffusion process in RPDiff is a set of K SE(3) transforms {T̂^(0)_k}_{k=1}^K. To select one for execution, we typically score the outputs with the success classifier h and search through the solutions while considering other feasibility constraints such as collision avoidance and robot workspace limits. However, to showcase executing a diverse set of solutions in our real-world experiments, a human operator performs a final step of visually inspecting the set of feasible solutions and deciding which one to execute.
This was mainly performed to ease the burden of recording robotexecutions that span the space of different solutions (i.e., to avoid the robot executing multiple similarsolutions, which would fail to showcase the diversity of the solutions produced by our method).A8.2 Expanded set of limitations and limiting assumptionsSection 7 mentions some of the key limitations of our approach. Here, we elaborate on these anddiscuss additional limitations, as well as potential avenues for resolving them in future work.•We train from scratch on demonstrations and do not leverage any pre-training or feature-sharingacross multiple tasks. This means we require many demonstrations for training. A consequence ofthis is that we cannot easily provide enough demonstrations to train the diffusion model in the realworld (while still enabling it to generalize to unseen shapes, poses, and scene layouts). Furthermore,because we train only in simulation and directly transfer to the real world, the domain gap causessome challenges in sim2real transfer, so we do observe worse overall prediction performance in thereal world. This could be mitigated if the number of demonstrations required was lower and wecould train the model directly on point clouds that appear similar to those seen during deployment.•In both simulation and the real world, we manually completed offset poses for moving the objectbefore executing the final placement. A more ideal prediction pipeline would involve generating“waypoint poses” along the path to the desired placement (or even the full collision-free path, e.g.,as in [79]) to support the full insertion trajectory rather than just specifying the final pose.•Our method operates using a purely geometric representation of the object and scene. As such,there is no notion of physical/contact interaction between the object and the scene. If physicalinteractions were considered in addition to purely geometric/kinematic interactions/alignment, themethod may be even more capable of accurate final placement prediction and avoid some of thesmall errors that sometimes occur. For instance, a common error in hanging a mug on a rack isto have the handle justmiss the hook on the rack. While these failed solutions are geometricallyvery close to being correct, physically, they are completely different (i.e., in one, contact occursbetween the two shapes, while in the other, there is no contact that can support the mug hanging).•Our method operates using 3D point clouds which are currently obtained from depth cameras.While this permits us to perform rearrangements with a wide variety of real-world objects/scenesthat can be sensed by depth cameras, there are many objects which cannot be observed by depthcameras (e.g., thin, shiny, transparent objects). Investigating a way to perform similar relationalobject-scene reasoning in 6D using signals extracted from RGB sensors would be an excitingavenue to investigate.33A9 Additional ResultsA9.1 Training multi-task models for pose de-noisingWhile Section 5 shows results for models trained on datasets corresponding to single tasks, here, wediscuss preliminary results on training one model on data from all the tasks together. In particular,we trained the diffusion model fon the combined set of demonstrations across all three tasks andevaluated its performance on held-out test instances of each task. 
The average success rate acrosstasks was 85%, which is comparable to the performance achieved by the single-task models (88%).A9.2 Multi-step Manipulation with RPDiffIn this section, we provide a qualitative example of how RPDiff can be used to support predictingand executing rearrangement actions for tasks requiring multiple steps and/or manipulating multipleobjects in sequence. We use the example of placing three books on a table into a shelf, one by one.Initial scene + 3 book point cloudsFinal transformed book point clouds, after sequence of predicted placementsFigure A7: (Left) Initial environment with three books on a table (with corresponding point clouds shown ingreen, red, and blue) along with a bookshelf with multiple open slots (bookshelf point cloud shown in black).The task is to place all three books into the shelf by sequentially predicting transforms that should be applied toeach of them. We will use RPDiff to achieve this by cycling through each book point cloud and updating thecorresponding scene point cloud on each step. (Right) The predicted transform for each book is shown (see thegreen, red, and blue point clouds transformed into configurations on the shelf).In Fig. A7 (left), we show the original scene with three books and a shelf. In Fig. A7 (right), weshow the same scene and initial objects, along with corresponding transformed objects (with colorsindicating which initial and final book point clouds go together). These final point clouds have beenobtained by sequentially (i) inferring a relative SE(3) transform of one of the books, followed by (ii)modifying the scene point cloud to include the newly-transformed book point cloud, so that it can beconsidered as part of the scene on the prediction of where to place the next book.To begin, the first book (picked at random - shown in blue in Fig. A8) is transformed into theconfiguration shown in yellow (Fig. A8, left) using a prediction from RPDiff. Subsequently, thetransformed book point cloud is added to the point cloud representing the scene (Fig. A8, right).Next, we perform the same process (i.e. apply RPDiff with a new book point cloud, shown in bluein Fig. A9) with the newscene point cloud resulting from step one. Note this could be “imagined”by directly applying the predicted transform to the originally-observed point cloud of the first book.Alternatively, we could execute the first predicted step and then re-perceive the whole scene (whichwould now include the just-placed first book). For simplicity, we have shown “imagining” the newscene point cloud by transforming the originally-observed book point cloud based on the predictedplacement transform. Finally, we repeat this process for a third step with the remaining book (shownin blue in Fig. A10), again updating the “full” scene point cloud to reflect the placement of the first twobooks (see Fig. A9, right). Note that each of these placements was selected as the maximum-scoringprediction as evaluated by our success classifier.Overall, as shown in Fig. A7 (right), we have obtained a set of SE(3) transforms corresponding toeach of the three books on the table, and by either imagining the execution of each step or performingthe execution of each step (and then re-perceiving), we can take into account the placement of earlierbooks when having RPDiff infer where to place the next books.34(1) Imagined book point cloud added to scene point cloudFigure A8: Multi-step book placement, step 1. 
The first book is selected at random (point cloud shown inblue) and RPDiff is used to obtain a transform for placing this book on the shelf. The transformed point cloudis shown in yellow on the left, and the corresponding newscene point cloud (with the transformed book pointcloud included) is shown in red on the right .(2) Imagined book point cloud added to scene point cloudFigure A9: Multi-step book placement, step 2. The updated “full scene point cloud” from step 1 is used as theinput scene point cloud to step 2. The second book is selected (point cloud shown in blue) and a correspondingtransform is obtained with RPDiff. Once again, the transformed point cloud is shown in yellow on the left, andthe corresponding newscene point cloud (with the transformed book point cloud included) is shown in red ontheright .(3) Imagined book point cloud added to scene point cloudFigure A10: Multi-step book placement, step 3. This process is repeated a final time with the third book(point cloud shown in blue) and the scene point cloud with both other books in their “imagined” placementconfiguration. RPDiff predicts a rearrangement transform to place the third book (shown in yellow on the left),such that the process could be continued with the newly updated scene point cloud ( right ) if needed.35A10 Expanded DiscussionA10.1 Demo collection in simulation with manually-designed heuristics, scripted policies,and known 3D modelsHere we clarify the details and context for our specific choice of data generation/demonstrationcollection procedure and mention alternatives that could have been employed instead. While thespecific approach taken in this work involves scripted mechanisms for generating demonstrations ofeach rearrangement task, it is important to note our method does notfundamentally rely on thesecomponents. We only rely on demonstrations of object-scene point clouds in final configurations thatsatisfy the desired object-scene relation. These demonstrations could equivalently come from the realworld (i.e., via teleoperation or kinesthetic teaching, similar to [ 18,37]), but we were unable to collecta large number of these demonstrations in the real world, and instead opted to automatically collectthem in simulation. With this goal, we take advantage of privileged simulator and model information,as well as task-specific heuristics, purely for automating data generation. This information includes3D models, simulator states, canonical object poses, and available placing locations in the scenes(which were recorded when scene objects were generated). One goal of future work is to bring downthe number of demos required so the training examples can be directly shown in the real world onreal objectsRelated to this choice of collecting demos with scripted policies and manually designed heuristics,potential questions and concerns may arise regarding why the same heuristics could not have beenused to solve the tasks in the first place. To this point, we emphasize that the information used togenerate the demos is notavailable in the real world, and that we only use the heuristics and privilegedinformation to generate training data. From this data, we obtain models that operate with point clouds,and these models canbe deployed in the real world on unknown objects and scenes due to their use ofa representation that can be obtained from standard perception systems. 
This is somewhat analogousto the paradigm of teacher-student training in reinforcement learning and imitation learning, whereina “teacher” policy is trained using privileged information and then distilled to a “student” policy thatoperates from perception [80, 81].A10.2 Scripted demo collection using task-specific heuristics and privileged informationBook/Shelf Placing . We manually generated the bookshelf 3D models as shelves with books placedrandomly among the available locations on each shelf. In creating these shelf models, we placedbooks in some of the existing slot locations and recorded locations and orientations near each of theremaining open slots. These open slot poses were then used when generating object-scene point clouddemonstrations, so that we could directly obtain SE(3) transforms that move books from their initialposes into final poses within the shelf. The real-world alternative would be to manually configure adiverse set of shelves and book placements within the shelves, and collect demonstrations of placingnew books on these shelves.Can/Cabinet Stacking . Similar to the approach for book/shelf placing, we programmaticallygenerated the 3D scenes with cabinets and existing stacks of cans and recorded poses of availableposes either on top of the existing can stacks or in regions large enough for making new stacks.Again, we use this information in the data generation phase to directly transform new cans intoconfigurations that are either “on” an existing stack or begin a new stack in an open area. Thereal-world alternative would be to manually configure a diverse set of cabinets and can stacks withinthe cabinets. With these scenes, we could then collect demonstrations of placing/stacking new canswithin these cabinets.Mug/Rack-Multi Hanging . The mugs come from ShapeNet, where the 3D models for each categoryare all canonically aligned (i.e., the opening is aligned with the +yaxis and the handle is alignedwith the +zaxis of the body frame). We use knowledge of these canonical poses to approximatethe pose of the handle on each mug and similarly use approximate knowledge of the hook on eachrack to roughly align the mug handle with the hook of a rack. Since these estimates are not perfect,we sample random perturbations of these poses until one is found that (1) does not cause collisionbetween the objects and (2) leads to stable hanging of the mug on the rack when it falls under gravity.Similar to the above two tasks, with a few racks and mug instances, such demos could be collected inthe real world without introducing any such assumptions.36A10.3 Expanded Performance Analysis and DiscussionHere we provide further analysis and characterization of the system performance.A10.3.1 Simulation performance breakdownWhile our simulation success rates are high, there remains room for improved performance. First, wedid not achieve complete coverage over all possible modes in the space of placing solutions (e.g., notethat recall terminates near 0.68 in Fig. 3a). We find that there remains some bias toward and/or awayfrom certain modes. This could possibly arise from bias or spurious correlations in the object pointclouds and demonstrated placements. For example, the initial book point clouds often had one largevisible face, with the other less visible (due to laying flat on the table). It is possible that the datacontained a spurious correlation where the visible side was aligned with the open slot in a particularorientation more frequently, thus resulting in the model acquiring this bias. 
Another observation isthat sometimes two modes are located very close together (e.g., two open slots might be directly nextto each other). In these cases, the model may have again learned a biased solution either preferringone over the other or reaching an average between these close-together solutions. Either of thesewould lead to the less-visited/separate solutions to be predicted less frequently.The model also occasionally tries to place objects in parts of the scene that cannot be reached (e.g.,placing a book where there is already a book). This is sometimes due to ambiguity in the sceneobject poses (i.e., it can be hard to distinguish the front vs. back of the shelf). Finally, the modelsometimes places an object “just off”. For example, the handle of the mug is aligned to the rack,but is not far enough “on” the hook, resulting in the mug just missing the hook when dropped undergravity. Similarly, predicted can placements are sometimes very close but cause a collision with thestack of cans below. This highlights the large precision demands placed on a system that can performrearrangement with high performance.A10.3.2 Real world performance breakdown and sim2real challengesPlacement prediction accuracy . The distribution shift between simulated and real point cloudsappeared to be the largest source of lower-quality transform predictions that did not succeed at placing.This shift is in part caused by depth sensor noise, point cloud outliers, imperfect camera calibration,and shiny object surfaces. Another source of distribution shift is generated training scenes not havingperfect realism. Sim2real pipelines are still heavily bottlenecked by the effort of creating highlyrealistic 3D assets andconfiguring diverse yet structurally/physically valid scene layouts to train on.We explored some techniques to mitigate the negative effects of sim2real gaps such as adding smallper-point Gaussian noise to the point clouds and simulating additional occlusions from randomlyposed synthetic cameras. We observed some marginal benefit in applying these techniques but didnot rigorously evaluate how much they helped. Other ideas to explore for reducing sim2real gapswith 3D point clouds include using predicted depth models that can produce cleaner depth imagesthan our depth sensors, using better depth sensors, and training and deploying more high-fidelitynoise models to augment the simulation data and make it more similar to the real world.Robot execution success . One aspect of robot execution success is unaccounted object motion thatoccurs while grasping/moving the object. We made the simplifying assumption that that object doesnot move during grasping and that, after grasping, the object is rigidly coupled to the gripper. Ifthere is object movement during grasping/trajectory execution, the placement may become inaccurate.One way to mitigate this would be to estimate object motion post-grasp and account for this whenreaching the placing pose. We did not implement this or similar system engineering improvementsbut it should be straightforward to pursue in the future.Another source of failure was imperfect computation of task-specific pre-placement offset poses.Noise in the point cloud and brittleness of the heuristics we used sometimes meant the pre-placementoffset we obtained did not allow a feasible approach to reach the predicted pose, even when our finalpose predictions looked great. 
In future versions of the system, we want to incorporate predictions ofadditional feasible waypoints along the path to the placement.Finally, another limitation having more to do with execution efficiency is the planning times requiredfor searching for IK solutions and motion plans. Improvements in this respect are orthogonal to our37primary objective of predicting object placements that satisfy the desired relation but are certainlyimportant for future versions of the system to operate more effectively.A10.4 Computational efficiencyWe timed the forward pass (with our hyperparameters) to require 280ms with K=32 and 49ms withK=1. We also measured the time required for 50 eval iterations. It took 3.3s with K=1 and 24.46swithK=32 (all on a V100 GPU). Multi-step inference is mainly bottlenecked by other operationslike (re-)cropping the scene point cloud - our batched implementations of these can be improved toreduce runtime. We leave optimizing computational speed for real-time performance to future work,but we’re optimistic based on the results shown in [ 56] and our observations of RPDiff achievinggood performance with as few as 10 iterations.A10.5 Additional Related Work DiscussionOmniHang [46] proposes a multi-stage approach for generating hanging placements in a category-agnostic fashion. Their pipeline involves coarse hanging pose regression, keypoint detection andalignment, and refinement via the cross-entropy method (CEM) with a learned reward function.Their first stage is analogous to our NSM + CV AE baseline, which we found to perform poorlydue to the limits of predicting a full transform in a single step and constraining the distribution oflatent variables to follow a unimodal Gaussian. The second stage of OmniHang is analogous to ourR-NDF baseline, which localizes task-relevant object parts and brings them into alignment. We couldhave explored using a supervised keypoint detector, as in OmniHang, instead of matching featureslearned from self-supervision as in R-NDF, but this would have required additional manual keypointlabeling. Finally, stage three of OmniHang is analogous to using our learned success classifier torefine the prediction by performing local optimization, i.e., to maximize the score of the classifier(this has also been deployed in other work for 6-DoF grasp generation [ 82]). However, optimizing alearned binary classifier can be susceptible to finding solutions in out-of-distribution regions wherethe classifier outputs erroneously large scores, thus requiring extra components to constrain the searchto a local region. Other related work has compared against similar baselines of optimizing learnedcost functions [ 38,54] and found it can be difficult to achieve a good balance between diversity andsolution quality with such approaches.Deep Visual Constraints (DVC) [83] is another closely related method, which learns shape-conditioned functional representations that represent an underlying kinematic constraint function.RPDiff can be interpreted as very similar to DVC if DVC was to be (i) extended to deal with unknownobjects andscenes, (ii) simplified to avoid representing objects as neural field representations, and (iii)directly predicted gradients of the constraint function rather than representing the constraint functionitself. 
More specifically, the update in each RPDiff iteration at test time can be viewed as predicting the direction to move in so as to come closer to minimizing, or satisfying, some underlying constraint function that encodes the geometric relation that should hold. Applying DVC to our scenario, on the other hand, would involve predicting the value of the constraint function itself and then performing either gradient-based or zero-order optimization to produce a solution that satisfies the learned constraint function. We did not implement or compare against this version of our approach, as we found satisfactory results with our method of directly predicting pose updates and directly encoding observable point clouds (rather than mapping to neural field representations), but it may be worth exploring the differences and tradeoffs between these two highly related yet subtly distinct approaches in the future.

Relationship with energy-based learning. Recently, many works have drawn connections between the paradigms of energy-based modeling (EBM) [84, 85] and de-noising diffusion (e.g., [56, 57, 86–88]), and there have been associated successes applying EBMs to object rearrangement in robotics (e.g., [89]). We highlight one core relationship between these two learning paradigms in our discussion of DVC above: prediction by explicitly optimizing a learned energy/cost/constraint function vs. training a diffusion model to directly approximate a gradient of such an underlying function. A common current intuition is that diffusion models have the advantage of being more stable and easier to train than EBMs [56], but more work remains to be done in clarifying the degree to which this is true (see [86]) and in exploring which of these closely related approaches is most suitable for specific robotics problems.

A11 Model Architecture Diagrams

Table 4: Training hyperparameters
Number of P_O and P_S points: 1024
Batch size: 16
Transformer encoder blocks: 4
Transformer decoder blocks: 4
Attention heads: 1
Timestep position embedding: sinusoidal
Transformer embedding dimension: 256
Training iterations: 500k
Optimizer: AdamW
Learning rate: 1e-4
Minimum learning rate: 1e-6
Learning rate schedule: linear warmup, cosine decay
Warmup epochs: 50
Optimizer momentum: β1 = 0.9, β2 = 0.95
Weight decay: 0.1
Maximum training timestep T: 5
Maximum P_S crop size L_max: P_S bounding box maximum extent
Minimum P_S crop size L_min: 18 cm

Table 5: Evaluation hyperparameters
Number of evaluation iterations I: 50
Number of parallel runs K: 32
Default value of A_init_ot: 10
Expression for pAnnealedRandSE(3): perturbation sampled from N(0, σ(i)), applied to translation and axis-angle rotation
σ(i) in pAnnealedRandSE(3) (for translation and rotation): a exp(-b i / I)
Value of a (axis-angle rotation, in degrees): 20
Value of b (axis-angle rotation): 6
Value of a (translation, in cm): 3
Value of b (translation): 6

Table 6: Transformer architecture for predicting SE(3) transforms (operation: output shape)
Downsample point clouds: (N + M) x 3
One-hot concat: (N + M) x 5
Linear: (N + M) x d
Concat posemb(t): (N + M + 1) x d
Self-attention (scene), x4: M x d
Self-attention (object) + cross-attention (object, scene), x4: (N + 1) x d
Global pooling: d
Residual posemb(t): d
MLP (translation): d -> 3
MLP -> orthonormalize (rotation): d -> 6 -> 3 x 3

Table 7: Transformer architecture for predicting success likelihood (operation: output shape)
Downsample point clouds: (N + M) x 3
One-hot concat: (N + M) x 5
Linear: (N + M) x d
Self-attention (scene), x4: M x d
Self-attention (object) + cross-attention (object, scene), x4: N x d
Global pooling: d
MLP -> sigmoid (success): d -> 1 |
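The annealed perturbation schedule in Table 5 is compact enough to sketch in code. The snippet below is an illustrative reconstruction, not the released implementation: it assumes the schedule σ(i) = a exp(-b i / I) as recovered above (the OCR of Table 5 loses the sign, but a decaying schedule is the natural reading of "annealed"), with the rotation magnitude in degrees and the translation magnitude in centimeters; the function names are ours.

import numpy as np

def sigma(i, I, a, b):
    # Annealed noise magnitude, as reconstructed from Table 5: a * exp(-b * i / I).
    return a * np.exp(-b * i / I)

def sample_annealed_rand_se3(i, I=50, a_rot_deg=20.0, b_rot=6.0,
                             a_trans_cm=3.0, b_trans=6.0, rng=None):
    """Sample a small SE(3) perturbation whose scale decays over the I evaluation iterations."""
    rng = np.random.default_rng() if rng is None else rng
    # Translation noise (Table 5 gives the scale in cm; convert to meters here).
    trans = rng.normal(0.0, sigma(i, I, a_trans_cm, b_trans) / 100.0, size=3)
    # Rotation noise: random unit axis, Gaussian angle (Table 5 gives the scale in degrees).
    angle = np.deg2rad(rng.normal(0.0, sigma(i, I, a_rot_deg, b_rot)))
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    # Rodrigues' formula: rotation matrix from the axis-angle pair.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = trans
    return T

# The perturbation shrinks as the iteration index approaches I.
for i in [0, 25, 49]:
    print(i, np.linalg.norm(sample_annealed_rand_se3(i)[:3, 3]))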
dk-2R1f_LR | MimicGen: A Data Generation System for Scalable Robot Learning using Human Demonstrations
Ajay Mandlekar1, Soroush Nasiriany2∗, Bowen Wen1∗, Iretiayo Akinola1, Yashraj Narang1, Linxi Fan1, Yuke Zhu1,2, Dieter Fox1 (∗equal contribution, 1NVIDIA, 2The University of Texas at Austin)
Abstract: Imitation learning from a large set of human demonstrations has proved to be an effective paradigm for building capable robot agents. However, the demonstrations can be extremely costly and time-consuming to collect. We introduce MimicGen, a system for automatically synthesizing large-scale, rich datasets from only a small number of human demonstrations by adapting them to new contexts. We use MimicGen to generate over 50K demonstrations across 18 tasks with diverse scene configurations, object instances, and robot arms from just ∼200 human demonstrations. We show that robot agents can be effectively trained on this generated dataset by imitation learning to achieve strong performance in long-horizon and high-precision tasks, such as multi-part assembly and coffee preparation, across broad initial state distributions. We further demonstrate that the effectiveness and utility of MimicGen data compare favorably to collecting additional human demonstrations, making it a powerful and economical approach towards scaling up robot learning. Datasets, simulation environments, videos, and more at https://mimicgen.github.io.
Keywords: Imitation Learning, Manipulation

1 Introduction

Imitation learning from human demonstrations has become an effective paradigm for training robots to perform a wide variety of manipulation behaviors. One popular approach is to have human operators teleoperate robot arms through different control interfaces [1, 2], resulting in several demonstrations of robots performing various manipulation tasks, and consequently to use the data to train the robots to perform these tasks on their own. Recent attempts have aimed to scale this paradigm by collecting more data with a larger group of human operators over a broader range of tasks [3–6]. These works have shown that imitation learning on large, diverse datasets can produce impressive performance, allowing robots to generalize toward new objects and unseen tasks. This suggests that a critical step toward building generally capable robots is collecting large and rich datasets.
However, this success does not come without costly and time-consuming human labor. Consider a case study from robomimic [7], in which the agent is tasked with moving a coke can from one bin into another. This is a simple task involving a single scene, single object, and single robot; however, a relatively large dataset of 200 demonstrations was required to achieve a modest success rate of 73.3%. Recent efforts at expanding to settings with diverse scenes and objects have required orders-of-magnitude larger datasets spanning tens of thousands of demonstrations. For example, [3] showed that a dataset of over 20,000 trajectories enables generalization to tasks with modest changes in objects and goals. The nearly 1.5-year data collection effort for RT-1 [5] spans several human operators, months, kitchens, and robot arms to produce policies that can rearrange, clean up, and retrieve objects with a 97% success rate across a handful of kitchens. Yet it remains unclear how many years of data collection would be needed to deploy such a system to kitchens in the wild. We raise the question: how much of this data actually contains unique manipulation behaviors?
Large portions of these datasets may contain similar manipulation skills applied in different contexts or situations. For example, human operators may demonstrate very similar robot trajectories to grasp a mug, regardless of its location on one countertop or another. Re-purposing these trajectories in new contexts can be a way to generate diverse data without much human effort. In fact, several recent works build on this intuition and propose imitation learning methods that replay previous human demonstrations [8–11].
Figure 1: MimicGen Overview. (The figure shows a small set of human demonstrations, Demo 1, Demo 2, Demo 3, ..., being turned automatically by MimicGen into a large, broad dataset spanning diverse robot hardware, diverse scene configurations, and diverse objects.) We introduce a data generation system that can produce large, diverse datasets from a small number of human demonstrations by re-purposing the demonstrations to make them applicable in new settings. We apply MimicGen to generate data across diverse scene configurations, objects, and robot hardware.
While promising, these methods make assumptions about specific tasks and algorithms that limit their applicability. Instead, we seek to develop a general-purpose system that can be integrated seamlessly into existing imitation learning pipelines and improve the performance of a wide spectrum of tasks.
In this paper, we introduce a novel data collection system that uses a small set of human demonstrations to automatically generate large datasets across diverse scenes. Our system, MimicGen, takes a small number of human demonstrations and divides them into object-centric segments. Then, given a new scene with different object poses, it selects one of the human demonstrations, spatially transforms each of its object-centric segments, stitches them together, and has the robot follow this new trajectory to collect a new demonstration. While simple, we found that this method is extremely effective at generating large datasets across diverse scenes and that the datasets can be used to train capable agents through imitation learning.
We make the following contributions:
• We introduce MimicGen, a system for generating large, diverse datasets from a small number of human demonstrations by adapting the human demonstrations to novel settings.
• We demonstrate that MimicGen is able to generate high-quality data to train proficient agents via imitation learning across diverse scene configurations, object instances, and robot arms, all of which are unseen in the original demos (see Fig. 1). MimicGen is broadly applicable to a wide range of long-horizon and high-precision tasks that require different manipulation skills, such as pick-and-place, insertion, and interacting with articulated objects. We generated 50K+ new demonstrations for 18 tasks across 2 simulators and a physical robot arm using only ∼200 source human demos.
• Our approach compares favorably to the alternative of collecting more human demonstrations: using MimicGen to generate an equal amount of synthetic data (e.g., 200 demos generated from 10 human demos vs. 200 human demos) results in comparable agent performance. This raises important questions about when it is actually necessary to request additional data from a human.

2 Related Work

Some robot data collection efforts have employed trial-and-error [12–17] and pre-programmed demonstrators in simulation [18–22], but it can be difficult to scale these approaches to more complex tasks.
One popular data source is human demonstrators who teleoperate robot arms [2–6, 23–27], but collecting large datasets can require extensive human time, effort, and cost. Instead, MimicGen tries to make effective use of a small set of human samples to generate large datasets. We train policies from our generated data using imitation learning, which has been used extensively in prior work [1, 19, 25, 28–34]. Some works have used offline data augmentation to increase the dataset size for learning policies [7, 35–45]; in this work we generate new datasets online. Our data generation method employs a similar mechanism to replay-based imitation approaches [8–11, 46–48], which solve tasks by having the robot replay prior demonstrations. More discussion is in Appendix E.

3 Problem Setup

Imitation Learning. We consider each robot manipulation task as a Markov Decision Process (MDP) and aim to learn a robot manipulation policy π that maps the state space S to the action space A. The imitation dataset consists of N demonstrations D = {(s^i_0, a^i_0, s^i_1, a^i_1, ..., s^i_{H_i})}^N_{i=1}, where each s^i_0 ∼ D(·) is sampled from the initial state distribution D. In this work, we use Behavioral Cloning [28] to train the policy with the objective arg min_θ E_{(s,a)∼D}[−log π_θ(a|s)].
Figure 2: MimicGen System Pipeline. (Panel labels: Parse source demonstrations into segments; Demo 1, Demo 2, ..., Demo N; Subtask 1, Subtask 2, ..., Subtask M; Pipeline for generating new trajectories; Obtain reference segment to mimic; Generate segment; Execute segment; Transform segment; Interpolate to start; Current observation.) (left) MimicGen first parses the demos from the source dataset into segments, where each segment corresponds to an object-centric subtask (Sec. 4.1). (right) Then, to generate new demonstrations for a new scene, MimicGen generates and follows a sequence of end-effector target poses for each subtask by (1) choosing a segment from a source demonstration (chosen segments shown with a blue border in the figure), (2) transforming it for the new scene, and (3) executing it (Sec. 4.2).
Problem Statement and Assumptions. Our goal is to use a source dataset D_src that consists of a small set of human demonstrations collected on a task M and use it to generate a large dataset D on either the same task or task variants (where the initial state distribution D, the objects, or the robot arm can change). To generate a new demo: (1) a start state is sampled from the task we want to generate data for, (2) a demonstration τ ∈ D_src is chosen and adapted to produce a new robot trajectory τ′, and (3) the robot executes the trajectory τ′ on the current scene, and if the task is completed successfully, the sequence of states and actions is added to the generated dataset D (see Sec. 4 for details of each step). We next outline some assumptions that our system leverages.
Assumption 1: delta end-effector pose action space. The action space A consists of delta-pose commands for an end-effector controller and a gripper open/close command. This is a common action space used in prior work [3–7, 33]. It gives us an equivalence between delta-pose actions and controller target poses, and allows us to treat the actions in a demonstration as a sequence of target poses for the end-effector controller (Appendix N).
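Assumption 1's equivalence between delta-pose actions and controller target poses can be made concrete with a few lines of code. This is a hedged sketch, not MimicGen's implementation (the exact delta-pose convention lives in its Appendix N and may differ): here we assume each delta is a 4x4 transform applied on the left in the world frame.

import numpy as np

def targets_from_deltas(T_start, deltas):
    """Recover absolute controller target poses from delta-pose actions.
    Assumes T_{t+1} = Delta_t @ T_t, with all poses expressed in the world frame."""
    targets, T = [], T_start
    for delta in deltas:
        T = delta @ T
        targets.append(T)
    return targets

def deltas_from_targets(T_start, targets):
    """Inverse mapping: convert a sequence of target poses back into delta-pose actions."""
    deltas, prev = [], T_start
    for T in targets:
        deltas.append(T @ np.linalg.inv(prev))
        prev = T
    return deltas

Round-tripping deltas_from_targets followed by targets_from_deltas recovers the original pose sequence, which is the equivalence this assumption relies on.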
Assumption 2: tasks consist of a known sequence of object-centric subtasks. Let O = {o_1, ..., o_K} be the set of objects in a task M. As in Di Palo et al. [11], we assume that tasks consist of a sequence of object-centric subtasks (S_1(o_{S_1}), S_2(o_{S_2}), ..., S_M(o_{S_M})), where the manipulation in each subtask S_i(o_{S_i}) is relative to a single object's coordinate frame (o_{S_i} ∈ O). We assume this sequence is known (it is typically easy for a human to specify; see Appendix K).
Assumption 3: object poses can be observed at the start of each subtask during data collection. We assume that we can observe the pose of the relevant object o_{S_i} at the start of each subtask S_i(o_{S_i}) during data collection (not, however, during policy deployment).

4 Method

We describe how MimicGen generates new demonstrations using a small source dataset of human demonstrations (see Fig. 2 for an overview). MimicGen first parses the source dataset into segments, one for each object-centric subtask in a task (Sec. 4.1). Then, to generate a demonstration for a new scene, MimicGen generates and executes a trajectory (a sequence of end-effector control poses) for each subtask by choosing a reference segment from the source demonstrations, transforming it according to the pose of the object in the new scene, and then executing the sequence of target poses with the end-effector controller (Sec. 4.2).

4.1 Parsing the Source Dataset into Object-Centric Segments

Each task consists of a sequence of object-centric subtasks (Assumption 2, Sec. 3); we would like to parse every trajectory τ in the source dataset into segments {τ_i}^M_{i=1}, where each segment τ_i corresponds to a subtask S_i(o_{S_i}). In this work, to parse source demonstrations into segments for each subtask, we assume access to metrics that allow the end of each subtask to be detected automatically (see Appendix K for full details). After this step, every trajectory τ ∈ D_src has been split into a contiguous sequence of segments τ = (τ_1, τ_2, ..., τ_M), one per subtask.

4.2 Transforming Source Data Segments for a New Scene

To generate a task demonstration for a new scene, MimicGen generates and executes a segment for each object-centric subtask in the task. As shown in Fig. 2 (right), this consists of three key steps for each subtask: (1) choosing a reference subtask segment in the source dataset, (2) transforming the subtask segment for the new context, and (3) executing the segment in the scene.
Choosing a reference segment: Recall that MimicGen parses the source dataset into segments that correspond to each subtask, D_src = {(τ^j_1, τ^j_2, ..., τ^j_M)}^N_{j=1}, where N = |D_src|. At the start of each subtask S_i(o_{S_i}), MimicGen chooses a corresponding segment from the set {τ^j_i}^N_{j=1}. These segments can be chosen at random or by using the relevant object poses (more details in Appendix N).
Transforming the source subtask segment: We can consider the chosen source subtask segment τ_i for subtask S_i(o_{S_i}) as a sequence of target poses for the end-effector controller (Assumption 1, Sec. 3). Let T_{AB} be the homogeneous 4x4 matrix that represents the pose of frame A with respect to frame B. Then we can write τ_i = (T_{C_0 W}, T_{C_1 W}, ..., T_{C_K W}), where C_t is the controller target pose frame at timestep t, W is the world frame, and K is the length of the segment.
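The relative-pose bookkeeping in this step, together with the interpolation to the segment start described in the paragraph below, is easy to get wrong, so a minimal sketch may help. This is an illustrative reimplementation rather than MimicGen's released code: poses are assumed to be 4x4 homogeneous matrices T_{AB} (pose of frame A in frame B) as defined above, and the function names are ours.

import numpy as np

def transform_segment(src_targets, T_obj_src, T_obj_new):
    """Re-target a source subtask segment to a new object pose.

    src_targets: list of 4x4 controller target poses T_{C_t W} from the source demo.
    T_obj_src:   4x4 object pose T_{O_0 W} at the start of the source segment.
    T_obj_new:   4x4 pose T_{O'_0 W} of the same object in the new scene.
    Preserving the target-to-object relative pose at every timestep gives
    T_{C'_t W} = T_{O'_0 W} (T_{O_0 W})^{-1} T_{C_t W}, the expression derived in the
    paragraph that follows.
    """
    correction = T_obj_new @ np.linalg.inv(T_obj_src)
    return [correction @ T for T in src_targets]

def interpolate_to_start(T_ee, T_first_target, num_steps=5):
    """Interpolation segment from the robot's current end-effector pose to the first
    transformed target pose: linear in translation, with a naive rotation blend that is
    projected back onto SO(3) (a real implementation would likely use slerp)."""
    poses = []
    for alpha in np.linspace(0.0, 1.0, num_steps + 1)[1:]:
        T = np.eye(4)
        T[:3, 3] = (1 - alpha) * T_ee[:3, 3] + alpha * T_first_target[:3, 3]
        R = (1 - alpha) * T_ee[:3, :3] + alpha * T_first_target[:3, :3]
        U, _, Vt = np.linalg.svd(R)  # project the blended matrix to a valid rotation
        T[:3, :3] = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        poses.append(T)
    return poses

# Example: with identical source and new object poses, the segment is unchanged.
segment = [np.eye(4) for _ in range(3)]
assert np.allclose(transform_segment(segment, np.eye(4), np.eye(4))[0], np.eye(4))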
Since this motion is assumed to be relative to the pose of the object o_{S_i} (frame O_0 with pose T_{O_0 W}) at the start of the segment, we transform τ_i according to the new pose of the corresponding object in the current scene (frame O'_0 with pose T_{O'_0 W}) so that the relative pose between the target pose frame and the object frame is preserved at each timestep (T_{C_t O_0} = T_{C'_t O'_0}), resulting in the transformed sequence τ'_i = (T_{C'_0 W}, T_{C'_1 W}, ..., T_{C'_K W}), where T_{C'_t W} = T_{O'_0 W} (T_{O_0 W})^{-1} T_{C_t W} (derivation in Appendix M). As an example, see how the source segment and transformed segment on the right side of Fig. 2 approach the mug in consistent ways. However, the first target pose of the new segment, T_{C'_0 W}, might be far from the current end-effector pose of the robot in the new scene, T_{E'_0 W} (where E is the end-effector frame). Consequently, MimicGen adds an interpolation segment at the start of τ'_i that interpolates linearly from the current end-effector pose (T_{E'_0 W}) to the start of the transformed segment (T_{C'_0 W}).
Executing the new segment: Finally, MimicGen executes the new segment τ'_i by taking the target pose at each timestep, transforming it into a delta-pose action (Assumption 1, Sec. 3), pairing it with the appropriate gripper open/close action from the source segment, and executing the new action.
The steps above repeat for each subtask until the final segment has been executed. However, this process can be imperfect: small trajectory deviations due to control and arm kinematics issues can result in task failure. Thus, MimicGen checks for task success after executing all segments and only keeps successful demonstrations. We refer to the ratio between the number of successfully generated trajectories and the total number of attempts as the data generation rate (reported in Appendix P). This pipeline depends only on object frames and robot controller frames, which enables data generation to take place across tasks with different initial state distributions, objects (assuming they have canonical frames defined), and robot arms (assuming they share a convention for the end-effector control frame). In our experiments, we designed task variants for each robot manipulation task where we vary either the initial state distribution (D), an object in the task (O), or the robot arm (R), and showed that MimicGen enables data collection and imitation learning across these variants.

5 Experiment Setup

We applied MimicGen to a broad range of tasks (see Fig. 3) and task variants in order to showcase how it can generate useful data for imitation learning across a diverse set of manipulation behaviors, including pick-and-place, contact-rich interactions, and articulation.
Tasks and Task Variants. Each task has a default reset distribution (D0) (all source datasets were collected on this task variant), a broader reset distribution (D1), and some have another (D2), meant to pose even higher difficulty for data generation and policy learning. Consider the Threading task shown in Fig. 5: in the D0 variant, the tripod is always initialized in the same location, while in the D1 variant, both the tripod and needle can move, and in the D2 variant, the tripod and needle are randomized in novel regions of the workspace. In some experiments, we also applied MimicGen to task variants with a different robot arm (R) or different object instances (O) within a category.
Figure 3: Tasks. (Panels: (a) Stack Three, (b) Square, (c) Threading, (d) Three Piece Assembly, (e) Pick Place, (f) Kitchen, (g) Coffee Prep, (h) Mobile Kitchen, (i) Gear Assembly, (j) Frame Assembly.)
We use MimicGen to generate demonstrations for several tasks — these are a subset. Theyspan a wide variety of behaviors including pick-and-place, insertion, interacting with articulated objects, andmobile manipulation, and include long-horizon tasks requiring chaining several behaviors together.We group the tasks into categories and summarize them below (full tasks and variants in Ap-pendix L). Some tasks are implemented with the robosuite framework [49] (MuJoCo backend [50])and others are implemented in Factory [51] (Isaac Gym [52] backend). Basic Tasks (Stack, StackThree): a set of box stacking tasks. Contact-Rich Tasks (Square, Threading, Coffee, Three PieceAssembly, Hammer Cleanup, Mug Cleanup): a set of tasks that involve contact-rich behaviors suchas insertion or drawer articulation. Long-Horizon Tasks (Kitchen, Nut Assembly, Pick Place, Cof-fee Preparation): require chaining multiple behaviors together. Mobile Manipulation Tasks (Mo-bile Kitchen): requires base and arm motion. Factory Tasks (Nut-Bolt-Assembly, Gear Assembly,Frame Assembly): a set of high-precision assembly tasks in Factory [51].Data Generation and Imitation Learning Methodology. For each task, one human operator col-lected a source dataset of 10 demonstrations on the default variant ( D0) using a teleoperation sys-tem [2,23] (with the exception of Mobile Kitchen, where we used 25 demos due to the large numberof object variants, and Square, where we used 10 demos from the robomimic Square PH dataset [7]).MimicGen was used to generate 1000 demonstrations for each task variant, using each task’s sourcedataset (full details in Appendix N). Since data generation is imperfect, each data generation at-tempt is not guaranteed to result in a task success. Attempts that did not achieve task success werediscarded, and data collection kept proceeding for each task variant until 1000 task successes werecollected. Each generated dataset was then used to train policies using Behavioral Cloning withan RNN policy [7]. We also adopt the convention from Mandlekar et al. [7] for reporting policyperformance — the maximum success rate across all policy evaluations, across 3 different seeds(full training details in Appendix O). All policy learning results are shown on image-based agentstrained with RGB observations (see Appendix Q for low-dim agent results).6 ExperimentsWe present experiments that (1) highlight the diverse array of situations that MimicGen can generatedata for, (2) show that MimicGen compares favorably to collecting additional human demonstra-tions, both in terms of effort and downstream policy performance on the data, (3) offer insights intodifferent aspects of the system, and (4) show that MimicGen can work on real-world robot arms.6.1 Applications of MimicGenWe outline a number of applications that showcase useful properties of MimicGen.MimicGen data vastly improves agent performance on the source task. A straightforward ap-plication of MimicGen is to collect a small dataset on some task of interest and then generate moredata for that task. Comparing the performance of agents trained on the small source datasets vs.those trained on D0datasets generated by MimicGen, we see that there is substantial improvementacross all our tasks (see Fig. 4). Some particularly compelling examples include Square (11.3% to90.7%), Threading (19.3% to 98.0%), and Three Piece Assembly (1.3% to 82.0%).MimicGen data can produce performant agents across broad initial state distributions. Asshown in Fig. 
4, agents trained using datasets generated on broad initial state distributions (D1, D2) are performant (42% to 99% on D1), showing that MimicGen generates valuable datasets on new initial state distributions. In several cases, certain objects in the 10 source demonstrations never moved (the peg in Square, the tripod in Threading, the base in Three Piece Assembly, etc.), but data was generated (and policies consequently were trained) on regimes where the objects move in substantial regions of the robot workspace.
Figure 4: (left) Agent Performance on Source and Generated Datasets. Success rates (3 seeds) of image-based agents trained with BC on the 10 source demos and each 1000-demo MimicGen dataset. There is large improvement across all tasks on the default distribution (D0) and agents are performant on the broader distributions (D1, D2). (top-right) MimicGen with more source human demonstrations. We found that using larger source datasets to generate MimicGen data did not result in significant agent improvement. (bottom-right) Policy Training Dataset Comparison. Image-based agent performance is comparable on 200 MimicGen demos and 200 human demos, despite MimicGen only using 10 source human demos. MimicGen can produce improved agents by generating larger datasets (200, 1000, 5000 demos), but there are diminishing returns.
The left panel of Figure 4 tabulates success rates (Task: Source / D0 / D1 / D2):
Stack: 26.0±1.6 / 100.0±0.0 / 99.3±0.9 / -
Stack Three: 0.7±0.9 / 92.7±1.9 / 86.7±3.4 / -
Square: 11.3±0.9 / 90.7±1.9 / 73.3±3.4 / 49.3±2.5
Threading: 19.3±3.4 / 98.0±1.6 / 60.7±2.5 / 38.0±3.3
Coffee: 74.0±4.3 / 100.0±0.0 / 90.7±2.5 / 77.3±0.9
Three Piece Assembly: 1.3±0.9 / 82.0±1.6 / 62.7±2.5 / 13.3±3.8
Hammer Cleanup: 59.3±5.7 / 100.0±0.0 / 62.7±4.7 / -
Mug Cleanup: 12.7±2.5 / 80.0±4.9 / 64.0±3.3 / -
Kitchen: 54.7±8.4 / 100.0±0.0 / 76.0±4.3 / -
Nut Assembly: 0.0±0.0 / 53.3±1.9 / - / -
Pick Place: 0.0±0.0 / 50.7±6.6 / - / -
Coffee Preparation: 12.7±3.4 / 97.3±0.9 / 42.0±0.0 / -
Mobile Kitchen: 2.0±0.0 / 46.7±18.4 / - / -
Nut-and-Bolt Assembly: 8.7±2.5 / 92.7±2.5 / 81.3±8.2 / 72.7±4.1
Gear Assembly: 14.7±5.2 / 98.7±1.9 / 74.0±2.8 / 56.7±1.9
Frame Assembly: 10.7±6.8 / 82.0±4.3 / 68.7±3.4 / 36.7±2.5
The top-right panel ("Source Dataset Size Comparison") plots success rate for Square (T0, T1, T2) and TPA (T0, T1, T2) with 1, 10, 50, and 200 source demos; the bottom-right panel ("Policy Training Data Comparison") plots success rate for Stack Three (D1), Square (D0), Square (D2), TPA (D0), and Threading (D1) with 200 human demos and 200, 1000, and 5000 MimicGen (MG) demos.
MimicGen can generate data for different objects. The source dataset in the Mug Cleanup task contains just one mug, but we generate demonstrations with MimicGen for an unseen mug (O1) and for a set of 12 mugs (O2). Policies trained on these datasets have substantial task success rates (90.7% and 75.3%, respectively; full results in Appendix G).
MimicGen can generate data for diverse robot hardware. We apply MimicGen to the Square and Threading source datasets (which use the Panda arm) and generate datasets for the Sawyer, IIWA, and UR5e across the D0 and D1 reset distribution variants. Interestingly, although the data generation rates differ greatly per arm (range 38%-74% for Square D0), trained policy performance is remarkably similar across the 4 robot arms (80%-91%; full results in Appendix F). This shows the potential for using human demonstrations across robot hardware with MimicGen, an exciting prospect, as teleoperated demonstrations are typically constrained to a single robot.
Applying MimicGen to mobile manipulation. In the Mobile Kitchen task, MimicGen yields a gain from 2.0% to 46.7% (image, Fig.
4) and 2.7% to 76.7% success rate (low-dim, Table Q.1 inAppendix), highlighting that our method can be applied to tasks beyond static tabletop manipulation.MimicGen is simulator-agnostic. We show that MimicGen is not limited to just one simulationframework by applying it to high-precision tasks (requiring millimeter precision ) in Factory [51],a simulation framework built on top of Isaac Gym [52] to accurately simulate high-precision ma-nipulation. We generate data for and train performant policies on the Nut-and-Bolt Assembly, GearAssembly, and Frame Assembly tasks. Policies achieve excellent results on the nominal tasks ( D0)(82%-99%), a significant improvement over policies trained on the source datasets (9%-15%), andare also able to achieve substantial performance on wider reset distributions ( D1,D2) (37%-81%).MimicGen can use demonstrations from inexperienced human operators and different tele-operation devices. Surprisingly, policies trained on these MimicGen datasets have comparableperformance to those in Fig. 4. See Appendix I for the full set of results.6.2 Comparing MimicGen to using more human dataIn this section, we contextualize the performance of agents trained on MimicGen data.Comparing task performance to prior works. Zhu et al. [53] introduced the Hammer Cleanupand Kitchen tasks and reported agent performance on 100 human demonstrations for their methodcalled BUDS. On Hammer Cleanup, BUDS achieved 68.6% ( D0), while BC-RNN achieves 59.3%on our 10 source demos, 100.0% on our generated 1000 D0demos, and 62.7% on the D1variant6where both the hammer and drawer move substantially. On Kitchen, BUDS achieved 72.0% ( D0),while BC-RNN achieves 54.7% on our 10 source demos, 100.0% on our generated D0data, and76.0% on the D1variant, where all objects move in wider regions. This shows that using MimicGento make effective use of a small number of human demonstrations can improve the complexity oftasks that can be learned with imitation learning. As another example, Mandlekar et al. [2] collectedover 1000 human demos across 10 human operators on both the Nut Assembly and Pick Place tasks,but only managed to train proficient policies for easier, single-stage versions of these tasks using acombination of reinforcement learning and demonstrations. By contrast, in this work we are able tomake effective use of just 10 human demonstrations to generate a set of 1000 demonstrations andlearn proficient agents from them (76.0% and 58.7% low-dim, 53.3% and 50.7% image).Agent performance on data generated by MimicGen can be comparable to performance on anequal amount of human demonstrations. We collect 200 human demonstrations on several tasksand compare agent performance on those demonstrations to agent performance on 200 demonstra-tions generated by MimicGen (see Fig. 4). In most cases, agent performance is similar, despite the200 MimicGen demos being generated from just 10 human demos — a small number of humandemos can be as effective (or even more) than a large number of them when used with MimicGen.MimicGen can also easily generate more demonstrations to improve performance (see Sec. 6.3),unlike the time-consuming nature of collecting more human data. 
This result also raises importantquestions on whether soliciting more human demonstrations can be redundant and not worth thelabeling cost, and where to collect human demonstrations given a finite labeling bandwidth.6.3 MimicGen AnalysisWe analyze some practical aspects of the system, including (1) whether the number of source demon-strations used impacts agent performance, (2) whether the choice of source demonstrations matters,(3) whether agent performance can keep improving by generating more demonstrations, and (4)whether the data generation success rate and trained agent performance are correlated.Can dataset quality and agent performance be improved by using more source human demon-strations? We used 10, 50, and 200 source human demonstrations on the Square and Three PieceAssembly tasks, and report the policy success rates in Fig. 4. We see that performance differencesare modest (ranging from 2% to 21%). We also tried using just 1 human demo — in some casesperformance was much worse (e.g. Square), while in others, there was no significant performancechange (e.g. Three Piece Assembly). It is possible that performance could improve with more sourcehuman demos if they are curated in an intelligent manner, but this is left for future work.Does the choice of source human demonstrations matter? For each generated dataset, we loggedwhich episode came from which source human demonstration — in certain cases, this distributioncan be very non-uniform. As an example, the generated Factory Gear Assembly task ( D1) had over850 of the 1000 episodes come from just 3 source demonstrations. In the generated Threading task(D0), one source demo had over 170 episodes while another had less than 10 episodes. In bothcases, the number of attempted episodes per source demonstration was roughly uniform (since wepicked them at random — details in Appendix N), but some were more likely to generate successfuldemonstrations than others. Furthermore, we found the source demonstration segment selectiontechnique (Sec. 4.2) to matter for certain tasks (Appendix N). This indicates that both the initialset of source demos provided to MimicGen ( Dsrc), and how segments from these demos are chosenduring each generation attempt ( τifor each subtask, see Sec. 4.1) can matter.Can agent performance keep improving by generating more demonstrations? In Fig. 4, wetrain agents on 200, 1000, and 5000 demos generated by MimicGen across several tasks. There is alarge jump in performance from 200 to 1000, but not much from 1000 to 5000, showing that therecan be diminishing returns on generating more data.Are the data generation success rate and trained agent performance correlated? It is temptingto think that data generation success rate and trained agent performance are correlated, but we foundthat this is not necessarily true — there are datasets that had low dataset generation success rates(and consequently took a long time to generate 1000 successes) but had high agent performance aftertraining on the data (Appendix P). A few examples are Object Cleanup ( D0) (29.5% generation rate,82.0% agent rate), Three Piece Assembly ( D0) (35.6% generation rate, 74.7% agent rate), Coffee(D2) (27.7% generation rate, 76.7% agent rate), and Factory Gear Assembly ( D1) (8.2% generationrate, 76.0% agent rate). These results showcase the value of using replay-based mechanisms for datacollection instead of directly using them to deploy as policy as in prior works [8, 11].7(a)D0 (b)D1 (c)D2Figure 5: (left) Reset Distributions. 
Each task has a default reset distribution for the objects ( D0), a broaderone (D1), and some had a more challenging one ( D2). The figure shows the sampling regions for the tripodand needle in the Threading task. The tripod is at a fixed location in D0, andD2swaps the relative locationsof the tripod and needle. We generate data across diverse scene configurations by taking source demos fromD0and generating data for all variants. (right) Real Robot Tasks. We apply MimicGen to two real robot tasks— Stack (top row) and Coffee (bottom row). In the first column, the blue and orange regions show the source(D0) and generated ( D1) reset distributions for each task. We use 10 source demos per task, and generate 100successful demos — MimicGen has a data generation success rate of 82.3% for Stack and 52.1% for Coffee.6.4 Real Robot EvaluationWe validate that MimicGen can be applied to real-world robot arms and tasks. We collect 10 sourcedemonstrations for each task in narrow regions of the workspace ( D0) and then generate demon-strations (200 for Stack, 100 for Coffee) for large regions of the workspace ( D1) (see Fig. 5). Thegeneration success rate was 82.3% for Stack (243 attempts) and 52.1% for Coffee (192 attempts),showing that MimicGen works in the real world with a reasonably high success rate. We thentrained visuomotor agents using a front-facing RealSense D415 camera and a wrist-mounted Re-alSense D435 camera (120 ×160 resolution). Over 50 evaluations, our Stack agent had 36% successrate and Coffee had 14% success rate (pod grasp success rate of 60% and pod insertion success rateof 20%). The lower numbers than from simulation might be due to the larger number of interpola-tion steps we used in the real world for hardware safety (50 total instead of 5) — these motions aredifficult for the agent to imitate since there is little association between the intermediate motion andobservations ( see Appendix H for more experiments and discussion ).We also compared to agents trained on the source datasets (10 demos) in the narrow regions (orangeregions in Fig. 5) where the source data came from — the Stack source agent had 0% success rateand the Coffee source agent had 0% success rate (with an insertion rate of 0% and pod grasp rate of94%). The Coffee ( D0) task in particular has barely any variation (the pod can move vertically ina 5cm region) compared to the D1task, which is substantially harder (pod placed anywhere in theright half of the workspace). Agents trained with MimicGen data compare favorably to these agents,as they achieve non-zero success rates on broader task reset distributions.7 LimitationsSee Appendix D for full set of limitations and discussion. MimicGen assumes knowledge of theobject-centric subtasks in a task and requires object pose estimates at the start of each subtask duringdata generation (Assumption 3, Sec. 3). MimicGen only filters data generation attempts based ontask success, so generated datasets can be biased (Appendix R). MimicGen uses linear interpolationbetween human segments (Appendix N.2), which does not guarantee collision-free motion, and canpotentially hurt agent performance (Appendix H). 
MimicGen was demonstrated on quasi-static taskswith rigid objects, and novel objects were assumed to come from the same category.8 ConclusionWe introduced MimicGen, a data generation system that can use small amounts of human demon-strations to generate large datasets across diverse scenes, object instances, and robots, and applied itto generate over 50K demos across 18 tasks from less than 200 human demos, including tasks involv-ing long-horizon and high-precision manipulation. We showed that agents learning from this datacan achieve strong performance. We further found that agent performance on MimicGen data can becomparable to performance on an equal number of human demos — this surprising result motivatesfurther investigation into when to solicit additional human demonstrations instead of making moreeffective use of a small number, and whether human operator time would be better spent collectingdata in new regions of the workspace. We hope that MimicGen motivates and enables exploring amore data-centric perspective on imitation learning in future work.8AcknowledgmentsThis work was made possible due to the help and support of Sandeep Desai (robot hardware), Ravin-der Singh (IT), Alperen Degirmenci (compute cluster), Anima Anandkumar (access to robot hard-ware), Yifeng Zhu (tasks from [53] and robot control software [54]), Cheng Chi (diffusion policyexperiments), Shuo Cheng (drawer design used in Coffee Preparation task), Balakumar Sundar-alingam (code release), and Stan Birchfield (dataset release).References[1] T. Zhang, Z. McCarthy, O. Jow, D. Lee, K. Goldberg, and P. Abbeel, “Deep imitationlearning for complex manipulation tasks from virtual reality teleoperation,” arXiv preprintarXiv:1710.04615 , 2017.[2] A. Mandlekar, Y . Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta,E. Orbay, S. Savarese, and L. Fei-Fei, “RoboTurk: A Crowdsourcing Platform for RoboticSkill Learning through Imitation,” in Conference on Robot Learning , 2018.[3] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn, “Bc-z: Zero-shot task generalization with robotic imitation learning,” in Conference on RobotLearning . PMLR, 2022, pp. 991–1002.[4] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog et al. , “Do as i can, not as i say: Grounding language in roboticaffordances,” arXiv preprint arXiv:2204.01691 , 2022.[5] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, J. Hsu et al. , “Rt-1: Robotics transformer for real-world controlat scale,” arXiv preprint arXiv:2212.06817 , 2022.[6] F. Ebert, Y . Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn,and S. Levine, “Bridge data: Boosting generalization of robotic skills with cross-domaindatasets,” arXiv preprint arXiv:2109.13396 , 2021.[7] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart ́ın-Mart ́ın, “What matters in learning from offline human demonstrationsfor robot manipulation,” in Conference on Robot Learning (CoRL) , 2021.[8] B. Wen, W. Lian, K. Bekris, and S. Schaal, “You only demonstrate once: Category-levelmanipulation from single visual demonstration,” in Robotics: Science and Systems (RSS) ,2022.[9] E. Johns, “Coarse-to-fine imitation learning: Robot manipulation from a single demonstra-tion,” in 2021 IEEE international conference on robotics and automation (ICRA) . IEEE,2021, pp. 4613–4619.[10] E. 
Valassakis, G. Papagiannis, N. Di Palo, and E. Johns, “Demonstrate once, imitate imme-diately (dome): Learning visual servoing for one-shot imitation learning,” in 2022 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) . IEEE, 2022, pp. 8614–8621.[11] N. Di Palo and E. Johns, “Learning multi-stage tasks with one demonstration via self-replay,”inConference on Robot Learning . PMLR, 2022, pp. 1180–1189.[12] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen, “Learning hand-eye coordination forrobotic grasping with large-scale data collection,” in ISER , 2016, pp. 173–184.[13] L. Pinto and A. Gupta, “Supersizing self-supervision: Learning to grasp from 50k tries and700 robot hours,” in Robotics and Automation (ICRA), 2016 IEEE Int’l Conference on . IEEE,2016.[14] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly,M. Kalakrishnan, V . Vanhoucke et al. , “Qt-opt: Scalable deep reinforcement learning forvision-based robotic manipulation,” arXiv preprint arXiv:1806.10293 , 2018.[15] D. Kalashnikov, J. Varley, Y . Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine,and K. Hausman, “Mt-opt: Continuous multi-task robotic reinforcement learning at scale,”arXiv preprint arXiv:2104.08212 , 2021.9[16] K.-T. Yu, M. Bauza, N. Fazeli, and A. Rodriguez, “More than a million ways to be pushed.a high-fidelity experimental dataset of planar pushing,” in Int’l Conference on IntelligentRobots and Systems , 2016.[17] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine,and C. Finn, “Robonet: Large-scale multi-robot learning,” arXiv preprint arXiv:1910.11215 ,2019.[18] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison, “Rlbench: The robot learning benchmark &learning environment,” IEEE Robotics and Automation Letters , vol. 5, no. 2, pp. 3019–3026,2020.[19] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani et al. , “Transporter networks: Rearranging the visual world forrobotic manipulation,” arXiv preprint arXiv:2010.14406 , 2020.[20] Y . Jiang, A. Gupta, Z. Zhang, G. Wang, Y . Dou, Y . Chen, L. Fei-Fei, A. Anandkumar, Y . Zhu,and L. Fan, “Vima: General robot manipulation with multimodal prompts,” arXiv preprintarXiv:2210.03094 , 2022.[21] J. Gu, F. Xiang, X. Li, Z. Ling, X. Liu, T. Mu, Y . Tang, S. Tao, X. Wei, Y . Yao et al. ,“Maniskill2: A unified benchmark for generalizable manipulation skills,” arXiv preprintarXiv:2302.04659 , 2023.[22] M. Dalal, A. Mandlekar, C. Garrett, A. Handa, R. Salakhutdinov, and D. Fox, “Imitating taskand motion planning with visuomotor transformers,” arXiv preprint arXiv:2305.16309 , 2023.[23] A. Mandlekar, J. Booher, M. Spero, A. Tung, A. Gupta, Y . Zhu, A. Garg, S. Savarese, andL. Fei-Fei, “Scaling robot supervision to hundreds of hours with roboturk: Robotic manip-ulation dataset through human reasoning and dexterity,” arXiv preprint arXiv:1911.04052 ,2019.[24] A. Mandlekar, D. Xu, R. Mart ́ın-Mart ́ın, Y . Zhu, L. Fei-Fei, and S. Savarese, “Human-in-the-loop imitation learning using remote teleoperation,” arXiv preprint arXiv:2012.06733 , 2020.[25] A. Tung, J. Wong, A. Mandlekar, R. Mart ́ın-Mart ́ın, Y . Zhu, L. Fei-Fei, and S. Savarese,“Learning multi-arm manipulation through collaborative teleoperation,” arXiv preprintarXiv:2012.06738 , 2020.[26] J. Wong, A. Tung, A. Kurenkov, A. Mandlekar, L. Fei-Fei, S. Savarese, and R. 
Mart ́ın-Mart ́ın,“Error-aware imitation learning from teleoperation data for mobile manipulation,” in Confer-ence on Robot Learning . PMLR, 2022, pp. 1367–1378.[27] C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P. Florence,“Interactive language: Talking to robots in real time,” arXiv preprint arXiv:2210.06407 , 2022.[28] D. A. Pomerleau, “Alvinn: An autonomous land vehicle in a neural network,” in Advances inneural information processing systems , 1989, pp. 305–313.[29] A. J. Ijspeert, J. Nakanishi, and S. Schaal, “Movement imitation with nonlinear dynamicalsystems in humanoid robots,” Proceedings 2002 IEEE International Conference on Roboticsand Automation , vol. 2, pp. 1398–1403 vol.2, 2002.[30] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine, “One-shot visual imitation learning viameta-learning,” in Conference on robot learning . PMLR, 2017, pp. 357–368.[31] A. Billard, S. Calinon, R. Dillmann, and S. Schaal, “Robot programming by demonstration,”inSpringer Handbook of Robotics , 2008.[32] S. Calinon, F. D’halluin, E. L. Sauser, D. G. Caldwell, and A. Billard, “Learning and re-production of gestures by imitation,” IEEE Robotics and Automation Magazine , vol. 17, pp.44–54, 2010.[33] A. Mandlekar, D. Xu, R. Mart ́ın-Mart ́ın, S. Savarese, and L. Fei-Fei, “Learning to generalizeacross long-horizon tasks from human demonstrations,” arXiv preprint arXiv:2003.06085 ,2020.[34] C. Wang, R. Wang, D. Xu, A. Mandlekar, L. Fei-Fei, and S. Savarese, “Generalizationthrough hand-eye coordination: An action space for learning spatially-invariant visuomotorcontrol,” arXiv preprint arXiv:2103.00375 , 2021.10[35] P. Mitrano and D. Berenson, “Data augmentation for manipulation,” arXiv preprintarXiv:2205.02886 , 2022.[36] M. Laskin, K. Lee, A. Stooke, L. Pinto, P. Abbeel, and A. Srinivas, “Reinforcement learningwith augmented data,” arXiv preprint arXiv:2004.14990 , 2020.[37] I. Kostrikov, D. Yarats, and R. Fergus, “Image augmentation is all you need: Regularizingdeep reinforcement learning from pixels,” arXiv preprint arXiv:2004.13649 , 2020.[38] S. Young, D. Gandhi, S. Tulsiani, A. Gupta, P. Abbeel, and L. Pinto, “Visual imitation madeeasy,” arXiv e-prints , pp. arXiv–2008, 2020.[39] A. Zhan, P. Zhao, L. Pinto, P. Abbeel, and M. Laskin, “A framework for efficient roboticmanipulation,” arXiv preprint arXiv:2012.07975 , 2020.[40] S. Sinha, A. Mandlekar, and A. Garg, “S4rl: Surprisingly simple self-supervision for offlinereinforcement learning in robotics,” in Conference on Robot Learning . PMLR, 2022, pp.907–917.[41] S. Pitis, E. Creager, and A. Garg, “Counterfactual data augmentation using locally factoreddynamics,” Advances in Neural Information Processing Systems , vol. 33, pp. 3976–3990,2020.[42] S. Pitis, E. Creager, A. Mandlekar, and A. Garg, “Mocoda: Model-based counterfactual dataaugmentation,” arXiv preprint arXiv:2210.11287 , 2022.[43] Z. Mandi, H. Bharadhwaj, V . Moens, S. Song, A. Rajeswaran, and V . Kumar, “Cacti: Aframework for scalable multi-task multi-scene visual imitation learning,” arXiv preprintarXiv:2212.05711 , 2022.[44] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh, C. Tan, J. Per-alta, B. Ichter et al. , “Scaling robot learning with semantically imagined experience,” arXivpreprint arXiv:2302.11550 , 2023.[45] Z. Chen, S. Kiami, A. Gupta, and V . Kumar, “Genaug: Retargeting behaviors to unseensituations via generative augmentation,” arXiv preprint arXiv:2302.06671 , 2023.[46] V . V osylius and E. Johns, “Where to start? 
transferring simple skills to complex environ-ments,” arXiv preprint arXiv:2212.06111 , 2022.[47] A. Chenu, O. Serris, O. Sigaud, and N. Perrin-Gilbert, “Leveraging sequentiality in reinforce-ment learning from a single demonstration,” arXiv preprint arXiv:2211.04786 , 2022.[48] J. Liang, B. Wen, K. Bekris, and A. Boularias, “Learning sensorimotor primitives of sequen-tial manipulation tasks from visual demonstrations,” in 2022 International Conference onRobotics and Automation (ICRA) . IEEE, 2022, pp. 8591–8597.[49] Y . Zhu, J. Wong, A. Mandlekar, and R. Mart ́ın-Mart ́ın, “robosuite: A modular simulationframework and benchmark for robot learning,” in arXiv preprint arXiv:2009.12293 , 2020.[50] E. Todorov, T. Erez, and Y . Tassa, “Mujoco: A physics engine for model-based control,” inIEEE/RSJ International Conference on Intelligent Robots and Systems , 2012, pp. 5026–5033.[51] Y . Narang, K. Storey, I. Akinola, M. Macklin, P. Reist, L. Wawrzyniak, Y . Guo, A. Mora-vanszky, G. State, M. Lu et al. , “Factory: Fast contact for robotic assembly,” arXiv preprintarXiv:2205.03532 , 2022.[52] V . Makoviychuk, L. Wawrzyniak, Y . Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller,N. Rudin, A. Allshire, A. Handa et al. , “Isaac gym: High performance gpu-based physicssimulation for robot learning,” arXiv preprint arXiv:2108.10470 , 2021.[53] Y . Zhu, P. Stone, and Y . Zhu, “Bottom-up skill discovery from unsegmented demonstrationsfor long-horizon robot manipulation,” IEEE Robotics and Automation Letters , vol. 7, no. 2,pp. 4126–4133, 2022.[54] Y . Zhu, A. Joshi, P. Stone, and Y . Zhu, “Viola: Imitation learning for vision-based manipula-tion with object proposal priors,” 6th Annual Conference on Robot Learning , 2022.[55] S. Nasiriany, T. Gao, A. Mandlekar, and Y . Zhu, “Learning and retrieval from prior data forskill-based imitation learning,” in Conference on Robot Learning (CoRL) , 2022.11[56] O. Mees, L. Hermann, E. Rosete-Beas, and W. Burgard, “Calvin: A benchmark for language-conditioned policy learning for long-horizon robot manipulation tasks,” IEEE Robotics andAutomation Letters , vol. 7, no. 3, pp. 7327–7334, 2022.[57] S. Levine, A. Kumar, G. Tucker, and J. Fu, “Offline reinforcement learning: Tutorial, review,and perspectives on open problems,” arXiv preprint arXiv:2005.01643 , 2020.[58] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel, “Sim-to-real transfer of roboticcontrol with dynamics randomization,” in 2018 IEEE international conference on roboticsand automation (ICRA) . IEEE, 2018, pp. 3803–3810.[59] M. Kaspar, J. D. M. Osorio, and J. Bock, “Sim2real transfer for reinforcement learningwithout dynamics randomization,” in 2020 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) . IEEE, 2020, pp. 4383–4388.[60] A. Allshire, M. MittaI, V . Lodaya, V . Makoviychuk, D. Makoviichuk, F. Widmaier,M. W ̈uthrich, S. Bauer, A. Handa, and A. Garg, “Transferring dexterous manipulation fromgpu simulation to a remote real-world trifinger,” in 2022 IEEE/RSJ International Conferenceon Intelligent Robots and Systems (IROS) . IEEE, 2022, pp. 11 802–11 809.[61] M. Khansari, D. Ho, Y . Du, A. Fuentes, M. Bennice, N. Sievers, S. Kirmani, Y . Bai, andE. Jang, “Practical imitation learning in the real world via task consistency loss,” arXivpreprint arXiv:2202.01862 , 2022.[62] A. Handa, A. Allshire, V . Makoviychuk, A. Petrenko, R. Singh, J. Liu, D. Makoviichuk,K. Van Wyk, A. Zhurkevich, B. Sundaralingam et al. 
, “DeXtreme: Transfer of agile in-hand manipulation from simulation to reality,” arXiv preprint arXiv:2210.13702, 2022.
[63] O. Khatib, “A unified approach for motion and force control of robot manipulators: The operational space formulation,” IEEE Journal on Robotics and Automation, vol. 3, no. 1, pp. 43–53, 1987.
[64] S. Dasari, J. Wang, J. Hong, S. Bahl, Y. Lin, A. Wang, A. Thankaraj, K. Chahal, B. Calli, S. Gupta et al., “RB2: Robotic manipulation benchmarking with a twist,” arXiv preprint arXiv:2203.08098, 2022.
[65] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine, “Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning,” in Conference on Robot Learning. PMLR, 2020, pp. 1094–1100.
[66] T. Mu, Z. Ling, F. Xiang, D. Yang, X. Li, S. Tao, Z. Huang, Z. Jia, and H. Su, “ManiSkill: Generalizable manipulation skill benchmark with large-scale demonstrations,” arXiv preprint arXiv:2107.14483, 2021.
[67] J. Zhang and K. Cho, “Query-efficient imitation learning for end-to-end autonomous driving,” arXiv preprint arXiv:1605.06450, 2016.
[68] R. Hoque, A. Balakrishna, C. Putterman, M. Luo, D. S. Brown, D. Seita, B. Thananjeyan, E. Novoseller, and K. Goldberg, “LazyDAgger: Reducing context switching in interactive imitation learning,” in 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE). IEEE, 2021, pp. 502–509.
[69] R. Hoque, A. Balakrishna, E. Novoseller, A. Wilcox, D. S. Brown, and K. Goldberg, “ThriftyDAgger: Budget-aware novelty and risk gating for interactive imitation learning,” arXiv preprint arXiv:2109.08273, 2021.
[70] S. Dass, K. Pertsch, H. Zhang, Y. Lee, J. J. Lim, and S. Nikolaidis, “PATO: Policy assisted teleoperation for scalable robot data collection,” arXiv preprint arXiv:2212.04708, 2022.
[71] Y. Zhang, H. Ling, J. Gao, K. Yin, J.-F. Lafleche, A. Barriuso, A. Torralba, and S. Fidler, “DatasetGAN: Efficient labeled data factory with minimal human effort,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 10145–10155.
[72] D. Li, H. Ling, S. W. Kim, K. Kreis, S. Fidler, and A. Torralba, “BigDatasetGAN: Synthesizing ImageNet with pixel-wise annotations,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 21330–21340.
[73] A. Kar, A. Prakash, M.-Y. Liu, E. Cameracci, J. Yuan, M. Rusiniak, D. Acuna, A. Torralba, and S. Fidler, “Meta-Sim: Learning to generate synthetic datasets,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 4551–4560.
[74] J. Devaranjan, A. Kar, and S. Fidler, “Meta-Sim2: Unsupervised learning of scene structure for synthetic data generation,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVII. Springer, 2020, pp. 715–733.
[75] S. W. Kim, J. Philion, A. Torralba, and S. Fidler, “DriveGAN: Towards a controllable high-quality neural simulation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 5820–5829.
[76] D. Paschalidou, A. Kar, M. Shugrina, K. Kreis, A. Geiger, and S. Fidler, “ATISS: Autoregressive transformers for indoor scene synthesis,” Advances in Neural Information Processing Systems, vol. 34, pp. 12013–12026, 2021.
[77] S. Tan, K. Wong, S. Wang, S. Manivasagam, M. Ren, and R. Urtasun, “SceneGen: Learning to generate realistic traffic scenes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 892–901.
[78] C. Chamzas, C. Quintero-Pena, Z. Kingston, A. Orthey, D. Rakita, M. Gleicher, M. Toussaint, and L. E. Kavraki, “MotionBenchMaker: A tool to generate and benchmark motion planning datasets,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 882–889, 2021.
[79] C. Lynch, M. Khansari, T. Xiao, V. Kumar, J. Tompson, S. Levine, and P. Sermanet, “Learning latent plans from play,” in Conference on Robot Learning, 2019.
[80] K. Pertsch, Y. Lee, Y. Wu, and J. J. Lim, “Demonstration-guided reinforcement learning with learned skills,” in Conference on Robot Learning, 2021.
[81] A. Ajay, A. Kumar, P. Agrawal, S. Levine, and O. Nachum, “OPAL: Offline primitive discovery for accelerating offline reinforcement learning,” in International Conference on Learning Representations, 2021.
[82] K. Hakhamaneshi, R. Zhao, A. Zhan, P. Abbeel, and M. Laskin, “Hierarchical few-shot imitation with skill transition models,” in International Conference on Learning Representations, 2021.
[83] A. Kumar, A. Singh, F. Ebert, Y. Yang, C. Finn, and S. Levine, “Pre-training for robots: Offline RL enables learning new tasks from a handful of trials,” arXiv preprint arXiv:2210.05178, 2022.
[84] M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[85] M. Ester, H.-P. Kriegel, J. Sander, X. Xu et al., “A density-based algorithm for discovering clusters in large spatial databases with noise,” in KDD, vol. 96, no. 34, 1996, pp. 226–231.
[86] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2961–2969.
[87] M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg, “Segmenting unknown 3D objects from real depth images using Mask R-CNN trained on synthetic data,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 7283–7290.
[88] B. Wen, C. Mitash, S. Soorian, A. Kimmel, A. Sintov, and K. E. Bekris, “Robust, occlusion-aware pose estimation for objects grasped by adaptive hands,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 6210–6217.
[89] Z. Zhang, “Iterative point matching for registration of free-form curves and surfaces,” International Journal of Computer Vision, vol. 13, no. 2, pp. 119–152, 1994.
[90] B. Wen and K. Bekris, “BundleTrack: 6D pose tracking for novel objects without instance or category-level 3D models,” in IROS, 2021.
[91] T. Lee, J. Tremblay, V. Blukis, B. Wen, B.-U. Lee, I. Shin, S. Birchfield, I. S. Kweon, and K.-J. Yoon, “TTA-COPE: Test-time adaptation for category-level object pose estimation,” in CVPR, 2023.
[92] Y. Liu, Y. Wen, S. Peng, C. Lin, X. Long, T. Komura, and W. Wang, “Gen6D: Generalizable model-free 6-DoF object pose estimation from RGB images,” in Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXII. Springer, 2022, pp. 298–315.
[93] J. Sun, Z. Wang, S. Zhang, X. He, H. Zhao, G. Zhang, and X. Zhou, “OnePose: One-shot object pose estimation without CAD models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 6825–6834.
[94] B. Wen, J. Tremblay, V. Blukis, S. Tyree, T. Muller, A. Evans, D. Fox, J. Kautz, and S. Birchfield, “BundleSDF: Neural 6-DoF tracking and 3D reconstruction of unknown objects,” CVPR, 2023.
[95] E. Valassakis, N. Di Palo, and E. Johns, “Coarse-to-fine for sim-to-real: Sub-millimetre precision across wide task spaces,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 5989–5996.
[96] P.-L. Guhur, S. Chen, R. G. Pinel, M. Tapaswi, I. Laptev, and C. Schmid, “Instruction-driven history-aware policies for robotic manipulations,” in Conference on Robot Learning. PMLR, 2023, pp. 175–187.
[97] H. Ha, P. Florence, and S. Song, “Scaling up and distilling down: Language-guided robot skill acquisition,” arXiv preprint arXiv:2307.14535, 2023.
[98] A. Agarwal, A. Kumar, J. Malik, and D. Pathak, “Legged locomotion in challenging terrains using egocentric vision,” in Conference on Robot Learning. PMLR, 2023, pp. 403–415.
[99] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, “Domain randomization for transferring deep neural networks from simulation to the real world,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017, pp. 23–30.
[100] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song, “Diffusion Policy: Visuomotor policy learning via action diffusion,” arXiv preprint arXiv:2303.04137, 2023.
[101] Y. Zhu, Z. Wang, J. Merel, A. Rusu, T. Erez, S. Cabi, S. Tunyasuvunakool, J. Kramár, R. Hadsell, N. de Freitas et al., “Reinforcement and imitation learning for diverse visuomotor skills,” arXiv preprint arXiv:1802.09564, 2018.
[102] A. Mandlekar, F. Ramos, B. Boots, S. Savarese, L. Fei-Fei, A. Garg, and D. Fox, “IRIS: Implicit reinforcement without interaction at scale for learning control from offline robot manipulation data,” in IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 4414–4420.
[103] L. Chen, R. Paleja, and M. Gombolay, “Learning from suboptimal demonstration via self-supervised reward regression,” in Conference on Robot Learning. PMLR, 2021, pp. 1262–1277.
[104] D. Brown, W. Goo, P. Nagarajan, and S. Niekum, “Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations,” in International Conference on Machine Learning. PMLR, 2019, pp. 783–792.
[105] R. Jeong, J. T. Springenberg, J. Kay, D. Zheng, Y. Zhou, A. Galashov, N. Heess, and F. Nori, “Learning dexterous manipulation from suboptimal experts,” arXiv preprint arXiv:2010.08587, 2020.
[106] H. Xu, X. Zhan, H. Yin, and H. Qin, “Discriminator-weighted offline imitation learning from suboptimal demonstrations,” in International Conference on Machine Learning. PMLR, 2022, pp. 24725–24742.
[107] M. Yang, S. Levine, and O. Nachum, “TRAIL: Near-optimal imitation learning with suboptimal data,” arXiv preprint arXiv:2110.14770, 2021.
[108] A. S. Morgan, B. Wen, J. Liang, A. Boularias, A. M. Dollar, and K.
Bekris, “Vision-drivencompliant manipulation for reliable, high-precision assembly tasks,” RSS, 2021.14AppendixA OverviewThe Appendix contains the following content.•Author Contributions (Appendix B): list of each author’s contributions to the paper•FAQ (Appendix C): answers to some common questions•Limitations (Appendix D): more thorough list and discussion of MimicGen limitations•Full Related Work (Appendix E): more thorough discussion on related work•Robot Transfer (Appendix F): full set of results for generating data across robot arms•Object Transfer (Appendix G): full set of results for generating data across objects•Real Robot Results (Appendix H): additional details and discussion on the real robot ex-periments, including an explanation for the lower training results in the real world•Different Demonstrators (Appendix I): results that show MimicGen works just as wellwhen using source demos from suboptimal demonstrators and from different teleoperationdevices•Motivation for MimicGen over Alternative Methods (Appendix J): motivation for Mim-icGen over offline data augmentation and replay-based imitation•Additional Details on Object-Centric Subtasks (Appendix K): more details and intuitionon subtasks, including examples•Tasks and Task Variants (Appendix L): detailed descriptions all tasks and task variants•Derivation of Subtask Segment Transform (Appendix M): derivation of how MimicGentransforms subtask segments from the source data•Data Generation Details (Appendix N): in-depth details on how MimicGen generates data•Policy Training Details (Appendix O): details of how policies were trained from Mimic-Gen datasets via imitation learning•Data Generation Success Rates (Appendix P): data generation success rates for each ofour generated datasets•Low-Dim Policy Training Results (Appendix Q): full results for agents trained on low-dimobservation spaces (image agents presented in main text)•Bias and Artifacts in Generated Data (Appendix R): discussion on some undesirableproperties of MimicGen data•Using More Varied Source Demonstrations (Appendix S): investigation on whether hav-ing source demonstrations collected on a more varied set of task initializations is helpful•Data Generation with Multiple Seeds (Appendix T): results that show there is very littlevariance in empirical results across different data generation seeds•Tolerance to Pose Estimation Error (Appendix U): investigation of MimicGen’s toler-ance to pose error15B Author ContributionsAjay Mandlekar led the overall project, implemented the MimicGen code, ran most of the experi-ments in the paper, and wrote the paper.Soroush Nasiriany implemented the Mobile Manipulation code, ran the Mobile Manipulation ex-periments, and also helped with simulation environment development, including the mug assets. Healso wrote parts of the paper.Bowen Wen developed the perception pipeline used in real-world experiments and wrote parts ofthe paper.Iretiayo Akinola designed the Factory [51] tasks and assets and developed infrastructure to ensurecompatibility with human teleoperation, imitation learning, and MimicGen.Yashraj Narang helped design the Factory [51] tasks and assets, and advised on the project.Linxi Fan advised on the project.Yuke Zhu advised on the project and wrote parts of the paper.Dieter Fox advised on the project, and provided experiment ideas.16C FAQ1.How can I reproduce experiment results?We have released datasets, simulation environments, and instructions on reproducing thepolicy learning results at https://mimicgen.github.io . 
We also hope that theavailability of our datasets helps the community develop better policy learning methods.2.What are some limitations of MimicGen?See Appendix D for a discussion.3.Why are policy learning results worse in the real world than in simulation?See Appendix H for discussion and an additional experiment.4.Since data generation relies on open-loop replay of source human data, it seems likeMimicGen only works for low-precision pick-and-place tasks.We demonstrated that MimicGen can work for a large variety of manipulation tasks and be-haviors beyond standard pick-and-place tasks. This includes tasks with non-trivial contact-rich manipulation (Gear Assembly has 1mm insertion tolerance , and Picture Frame As-sembly needs alignment of 4 holes with 4mm tolerance each ), long-horizon manipulation(up to 8 subtasks), and behaviors beyond pick-and-place such as insertion, pushing, and ar-ticulation — see Appendix L for full details. The tasks also have pose variation well beyondtypical prior works using BC from human demos [1, 3–7, 30, 33, 55, 56].5.Is MimicGen robust to noisy object pose estimates during data generation?In the real world, we use the initial RGBD image to estimate object poses (see Appendix H).Thus, MimicGen is compatible with pose estimation methods and has some tolerance topose error. We further investigated tolerance to pose estimate errors in simulation (see Ap-pendix U) and found that while data generation rates can decrease (so data collection willtake longer), policies trained on the generated data maintained the same level of perfor-mance.6.Several recent works apply offline data augmentation to existing datasets to createmore data. What are the advantages of generating new data online like MimicGendoes?Offline data augmentation can be effective for generating larger dataset for robot manip-ulation [7, 35–45]; however, it can be difficult to generate plausible interactions withoutprior knowledge of physics [35] or causal dependencies [41,42], especially for new scenes,objects, or robots. In contrast, by generating new datasets through environment interaction,MimicGen data is guaranteed to be physically-consistent. Additionally, in contrast to manyoffline data augmentation methods, MimicGen is easy to implement and apply in practice,since only a small number of assumptions are needed (see Sec. 3). See more discussion inAppendix J.2.7.What is the advantage of using replay-based imitation for data generation and thentraining a policy with BC (like MimicGen does) over using it as the final agent?Replay-based imitation learning methods are promising for learning manipulation tasks us-ing a handful of demonstrations [8–11, 46–48], but they have some limitations comparedto MimicGen, which uses similar mechanisms during data generation, but trains an end-to-end closed-loop agent from the generated data. First, replay-based agents generallyconform to a specific policy architecture, while MimicGen datasets allow full compatibil-ity with a wide spectrum of offline policy learning algorithms [57]. Second, replay-basedmethods are typically open-loop , since they consist of replaying a demonstration blindly,while agents trained on MimicGen datasets can have closed-loop , reactive behavior, sincethe agent can respond to changes in observations. Finally, as we saw in Sec. 
6 (and Ap-pendix P), in many cases, the data generation success rate (a proxy for the performance ofreplay-based methods) can be significantly lower than the performance of trained agents.See more discussion in Appendix J.1.8.Why might a data generation attempt result in a failure?One reason is that the interpolation segments are unaware of the geometry in the scene andconsist of naive linear interpolation (see Appendix N.2), so these segments might resultin unintended collisions. Another is that the way source segments are transformed do not17consider arm kinematics, so the end effector poses where segments start might be difficultto reach. A third reason is that certain source dataset motions might be easier for thecontroller to track than others.9.When can MimicGen be applied to generate data for new objects?We demonstrated results on geometrically similar rigid-body objects from the same cate-gory (e.g. mugs, carrots, pans) with similar scales. We also assumed aligned canonicalcoordinate frames for all objects in a category, and that the objects are well-described bytheir poses (e.g. rigid bodies, not soft objects). Extending the system for soft objects ormore geometrically diverse objects is left for future work.10.Can MimicGen data contain undesirable characteristics?See Appendix R for a discussion.11.Give a breakdown of how MimicGen was used to generate 50K demos from 200 hu-man demos.Here is the breakdown. It should be noted that this breakdown does not include our realrobot demonstrations (200 demos generated from 20 source demos) or any extra datasetsgenerated for additional experiments and analysis presented in the appendix.• 175 source demos: 10 source demos for each of 16 simulated tasks in Fig. 4 (exceptMobile Kitchen, which has 25)• 36K generated demos: 1000 demos for each of the 36 task variants in Fig. 4• 12K generated demos: robot transfer experiment (Appendix F) had 2 tasks, each ofwhich had 2 variants ( D0,D1) and 3 new robot arms for 12×1000 demos.• 2K generated demos: object transfer experiment (Appendix G) had 1000 demos fortheO1(new mug) and O2(12 mugs) variants.18D LimitationsIn this section, we discuss limitations of MimicGen that can motivate and inform future work.1.Known sequence of object-centric subtasks. MimicGen assumes knowledge of theobject-centric subtasks in a task (which object is involved at each subtask) and also as-sumes that this sequence of subtasks does not change (Assumption 2, Sec. 3).2.Known object poses at start of each subtask during data generation. During data gener-ation, at the start of each object-centric subtask, MimicGen requires an object pose estimateof the reference object for that subtask (Assumption 3, Sec 3). However, we demonstratedthat we can run MimicGen in the real world, using pose estimation methods (Sec. 6.4 andAppendix H), and has some tolerance to errors in pose estimates (Appendix U). Another av-enue for real world deployment is to generate data and train policies in simulation (whereobject poses are readily available) and then deploy simulation-trained agents in the realworld [58–62] — this is left for future work.3.One reference object per subtask. MimicGen assumes each task is composed of a se-quence of subtasks that are each relative to exactly one object (Assumption 2, Sec. 3).Being able to support subtasks where the motion depends on more than one object (forexample, placing an object relative to two objects, or on a cluttered shelf) is left for futurework.4.Naive filtering for generated data. 
MimicGen has a naive way to filter data generationattempts (just task success rates). However, this does not prevent the generated datasetsfrom being biased, or having artifacts (see discussion in Appendix R). Developing betterfiltering mechanisms is left for future work.5.Naive interpolation scheme and no guarantee on collision-free motion. MimicGen usesa naive linear interpolation scheme to connect transformed human segments together (Ap-pendix N.2). However, this method is not aware of scene geometry, and consequently canresult in unintended collisions if objects happen to be in the way of the straight line path.We opted for this simple approach to avoid the complexity of integrating a planner andensuring it uses the same action space (Operational Space Control [63]). We also saw thatlonger interpolation segments could be harmful to policy learning from generated data (Ap-pendix H). Similarly, ensuring that motion plans are not harmful to policy learning could benon-trivial. Developing better-quality interpolation segments (e.g. potentially with motionplanning) that are both amenable to downstream policy learning and safer for real-worldoperation is left for future work.6.Object transfer limitations. While MimicGen can generate data for manipulating dif-ferent objects (Appendix G), we only demonstrated results on geometrically similar rigid-body objects from the same category (e.g. mugs, carrots, pans) with similar scales. Wealso assumed aligned canonical coordinate frames for all objects in a category, and that theobjects are well-described by their poses (e.g. rigid bodies, not soft objects). Extending thesystem for soft objects or more geometrically diverse objects is left for future work.7.Task limitations. MimicGen was demonstrated on quasi-static tasks — it is unlikely towork on dynamic, non quasi-static tasks in its current form. However, a large number ofrobot learning works and benchmarks use quasi-static tasks [1,3–7,14,18,19,22,30,33,51,55, 56, 64–66], making the system broadly applicable. We also did not apply MimicGento tasks where objects had different dynamics from the source demonstrations (e.g. newfriction values). However, there is potential for MimicGen to work, depending on thetask. Recall that on each data generation attempt, MimicGen tracks a target end effectorpose path (Sec. 4.2) — this allows data generation for robot arms with different dynamics(Appendix F), and could potentially allow it to work for different object dynamics (e.g.pushing a cube across different table frictions).8.Mobile manipulation limitations. In Sec. 6.1, we presented results for MimicGen on theMobile Kitchen task, which requires mobile manipulation (base and arm motion). Ourcurrent implementation has some limitations. First, it assumes that the robot does notmove the mobile base and arm simultaneously. Second, we simply copy the mobile baseactions from the reference segment rather than transforming it like we do for end effectoractions. We found this simple approach sufficient for the Mobile Kitchen task (more details19in Appendix N.5). Future work could integrate more sophisticated logic for generating basemotion (e.g. defining and using a reference frame for each base motion segment, like theobject-centric subtasks used for arm actions, and/or integrating a motion planner for thebase).9.No support for multi-arm tasks. 
MimicGen only works for single arm tasks — extendingit to generate datasets for multi-manual manipulation [25] is left for future work.20E Full Related WorkThis section presents a more thorough discussion of related work than the summary presented in themain text.Data Collection for Robot Learning. There have been several data collection efforts to try andaddress the need for large-scale data in robotics. Some efforts have focused on self-superviseddata collection where robots gather data on tasks such as grasping through trial-and-error [12–17].RoboTurk [2, 23–26] is a system for crowdsourcing task demonstrations from human operators us-ing smartphone-based teleoperation and video streams provided in web browsers. Several relatedefforts [3–6, 27] also collect large datasets (e.g. 1000s of demonstrations) by using a large numberof human operators over extended periods of time. In contrast, MimicGen tries to make effectiveuse of a small number of human demonstrations (e.g. 10) to generate large datasets. Some workshave collected large datasets using pre-programmed demonstrators in simulation [18–22]; however,it can be difficult to scale these approaches up to more complex tasks, while we show that Mimic-Gen can be applied to a broad range of tasks. Prior work has also attempted to develop systems thatcan selectively query humans for demonstrations when they are needed, in order to reduce humanoperator time and burden [67–70]. In contrast, MimicGen only needs an operator to collect a fewminutes of demonstrations at the start of the process. Generating large synthetic datasets has beena problem of great interest in other domains as well [71–77], and has also been used as a tool forbenchmarking motion planning [78].Imitation Learning for Robot Manipulation. Imitation Learning (IL) seeks to train policies froma set of demonstrations. Behavioral Cloning (BC) [28] is a standard method for learning policiesoffline, by training the policy to mimic the actions in the demonstrations. It has been used extensivelyin prior work for robot manipulation [1,19,25,29–34] — in this work, we use BC to train single-taskpolicies from datasets generated by MimicGen. However, MimicGen can also be used to generatedatasets for a wide range of existing offline learning algorithms that learn from diverse multi-taskdatasets [53, 55, 79–83]. Some works have used offline data augmentation to increase the datasetsize for learning policies [7, 35–45] — in this work we collect new datasets.Replay-Based Imitation Learning. While BC is simple and effective, it typically requires severaldemonstrations to learn a task [7]. To alleviate this, many recent imitation learning methods try tolearn policies from only a handful of demonstrations by replaying demonstrations in new scenes [8–11, 46–48]. Some methods [9–11] use trained networks that help the robot end effector approachposes from which a demonstration can be replayed successfully. In particular, Di Palo et al. [11]proposes an approach to replay parts of a single demonstration to solve multi-stage tasks — this issimilar to the way MimicGen generates new datasets. However they make a number of assumptionsthat we do not (4D position and yaw action space vs. our 6-DoF action space, a single wrist cameraview to enable spatial generalization). 
Furthermore, this work and others use demonstration replayas a component of the final trained agent — in contrast, we use it as a data generation mechanism.Consequently, these prior approaches are complementary to our data generation system, and inprinciple, could be used as a part of alternative schemes for data generation. In this work, wefocus on the general framework of using such demonstration replay mechanisms to generate datathat can be seamlessly integrated into existing imitation learning pipelines, and opt for an approachthat emphasizes simplicity (more discussion in Appendix J). Our experiments also show that therecan be a large benefit from collecting large datasets and training agents from them, instead of directlydeploying a replay-based agent.21F Robot TransferIn Sec. 6, we summarized results that show MimicGen can generate data for diverse robot hardware.Recall that we took the source datasets from the Square and Threading tasks (which use the Pandaarm) and generated datasets for the Sawyer, IIWA, and UR5e robots across the D0andD1resetdistribution variants (see Fig. F.1). Here, we present the complete set of results.Notice that although the data generation rates have a large spread across robots (range 20%-74% forD0, see Table F.1), the policy success rates are significantly higher and remarkably similar acrossrobots (for example, 80%-91% on Square D0and 89%-98% on Threading D0— see the full image-based agent results in Table F.2 and low-dim agent results in Table F.3). This shows the potentialfor using human demonstrations across robot hardware using MimicGen, an exciting prospect, asteleoperated demonstrations are typically constrained to a single robot.PandaSawyerIIWAUR5eFigure F.1: Robots used in Robot Transfer Experiment. The figure shows the robot arms used for datageneration. Source datasets were collected on the Panda arm (blue border) and used to generate data for theSawyer, IIWA, and UR5e arms (orange border).Task Variant Panda Sawyer IIWA UR5eSquare ( D0) 73.7 55 .8 37 .7 64 .7Square ( D1) 48.9 38 .8 26 .5 34 .1Threading ( D0)51.0 28 .8 20 .4 21 .4Threading ( D1)39.2 23 .7 11 .5 18 .5Table F.1: Data Generation Rates on Different Robot Hardware. The success rates of data generation aredifferent across different robot arms (yet agents trained on these datasets achieve similar task success rates).Task Variant Panda Sawyer IIWA UR5eSquare ( D0) 90.7±1.9 86 .0±1.6 80 .0±4.3 84 .7±0.9Square ( D1) 73.3±3.4 60 .7±2.5 48 .0±3.3 56 .0±4.3Threading ( D0)98.0±1.6 88 .7±7.5 94 .0±3.3 91 .3±0.9Threading ( D1)60.7±2.5 50 .7±3.8 49 .3±4.1 60 .7±2.5Table F.2: Agent Performance on Different Robot Hardware. We use MimicGen to produce datasets acrossdifferent robot arms using the same set of 10 source demos (collected on the Panda arm) and train image-basedagents on each dataset (3 seeds). The success rates are comparable across the different robot arms, indicatingthat MimicGen can generate high-quality data across robot hardware.Task Variant Panda Sawyer IIWA UR5eSquare ( D0) 98.0±1.6 87 .3±1.9 79 .3±2.5 82 .0±1.6Square ( D1) 80.7±3.4 69 .3±2.5 55 .3±1.9 67 .3±3.4Threading ( D0)97.3±0.9 96 .7±2.5 93 .3±0.9 96 .0±1.6Threading ( D1)72.0±1.6 73 .3±2.5 67 .3±4.7 80 .0±4.9Table F.3: Low-Dim Agent Performance on Different Robot Hardware. We use MimicGen to producedatasets across different robot arms using the same set of 10 source demos (collected on the Panda arm) and trainagents on each dataset (3 seeds). 
The success rates are comparable across the different robot arms, indicatingthat MimicGen can generate high-quality data across robot hardware.22G Object TransferIn Sec. 6, we summarized results that show MimicGen can generate data for different objects. Recallthat we took the source dataset from the Mug Cleanup task and generated data with MimicGen foran unseen mug ( O1) and for a set of 12 mugs ( O2). Here, we present the complete set of results(Table G.1) and also visualize the mugs used for this experiment (Fig. G.1).The Mobile Kitchen task that we generated data for also had different object variants — we showthe 3 pans and 3 carrots in Fig. G.2. Results for this task are in Fig. 4 (image-based agents) and inTable Q.1 (low-dim agents).While these results are promising, we only demonstrated results on geometrically similar rigid-bodyobjects from the same category (e.g. mugs, carrots, pans) with similar scales. We also assumedaligned canonical coordinate frames for all objects in a category, and that the objects are well-described by their poses (e.g. rigid bodies, not soft objects). Extending the system for soft objectsor more geometrically diverse objects is left for future work.Task D0 O1 O2Mug Cleanup (DGR) 29.5 31 .0 24 .5Mug Cleanup (SR, image) 80.0±4.9 90 .7±1.9 75 .3±5.2Mug Cleanup (SR, low-dim) 82.0±2.8 88 .7±4.1 66 .7±2.5Table G.1: Object Transfer Results. We present data generation rates (DGR) and success rates (SR) of trainedagents on the O1andO2variants of the Mug Cleanup task, which have an unseen mug, and a set of 12 mugs(a new mug per episode) respectively.Figure G.1: Objects used in Object Transfer Experiment. The figure shows the mug used in the MugCleanup D0task (blue border), the unseen one in the O1task (orange border), and the complete set of mugs intheO2task.Figure G.2: Objects used in Mobile Kitchen task. The figure shows the 3 pans and 3 carrots used in theMobile Kitchen task. On each episode a random pan and carrot are selected and initialized in the scene.23H Real Robot ResultsStack (D1) Square (D1) TPA (D1) Coffee (D1) Threading (D1) PickPlace (D0)Task020406080100Success RateInterpolation ComparisonNum Steps5 50 Figure H.1: Effect of Increasing Interpolation Steps. Comparing the effort of interpolation steps on trainedimage-based agents. Using an increased amount of interpolation can cause agent performance to decreasesignificantly. This could explain the gap between real-world and simulation agent performance.In this section, we first provide further details on how we applied MimicGen to the real world tasksin Fig. 5, then we provide additional experiment results that help to explain the gap in trained policyperformance between simulation and real.Real Robot Data Collection Details. Recall that during data generation, MimicGen requires poseestimates at the start of each object-centric subtask (Assumption 3, Sec. 3). To do this, we use afront-view Intel RealSense D415 camera which has been calibrated (e.g. known extrinsics). Wefirst convert the RGBD image to a point cloud and remove the table plane via RANSAC [84]. Wethen apply DBSCAN [85] clustering to identify object segments of interest, though alternative seg-mentation methods such as [86, 87] are also applicable. In the Stack task, the cube instances aredistinguished by their color. In the Coffee task, the coffee machine and the pod are distinguishedbased on the segment dimensions. Finally for each identified object segment, we leverage [88] forglobal pose initialization, followed by ICP [89] refinement. 
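A minimal sketch of this perception pipeline is given below. It is an illustrative reconstruction assuming the Open3D library; the thresholds, the association between clustered segments and task objects, and the initial pose guess fed to ICP are placeholders rather than the exact values and steps used in the experiments.

```python
# Illustrative sketch of the real-world pose estimation pipeline described above:
# point cloud -> RANSAC table-plane removal -> DBSCAN clustering -> ICP refinement.
# Thresholds and the segment-to-object association are placeholders.
import numpy as np
import open3d as o3d

def estimate_object_pose(scene_pcd, object_model_pcd, init_pose, segment_index=0):
    # 1) Remove the dominant plane (the table) with RANSAC.
    _, plane_inliers = scene_pcd.segment_plane(
        distance_threshold=0.01, ransac_n=3, num_iterations=1000)
    objects_pcd = scene_pcd.select_by_index(plane_inliers, invert=True)

    # 2) Cluster the remaining points into object segments with DBSCAN.
    labels = np.array(objects_pcd.cluster_dbscan(eps=0.02, min_points=50))
    segment = objects_pcd.select_by_index(
        np.where(labels == segment_index)[0].tolist())
    # (In practice the segment for each object is identified by task-specific cues,
    #  e.g. cube color in Stack or segment dimensions in Coffee.)

    # 3) Refine a global pose initialization with point-to-point ICP against the segment.
    result = o3d.pipelines.registration.registration_icp(
        object_model_pcd, segment, max_correspondence_distance=0.01, init=init_pose,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 pose estimate of the object
```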
Note that while the current pose esti-mation pipeline works reasonably well, our framework is not specific to certain types of perceptionmethods. Recent [90–94] and future advances in state estimation could be used to apply MimicGenin real-world settings with less assumptions about the specific objects.Gap in Policy Performance between Sim and Real. While we saw a significantly high data col-lection success rate (82.3% for Stack, 52.1% for Coffee), we saw much lower policy success rate onthese tasks than in simulation (36% vs. 100% for Stack, and 14% vs. ∼90% for Coffee), as describedin Sec. 6). While there was considerably less data in the real world due to the time-consuming natureof real-world data collection (100 demos instead of 1000 demos), there were also other factors thatcould explain this gap.As a safety consideration, our real-world tasks used much larger interpolation segments of ninterp=25,nfixed= 25 instead of the simulation default ( ninterp = 5,nfixed= 0) (see Appendix N.2 andAppendix N.6). We hypothesized that the increased duration of the interpolation segments madethem difficult to imitate, since there was little association between the motion and what the agent seesin the observations (the motions are slow, and do not generally move towards regions of interest).To further investigate this, we ran an experiment in simulation where we used the same settings forinterpolation for a subset of our tasks. The results are presented in Fig. H.1.We see that for certain tasks, the larger interpolation segments cause agent performance to decreasesignificantly — for example image-based agents on Stack D1decrease from 99.3% success to 68.7%success, and image based agents on Pick Place decrease from 50.7% to 11.3%. These results confirmthat the larger segments (together with the smaller dataset size) may have been responsible for lowerreal world performance. Developing better-quality interpolation segments that are both safe forreal-world operation and amenable to downstream policy learning is left for future work.Combining MimicGen with sim-to-real policy deployment methods [58–62, 95–98] is another ex-citing avenue for future work —simulation does not suffer from the same bottlenecks as real-worlddata collection (slow and time-consuming, requiring multiple arms and human supervisors to reset24the task), making simulation an ideal setting for MimicGen to generate large-scale diverse datasets.Recent sim2real efforts have been very promising — several works [61, 95–98] have been able totransfer policies trained via imitation learning from sim to real. Furthermore, MimicGen is entirelycomplementary to domain randomization techniques [99], which could also be applied to assist intransferring policies to the real world.Improved Performance with More Flexible Policy Models. One promising avenue to improvereal-world learning results is to develop and/or apply imitation learning algorithms that can betterdeal with multimodal and heterogeneous trajectories. We trained Diffusion Policy [100], a recentstate-of-the-art imitation learning model, on our real-world Stack dataset. The new agent achieveda success rate of 76% across 50 evaluations – a significant improvement over the 36% success rateachieved by BC-RNN. This result provides an optimistic outlook on producing capable agents fromreal-world MimicGen data.25I Different DemonstratorsTask D0 D1 D2Stack Three (Op. A, image) 92.7±1.9 86 .7±3.4 -Stack Three (Op. B, image) 86.0±0.0 69 .3±5.0 -Threading (Op. 
A, image) 98.0±1.6 60 .7±2.5 38 .0±3.3Threading (Op. B, image) 98.0±1.6 58 .0±4.3 38 .0±8.6Three Pc. Assembly (Op. A, image) 82.0±1.6 62 .7±2.5 13 .3±3.8Three Pc. Assembly (Op. B, image) 76.0±1.6 54 .7±6.8 5 .3±1.9Stack Three (Op. A, low-dim) 88.0±1.6 90 .7±0.9 -Stack Three (Op. B, low-dim) 82.7±0.9 84 .0±3.3 -Threading (Op. A, low-dim) 97.3±0.9 72 .0±1.6 60 .7±6.2Threading (Op. B, low-dim) 97.3±0.9 76 .0±4.3 70 .0±1.6Three Pc. Assembly (Op. A, low-dim) 74.7±3.8 61 .3±1.9 38 .7±4.1Three Pc. Assembly (Op. B, low-dim) 77.3±2.5 65 .3±7.4 46 .0±9.1Table I.1: MimicGen with Different Demonstrators. We show that policies trained on MimicGen data canachieve similar performance even when the source demonstrations come from different demonstrators. Oper-ator B used a different teleoperation device than Operator A, but policy training results on generated datasetsare comparable for both image-based and low-dim agents.Task D0 D1 D2Square (Better, image) 90.7±1.9 73 .3±3.4 49 .3±2.5Square (Okay, image) 90.0±1.6 64 .0±7.1 50 .0±2.8Square (Worse, image) 90.7±0.9 59 .3±2.5 45 .3±4.1Square (Better, low-dim) 98.0±1.6 80 .7±3.4 58 .7±1.9Square (Okay, low-dim) 95.3±0.9 82 .0±1.6 60 .7±1.9Square (Worse, low-dim) 95.3±0.9 76 .7±5.0 52 .7±1.9Table I.2: MimicGen with Lower Quality Demonstrators. We show that policies trained on MimicGen datacan achieve similar performance even when the source demonstrations come from lower quality demonstrators.We compare across source datasets from the ”Better”, ”Okay”, and ”Worse” subsets of the robomimic Square-MH dataset [7], which was collected by operators of different proficiency. Policy training results on generateddatasets are comparable for both image-based and low-dim agents.While most of our experiments use datasets from one particular operator, we show that Mimic-Gen can easily use demonstrations from different operators of mixed quality. We first collected 10source demonstrations from a different operator on the Stack Three, Threading, and Three PieceAssembly tasks — this operator also used a different teleoperation device (3D mouse [49, 101]).We also used 10 demonstrations from one of the “Okay” operators and one of the “Worse” opera-tors in the robomimic Square-MH dataset [7] to see if MimicGen could use lower-quality datasets.These source datasets were then provided to MimicGen to generate 1000 demonstrations for alltask variants, and subsequently train policies — the results are summarized in Table I.1 (differentdemonstrator with different teleoperation device) and Table I.2 (lower quality demonstrators).Interestingly, the operator using a different teleoperation interface produced policies that were ex-tremely similar in performance to our original results (deviations of 0% to 17%). Furthermore,the policies produced from the datasets generated with the “Worse” and “Okay” operator data arealso extremely similar in performance (deviations of 0% to 14%). This is quite surprising, as therobomimic study [7] found that there can be significant difficulty in learning from datasets producedby less experienced operators. Our results suggest that in the large data regime, the harmful ef-fects of low-quality data might be mitigated. 
This is an interesting finding that can inform futurework into learning from suboptimal human demonstrations [102–107].26J Motivation for MimicGen over Alternative MethodsIn this section, we expand on the motivation for using data generation with MimicGen over twoalternatives — replay-based imitation learning and offline data augmentation.J.1 Replay-Based Imitation LearningSeveral recent works learn policies using only a handful of demonstrations by replaying the demon-strations in new scenes [8–11, 46–48]. While these methods are promising, there are some limita-tions. One limitation is that their learned policy usually uses demonstration replay as a part of theiragent. This means that the policy is often composed of hybrid stages (such as a self-supervised net-work that learns to move the arm to configurations from which replay will be successful and a replaystage). By contrast, MimicGen uses a similar mechanism to generate datasets — this allows fullcompatibility with a wide spectrum of offline policy learning algorithms [57]. These datasets alsoallow for evaluating different design decisions (such as different observation spaces and learningmethods), including the potential for multi-task benchmarks consisting of high-quality human data.Furthermore, by easily allowing datasets to be created and curated, MimicGen can facilitate futurework to investigate how dataset composition can influence learned policy proficiency.Another limitation is that replay-based imitation methods are typically open-loop , since they consistof replaying a demonstration blindly (the trajectory executed by the robot cannot adapt to smallerrors). By contrast, agents trained on MimicGen datasets can have closed-loop , reactive behavior,since the agent can respond to changes in observations.Finally, as we saw in Sec. 6 (and Appendix P), in many cases, the data generation success rate (aproxy for the performance of replay-based methods) can be significantly lower than the performanceof trained agents (one reason for this might be because of only training the policy on the successfuldata generation attempts, and another might be due to agent generalization).J.2 Offline Data AugmentationSeveral works have used offline data augmentation to increase the dataset size for learning poli-cies [7, 35–45]. Since this process is offline, it can greatly increase the size of the dataset. In fact,this can be complementary to MimicGen— we leverage pixel shift randomization [7, 36–39] whentraining image-based agents on MimicGen data.However, because data augmentation is offline, it can be difficult to generate plausible interactionswithout prior knowledge of physics [35] or causal dependencies [41, 42], especially for new scenes,objects, or robots. Instead, MimicGen opts for generating new datasets through environment in-teraction by re-purposing existing human demonstrations — this automatically leads to physically-consistent data, since generation is online. In contrast to many offline data augmentation methods,MimicGen is easy to implement and apply in practice, since only a small number of assumptionsare needed (see Sec. 3).Similar to MimicGen, some recent works [43–45] have also shown an ability to create datasets withnew objects, but these works typically change distractor objects that are not involved in manipu-lation — this leads to encouraging behavioral invariances (e.g. tell the policy to apply the sameactions, even if the background and irrelevant objects are changed). 
By contrast, MimicGen gener-ates datasets with new objects that are a critical part of the manipulation task — it seeks to generatedata by adapting behavior to new contexts.27K Additional Details on Object-Centric SubtasksSubtasks FiguresStartMug Grasp(reference: mug)Mug Place(reference: machine)Pod Grasp(reference: pod)Pod Insert(reference: machine)Figure K.1: Illustrative Example of Object-Centric Subtasks. In this example, the robot must prepare a cupof coffee by placing the mug on the machine, and the coffee pod into the machine. This task is easily brokendown into a sequence of object-centric subtasks — this figure shows the end of each subtask, and the relevantobject for each subtask. There is a mug grasping subtask (motion relative to mug), a mug placement subtask(motion relative to machine), a pod grasping subtask (motion relative to pod), and a pod insertion subtask(motion relative to machine). The robot can solve this task by sequencing motions relative to each object frame(one per subtask).Object-centric subtasks (Assumption 2 in Sec. 3) are a key part of how MimicGen generates newdemonstrations. In this section, we provide more details on how they are defined, and how sub-task segments are parsed from the source demonstrations. We also show some examples to buildintuition.K.1 How Tasks can be broken up into Object-Centric SubtasksWe first restate Assumption 2 — we assume that tasks consist of a known sequence ofobject-centric subtasks . Let O={o1, ..., o K}be the set of objects in a task M. Asin Di Palo et al. [11], we assume that tasks consist of a sequence of object-centric subtasks(S1(oS1), S2(oS2), ..., S M(oSM)), where the manipulation in each subtask Si(oSi)is relative toa single object’s ( oSi∈ O) coordinate frame. We assume this sequence is known.Specifying the sequence of object-centric subtasks is generally easy and intuitive for a human to do.As a first example, consider the coffee preparation task shown in Fig. K.1 (and Fig. 2). A robot mustprepare a cup of coffee by grasping a mug, placing it on the coffee machine, grasping a coffee pod,inserting the pod into the machine, and closing the machine lid. This task can be broken down intoa sequence of object-centric subtasks: a mug-grasping subtask (motion is relative to mug), a mug-placement subtask (motion relative to machine), a pod-grasping subtask (motion relative to pod),and a final pod-insertion and lid-closing subtask (motion relative to machine). Consequently, therobot can solve this task by sequencing several object-centric motions together. This is the key ideabehind how MimicGen data generation works — it takes a set of source human demos, breaks themup into segments (where each segment solves a subtask), and then applies each subtask segment ina new scene. The subtasks are visualized in Fig. K.1.We also emphasize that a wide variety of tasks can be broken down into object-centric subtasks (e.g.Assumption 2 applies to a wide variety of tasks, especially those that are commonly considered inthe robot learning community). Fig. K.2 illustrates subtasks for some of our tasks (more discussionin Appendix K.3 below).K.2 Parsing the Source Dataset into Object-Centric Subtask SegmentsWe now provide more details on the parsing procedure described in Sec. 4.1. Recall that we wouldlike to parse every trajectory τin the source dataset into segments {τi}Mi=1, where each segment τicorresponds to a subtask Si(oSi). We assume access to metrics that allow the end of each subtaskto be detected automatically. 
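As a concrete, purely illustrative example of this assumption, a task's subtask structure can be thought of as an ordered list of subtasks, each tied to a single reference object and an end-detection metric. The class and the state-dictionary keys in the sketch below are hypothetical and are not the actual MimicGen interface.

```python
# Hypothetical representation of a known object-centric subtask sequence (Assumption 2)
# together with the end-detection metrics used only for parsing source demonstrations.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SubtaskSpec:
    name: str                              # human-readable subtask name
    reference_object: str                  # the single object the motion is relative to
    end_detector: Callable[[Dict], bool]   # env state -> "did this subtask just end?"

# Coffee task from Fig. K.1: four subtasks, each relative to exactly one object.
# The state keys ("mug_grasped", etc.) are placeholders for simulator checks.
COFFEE_SUBTASKS: List[SubtaskSpec] = [
    SubtaskSpec("mug_grasp",            "mug",     lambda s: s["mug_grasped"]),
    SubtaskSpec("mug_place",            "machine", lambda s: s["mug_on_machine"]),
    SubtaskSpec("pod_grasp",            "pod",     lambda s: s["pod_grasped"]),
    SubtaskSpec("pod_insert_and_close", "machine", lambda s: s["task_success"]),
]
```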
In our running example from Fig. 2, this would correspond to metricsthat use the state of the robot and objects to detect when the mug grasp, mug placement, pod grasp,and machine lid close occurs. This information is usually readily available in simulation, as itis often required for checking task success. With these metrics, we can easily run through theset of demonstrations, detect the end of each subtask sequentially, and use those as the subtask28StartGrasp(ref: nut)Insert(ref: peg)StartStartStartStartGrasp(ref: needle)Thread(ref: tripod)Grasp(ref: gear)Insert and Crank(ref: base)Grasp(ref: red cube)Place(ref: green cube)Place(ref: red cube)Grasp(ref: blue cube)Grasp(ref: piece 1)Insert(ref: base)Grasp(ref: piece 2)Insert(ref: piece 1)Three Piece AssemblyStack ThreeGear AssemblyThreadingSquareFigure K.2: Object-Centric Subtasks for Selected Tasks This figure shows the end of each object-centricsubtask (and the reference object) for a subset of the tasks in the main text. MimicGen assumes that this subtaskstructure is known for each task; however, specifying this subtask structure is generally easy and intuitive for ahuman.boundaries, to end up with every trajectory τ∈ D srcsplit into a contiguous sequence of segmentsτ= (τ1, τ2, ..., τ M), one per subtask.However, another alternative that requires no privileged information (and hence is suitablefor real world settings) is to have a human manually annotate the end of each subtask. As thenumber of source demonstrations is usually small, this is easy for a human operator to do, eitherwhile collecting each demonstration or annotating them afterwards. In this work, we opted for theformer method (automated subtask end metrics) because they were readily available for our tasks oreasy to craft.K.3 Specific ExamplesWe provide some examples in this section of how some tasks are broken up into object-centricsubtasks. The examples are provided in Fig. K.2. For each task below, we outline the object-centric subtasks, and the subtask end detection metrics used for parsing the source human demosinto segments that correspond to each subtask. Note that these metrics are only used for parsing thesource human demos and are not assumed to be available during policy execution.Square. There are 2 subtasks — grasping the nut (motion relative to nut) and inserting the nutonto the peg (motion relative to peg). To detect the end of the grasp subtask, we check for contactbetween the robot fingers and the nut. For the insertion subtask, we just use the task success check.Threading. There are 2 subtasks — grasping the needle (motion relative to needle) and threadingthe needle into the tripod (motion relative to tripod). To detect the end of the grasp subtask, wecheck for contact between the robot fingers and the needle. For the threading subtask, we just usethe task success check.29Gear Assembly. There are 2 subtasks — grasping the gear (motion relative to gear) and insertingthe gear into the base and turning the crank (motion relative to base). To detect the end of the graspsubtask, we check if the gear has been lifted by a threshold. For the insertion subtask, we just usethe task success check.Stack Three. 
There are 4 subtasks — grasping the red block (motion relative to red block), placingthe red block onto the green block (motion relative to green block), grasping the blue block (motionrelative to blue block), and placing the blue block onto the red block (motion relative to red block).To detect the end of each grasp subtask we check for contact between the robot fingers and therelevant block. For each place subtask, we check that the relevant block has been lifted and is incontact with the block that should be underneath it.Three Piece Assembly. There are 4 subtasks — grasping the first piece (motion relative to firstpiece), inserting the first piece into the base (motion relative to base), grasping the second piece(motion relative to second piece), and inserting the second piece onto the first piece (motion relativeto first piece). To detect the end of each grasp subtask, we check for contact between the robotfingers and the relevant piece. For each insertion subtask, we re-use the insertion check from thetask success check.30L Tasks and Task Variants(a) Stack (b) Stack Three (c) Square (d) Coffee(e) Threading (f) Three Pc. Assembly (g) Hammer Cleanup (h) Mug Cleanup(i) Pick Place (j) Nut Assembly (k) Kitchen (l) Coffee Preparation(m) Mobile Kitchen (n) Nut-Bolt Assembly (o) Gear Assembly (p) Frame AssemblyFigure L.1: Tasks (all). We show all of the simulation tasks in the figure above. They span a wide variety ofbehaviors including pick-and-place, precise insertion and articulation, and mobile manipulation, and includelong-horizon tasks requiring chaining several behaviors together.In this section, we provide more detailed descriptions of each of our tasks and task variants. Thetasks (Fig. L.1) and task variants (especially their reset distributions) are best appreciated on thewebsite ( https://mimicgen.github.io ). We group the tasks into categories as in Sec. 5and describe the goal, the variants, and the object-centric subtasks in each task. As mentioned inSec. 3 and Appendix. N.1, the tasks have a delta-pose action space (implemented with an OperationalSpace Controller [63]). Control happens at 20 hz.Basic. A basic set of box stacking tasks.•Stack [49] Stack a red block on a green one. Blocks are initialized in a small (0.16mx 0.16m) region ( D0) and a large (0.4m x 0.4m) region ( D1) with a random top-downrotation. There are 2 subtasks (grasp red block, place onto green). We also develop aversion of this task in the real-world (Fig. 5) , where the D0region is a 0.21m x 0.30m boxand the D1region is a 0.44m x 0.85m box.•Stack Three. Same as Stack, but additionally stack a blue block on the red one. Blocks areinitialized in a small (0.20m x 0.20m) region ( D0) and a large (0.4m x 0.4m) region ( D1)with a random top-down rotation. There are 4 subtasks (grasp red block, place onto green,grasp blue block, place onto red).Contact-Rich. A set of tasks that involve contact-rich behaviors such as insertion or drawer articu-lation. In each D0variant, at least one object never moves.•Square [7]. Pick a square nut and place on a peg. ( D0) Peg never moves, nut is placed insmall (0.005m x 0.115m) region with a random top-down rotation. ( D1) Peg and nut movein large regions, but peg rotation fixed. Peg is initialized in 0.4m x 0.4m box and nut isinitialized in 0.23m x 0.51m box. ( D2) Peg and nut move in larger regions (0.5m x 0.5mbox of initialization for both) and peg rotation also varies. There are 2 subtasks (grasp nut,place onto peg).•Threading [24]. 
Pick a needle and thread through a hole on a tripod. ( D0) Tripod is fixed,needle moves in modest region (0.15m x 0.1m box with 60 degrees of top-down rotationvariation). ( D1) Tripod and needle move in large regions on the left and right portions ofthe table respectively. The needle is initialized in a 0.25m x 0.1m box with 240 degrees31of top-down rotation variation and the tripod is initialized in a 0.25m x 0.1m box with 120degrees of top-down rotation variation. ( D2) Tripod and needle are initialized on the rightand left respectively (reversed from D1). The size of the regions is the same as D1. Thereare 2 subtasks (grasp needle, thread into tripod).•Coffee [24]. Pick a coffee pod, insert into coffee machine, and close the machine hinge.(D0) Machine never moves, pod moves in small (0.06m x 0.06m) box. ( D1) Machineand pod move in large regions on the left and right portions of the table respectively. Themachine is initialized in a 0.1m x 0.1m box with 90 degrees of top-down rotation variationand the pod is initialized in a 0.25m x 0.13m box. ( D2) Machine and pod are initializedon the right and left respectively (reversed from D1). The size of the regions is the sameasD1. We also develop a version of this task in the real-world (Fig. 5) – in D0, the podis initialized in a 0.05m vertical strip and in D1, the pod is initialized in a 0.44m x 0.35mbox. There are 2 subtasks (grasp pod, insert-into and close machine).•Three Piece Assembly. Pick one piece, insert it into the base, then pick the second piece,and insert into the first piece to assemble a structure. ( D0) base never moves, both piecesmove around base with fixed rotation in a 0.44m x 0.44m region. ( D1) All three piecesmove in workspace (0.44m x 0.44m region) with fixed rotation. ( D2) All three pieces canrotate (the base has 90 degrees of top-down rotation variation, and the two pieces have 180degrees of top-down rotation variation). There are 4 subtasks (grasp piece 1, place intobase, grasp piece 2, place into piece 2).•Hammer Cleanup [53]. Open drawer, pick hammer, and place into drawer, and closedrawer. ( D0) Drawer is fixed, and hammer initialized in a small 0.08m x 0.07m box with11 degrees of top-down rotation variation. ( D1) Drawer and hammer both move in largeregions. The drawer is initialized in a 0.2m x 0.1m box with 60 degrees of top-downrotation variation and the hammer is initialized in a 0.4m x 0.12m box with a random top-down rotation. There are 3 subtasks (open drawer, grasp hammer, place into drawer andclose).•Mug Cleanup. Similar to Hammer Cleanup but with a mug and with additional variants.(D0)The drawer does not move and the mug moves in a 0.3m x 0.15m box with a randomtop-down rotation. (D1)The mug moves in a 0.2m x 0.1m box with 60 degrees of top-down rotation variation and the mug is initialized in a 0.4m x 0.15m box with a randomtop-down rotation. (O1)A different mug is used. ( O2) On each task reset, one of 12 mugsis sampled. There are 3 subtasks as in Hammer Cleanup.Long-Horizon. A set of tasks that require chaining multiple behaviors together.•Kitchen [53]. Switch stove on, place pot onto stove, place bread into pot, place pot in frontof serving region and push it there, and turn off the stove. ( D0) The bread is initializedin a 0.03m x 0.06m region with fixed rotation and the pot is initialized in a 0.005m x0.02m region with 11 degrees of top-down rotation variation. The other items do not move.(D1) Bread, pot, stove, button, and serving region all move in wider regions. 
Bread: 0.2mx 0.2m box with 180 degree top-down rotation variation, pot: 0.1m x 0.15m box with60 degrees top-down rotation variation, stove: 0.17m x 0.1505m box with fixed rotation,button: 0.26m x 0.15m box with fixed rotation, serving region: 0.15m horizontal strip.There are 7 subtasks (turn stove on, grasp pot, place pot on stove, grasp bread, place breadin pot, serve pot onto serving region, and turn stove off).•Nut Assembly [49]. Similar to Square, but place both a square nut and round nut onto twodifferent pegs. ( D0) Each nut is initialized in a small box (0.005m x 0.115m region with arandom top-down rotation). There are 4 subtasks (grasp each nut and place onto each peg).•Pick Place [49]. Place four objects into four different bins. ( D0) Objects are initializedanywhere within the large box (0.29m x 0.39m). We use a slightly simpler version of thistask where the objects are initialized with top-down rotations between 0 and 90 degrees(instead of any top-down rotation). There are 8 subtasks (grasp each obejct and place intoeach bin).•Coffee Preparation. A full version of Coffee — load mug onto machine, open machine,retrieve coffee pod from drawer and insert into machine. ( T0) The mug moves in modest(0.15m x 0.15m) region with fixed top-down rotation and the pod inside the drawer moves32in a 0.06m x 0.08m region while the machine and drawer are fixed. ( T1) The mug isinitialized in a larger region (0.35m x 0.2m box with uniform top-down rotation) and themachine also moves in a modest region (0.1m x 0.05m box with 60 degrees of top-downrotation variation). There are 5 subtasks (grasp mug, place onto machine and open lid, opendrawer, grasp pod, insert into machine and close lid).Mobile Manipulation. Tasks involving mobile manipulation.•Mobile Kitchen. Set up frying pan, by retrieving a pan from counter and placing ontostove, followed by retrieving a carrot from sink and placing onto pan. ( D0) The pan startsin a 0.2m x 0.4m region in the center of the countertop (with 120 degrees of top-downrotation variation) and the carrot starts in a 0.1m x 0.1m region inside the sink (with 60degrees of rotation variation). There are three possible pans and three possible carrotssampled randomly for each episode. There are 4 subtasks (grasp gap, place pan, graspcarrot, place carrot). The latter three stages involve operating the mobile base.Factory. A set of high-precision tasks in Factory [51].•Nut-and-Bolt Assembly. Pick nut and align onto a bolt. ( D0) Nut and bolt are initialized inmodest regions of size 0.2m x 0.2m with no rotation variation. ( D1) Nut and bolt initializedanywhere in workspace (0.35m x 0.8m box) with fixed rotation. ( D2) Nut and bolt canrotate (180 degrees of top-down rotation variation). There are 2 subtasks (pick nut andplace onto bolt)•Gear Assembly. Pick a gear, insert it onto a shaft containing other gears, and turn thegear crank to move the other gears. ( D0) Base is fixed, and gear moves in modest region(0.1m x 0.1m with no rotation variation). ( D1) Base and gear move in larger regions (ofsize 0.3m x 0.3m) with fixed rotation. ( D2) Both move with rotations (180 degrees of top-down variation for the gear and 90 degrees of top-down variation for the base). There are 2subtasks (grasp gear, insert into base and crank).•Frame Assembly. Pick a picture frame border with 4 holes and insert onto a base with 4bolts rigidly attached. ( D0) Frame border and base move in small regions of size 0.1m x0.1m with fixed rotation. 
(D1) Frame border and base move in much larger regions of size 0.3m x 0.3m with fixed rotation. (D2) Both move with rotations (60 degrees of top-down variation for both). There are 2 subtasks (grasp frame border and insert into base).

M Derivation of Subtask Segment Transform
In this section, we provide a complete derivation of the source subtask segment transformation presented in Sec. 4.2. Recall that $T_{AB}$ denotes a homogeneous 4×4 matrix that represents the pose of frame $A$ with respect to frame $B$. We have chosen a source subtask segment consisting of target poses for the end effector controller (Assumption 1, Sec. 3), $\tau_i = (T_{C_0 W}, T_{C_1 W}, ..., T_{C_K W})$, where $C_t$ is the controller target pose frame at timestep $t$, $W$ is the world frame, and $K$ is the length of the segment.

We would like to transform $\tau_i$ according to the new pose of the corresponding object in the current scene (frame $O'_0$ with pose $T_{O'_0 W}$) so that the relative poses between the target pose frame and the object frame are preserved at each timestep ($T_{C'_t O'_0} = T_{C_t O_0}$). We can write $T_{C'_t O'_0} = (T_{O'_0 W})^{-1} T_{C'_t W}$ and $T_{C_t O_0} = (T_{O_0 W})^{-1} T_{C_t W}$. Setting them equal, we have
$$(T_{O'_0 W})^{-1} T_{C'_t W} = (T_{O_0 W})^{-1} T_{C_t W}.$$
Rearranging for $T_{C'_t W}$ by left-multiplying both sides by $T_{O'_0 W}$, we obtain
$$T_{C'_t W} = T_{O'_0 W} (T_{O_0 W})^{-1} T_{C_t W},$$
which is the equation we use to transform the source segment.

N Data Generation Details
In this section, we provide additional details on how MimicGen generates data. We first provide additional details about components of MimicGen that were not discussed in the main text. This includes further discussion on how MimicGen converts between delta-pose actions and controller target poses (Appendix N.1), more details on how interpolation segments are generated (Appendix N.2), an overview of different ways the reference segment can be selected (Appendix N.3), details on how transformed trajectories are executed with action noise (Appendix N.4), additional details on our pipeline for mobile manipulation tasks (Appendix N.5), and finally, a list of the data generation hyperparameters for each task (Appendix N.6).

N.1 Equivalence between delta-pose actions and controller target poses
We assume that the action space $\mathcal{A}$ consists of delta-pose commands for an end effector controller (Assumption 1, Sec. 3). As in [7], we assume that actions are 7-dimensional, where the first 3 components are the desired translation from the current end effector position, the next 3 components represent the desired delta rotation from the current end effector rotation, and the final component is the gripper open/close action. The delta rotation is represented in axis-angle form, where the magnitude of the 3-vector gives the angle, and the unit vector gives the axis. The robot controller converts the delta-pose action into an absolute pose target $T_{CW}$ by adding the delta translation to the current end effector position, and applying the delta rotation to the current end effector rotation. Consequently, at each timestep in a demonstration $\{s_t, a_t\}_{t=1}^T$, it is possible to convert each action $a_t$ to a controller pose target $T_{C_t W}$ by using the end effector pose at each timestep. MimicGen uses this to represent each segment in the source demonstration as a sequence of controller poses. MimicGen also uses this conversion to execute a new transformed segment during data generation: it converts the sequence of controller poses in the segment to a delta-pose action at each timestep during execution, using the current end effector position.

N.2 Details on Interpolation Segments
As mentioned in Sec.
4.2, MimicGen adds an interpolation segment at the start of each transformedsegment during data generation to interpolate from the current end effector pose TE′0Wand the start ofthe transformed segment TC′0W. There are two relevant hyperparameters for the interpolation segmentin each subtask segment — ninterp andnfixed. We first use simple linear interpolation between thetwo poses (linear in position, and spherical linear interpolation for rotation) to add ninterp interme-diate controller poses between TE′0WandTC′0W, and then we hold TC′0Wfixed for nfixedsteps. Theseintermediate poses are all added to the start of the transformed segment, and given to MimicGen toexecute one by one.N.3 Reference Segment SelectionRecall that MimicGen parses the source dataset into segments that correspond to each subtaskDsrc={(τj1, τj2, ..., τjM)}Nj=1(Sec. 4.1). During data generation, at the start of each subtask Si(oSi),MimicGen must choose a corresponding segment from the set {τji}Nj=1ofNsubtask segments inDsrc. It suffices to choose only one source demonstration j∈ {1,2...., N}since this uniquely iden-tifies the subtask segment for the current subtask. We discuss some variants of how this selectionoccurs.Selection Frequency. As presented in the main text (Fig. 2), MimicGen can select a source demon-stration j(and corresponding segment) at the start of each subtask. However, in many cases, thiscan be undesirable, since different demonstrations might have used different strategies that are in-compatible with each other. As an example, two demonstrations might have different object graspsfor the mug in Fig. 2 — each grasp might require a different placement strategy. Consequently,we introduce a hyperparameter, per-subtask , which can toggle this behavior — if it is set to False,MimicGen chooses a single source demonstration jat the start of a data generation episode andholds it fixed (so all source subtask segments are from the same demonstration, (τj1, τj2, ..., τjM)).35Task normal no noise replay w/ noiseSquare ( D0) (DGR) 73.7 80 .5 88 .1Square ( D1) (DGR) 48.9 50 .7 -Square ( D2) (DGR) 31.8 33 .4 -Threading ( D0) (DGR) 51.0 84 .5 53 .8Threading ( D1) (DGR) 39.2 50 .8 -Threading ( D2) (DGR) 21.6 27 .3 -Square ( D0) (SR, image) 90.7±1.9 72 .0±3.3 42 .0±1.6Square ( D1) (SR, image) 73.3±3.4 56 .7±0.9 -Square ( D2) (SR, image) 49.3±2.5 42 .7±6.6 -Threading ( D0) (SR, image) 98.0±1.6 59 .3±6.8 74 .0±3.3Threading ( D1) (SR, image) 60.7±2.5 43 .3±9.3 -Threading ( D2) (SR, image) 38.0±3.3 22 .7±0.9 -Square ( D0) (SR, low-dim) 98.0±1.6 82 .0±1.6 60 .7±3.4Square ( D1) (SR, low-dim) 80.7±3.4 70 .0±1.6 -Square ( D2) (SR, low-dim) 58.7±1.9 55 .3±1.9 -Threading ( D0) (SR, low-dim) 97.3±0.9 69 .3±0.9 34 .7±6.6Threading ( D1) (SR, low-dim) 72.0±1.6 56 .7±5.0 -Threading ( D2) (SR, low-dim) 60.7±6.2 46 .0±7.5 -Table N.1: Effect of Action Noise. MimicGen adds Gaussian noise to actions when executing transformedsegments during data generation. These results show that removing the noise can increase the data generationrate (as expected), but can cause agent performance to decrease significantly. They also show that just replayingthe same task instances from the source human data with action noise is not sufficient (although it does improveresults over just using the source human data).Theper-subtask hyperparameter determines how frequently source demonstration selection occurs— we next discuss strategies for actually selecting the source demonstration.Selection Strategy. 
We now turn to how the source demonstration jis selected. We found randomselection to be a simple and effective strategy in many cases — here, we simply select the sourcedemonstration juniformly at random from {1,2...., N}. We used this strategy for most of ourtasks. However, we found some tasks benefit from a nearest-neighbor selection strategy. Considerselecting a source demonstration segment for subtask Si(oSi). We compare the pose TO′0Wof objectoSiin the current scene with the initial object pose TO0Wat the start of each source demonstrationsegment τji, and sort the demonstrations (ascending) according to the pose distance (to evaluatethe pose distance for each demonstration segment, we sum the L2position distance with the anglevalue of the delta rotation (in axis-angle form) between the two object rotations). We then select ademonstration uniformly at random from the first nnkmembers of the sorted list.N.4 Action NoiseWhen MimicGen executes a transformed segment during data generation, it converts the sequence oftarget poses into delta-pose actions atat each timestep. We found it beneficial to apply additive noiseto these actions — we apply Gaussian noise N(0,1)with magnitude σin each dimension (excludinggripper actuation). To showcase the value of including the noise we ran an ablation experiment(presented in Table N.1) that shows how much data generation success rate and agent performancechanges when the datasets are not generated with action noise during execution (compared to ourdefault value of σ= 0.05).As expected, the data generation success rate increases when using no noise, as noise can cause theend effector motion to deviate from the expected subtask segment that is being followed (the mostsignificant example is an increase of 33% on Threading D0). However, agent performance suffers,with performance drops as large as 30% on agents trained on low-dim observations, and up to 40%on agents trained on image observations.Another natural question is whether the benefits of MimicGen come purely from action noise in-jection. To investigate this, we also ran a comparison (“replay w/ noise” in Table N.1) where wetook the 10 source demos, and replayed them with the same level of action noise (0.05) used inour experiments until we collected 1000 successful demonstrations. We selected a random source36Task normal no NN no per-subtask no NN + no per-subtaskSquare ( D0) (DGR) 73.7 36 .7 - -Square ( D1) (DGR) 48.9 30 .6 - -Square ( D2) (DGR) 31.8 22 .4 - -Nut Assembly ( D0) (DGR) 50.0 27 .1 - -Stack ( D0) (DGR) 94.3 - 85.1 71 .6Stack ( D1) (DGR) 90.0 - 76.3 63 .3Stack Three ( D0) (DGR) 71.3 - 37.8 26 .7Stack Three ( D1) (DGR) 68.9 - 36.0 27 .5Pick Place ( D0) (DGR) 32.7 - 30.8 29 .7Square ( D0) (SR, low-dim) 98.0±1.6 94 .7±2.5 - -Square ( D1) (SR, low-dim) 80.7±3.4 79 .3±2.5 - -Square ( D2) (SR, low-dim) 58.7±1.9 57 .3±0.9 - -Nut Assembly ( D0) (SR, low-dim) 76.0±1.6 64 .7±5.7 - -Stack ( D0) (SR, low-dim) 100.0±0.0 - 99.3±0.9 99 .3±0.9Stack ( D1) (SR, low-dim) 100.0±0.0 - 100.0±0.0 99 .3±0.9Stack Three ( D0) (SR, low-dim) 88.0±1.6 - 84.0±1.6 81 .3±2.5Stack Three ( D1) (SR, low-dim) 90.7±0.9 - 78.7±2.5 83 .3±0.9Pick Place ( D0) (SR, low-dim) 58.7±7.5 - 52.0±3.3 56 .0±5.9Table N.2: Effect of Removing Selection Strategy. Some of our tasks used a nearest-neighbor selectionstrategy and a per-subtask selection strategy for source demonstration segments. These results show the effectof removing these selection strategies (e.g. using the default, random selection strategy). 
Interestingly, whilethe data generation rates decrease significantly, agent performance does not decrease significantly for mosttasks.demonstration at the start of each trial and reset the simulator state to its initial state before collec-tion.This comparison shows the value of using MimicGen to transform and interpolate source humansegments to collect data on new configurations, instead of purely using replay with noise on thesame configurations from the source data. Comparing the “replay w/ noise” column of Table N.1 toFig. 4, we see that there is an appreciable increase in the success rate on D0compared to just usingthe 10 source demos (Square increases from 11.3 to 42.0, and Threading increases from 19.3 to74.0), but training on the MimicGen dataset still achieves better performance on D0(Square: 90.7,Threading: 98).N.5 Data Generation for Mobile Manipulation TasksThe process of transforming source segments differs slightly for mobile manipulation tasks. Asource segment may or may not contain mobile base actions. If the segment does not contain mobilebase actions we generate segments in the same manner as our method for manipulator-only environ-ments. If a segment does contain mobile base actions we assume that the segment can be split intothree contiguous sub-segments: (1) a sub-segment involving manipulator actions, (2) a subsequentsub-segment involving mobile base actions, and (3) a final sub-segment involving manipulator ac-tions. We generate corresponding sub-segments for each of these phases. We generate sub-segmentsfor (1) and (3) in the same manner as our algorithm for manipulator-only environments, and we gen-erate sub-segment (2) by simply copying the mobile base actions from the reference sub-segment.We found this scheme to work sufficiently well for the mobile manipulation task in this work, butfuture work improve the generation of sub-segment (2) (the robot base movement) to account fordifferent environment layouts in a scene, by defining and using a reference frame for each basemotion segment, like the object-centric subtasks used for arm actions, and/or integrating a motionplanner for the base. We highlight the limitations of our approach in Appendix D.N.6 MimicGen HyperparametersIn this section, we summarize the data generation hyperparameters (defined above) used for eachtask. As several tasks had the same settings, we group tasks together wherever possible.Default. Most of our tasks used a noise scale of σ= 0.05, interpolation steps of ninterp = 5,nfixed= 0, and a selection strategy of random with per-subtask set to False. These tasks include37Threading, Coffee, Three Piece Assembly, Hammer Cleanup, Mug Cleanup, Kitchen, Coffee Prepa-ration, Mobile Kitchen, Nut-and-Bolt Assembly, Gear Assembly, and Frame Assembly.Nearest-Neighbor and Per-Subtask. Some of our tasks used the default values above, with the ex-ception of using a nearest-neighbor selection strategy. The following tasks used nearest-neighbor(nnk= 3) with per-subtask set to False: Square and Nut Assembly. Some tasks used nearest-neighbor (nnk= 3) with per-subtask set to True: Stack, Stack Three, Pick Place. In general, wefound per-subtask selection to help for pick-and-place tasks. To showcase the value of using thesespecific selection strategies, we ran an ablation experiment (presented in Table N.2) that shows howmuch data generation success rate and agent performance changes when turning these strategies offduring data generation. 
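For reference, the nearest-neighbor selection strategy being ablated here (described in Appendix N.3: sort source demonstrations by a pose distance that sums the L2 position distance with the axis-angle magnitude of the relative rotation, then pick uniformly among the closest n_nk) could be sketched roughly as below. The function names and the 4x4-matrix pose representation are illustrative assumptions, not MimicGen's actual code.

```python
import numpy as np

def pose_distance(T_a, T_b):
    """Sum of the L2 position distance and the angle of the relative rotation
    (axis-angle magnitude) between two 4x4 object poses."""
    pos_dist = np.linalg.norm(T_a[:3, 3] - T_b[:3, 3])
    R_rel = T_a[:3, :3].T @ T_b[:3, :3]
    # Rotation angle recovered from the trace of the relative rotation matrix.
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return pos_dist + np.arccos(cos_angle)

def select_source_demo(T_obj_current, source_obj_poses, n_nk=3, rng=None):
    """Sort source demos (ascending) by pose distance to the current object
    pose and pick uniformly at random among the n_nk closest."""
    rng = np.random.default_rng() if rng is None else rng
    dists = [pose_distance(T_obj_current, T) for T in source_obj_poses]
    nearest = np.argsort(dists)[:n_nk]
    return int(rng.choice(nearest))
```

Note that with n_nk equal to the total number of source demonstrations this reduces to the default random selection strategy.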
Interestingly, while the data generation rates decrease significantly, agentperformance does not decrease significantly for most tasks.Real. Our real robot tasks used different settings for safety considerations, and to ensure that datacould be collected in a timely manner (maintain high data generation rate). All tasks used a reducednoise scale of σ= 0.02, and higher interpolation steps of ninterp = 25 ,nfixed= 25 . The Stacktask used a selection strategy of nearest-neighbor (nnk= 3) with per-subtask set to True, andthe Coffee task used a selection strategy of random with per-subtask set to False, just like theirsimulation counterparts.38O Policy Training DetailsWe describe details of how policies were trained via imitation learning. Several design choices arethe same as the robomimic study [7].Observation Spaces. As in robomimic [7], we train policies on two observation spaces — “low-dim” and “image”. While both include end effector poses and gripper finger positions, “low-dim”includes ground-truth object poses, while “image” includes camera observations from a front-viewcamera and a wrist-view camera. All tasks use images with 84x84 resolution with the exception ofthe real world tasks (Stack, Coffee), which use an increased resolution of 120x160. For “image”agents, we apply pixel shift randomization [7, 36–39] and shift image pixels by up to 10% of eachdimension each time observations are provided to the agent.Training Hyperparameters. We use BC-RNN from robomimic [7] with the default hyperparam-eters reported in their study, with the exception of an increased learning rate (1e-3 instead of 1e-4)for policies trained on low-dim observations, as we found it to speed up policy convergence on largedatasets.Policy Evaluation. As in [7], on simulation tasks, we evaluate policies using 50 rollouts per agentcheckpoint during training, and report the maximum success rate achieved by each agent across 3seeds. On the real world tasks, due to the time-consuming nature of policy evaluation, we take thelast policy checkpoint produced during training, and evaluate it over 50 episodes.Hardware. Each data generation run and training run used a machine (on a compute cluster) withan NVIDIA V olta V100 GPU, 8 CPUs, 32GB of memory, and 128GB of disk space. In certaincases, we batched multiple data generation runs and training runs on the same machine (usually 2to 4 runs). Real robot experiments were carried out on a machine with an NVIDIA GeForce RTX3090 GPU, 36 CPUs, 32GB of memory, and 1 TB of storage.39P Data Generation Success RatesIn this section, we present data generation success rates for each of our generated datasets. Com-paring the results in Table P.1 with our core image-based agent results (Fig. 4) and low-dim agentresults (Table Q.1), we see that in many cases the agent performance is much higher than the datageneration success rate. An extreme example is the Gear Assembly task which has data generationrates of 46.9%(D0),8.2%(D1), and 7.1%(D2) but policy success rates of 92.7%(D0),76.0%(D1), and 64.0%(D2). We also saw much higher agent performance than the data generation ratein our robot transfer experiment (see Appendix F).Task D0D1D2Stack 94.3 90 .0 -Stack Three 71.3 68 .9 -Square 73.7 48 .9 31 .8Threading 51.0 39 .2 21 .6Coffee 78.2 63 .5 27 .7Three Pc. 
Assembly 35.6 35.5 31.3
Hammer Cleanup 47.6 20.4 -
Mug Cleanup 29.5 17.0 -
Kitchen 100.0 42.7 -
Nut Assembly 50.0 - -
Pick Place 32.7 - -
Coffee Preparation 53.2 36.1 -
Mobile Kitchen 20.7 - -
Nut-and-Bolt Assembly 66.0 59.4 47.6
Gear Assembly 46.9 8.2 7.1
Frame Assembly 45.3 32.7 28.9
Table P.1: Data Generation Rates. For each task that we generated data for, we report the data generation rate (DGR) — which is the success rate of the data generation process (recall that not all data generation attempts are successful, and MimicGen only keeps the attempts that result in task success). Comparing with Table Q.1 and Fig. 4, we can see that several tasks have significantly higher policy learning performance than data generation rates.

Q Low-Dim Policy Training Results
In the main text we focused on image observation spaces. In this section we present full results for agents trained on low-dim observation spaces and show that these agents are equally performant. Results on our main generated datasets are shown in Table Q.1 (and can be compared to the image-based agent results in Fig. 4), and the source dataset size comparison and policy training data comparisons are shown in Fig. Q.1 (and can be compared to Fig. 4).
Task Source D0 D1 D2
Stack 38.7±4.1 100.0±0.0 100.0±0.0 -
Stack Three 2.7±0.9 88.0±1.6 90.7±0.9 -
Square 18.7±0.9 98.0±1.6 80.7±3.4 58.7±1.9
Threading 9.3±2.5 97.3±0.9 72.0±1.6 60.7±6.2
Coffee 42.7±4.1 100.0±0.0 93.3±2.5 76.7±0.9
Three Pc. Assembly 2.7±0.9 74.7±3.8 61.3±1.9 38.7±4.1
Hammer Cleanup 64.7±4.1 100.0±0.0 74.0±1.6 -
Mug Cleanup 8.0±1.6 82.0±2.8 54.7±5.0 -
Kitchen 43.3±3.4 100.0±0.0 78.0±2.8 -
Nut Assembly 0.0±0.0 76.0±1.6 - -
Pick Place 0.0±0.0 58.7±7.5 - -
Coffee Preparation 2.0±0.0 76.0±5.7 59.3±3.4 -
Mobile Kitchen 6.7±3.8 76.7±10.5 - -
Nut-and-Bolt Assembly 2.0±0.0 98.0±1.6 96.0±1.6 81.3±3.8
Gear Assembly 12.0±1.6 92.7±1.9 76.0±4.9 64.0±3.3
Frame Assembly 9.3±3.4 87.3±2.5 70.7±1.9 58.0±5.7
Table Q.1: Low-Dim Agent Performance on Source and Generated Datasets. For each task, we present the success rates (3 seeds) of low-dim agents trained with BC on the 10 source demos and on each MimicGen dataset (1000 demos for each reset distribution). There is a large improvement across all tasks on the default distribution (D0) and agents are performant on the broader distributions (D1, D2).
[Figure Q.1 (two bar plots of success rate, low-dim agents): left panel "Source Dataset Size Comparison" with 1/10/50/200 source demos; right panel "Policy Training Data Comparison" with 200 human demos vs. 200/1000/5000 MimicGen demos.]
Figure Q.1: (left) MimicGen with more source human demonstrations. We found that using larger source datasets to generate MimicGen data did not result in significant low-dim agent improvement. (right) Policy Training Dataset Comparison. We compare agents trained on 200 MimicGen demos to 200 human demos — remarkably, the performance is similar, despite MimicGen only using 10 source human demos. MimicGen can also produce improved low-dim agents by generating datasets — we show a comparison between 200, 1000, and 5000 above. However, there can be diminishing returns.

R Bias and Artifacts in Generated Data
In this section, we discuss some undesirable properties of the generated data.
Are datasets generated by MimicGen biased towards certain scene configurations?
This isa natural question to ask, since MimicGen keeps trying to re-use the same small set of humandemonstrations on new scenes and only retains the successful traces. Indeed, there might be a limitedset of scene configurations where data generation works successfully, and some scene configurationsthat are never included in the generated data. We conduct an initial investigation into whether suchbias exists by analyzing the set of initial states in a subset of our generated datasets. Specifically, wetake inspiration from [79], and discretize the set of possible object placements for each object in eachtask into bins. Then, we simply maintain bin counts by taking the initial object placements for eachepisode in a generated dataset, computing the bin it belongs to, and updating the bin count. Finally,we estimate the support coverage of the reset distribution by counting the number of non-zero binsand dividing by the total number of bins.As a concrete example, consider the Threading D1variant, where the needle and tripod are bothsampled from a region with bounds in x,yandθ, where θis a top-down rotation angle (see Fig. 5).If each dimension is discretized into nindependent bins, there are a total of n6bins (all combinationsof the dimensions). Due to this exponential scaling, we use a small number of bins ( n= 3). Notethat when conducting this analysis, we had to be careful to ensure that the overall bin count was nottoo small or too large. If it was too small, each bin would correspond to a large section of the objectconfiguration space, and the results would not be meaningful. Similarly, if it was too large, there isno way for 1000 generated demonstrations to cover a meaningful portion of the support (since therecan only be 1000 bins covered at best).We now present our results. For several environments, we found there to be a good amount of sup-port coverage — for example, Coffee D1(98.8%), Coffee D2(89.3%), and Square D1(92.6%).However, we also found datasets that likely have significant amounts of bias — for example, SquareD2(66.4%), Threading D1(71%), Threading D2(61.2%), Three Piece Assembly D0(67.9%),Three Piece Assembly D1(43.5%), and Mug Cleanup D1(64%). This analysis is certainly im-perfect, as some datasets could still be biased towards containing certain object configurations thanothers (e.g. having non-uniform bin counts across the support), and there could also be differentkinds of bias (such as repetitive motions). However, this analysis does confirm that there is certainlybias in some of the generated datasets. A deeper investigation into the properties of the generateddata is left for future work.Are there artifacts and other undesirable behavior characteristics in MimicGen datasets? Ar-tifacts and other undesirable behavior characteristics are likely, for two reasons. One reason isthat MimicGen bridges transformed segments from the source dataset with interpolation segments.These interpolation segments could result in long paths and unnatural motions that are difficult toimitation. In fact, we found some evidence of this fact (see Appendix H). Another reason is thatMimicGen only checks for a successful task completion when deciding whether to accept a gen-erated trajectory. This means that there might be undesirable behaviors such as collisions betweenthe robot and certain parts of the world (including objects that are not task-relevant). 
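To make the bin-counting analysis above concrete, a minimal sketch of the support-coverage estimate is given below; the array layout, bounds, and function names are assumptions for illustration rather than the exact analysis code.

```python
import numpy as np

def support_coverage(initial_placements, bounds, bins_per_dim=3):
    """Estimate reset-distribution support coverage of a generated dataset.

    initial_placements: (num_episodes, num_dims) array of initial object
        placements, e.g. columns (x, y, theta) for each sampled object.
    bounds: list of (low, high) per dimension, taken from the task's reset
        distribution.
    Returns the fraction of discretized joint bins containing >= 1 episode.
    """
    placements = np.asarray(initial_placements, dtype=float)
    num_dims = placements.shape[1]

    # Map each dimension to a bin index in [0, bins_per_dim - 1].
    bin_indices = np.empty_like(placements, dtype=int)
    for d, (lo, hi) in enumerate(bounds):
        edges = np.linspace(lo, hi, bins_per_dim + 1)
        # np.digitize returns 1..bins_per_dim for values inside the range.
        bin_indices[:, d] = np.clip(
            np.digitize(placements[:, d], edges) - 1, 0, bins_per_dim - 1)

    # Count occupied joint bins (all combinations of per-dimension bins).
    occupied = {tuple(row) for row in bin_indices}
    return len(occupied) / (bins_per_dim ** num_dims)
```

With bins_per_dim = 3 and six placement dimensions (x, y, theta for two objects) this reproduces the 3^6 = 729-bin discretization used for Threading D1 above.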
As we movetowards deploying robots trained through imitation learning, data curation efforts are of the utmostimportance — this is left for future work.42S Using More Varied Source DemonstrationsTask Source D0 D1 D2Square (src D0) (DGR) - 73.7 48 .9 31 .8Square (src D2) (DGR) - 54.4 51 .7 52 .3Three Piece Assembly (src D0) (DGR) - 35.6 35 .5 31 .3Three Piece Assembly (src D2) (DGR) - 26.9 29 .1 23 .9Square (src D0) (SR, low-dim) 18.7±0.9 98 .0±1.6 80 .7±3.4 58 .7±1.9Square (src D2) (SR, low-dim) 2.0±0.0 98 .0±1.6 84 .7±1.9 60 .7±2.5Three Piece Assembly (src D0) (SR, low-dim) 2.7±0.9 74 .7±3.8 61 .3±1.9 38 .7±4.1Three Piece Assembly (src D2) (SR, low-dim) 0.0±0.0 62 .0±4.9 57 .3±4.1 32 .0±2.8Table S.1: Using More Varied Source Demonstrations. We present a comparison of data generation successrates and policy success rates (3 seeds) across two choices of source datasets — the 10 source human demon-strations collected on D0(default used in main experiments) and 10 source human demonstrations collectedon the significantly more diverse D2reset distribution. Interestingly, while the data generation success ratesdiffer, the policy success rates are comparable, suggesting that downstream agent performance can be invariantto how much the task initializations of the source demonstrations vary.Most of our experiments used 10 source human demonstrations collected on a narrow reset distri-bution ( D0) and generated demonstrations with MimicGen across significantly more varied resetdistributions ( D0,D1,D2). In this section, we investigate whether having source demonstrationscollected on a more varied set of task initializations is helpful. We do this by collecting 10 sourcehuman demonstrations on D2and using it to generate data for all reset distributions ( D0,D1,D2).The results are presented in Table S.1. Interestingly, while the data generation success rates dif-fer, the policy success rates are comparable, suggesting that downstream agent performance can beinvariant to how much the task initializations of the source demonstrations vary.43T Data Generation with Multiple SeedsMimicGen’s data generation process has several sources of randomness, including the initial state ofobjects for each data generation attempt (which is sampled from the reset distribution D), selectingthe source dataset segment that will be transformed (Appendix N.3), and the noise added to actionsduring execution (Appendix N.4). In all of our experiments, we only used a single seed to generatedatasets (our policy learning results are reported across 3 seeds though). In this section, we justifythis decision, by showing that there is very little variance in empirical results across different datageneration seeds.We generated 3 datasets (3 different seeds) for Stack Three ( D0,D1) and Square ( D0,D1,D2),and train low-dim policies (3 seeds per generated results, so 9 seeds in total per task variant) andsummarize the results in Table T.1. The data generation success rates have very tight variance (lessthan 1%) and do not deviate from our reported data generation rates (Appendix P) by more than0.6%. Furthermore, the mean policy success rates are extremely close to our reported results forlow-dim agents in Table Q.1 (less than 2% deviation).Task D0 D1 D2Stack Three (DGR) 71.7±0.3 69 .3±0.4 -Square (DGR) 74.4±0.5 48 .5±0.7 32 .0±0.9Stack Three (SR) 89.6±2.1 92 .4±1.6 -Square (SR) 96.7±2.1 81 .6±4.5 58 .0±3.5Table T.1: Data Generation with Multiple Seeds. 
We present data generation rates (DGR) and success rates(SR) across 3 seeds of data generation, and 3 low-dim policy training seeds per dataset (9 seeds) total. Theresults are very close to our reported results (less than 0.6% deviation in DGR, less than 2% deviation in SR)despite our results only generating datasets with one seed.44U Tolerance to Pose Estimation ErrorIn the main text, we demonstrated that MimicGen is fully functional in real-world settings and canoperate with minimal assumptions (e.g. no special tags or pose trackers) by using pose estimationmethods (see Appendix H for details). Consequently, the data generation process has some toleranceto pose error and can operate without having access to perfect pose estimates. In this section, wefurther investigate this tolerance in simulation by adding 2 levels of uniform noise to object poses- L1 is 5 mm position and 5 deg rotation noise and L2 is 10 mm position and 10 deg rotationnoise [108]. As shown in Table U.1, the data generation rate decreases (e.g. Square D0 decreasesfrom 73.7% to 60.9% for L1 and 30.5% for L2 and Square D2 decreases from 31.8% to 25.1%for L1 and 14.5% for L2), but visuomotor policy learning results are relatively robust (Square D0decreases from 90.7% to 89.3% for L1 and 84.7% for L2, and Square D2 decreases from 49.3% to47.3% for L1 and 39.3% for L2).Task None Level 1 (5 mm / 5 deg) Level 2 (10 mm / 10 deg)Stack Three ( D1) (DGR) 68.9 62 .3 38 .7Stack Three ( D1) (SR) 86.7±3.4 84 .0±2.8 80 .7±3.4Square ( D0) (DGR) 73.7 60 .9 30 .5Square ( D1) (DGR) 48.9 40 .2 20 .2Square ( D2) (DGR) 31.8 25 .1 14 .5Square ( D0) (SR) 90.7±1.9 89 .3±2.5 84 .7±2.5Square ( D1) (SR) 73.3±3.4 64 .0±1.6 62 .0±1.6Square ( D2) (SR) 49.3±2.5 47 .3±6.8 39 .3±4.7Coffee ( D0) (DGR) 78.2 28 .9 5 .6Coffee ( D1) (DGR) 63.5 22 .6 4 .3Coffee ( D0) (SR) 100.0±0.0 95 .3±2.5 79 .3±0.9Coffee ( D1) (SR) 90.7±2.5 83 .3±2.5 77 .3±4.1Threading ( D0) (DGR) 51.0 17 .6 5 .2Threading ( D0) (SR) 98.0±1.6 94 .7±0.9 86 .7±1.9Table U.1: Tolerance to Noisy Pose Estimates. We investigate how the data generation success rates (DGR)and visuomotor policy success rates (SR) change when adding uniform pose noise to the object poses in thesource demonstrations and the new scene during data generation. Although the data generation rates decrease,policy success rates are robust. This shows that MimicGen can be tolerant to noisy object pose estimation, andis suitable for real-world data collection.45 |
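As a rough illustration of this stress test, one could perturb each estimated object pose with uniform position and rotation noise before applying the subtask segment transform from Appendix M, along the lines of the sketch below. The helper names are hypothetical and the rotation-noise parameterization (random axis, uniform angle) is an assumption; the exact scheme of [108] may differ.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def perturb_pose(T_obj_world, pos_noise_m=0.005, rot_noise_deg=5.0, rng=None):
    """Apply uniform position/rotation noise to a 4x4 object pose estimate.
    Mirrors the two noise levels studied above: L1 = (0.005 m, 5 deg),
    L2 = (0.010 m, 10 deg)."""
    rng = np.random.default_rng() if rng is None else rng
    T_noisy = T_obj_world.copy()

    # Uniform translation noise along each axis.
    T_noisy[:3, 3] += rng.uniform(-pos_noise_m, pos_noise_m, size=3)

    # Uniform rotation noise about a random axis.
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(-rot_noise_deg, rot_noise_deg))
    T_noisy[:3, :3] = R.from_rotvec(angle * axis).as_matrix() @ T_noisy[:3, :3]
    return T_noisy

def transform_segment(controller_poses_world, T_obj_src, T_obj_new):
    """Re-target a source subtask segment to a new (possibly noisy) object
    pose, following T^{C'_t}_W = T^{O'_0}_W (T^{O_0}_W)^{-1} T^{C_t}_W."""
    M = T_obj_new @ np.linalg.inv(T_obj_src)
    return [M @ T for T in controller_poses_world]
```

Applying perturb_pose to both the source object pose and the current object pose before calling transform_segment mimics the setting studied in Table U.1, where pose error enters both the source demonstrations and the new scene.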
32c8pl84_uD | Marginalized Importance Sampling forOff-Environment Policy EvaluationPulkit KatdareDepartment of Electrical and Computer EngineeringUniversity of Illinois Urbana-Champaign Illinois, United Stateskatdare2@illinois.eduNan JiangDepartment of Computer ScienceUniversity of Illinois Urbana-Champaign Illinois, United StatesKatherine Driggs-CampbellDepartment of Electrical and Computer EngineeringUniversity of Illinois Urbana-Champaign Illinois, United StatesAbstract: Reinforcement Learning (RL) methods are typically sample-inefficient,making it challenging to train and deploy RL-policies in real world robots. Evena robust policy trained in simulation requires a real-world deployment to assesstheir performance. This paper proposes a new approach to evaluate the real-worldperformance of agent policies prior to deploying them in the real world. Ourapproach incorporates a simulator along with real-world offline data to evaluatethe performance of any policy using the framework of Marginalized ImportanceSampling (MIS). Existing MIS methods face two challenges: (1) large densityratios that deviate from a reasonable range and (2) indirect supervision, wherethe ratio needs to be inferred indirectly, thus exacerbating estimation error. Ourapproach addresses these challenges by introducing the target policy’s occupancyin the simulator as an intermediate variable and learning the density ratio as theproduct of two terms that can be learned separately. The first term is learnedwith direct supervision and the second term has a small magnitude, thus makingit computationally efficient. We analyze the sample complexity as well as errorpropagation of our two step-procedure. Furthermore, we empirically evaluate ourapproach on Sim2Sim environments such as Cartpole, Reacher, and Half-Cheetah.Our results show that our method generalizes well across a variety of Sim2Simgap, target policies and offline data collection policies. We also demonstrate theperformance of our algorithm on a Sim2Real task of validating the performanceof a 7 DoF robotic arm using offline data along with the Gazebo simulator.Keywords: Sim2Real, Policy Evaluation, Robot Validation1 IntroductionReinforcement Learning (RL) algorithms aim to select actions that maximize the cumulative returnsover a finite time horizon. In recent years, RL has shown state-of-the-art performance over a rangeof complex tasks such as chatbots, [1], games [2], and robotics [3, 4, 5, 6]. However, RL algorithmsstill require a large number of samples, which can limit their practical use in robotics [7, 8].A typical approach is to train robust robot policies in simulation and then deploy them on therobot [9, 10]. Such an approach, although useful, does not guarantee optimal performance on thereal robot without significant fine tuning [11]. In this work, we propose an approach that evaluates7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.the real world performance of a policy using a robot simulator and offline data collected from thereal-world [6]. To achieve this, we employ the framework of off-policy evaluation (OPE) [12].OPE is the problem of using offline data collected from a possibly unknown behavior policy toestimate the performance of a different target policy. Classical OPE methods are based on theprinciple of importance sampling (IS) [13, 14, 15], which reweights each trajectory by its densityratio under the target versus the behavior policies. 
More recently, significant progress has been madeonmarginalized importance sampling (MIS), which reweights transition tuples using the densityratio (or importance weights) over states instead of trajectories to overcome the so-called curse ofhorizon [16, 17, 18]. The density ratio is often learned via optimizing minimax loss functions.Most existing MIS methods are model-free, relying on data from the real environment to approx-imate the MIS weight function. However, a direct application of MIS methods to robotics carriestwo main disadvantages. (1) Large ratios: MIS method learns distribution mismatch between thebehavior and the target policies. When the mismatch between the behavior and the target policy islarge, which is often the case, MIS method tend be challenging to learn. (2) Indirect supervision:MIS methods requires no samples from target policies, which requires the weight being learned in-directly via the Bellman flow equation. In states where coverage of the offline dataset is scarce, MISmethods tend to perform poorly.In robotics, it is reasonable to assume access to a good but imperfect simulator of the real environ-ment [19, 20, 21]. In this work, we propose an improved MIS estimator that estimates the densityratio mismatch between the real world and the simulator. We further show that such a MIS estimatorcan be used to evaluate the real-world performance of a robot using just the simulator. As describedin figure 1, we estimate the discrepancy between the real world and the simulator by using the targetpolicy’s occupancy in the simulator as an intermediate variable. This allows us to calculate the MISweights as a combination of two factors, which can be learned independently. The first factor has di-rect supervision, while the second factor has a small magnitude (close to 1), thereby addressing bothlarge ratios and indirect supervision issues mentioned above. We present a straightforward deriva-tion of our method, examine its theoretical properties, and compare it to existing methods (includingexisting ways of incorporating a simulator in OPE) and baselines through empirical analysis.We make the following contributions. (1) We derive an MIS estimator for off-environment evalu-ation (Section 4). (2) We explore the theoretical properties of our off-environment evaluation es-timator by proposing a sample-complexity analysis (Section 4) and studying its special cases inlinear and kernel settings. (3) We empirically evaluate our estimator on both Sim2Sim as well asSim2Real environments (Section 5). For the Sim2Sim experiments, we perform a thorough ablationstudy over different Sim2Sim gap, data-collection policies and target policies, environments (Taxi,Cartpole, Reacher, and Half-Cheetah). Furthermore, we demonstrate practicality of our approachon a sim2real task by validating performance of a Kinova robotic arm over using offline data alongwith Gazebo based Kinova simulator.2 PreliminariesRobot learning problems are often modelled as an infinite-horizon discounted Markov DecisionProcess (MDP). MDP is specified by (S,A, P, R, γ, d 0). Here, SandAare the state and the actionspaces, P:S × A → ∆(S)is the transition function ( ∆(·)is the probability simplex). We alsodefine the reward function R:S × A → ∆([0, Rmax]),γ∈[0,1)is the discount factor, andd0∈∆(S)is the initial state distribution. 
A policy π:S → ∆(A)induces a distribution oftrajectory: ∀t≥0,s0∼d0, at∼π(·|st), rt∼R(·|s, a), st+1∼P(·|st, at).The performanceofπis measured by its expected discounted return under the initial state distribution, defined asJP(π) = (1 −γ)E[P∞t=0γtrt|π, d0]; here, we use the subscript PinJP(π)to emphasize thedynamics w.r.t. which the return is defined, since we will consider both the true environment and thesimulator in the rest of this paper and the subscript will help distinguish between them. JP(π)also2Figure 1: For a given policy π, we first collect on-policy data dπPtrfrom a simulator environment.Using dπPtrand offline data dDPteon the real world, we first calculate the importance sampling fac-torβ=dπbPte/dπPtr. This importance sampling factor essentially allows us to re-weight existingoff-policy evaluation algorithm to estimate w=dπPte/dπPtrwhich helps us estimate real-world per-formance of the agent JPte(π)using on-policy simulator data.has an alternative expression JP(π) :=E(s,a)∼dπP,r∼R(s,a)[r], wheredπP(s, a) = (1 −γ)P∞t=0γtP[st=s, at=a|π, d0] (1)is the discounted state-action occupancy induced by πfrom d0. An important quantity asso-ciated with a policy is its Q-function QπP, which satisfies the Bellman equation QπP(s, a) =Er∼R(s,a),s′∼P(s,a)[r+γQπP(s′, π)], where f(s′, π) :=Ea′∼π(·|s′)[f(s′, a′)]. We can also definethe state-value function VπP(s) =QπP(s, π), and J(π) = (1 −γ)Es∼d0[VπP(s)].OPE and Marginalized Importance Sampling: In off-policy evaluation (OPE), we want to eval-uate a target policy πusing data collected from a different policy in the real environment, de-noted by its dynamics P. As a standard simplification, we assume data is generated i.i.d. as(s, a)∼μ, r∼R(s, a), s′∼P(s, a), and the sample size is n. When the data is generated fromsome behavior policy πb,μcan correspond to its occupancy dπbP. We will use Eμ[·]as a shorthand fortaking expectation over (s, a, r, s′)generated from such a data distribution in the real environment.The key idea in marginalized importance sampling (MIS) is to learn the weight functionwπ/μP(s, a) :=dπP(s,a)μ(s,a). Once this function is known, J(π)can be estimated as J(π) =E(s,a)∼dπP,r∼R(s,a)[r] =Eμ[wπ/μP(s, a)·r].Note that Eμ[·]can be empirically approximated bythe dataset sampled from the real environment. The real challenge in MIS is how to learn wπ/μP. Ex-isting works often do so by optimizing minimax loss functions using Q-functions as discriminators,and is subject to both difficulties (large ratios and indirect supervision) mentioned in the introduc-tion. We refer the readers to [22] for a summary of typical MIS methods.Learning Density Ratios from Direct Supervision: Given two distributions pandqover the samespaceX, the density ratio p(x)/q(x)can be learned directly if we have access to samples from bothpandq, using the method proposed by [23]:p(x)q(x)= arg maxf:X→R>0Ex∼p[lnf(x)]−Ex∼q[f(x)] + 1 . (2)To guarantee generalization over a finite sample when we approximate the expectations empirically,we will need to restrict the space of fthat we search over to function classes of limited capacities,such as RKHS or neural nets, and p(x)/q(x)can still be well approximated as long as it can berepresented in the chosen function class (i.e., realizable ). More concretely, if we have nsamplesx1, . . . , x nfrom pandmsamples ̃x1, . . . 
, ̃xmfrom q, and use Fto approximate p(x)/q(x), thelearned density ratio can be made generalizable by adding a regularization term I(f)to improve thestatistical and computational stability of learning:arg maxf∈F1nXilnf(xi)−1mXjf( ̃xj) +λ2I(f)2. (3)33 Related Work3.1 Reinforcement Learning applications in RoboticsThere are three different themes that arise in reinforcement learning for robotics [24, 25]. (1)Sim2Real: algorithms are primarily concerned with learning robust policy in simulation by train-ing the algorithms over a variety of simulation configurations [26, 9, 27]. Sim2Real algorithms,although successful, still require a thorough real-world deployment in order to gauge the policy’sperformance. (2) Imitation learning algorithms learn an optimal policy by trying to mimic offlineexpert demonstrations [28, 29, 30]. Many successful imitation learning algorithms minimize someform of density matching between the expert demonstrations and on-policy data to learn optimalpolicy. A key problem with imitation learning is the fact that it requires constant interaction withthe real-world environment in order to learn an optimal policy. (3) Offline reinforcement learn-ingis a relatively new area. Here the idea is to learn an optimal policy using offline data withoutany interaction with the environment [31, 32, 33, 34]. Offline reinforcement learning has recentlydemonstrated performance at-par with classical reinforcement learning in a few tasks. However, of-fline reinforcement learning algorithms tend to overfit on the offline data. Thus, even offline learningmodules too require an actual deployment in-order to assess performance.3.2 Off-Policy EvaluationWe review related works in this section, focusing on comparing to existing OPE methods that canleverage the side information provided by an imperfect simulator. (1) Marginalized ImportanceSampling (MIS): MIS methods tend to assume the framework of data collection and policy eval-uation being on the same environment. To that end, there are both model-free [35, 36, 37] andmodel-based variants of MIS methods and face the aforementioned two challenges (large magnitudeof weights and indirection supervision) simultaneously. Model based variants of MIS sometimestend to be doubly robust (DR) in nature [18, 38] and can benefit from Q-functions as control vari-ates, which can be supplied by the simulator. However, the DR version of MIS is a meta estimator,and the weight dπPte/μstill needs to be estimated via a “base” MIS procedure. Therefore, the in-corporation of the simulator information does not directly address the challenges we are concernedwith, and there is also opportunity to further combine our estimator into the DR form of MIS. (2)Model-based methods: Model-based estimators first approximate the transition dynamics of thetarget environment [39], which is further used to generate rollouts and evaluate performance for anytarget policy. One way of incorporating a given imperfect simulator in this approach is to use thesimulator as “base predictions,” and only learn an additive correction term, often known as residualdynamics [40]. This approach combines the two sources of information (simulator and data fromtarget environment) in a very different way compared to ours, and are more vulnerable to misspeci-fication errors than model-free methods.4 Weight EstimatorRecall that our goal is to incorporate a given simulator into MIS. 
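Before specializing to the simulator setting, it may help to see the regularized objective in Eq. (3) written out as code; the sketch below is a minimal PyTorch rendition in which the network architecture, optimizer settings, and the specific choice of regularizer I(f) (here a simple penalty on f's outputs) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RatioNet(nn.Module):
    """Parameterizes f(x) > 0 used to approximate the density ratio p(x)/q(x)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep the output positive
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def fit_density_ratio(x_p, x_q, epochs=2000, lam=1e-3, lr=1e-3):
    """Maximize (1/n) sum log f(x_i) - (1/m) sum f(x~_j) minus a stabilizing
    regularizer (one choice of I(f); Eq. (3) leaves it generic)."""
    f = RatioNet(x_p.shape[1])
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(epochs):
        objective = (torch.log(f(x_p) + 1e-8).mean() - f(x_q).mean()
                     - 0.5 * lam * (f(x_q) ** 2).mean())
        loss = -objective  # minimize the negative objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return f  # f(x) approximates p(x)/q(x) on the support of q
```

In the estimator developed next, x_p would be state-action pairs drawn from the simulator occupancy and x_q the offline real-world data, so that the fitted f plays the role of the first factor of the weight.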
We will assume that the simulatorshares the same S,A, γ, d 0with the real environment, but has its own transition function Ptrwhichcan be different from Pte. As we will see, extension to the case where the reward function is un-known and must be inferred from sample rewards in the data is straightforward, and for simplicitywe will only consider difference in dynamics for most of the paper.Split the weight: The key idea in our approach is to split the weight wπ/μPinto two parts by intro-ducing dπPtras an intermediate variable:wπ/μP(s, a) =dπPte(s, a)μ(s, a)=dπPtr(s, a)μ(s, a)|{z}:=β(direct supervision)·dπPte(s, a)dπPtr(s, a)|{z}:=wπPte/Ptr(magnitude ≃1)Note that dπPtris the occupancy of πin the simulator, which we have free access to. The advantageof our approach is that by estimating βandwπPte/Ptrseparately, we avoid the situation of running4into the two challenges mentioned before simultaneously, and instead address one at each time:β=dπPtr/μhas large magnitude but can be learned directly via [23] without the difficult minimaxoptimization typically required by MIS, and we expect wπPte/Ptr=dπPte/dπPtrto be close to 1whenPte≈Ptr(and thus easier to learn).Estimate wπPte/Ptr:Since βis handled by the method of [23], the key remaining challenge is howto estimate wπPte/Ptr. (Interestingly, βalso plays a key role in estimating wπPte/Ptr, as will be shownbelow.) Note that once we have approximated wπPte/Ptrwith some w, we can directly reweight thestate-action pairs from the simulator (i.e., dπPtr) if the reward function is known (this is only assumedfor the purpose of derivation), i.e.,JPte(π)≈E(s,a)∼dπPtr,r∼R(s,a)[w·r],and this becomes an identity if w=wπPte/Ptr. Following the derivation in [18, 22], we now reasonabout the error of the above estimator for an arbitrary wto derive an upper bound as our loss forlearning w:|E(s,a)∼dπPtr,r∼R(s,a)[w·r]−JPte(π)|=|E(s,a)∼dπPtr,s′∼P(s,a)[w·(QπPte(s, a)−γQπPte(s′, π))]−(1−γ)Es∼d0[QπPte(s, π)]|≤supq∈Q|EdπPtr×Pte[w·(q(s, a)−γq(s′, π))]−(1−γ)Es∼d0[q(s, π)]|. (4)Here dπPtr×Pteis a shorthand for (s, a)∼dπPtr, s′∼P(s, a). In the last step, we handle theunknown QπPby a relaxation similar to [16, 18, 22], which takes an upper bound of the error overq∈ Q for some function class Q ⊂RS×A, and the inequality holds as long as QπP∈conv(Q)withconv(·)being the convex hull.Approximate dπPtr×Pte:The remaining difficulty is that we will need samples from dπPtr×Pte,i.e.,(s, a)sampled from π’s occupancy in the Ptrsimulator , and the next s′generated in the Pterealenvironment . While there is no natural dataset for such a distribution, we can take the data from thereal environment, (s, a, s′)∼μ×Pte, and reweight it using β=dπPtr/μto approximate expectationw.r.t. dπPtr×Pte, i.e.,(s, a, s′)∼dπPtr×Pte⇐⇒(s, a, s′)∼μ×Ptereweighted with β:=dπPtr/μ.Based on such an observation, we can further upper-bound |E(s,a)∼dπPtr,r∼R(s,a)[w·r]−JPte(π)|from end of Equation 4 with:supq∈QLw(w, β, q ) :=|Eμ[w·β·(q(s, a)−γq(s′, π))]−(1−γ)Es∼d0[q(s, π)]|. (5)As our derivation has shown, this is a valid upper bound of the error as long as conv(Q)can representQπPte. We also need to show that the upper bound is non-trivial, i.e., when w=wπPte/Ptr, the upperbound should be 0. This is actually easy to see, as for any q:L(wπPte/Ptr, β, q) :=|EdπPtr×Pte[wπPte/Ptr·(q(s, a)−γq(s′, π))]−(1−γ)Es∼d0[q(s, π)]|(6)=|EdπPte×P[q(s, a)−γq(s′, π)]−(1−γ)Es∼d0[q(s, π)]|= 0. (7)The last step directly follows from the fact that dπPteis a valid discounted occupancy and obeys theBellman flow equation. 
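For completeness, the Bellman flow equation invoked in this last step is, in the notation above (it follows directly from the definition of the discounted occupancy in Eq. (1)):
$$d^\pi_P(s', a') \;=\; (1-\gamma)\, d_0(s')\, \pi(a' \mid s') \;+\; \gamma \sum_{s, a} d^\pi_P(s, a)\, P(s' \mid s, a)\, \pi(a' \mid s'), \qquad \forall (s', a') \in \mathcal{S} \times \mathcal{A}.$$
Taking the expectation of q(s, a) − γq(s′, π) under dπPte × Pte and applying this identity leaves exactly (1 − γ)Es∼d0[q(s, π)], which is why the loss vanishes at w = wπPte/Ptr.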
Therefore, it makes sense to search for wover a function class W ⊂RS×Ato minimize the loss supq∈QL(w, β, q ).Final estimator: To summarize our estimation procedure, we will first use [23] to estimate ˆβ≈dπP′/μwith a function class F, and plug the solution into our loss for estimating wπPte/Ptr, i.e.,ˆw= arg minw∈Wsupq∈QLw(w,ˆβ, q). (8)As mentioned above, if the reward function is known, we can use E(s,a)∼dπPtr,r∼R(s,a)[ ˆw·r]asour estimation of JPte(π). We can also demonstrate interesting properties of our optimization likethe effect of a linear function class and RKHS function class. This discussion is deferred to thesupplementary materials section 9.1.Sample Complexity Guarantee: We can further provide an upper-bound on the performance ofour final estimator under the following two assumptions.5Assumption 1 (Boundedness) .We assume ∀f∈ F,0< CF,min≤f≤CF,max. Define CF:=CF,max+ max(log CF,max,−logCF,min). Similarly, ∀w∈ W ,w∈[0, CW], and∀q∈ Q ,q∈[0, CQ].Assumption 2 (Realizability of F).dπP′/μ∈ F.Theorem 4.1. Letˆβbe our approximation of dπP′/μwhich we found using [23]. We utilize thisˆβto further optimize for ˆwn(equation 8) using n samples. In both cases, E(s,a)∼dπPtr[·]is alsoapproximated with nsamples from the simulator Ptr. Then, under Assumptions 1 and 2 along withthe additional assumption that QπPte∈C(Q)with probability at least 1−δ, the Off EnvironmentEvaluation error can be bounded as|E(s,a)∼dπPtr,r∼R(s,a)[ ˆwn·r]−JPte(π)| ≤minw∈Wmaxq∈Q|Lw(w, β, q )|+ 2CW·CQ· ̃OvuuutdπP′μ∞·4ERn(F) +CFs2 log(2δ)n+ 2Rn(W,Q) +CQCWslog(2δ)2n(9)where Rn(F),Rn(W,Q)are the Radamacher complexities of function classes {(x, y)→f(x)−log(f(y)) :f∈ F} and{(s, a, s′)→(w(s, a)·dπP′(s,a)μ(s,a)·(q(s, a)−γq(s′, π)) :w∈ W, q∈ Q} ,respectively, ∥dπP′/μ∥∞:= max s,adπP′(s, a)/μ(s, a)measures the distribution shift between dπP′andμ, and ̃O(·)is the big-Oh notation suppressing logarithmic factors.Note that we do not make realizability assumption for Win the theorem above. Realizability as-sumption is reflected in the infw∈Wsupq∈Q|L(w, β, q )|, which equals 0when dπPte/dπPtr∈ W . Theremaining terms vanishes to 0at anO(1/√n)rate when n→ ∞ .Generalizing Off-Environment Policy Evaluation: A key advantage of our two-step approach isthat we can improve many existing off-policy evaluation algorithm with a similar two-step process.In this work, we use our two-step procedure with GradientDICE—which is an empirically state-of-the-art estimator in the DICE family—can also be similarly adapted as below, which we use in ourexperiments. Detailed derivation for the same can be found in the supplementary materials 9.3.5 Experiments5.1 Sim2Sim Validation of β-DICEExperimental Setting: In the Sim2Sim experiments, we aim to show the effectiveness of our ap-proach across different target policies, offline dataset as well as changing sim2sim gap. We furthershow the effectiveness of our approach over different types of Sim2Sim environments like Taxi(Tabular), Cartpole (discrete-control), Reacher (continuous control), and HalfCheetah (continuouscontrol) environments. For each of these environments, we refer to the default configurations ofTable 1: Log mean squared error between the performance predicted by our β-DICE algorithm andthe real world performance of the robot. 
We observe that our method is able to outperform DICEbased baselines by a comfortable margin.Log 10Mean Squared Error (↓)Algorithm Kinova (Sim2Real) Taxi Cartpole Reacher Half-CheetahSimulator -3.96 -0.19 -2.58 -1.09 1.18β-DICE (Ours) -4.38 -1.60 -4.19 -4.08 -3.42GenDICE -3.48 -0.13 -2.84 -2.61 -2.96GradientDICE -3.49 -0.59 -1.45 -3.17 -2.16DualDICE -3.48 -0.48 -0.99 -2.88 -2.126(a) Target Policy α= 0.2(b) Target Policy α= 0.0Figure 2: We demonstrate the effectiveness of β-DICE on Cartpole (a) and Reacher (b) Sim2Simenvironment. For Cartpole environment we demonstrate the performance of β-DICE over for aSim2Sim pair of {10,15}m/s2. Similarly for Reacher the Sim2Sim pair is (0.1m, 0.075m) forthe length of the link. On left hand side, we demonstrate the effect of β-DICE with different datacollection policies while keeping the target policies fixed. On the right hand side, we demonstrate theimpact of β-DICE with increasing Sim2Sim gap keeping the offline data collection policy the same.We observe that our β-DICE algorithm comfortably outperforms closest DICE based baselines.these environments as the simulator environment. We further create a “real” world environment bychanging key configurations from each of these environments. For example, we modify the transi-tion probability in taxi, gravity in cartpole, and link lengths in reacher. These kinds of configurationchanges help us assess the performance limits of our algorithm across a variety of sim2sim gap. Intable 2, we list all the different sim2sim environments configurations over which we experimentedour algorithm. Typically these configurations are such that the real-world performance predicted bythe simulator alone is off by 9-45%We first collect our offline data by using a noisy pre-trained policy which is parameterised by δ.Higher the δ, noisier the data-collection policy. For Taxi and Cartpole environment, we using auniform random policy for the noise, while we choose zero-mean gaussian policy for continuousenvironments like reacher and halfcheetah. Using this offline data as well as our simulator, we nowevaluate the performance of any target policy, which we parameterise by α. Target policy is furtherdefined by a mixture of another pre-trained policy with noise. More the α, more the randomness inthe policy. Detailed experimental details along with the setup has been detailed in Appendix 9.11Results and Observations: We present the detailed results for all the four environments in fig-ures 3 (Taxi), figures 2a, 4 (Cartpole), figures 2b and 6 (Reacher) and figure 7 (HalfCheetah). Forthe boxplot, we fix target policy ( α) and demonstrate the evaluation error for our algorithm acrossa range of offline dataset ( δ) while keeping Sim2Sim gap fixed. For the line plots, we demon-strate the effectiveness of our algorithm across a changing Sim2Sim gap, while keeping the offline7data ( δ) and target policy αfixed. We also compare our algorithm against DICE baselines Gra-dientDICE [36], GenDICE [37], DualDICE [35]. DICE baselines are currently the state-of-the-artalgorithm in off-policy evaluation and are known to outperform even hybrid off-policy evaluation al-gorithms. In figure 5, we also compare our algorithm against hybrid off-policy evaluation baselinesfor the cartpole environment (further details in section 9.11). We observe that our method is ableto comfortably outperform closest DICE baselines with the help of extra simulator. 
These resultsnot only empirically validate the effectiveness of our algorithm, but also point out that we can learnimportant information from imperfect simulated environments to help in improving RL policies. Wealso observe that as the Sim2Sim gap increases the performance of our algorithm tends to decline.This means that with increasing Sim2Sim gap the amount of relevant information that can be learnedfrom the simulator diminishes. We observe that this decline actually becomes significant when theSim2Sim gap breaches the 60% threshold.5.2 Real-world performance validation on Kinova Robotic ArmExperimental Setting: We demonstrate the effectiveness of our β-DICE algorithm for a sim2realvalidation task on a Kinova robotic arm. We first collect offline data by asking users to move thearm from one-position to another via RC controllers. Our data collection ensures sufficient coverageof the robotic arm’s task space. We then use this offline data along with our in-house gazebo basedsimulator to experimentally validate the real-world performance of a PID controller using β-DICEthat moves our robot from a given initial location to any desired location.Results and Observations: Our results along with different baselines are averaged over 10 differentlocations are tabulated in Table 1. We observe that β-DICE is able to outperform state-of-the-artshowing an improvement of 60% over the nearest baseline. There are two key conclusions fromall of our experiments. One, although β-DICE outperforms state-of-the art baselines in off-policyevaluation. We observe that the performance drops when the gap between the target policy andbehavior policy increases. Two, prediction error decreases as the gap between the training and testenvironments increases, as the transferable information between the two environments decreases.6 LimitationsWe present the limitations of our work that we wish to address in future work. (1) Our algorithmexpects high quality data with sufficient coverage of the state-action space. Identifying the confi-dence interval of our estimator wwill not only ensure sample efficient evaluation, but also help us indesigning robust offline reinforcement learning algorithm. (2) Similar to DICE class of min-max op-timization, our algorithm also suffers from high variance in their performance. Efforts are requiredto reduce this variance.7 Conclusion and Future WorkWe derive a novel MIS estimator that is able to evaluate real world performance of a robot usingoffline data and an imperfect robot simulator. We then develop sample complexity bounds, andempirically validate our approach on diverse Sim2Sim environments and Sim2Real environment likeKinovaGen3 robot. For future work, we wish to utilize this framework of off-environment evaluationto learn optimal robot policies using simulation and a limited amount of real-world offline data.8 AcknowledgementsThe authors thank Neeloy Chakroborty and Shuijing Liu for their valuable suggestions on this paperdraft. This work was supported in part by ZJU-UIUC Joint Research Center Project No. DREMES202003, funded by Zhejiang University. Additionally, Nan Jiang would also like to acknowledgefunding support from NSF IIS-2112471 and NSF CAREER IIS-2141781.8References[1] C. K. John Schulman, Barret Zoph et al. Chatgpt: Optimizing language models for dialogue,2023.[2] D. Silver, J. Schrittwieser, K. Simonyan, and thers. Mastering the game of go without humanknowledge. Nat., 2017.[3] S. Liu, P. Chang, Z. Huang, N. Chakraborty, W. Liang, J. Geng, and K. R. 
|
eE3fsO5Mi2 | Stealthy Terrain-Aware Multi-Agent Active SearchNikhil Angad BakshiCarnegie Mellon Universitynabakshi@cs.cmu.eduJeff SchneiderCarnegie Mellon Universityschneide@cs.cmu.eduAbstract: Stealthy multi-agent active search is the problem of making efficientsequential data-collection decisions to identify an unknown number of sparselylocated targets while adapting to new sensing information and concealing thesearch agents’ location from the targets. This problem is applicable to recon-naissance tasks wherein the safety of the search agents can be compromised asthe targets may be adversarial. Prior work usually focuses either on adversarialsearch, where the risk of revealing the agents’ location to the targets is ignoredor evasion strategies where efficient search is ignored. We present the StealthyTerrain-Aware Reconnaissance (STAR) algorithm, a multi-objective parallelizedThompson sampling-based algorithm that relies on a strong topographical priorto reason over changing visibility risk over the course of the search. The STARalgorithm outperforms existing state-of-the-art multi-agent active search methodson both rate of recovery of targets as well as minimising risk even when subject tonoisy observations, communication failures and an unknown number of targets.Keywords: Reconnaissance, Adversarial Search, Multi-robot, Active Learning1 IntroductionSearch and reconnaissance tasks are distinguished from each other only by the adversarial natureof the targets: they do not wish to be found and search agents must attempt to conceal their ownlocations from the them. Despite this, these two problems share many common elements. Histori-cally, both have been a largely human endeavour, but several factors may impede effective search byhuman teams. The search region could be too vast to mobilise enough human resources effectivelyor the personal safety of human searchers could be at risk. Multi-robot systems are increasinglybeing deployed for search missions via tele-operation [1, 2, 3, 4]. While human operators can re-motely control a small number of robotic platforms, they cannot efficiently coordinate larger teams[5]. Consistent communication between the agents may not be possible either due to environmentalfactors or hardware failures. Finally, search operations are often time-critical, hence decentralizedmulti-robot teams capable of efficient asynchronous adversarial active search are crucial.The problem of adversarial search has been theoretically studied as the pursuer-evader problem [6,7]. Several approaches seek to maximise the worst-case performance of the pursuer and imbibethe evader with extraordinary abilities like complete knowledge, infinite travel speed, and infinitecompute. However, these solutions are often prohibitively expensive to compute [8] in real-time ortoo conservative to be applicable in the real-world settings [9].We model the problem as one of stealthy target detection [10], with multiple pursuers and immobileevaders that may be placed on the map adversarially. This is in contrast to target tracking [11,12] where the targets can move. 
Target detection with static targets is a realistic choice as, in thereconnaissance task, it is not uncommon that the search is being carried out for well-concealed staticobjects like pieces of infrastructure or environmental features as this knowledge can be of strategicimportance; and, in search and rescue missions, stranded people could be immobile or mobile, butuntil they are detected for the first time they can be treated as immobile and the problem formulationremains the same.The core idea of our approach is simple, during search and reconnaissance missions, a strong prioron the number of targets or their locations is usually not available, however, satellite imagery and7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.(a) Desert mountainous landscape in aUnity simulation environment with twofull-size ground vehicles also pictured.(b) Top-down view of the terrain in(a) showing average visibility as acolour gradient. Y - high; B - low.(c) Traversal costmap of theterrain in (a) and (b) and thesearch zone (red polygon).Figure 1: Terrain can inform search and evasion strategies. The grid size in Fig. 1b is 60 m×60m. Thisregion’s dimensions are 2305 .5m×1837 .5mrepresenting a total area of 4 .236km2. In the costmap in Fig. 1cthe area inside the red polygon is 2 .48km2. Magenta is non-traversable. Black to blue to red indicate increasingtopography-aware visibility risk. The green cross is the goal location provided to the OMPL [13] or A-star [14]path planner and the green curve is the path computed to the goal location while minimizing the visibility risk.by extension topographical information of the search region may be available. If the agent wereto search for sparsely located targets in the map shown in Fig. 1, the intuition is that it would bebeneficial to search in better hiding places.We present the Stealthy Terrain-Aware Reconnaissance (STAR)1algorithm, a multi-objective opti-misation algorithm that follows the Myopic Posterior Sampling (MPS) framework [15, 16, 17, 18]which has been shown to be near optimal for adaptive goal-oriented tasks [19] such as in the multi-agent asynchronous active search problem we have here. In MPS, we select the optimal action byoptimising a reward based on a single sample (known as the Thompson Sample) taken from the cur-rent belief. This allows for a calculated randomness in the search decisions made by various agentseven if communication or hardware failures prevent coordination. As more observations are made,the agents learn to take better actions that bring them closer to their search/risk objective.We ablate the performance of STAR against existing methods varying the map type, availabilityof communication, number of search agents and whether or not targets are placed adversarially insimulation. In all cases, STAR outperforms existing methods. We have designed STAR to be de-ployable on ground-based robotic platforms described in [18] in a search region that is 4 km2insize. 
Our motivation for only presenting simulation results in this paper is to ablate and assess theperformance of the STAR algorithm in our realistic simulator against the state of the art from an al-gorithmic standpoint as physical runs cannot usually be conducted in quantities that show statisticalsignificance.To the best of our knowledge, STAR is the state-of-the-art in search efficiency for (adversarial)multi-agent active search given a known terrain map that is also robust to communication failures,operates without any human direction or explicit subdivision of the search region. Our contributionsmay be summarized as follows:• We propose the Stealthy Terrain-Aware Reconnaissance (STAR) algorithm, a multi-objectivesearch algorithm that combines the information-seeking reward term presented in Bakshi et al.[18] with a novel stealth objective that uses a known terrain map to encourage concealment of thesearch agent while improving search efficiency by searching in locations with greater likelihoodof recovery.• We ablate this superior performance in communication-disabled scenarios with our proposedterrain-aware noisy observation model, varying number of agents, map types, and in adversar-ial and non-adversarial scenarios. In each case STAR outperforms all other methods.• Finally, STAR has been deployed on our physical systems for search and/or reconnaissance mis-sions. Appendix A contains details of the physical systems.1https://github.com/bakshienator77/Stealthy-Terrain-Aware-Reconnaissance-and-Search.git22 Related WorkIn pursuer-evader problems the primary objective of the search agent(s) is to trap or track the evaderwith theoretical guarantees for a worst-case evader [20, 21, 22, 23]. However, these approachesdon’t consider the inverse adversarial problem of minimizing risk of detection by the adversary (asit already has complete knowledge). Some approaches attempt to use approximate algorithms andrelax the requirement for guarantees [24], however none of these approaches can be extended tohave an unknown number of evaders and that limits their practical applications.Probabilistic search methods seek to improve expected or average case performance. Bayesian learn-ing provides an effective way to probabilistically model the world, inculcate prior information andadapt to information over the course of the search [25, 26]. However, these approaches often rely onperfect observation models (no noise, no false positives) and their examination of optimal behaviouris usually confined to single pursuer or single evader cases [27, 28]. In a similar vein, modellingthe problem using Partially Observable Markov Decision Processes (POMDPs) [29] yields tractableoptimal solutions only with a single pursuer. An excellent survey on adversarial search by Chunget. al. [30] covers an overview of the field and open research questions.Decentralised adversarial multi-agent active search that is robust to communication failure is anactively researched field. Though multi-robot teams may partition the search region for explorationefficiency [31], generating such a partitioning is challenging with unreliable communication. Giventhe success of the POMDP formulation in the single-agent case, researchers have attempted to applyreinforcement learning to the problem [10, 32]; however, these approaches are extremely sampleinefficient and prone to overfit to the environment they are trained on. 
In our formulation withknown topography, it is not clear if these RL methods will generalize to different topographies andtherefore they aren’t a good candidate for realistic search missions.Terrain-aware path planning with adversarial targets is well-studied in the context of military op-erations [33, 34, 35] and in the context of stealth-based video games [36, 37]. However, theseapproaches focus on path planning but not on a competing search objective, that is, they assume thatthe adversary locations are known and need to be avoided, or are unknown and need to be evaded ifencountered en route to the goal.Adversarial search has some implementations on real-hardware and there are approaches that at-tempt to validate their results in simulation [27, 38, 39]. However, these approaches are usu-ally single-agent [40, 41]. If they are multi-agent then they rely on strong coordination betweenagents [42, 43]. This motivates that an efficient solution to multi-agent reconnaissance problemsthat can be deployed on real systems remains an open question.GUTS [18] is a non-adversarial multi-agent active search algorithm that has been shown to out-perform state-of-the-art algorithms on recovery rate of targets and has been deployed on physicalhardware. It is the state-of-the-art for robust multi-agent active search and it can handle intermittentcommunication and observation uncertainties, but it is not suitable for the reconnaissance task.3 Problem FormulationWe model the search region as a grid with a cell size of 60m x 60m. Fig 1c shows an overhead view ofthe costmap for one of the maps we test on. Due to the ubiquity of satellite imagery, it is reasonableto assume such approximate map information of the search region is available. Some parts of themap may be different during deployment on physical systems, but our on-robot sensing and mappingsystem can recognize changes and dynamically update the prior map. For our experiments, however,we assume the map to be fixed. We model the stealthy active search problem as follows:• We give the same search region to all robots, for example, see red polygon in Fig. 1c.• The targets are sparsely placed and static. They need to be recovered quickly with high certainty.• The robots must minimise their exposure to the targets which are considered hostile.• Each robot must plan its next data collection action on-board, i.e., no central planner exists.3(a) (b) (c) (d)Figure 2: A simple illustration of the viewableregions of the (a) robot given its coordinates anddirection of facing, and (b) target given its co-ordinates. Lighter the shade greater the noise.Black cells are obstructions.Figure 3: A realistic illustration of the viewable region ofthe target when (c) unbounded if it were located at the reddot; (d) discretized (60 m×60mcell size grid) and subjectto viewing limits (200 m−300m) superimposed on the topo-graphical map. Further details in Appendix A.4• The robots may communicate their locations and observations with each other; however, the al-gorithm’s performance should improve with increasing number of search agents anywhere in thespectrum of total absence of communication to perfect communication.Formally, we represent the locations of targets in our 2D grid representation using a sparse matrixB∈RM1×M2. Let β∈RMbe the flattened version of matrix B, where M=M1M2. This is a sparsevector, with 1s corresponding to target locations and 0s elsewhere. The objective is to recover thetrueβthrough search. We model the terrain-aware noisy observations through Eqn. 
1 and Eqn. 2:

y_t^j = \mathrm{clip}(X_t^j \beta \pm b_t^j, 0, 1) \quad (1) \qquad b_t = n_t / v_t \quad (2) \qquad P(L_t^j) = \sum_{q}^{Q'} \sum_{k} X^k L_t^j \quad (3)

where X_t^j ∈ R^{Q×M} describes the sensing matrix such that each row in X_t^j is a one-hot vector indicating one of the grid cells in view of robot j at timestep t, and Q is the total number of grid cells the robot can view. y_t^j ∈ R^{Q×1} is the resultant observation, including the additive terrain-aware noise vector b_t^j ∈ R^{Q×1}. The observation y_t is clipped to lie within 0 and 1, as the additive noise can cause the resultant quantity to exceed those bounds. Going forward, we assume that these quantities are defined on a per-robot basis and drop the superscript j for ease of notation. The topography-aware noise b_t has two components: firstly, it encodes the intuition that observation uncertainty increases with distance to the robots; secondly, it encodes the intuition that observation uncertainty increases with occlusions in the line of sight from the robot.

In Eqn. 2, / denotes element-wise division, and n_t ∼ N^+(0, Σ_t), with the diagonal elements of the noise covariance matrix Σ_t monotonically increasing with the square of the distance of the observed cell from the robot. v_t ∈ R^{Q×1}, where each entry represents the square of the fractional visibility (accounting for occlusions) of each of the Q cells visible in X_t. The noise is sampled from a positive half-Gaussian distribution N^+(0, Σ_t) and is added for cells without targets and subtracted for cells with targets. Similarly, we have an observation model for the targets, albeit with some relaxations, namely, only accounting for occlusions but not for depth-aware noise. Fig. 2 shows simple examples of the modelled viewable regions of the robots and targets, and Fig. 3 shows a realistic example of a target viewable region.

Let the robot trajectory for robot j until timestep t be denoted L_t^j ∈ R^M, such that each entry in L_t^j is the integer count of the number of times robot j has visited that cell in the whole space M. In Eqn. 3, we define a penalty function P(L_t^j), which penalizes the robot for showing itself to any of the targets. Similar to above, X^k ∈ R^{Q'×M} is the sensing matrix for the k-th target; note that it is not dependent on the timestep t, as targets are static. The second summation reduces the value to a single real number. The stealth penalty can be thought of as a scaled, discretized representation of the time spent in the viewable region of the target(s).

Let D_t^j be the set of observations available to robot j at timestep t. D_t^j comprises (X_t, y_t) pairs collected by robot j as well as those communicated to robot j by other robots. Let the total number of sensing actions by all agents be T. Our main objective is to sequentially optimize the next sensing action X_{t+1} based on D_t^j at each timestep t to recover the sparse signal β with as few measurements T as possible, while minimizing the stealth penalty over all robots, ∑_j P(L_t^j). Each robot optimizes this objective based on its own partial dataset D_t^j in a decentralized manner.
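As a concrete illustration of this observation model and stealth penalty, the sketch below implements Eqns. 1-3 for a single robot on a flattened grid. It is a minimal sketch, not the authors' code: the noise scaling, the precomputed per-cell distances, and the fractional-visibility values are illustrative assumptions.

```python
import numpy as np

def noisy_observation(beta, cells_in_view, frac_visibility, cell_distances,
                      noise_scale=0.1, rng=np.random.default_rng()):
    """Terrain-aware observation (Eqns. 1-2): X_t beta plus positive
    half-Gaussian noise whose scale grows with squared distance and is
    divided by squared fractional visibility; the noise is added for
    empty cells, subtracted for target cells, and the result is clipped."""
    Q, M = len(cells_in_view), beta.size
    X_t = np.zeros((Q, M))
    X_t[np.arange(Q), cells_in_view] = 1.0            # one-hot sensing rows
    clean = X_t @ beta                                 # true occupancy in view
    sigma = noise_scale * cell_distances ** 2          # diagonal of Sigma_t
    n_t = np.abs(rng.normal(0.0, sigma))               # n_t ~ N^+(0, Sigma_t)
    v_t = np.maximum(frac_visibility, 1e-6) ** 2       # squared visibility
    b_t = n_t / v_t                                    # Eqn. 2
    y_t = np.where(clean > 0, clean - b_t, clean + b_t)
    return np.clip(y_t, 0.0, 1.0), X_t                 # Eqn. 1

def stealth_penalty(target_sensing_mats, visit_counts):
    """Eqn. 3: robot visit counts that fall inside any target's viewable region."""
    return sum(float((X_k @ visit_counts).sum()) for X_k in target_sensing_mats)
```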
4 STAR: Stealthy Terrain-Aware Reconnaissance

This section presents the STAR algorithm. Each robot j asynchronously estimates the posterior distribution over target locations based on its partial dataset D_t^j. During the action selection stage, each robot generates a sample from this posterior and simultaneously optimizes a reward function and a stealth penalty for this sampled set of target locations. The reward function represents the potential information gain for the sensing action in consideration, while the stealth penalty represents the potential information leakage based on the partial dataset D_t^j available.

4.1 Calculating Posterior

Following Bakshi et al. [18], each robot assumes a zero-mean Gaussian prior per entry of the vector β, s.t. p_0(β_m) = N(0, γ_m). The variances Γ = diag([γ_1 ... γ_M]) are hidden variables which are estimated using data. Bakshi et al. [18] follows Tipping [44] and Wipf and Rao [45] and uses a conjugate inverse gamma prior on γ_m to enforce sparsity, s.t. p(γ_m) = IG(a_m, b_m) = \frac{b_m^{a_m}}{\Gamma(a_m)} \gamma_m^{-a_m - 1} e^{-b_m / \gamma_m} ∀ m ∈ {1,...,M}. The salient feature of the inverse gamma distribution IG(·) is that it selects a small number of variances γ_m to be significantly greater than zero, while the rest will be nearly zero; this enforces sparsity. We estimate the posterior distribution on β given data D_t^j for robot j using Expectation Maximisation [46]. We can write analytic expressions for the E-step (estimating β̂ = p(β | D_t^j, Γ) = N(μ, V)) and the M-step (computing max_Γ p(D_t^j | β, Γ)), respectively:

V = (\Gamma^{-1} + X^\top \Sigma X)^{-1}, \quad \mu = V X^\top \Sigma y \quad (4)

\gamma_m = ([V]_{mm} + [\mu]_m^2 + 2 b_m) / (1 + 2 a_m) \quad (5)

where X and y are created by vertically stacking all measurements (X_t, y_t) in D_t^j, and Σ is a diagonal matrix composed of their corresponding terrain-aware noise variances.

Each robot estimates p(β | D_t^j) = N(μ, V) on-board using its partial dataset D_t^j. We set the values a_m = 0.1 and b_m = 1, as these were found to be effective in Ghods et al. [16]. Finally, agent j samples from the posterior: β̃ ∼ p(β | D_t^j).

4.2 Choosing Next Sensing Action

Each robot chooses the next sensing action X_{t+1} by assuming that the sampled set of target locations β̃ is correct. Specifically, let β̂(D_t^j ∪ (X_{t+1}, y_{t+1})) be our expected estimate of the parameter β using all available measurements D_t^j and the next candidate measurement (X_{t+1}, y_{t+1}). Then, following Bakshi et al. [18], the reward function is defined as:

R(\tilde{\beta}, D_t^j, X_t) = -\|\tilde{\beta} - \hat{\beta}(D_t^j \cup (X_t, y_t))\|_2^2 - \lambda \cdot I(\tilde{\beta}, \hat{\beta}) \quad (6)

where λ (= 0.01) is a hyperparameter that reduces the reward for a search location if the estimated β̂ does not have high-likelihood entries in common with the sample at the current step, β̃. Let k̂ and k̃ be the number of non-zero entries in β̂ and β̃; then the indicator function I(·) is defined as:

I(\tilde{\beta}, \hat{\beta}) = \begin{cases} 0, & \text{if there are any matches between the top } \hat{k}/2 \text{ entries in } \hat{\beta} \text{ and the top } \tilde{k}/2 \text{ entries in } \tilde{\beta} \\ 1, & \text{otherwise} \end{cases}

The reward function is stochastic due to the sampling of β̃, and this ensures that the search actions selected by the robots are diverse. The intuition behind this reward term is that search decisions are preferred that can confirm the locations of suspected (but not yet confidently located) targets as per the sample. This reward function was shown to improve the search recovery rate in robotic search and rescue missions over existing methods [18], which tend to be more explorative.
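To make the posterior update concrete, here is a minimal numpy sketch of the E-step/M-step of Eqns. 4-5 and the Thompson sample β̃ that feeds the reward in Eqn. 6 (and, later, the visibility-risk weighting in Eqn. 7). This is a sketch under the equations as written, not the authors' implementation; the iteration count and the representation of Σ as a vector of per-measurement noise weights are illustrative assumptions.

```python
import numpy as np

def em_posterior(X, y, sigma_diag, a=0.1, b=1.0, iters=20):
    """Sparse Bayesian posterior over the target vector beta (Eqns. 4-5).
    X: (N, M) vertically stacked sensing rows, y: (N,) stacked observations,
    sigma_diag: (N,) terrain-aware noise weights forming diag(Sigma)."""
    M = X.shape[1]
    gamma = np.ones(M)                               # hidden variances Gamma
    Sigma = np.diag(sigma_diag)
    for _ in range(iters):
        # E-step (Eqn. 4): Gaussian posterior N(mu, V) over beta
        V = np.linalg.inv(np.diag(1.0 / gamma) + X.T @ Sigma @ X)
        mu = V @ X.T @ Sigma @ y
        # M-step (Eqn. 5): update the per-entry variances gamma_m
        gamma = (np.diag(V) + mu ** 2 + 2 * b) / (1 + 2 * a)
    return mu, V

def thompson_sample(mu, V, rng=np.random.default_rng()):
    """Draw beta_tilde ~ N(mu, V); used in the reward R and for action choice."""
    return rng.multivariate_normal(mu, V)
```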
The reward in Eqn. 6 must be balanced against the risk quantified by the stealth penalty term (Eqn. 3). Since it is computationally infeasible to compute the risk over all possible trajectories, we calculate the penalty for every possible goal location l_t^j of each robot j. l_t^j ∈ R^M is a one-hot vector indicating the location of the robot corresponding to the potential measurement X_t^j in the reward defined in Eqn. 6. This yields a risk landscape over the entire map that is used for action selection and path planning. Since we only have the posterior β̂ over target locations and not ground truths, in Eqn. 7 we use the folded normal distribution to determine a separate posterior mean μ̂_vis ∈ R^M for visibility risk that accounts for the mean and variance of the posterior β̂. The final risk objective is defined in Eqn. 8:

\hat{\mu}_{vis}^i = \sqrt{\frac{2 V_{ii}}{\pi}} \exp\!\left(-\frac{\mu_i^2}{2 V_{ii}}\right) + \mu_i \left(1 - 2\phi\!\left(-\frac{\mu_i}{\sqrt{V_{ii}}}\right)\right) \quad (7)

P(l_t^j) = \sum_{i}^{M} \hat{\mu}_{vis}^i \sum_{q}^{Q'} X^i l_t^j \quad (8)

where, in Eqn. 7, V and μ are the variance and mean of the posterior defined in Eqn. 4, φ is the error function \phi(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} \, dt, and i indexes into the M-length vector. In Eqn. 8, μ̂_vis^i behaves as a weighting scalar for each location i in the map where a threat may be located. When the posterior variance for a location i is close to zero, μ̂_vis^i will tend to the posterior mean μ_i; when the variance is high but the mean is zero (as it is at the beginning of the run), the weighting factor will still be non-zero, as it is governed by the variance. The overall optimisation objective can then be thought of as two competing objectives, as follows:

X_t, l_t = \arg\max_{\tilde{X}, \tilde{l}} \; R(\tilde{\beta}, D_t^j, \tilde{X}) - \gamma P(\tilde{l}) \quad \text{from (6) and (8)} \quad (9)

where γ is a hyperparameter that controls the tradeoff between goal selection to satisfy the stealth penalty and the reward term. We found that the best value for γ is 1, combined with normalising both the reward and stealth penalty terms between 0 and 1.

Figure 4: Depth Elevation Maps (DEMs) of (a) the natural mountainous search region from Fig. 1, and (b) a grid of perpendicular corridors. DEMs are heightmaps; when represented as an image, the brighter the region, the greater the elevation.

Figure 5: Experimental results in the grid map (Fig. 4b). STAR (red) outperforms existing methods (c) narrowly on search efficiency, and (d) significantly on visibility risk, due to the novel multi-objective function. RSI's efficiency remains flat as it is an information-greedy method.

5 Experiments and Results

Our experiments demonstrate the superior search efficiency of our proposed algorithm STAR compared to existing search methods: GUTS, RSI, coverage-based search, and random search. The GUTS algorithm [18] is a parallelized Thompson-sampling-based algorithm that prioritises recovery rate in multi-agent active search missions with realistically modelled noise. It has been optimised to run on real robots and, to the best of our knowledge, it is the state of the art in decentralised multi-agent active search methods. Region Sensing Index (RSI) [47] is an active search algorithm that locates sparse targets while taking into account realistic sensing constraints, based on greedy maximisation of information gain. The coverage baseline myopically chooses the next waypoint in an unvisited part of the search region, while the random search policy randomly selects a cell to visit.

Table 1: Simplified simulator results, reported for non-adversarial and adversarial placement of targets under full communication and communication breakdown. STAR (red) outperforms existing state-of-the-art methods on both metrics (lower is better) regardless of how targets are placed, and even with total communication breakdown.

5.1 Testing Setup

Each search run is initialized by specifying the same search region for each robot (see Fig. 1c). All robots start at the same location. We evaluate the various search algorithms under two target sampling paradigms: uniform and adversarial. In uniform sampling, the targets are placed uniformly at random in the search region. In adversarial sampling, the targets are placed stealthily, i.e., they are placed in locations with lower average visibility within the search region.
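For concreteness, adversarial placement of this kind can be implemented by sampling target cells with probability that decreases with their average visibility. The snippet below is a minimal sketch under that assumption; the avg_visibility map, the temperature parameter, and the softmax form are illustrative choices, not details taken from the paper.

```python
import numpy as np

def place_targets_adversarially(avg_visibility, valid_mask, num_targets=5,
                                temperature=1.0, rng=np.random.default_rng()):
    """Sample target cells so that poorly visible cells (better hiding spots)
    are more likely. avg_visibility: (M,) mean visibility of each grid cell;
    valid_mask: (M,) bool marking traversable cells inside the search polygon."""
    scores = np.where(valid_mask, -avg_visibility / temperature, -np.inf)
    probs = np.exp(scores - scores[valid_mask].max())
    probs /= probs.sum()
    cells = rng.choice(avg_visibility.size, size=num_targets,
                       replace=False, p=probs)
    beta = np.zeros(avg_visibility.size)
    beta[cells] = 1.0                                  # sparse target vector
    return beta
```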
We control for thelocations of the targets using these two paradigms. We utilize two possible maps, a mountainousdesert landscape (See Fig. 1a and Fig. 4a) and a grid of corridors (see Fig. 4b). There are always 5targets (K) but this is not known apriori. We run experiments varying number of search agents (J)which may or may not be able to communicate with each other to showcase the robustness of STAR.5.2 Evaluation MetricsOur primary evaluation metric is the recovery rate, which is defined as the fraction of targets thesearch method has located against the number of search decisions made within the time budget. Oursecondary metric is the stealth penalty incurred by the team of search agents.We evaluate the algorithm on the basis of decision steps, i.e. one search decision is one time step.This is equivalent to assessing an algorithm’s sample complexity while abstracting away environ-ment/hardware specific factors like terrain conditions or particular compute specifications that mightaffect wall clock time. That being said, when evaluating in the realistic simulator each algorithm isgiven the same runtime budget of 1 hr and 15 min, this is short enough to be a realistic duration for asearch operation and long enough that robots with a max speed of 5 m/s may complete exploration.The stealth penalty is a scaled and discretized representation of the time spent in the viewable regionof the target(s). Eqn. 3 describes the stealth penalty as the dot product between the viewable regionof the targets and the path(s) taken by the robot(s). Given the cell size of the grid and speed of therobots, we may calculate time spent by the agents in view of the targets using the stealth penalty.Similar to search efficiency, we choose to abstract away physical quantities like speed and reportperformance on the stealth penalty directly.5.3 Simple Simulator ResultsOur simple simulator accurately simulates the robots’ sensing and trajectory planning while using avery simple physics model for robot traversal in order to speed up simulation time. Fig. 5c-d showsthe performance the results in the simple simulator on the grid of corridors (Fig. 4b) with adver-sarially placed targets. STAR wins out on both metrics, maximising recovery rate of targets andminimising risk. Table 1 shows the results on the simple simulator for the mountainous desert land-scape (Fig. 4a) and ablates it against communication failures and non-adversarial target placement.Across the board, STAR (marked in red) outperforms existing methods. When communication fail-ures exist, coverage based planners suffer the most as their search efficiency relies on coordination.RSI remains unaffected as it is an information greedy algorithm and more agents doesn’t translate71-Robot Realistic Simulator Runs 2-Robot Realistic Simulator RunsAdversarial Placement Non-Adversarial PlacementTable 2: Realistic Simulation Results. STAR (red) has greater search efficiency (targets found F/ total targetsK) and lower stealth penalty incurred regardless of target placement strategy. In the top-left, RSI (purple) isable to achieve a lower penalty than STAR as it fails to locate all targets in most runs. The end of each curve isthe end of the runtime budget, hence the penalty at the end of each curve is the final penalty for that run.to greater efficiency either. 
GUTS, the current state-of-the-art in decentralised multi-agent activesearch, performs fairly well on recovery targets but clearly loses out on the stealth metric to STAR.5.4 Realistic Simulator ResultsWe have designed STAR such that it runs in real-time on the multi-robot team presented in [18].We simulate field tests in a desert mountainous environment with a search area of ∼2.5km2with arealistic physics model and present those results here. We compare the search efficiency of STAR,with GUTS, RSI, random search and coverage-based search in Table 2. We plot results varyingteam size and target placement strategy with five targets. Results are aggregated across 10-12 runsfor each line on the graph. We can see that STAR outperforms our baselines: GUTS, RSI, coverage-based search and random search on the recovery rate (odd columns) as well as in terms of the stealthpenalty (even columns) regardless if the targets are placed adversarially or uniformly.5.5 DiscussionDespite optimising to reduce the stealth penalty, a competing objective, STAR still outperformsthe other algorithms on recovery rate, including GUTS, the the previous best algorithm in terms ofsearch efficiency. This indicates that the terrain-aware stealth penalty term improves the discrimina-tory power of the reward function.The results shown in this work demonstrate that fully autonomous robots can effectively search incomplicated natural terrains in a time efficient manner. We believe the algorithm presented herepaves the way for more ubiquitous application of autonomous robotics in multi-agent search for dis-aster response and reconnaissance and will save human effort and human lives with greater adoption.6 LimitationsThis work tackles a gap in current literature wherein, prior work in adversarial search ignores visi-bility risk when solving for efficient search or ignores efficient search when designing stealth algo-rithms. We utilize Depth Elevation Maps since our primary use case is open outdoor environments.3-D structures that breakdown the 2-D assumption like caves or cliff overhangs are failure cases. Weassume a symmetric sensing model for the targets and the agents, while this is realistic as it requiresno special knowledge, it can be improved by incorporating a directionality to the assumed targetsensing model, possibly by incorporating a movement model since STAR currently assumes statictargets.8AcknowledgmentsThis material is based upon work supported by the U.S. Army Research Office and the U.S. ArmyFutures Command under Contract No. W911NF-20-D-0002. Authors would like to acknowledgethe contributions of Conor Igoe, Tejus Gupta and Arundhati Banerjee to the development of theSTAR algorithm. Additionally the work on the physical platforms and true-to-life simulations wereenabled thanks to Herman Herman, Jesse Holdaway, Prasanna Kannappan, Luis Ernesto Navarro-Serment, Maxfield Kassel and Trenton Tabor.References[1] C. Mouradian, J. Sahoo, R. H. Glitho, M. J. Morrow, and P. A. Polakos. A coalition forma-tion algorithm for multi-robot task allocation in large-scale natural disasters. In 2017 13thInternational Wireless Communications and Mobile Computing Conference (IWCMC) , pages1909–1914, 2017.[2] D. S. Drew. Multi-agent systems for search and rescue applications. Curr Robot Rep , 2(2):189–200, 2021.[3] K. Nagatani, Y . Okada, N. Tokunaga, S. Kiribayashi, K. Yoshida, K. Ohno, E. Takeuchi, S. Ta-dokoro, H. Akiyama, I. Noda, T. Yoshida, and E. Koyanagi. 
Multirobot exploration for searchand rescue missions: A report on map building in RoboCupRescue 2009. J. field robot. , 28(3):373–387, 2011.[4] J. L. Baxter, E. K. Burke, J. M. Garibaldi, and M. Norman. Multi-Robot Search and Rescue: APotential Field Based Approach , pages 9–16. Springer Berlin Heidelberg, Berlin, Heidelberg,2007. ISBN 978-3-540-73424-6.[5] F. Greenwood, E. L. Nelson, and P. G. Greenough. Flying into the hurricane: a case study ofuav use in damage assessment during the 2017 hurricanes in texas and florida. PLoS one , 15(2):e0227808–e0227808, 2020. ISSN 1932-6203.[6] R. Nowakowski and P. Winkler. Vertex-to-vertex pursuit in a graph. Discrete Mathematics , 43(2):235–239, 1983. ISSN 0012-365X.[7] M. Aigner and M. Fromme. A game of cops and robbers. Discrete Applied Mathematics , 8(1):1–12, 1984. ISSN 0166-218X.[8] A. S. Goldstein and E. M. Reingold. The complexity of pursuit on a graph. Theor. Comput.Sci., 143:93–112, 1995.[9] V . Isler, S. Kannan, and S. Khanna. Randomized pursuit-evasion with local visibility. SIAM j.discrete math. , 20(1):26–41, 2006.[10] J. Buermann and J. Zhang. Multi-robot adversarial patrolling strategies via lattice paths. Arti-ficial Intelligence , 311:103769, 2022. ISSN 0004-3702.[11] L. Zhou, V . Tzoumas, G. J. Pappas, and P. Tokekar. Resilient active target tracking withmultiple robots. IEEE Robotics and Automation Letters , 4(1):129–136, 2019.[12] L. Zhou and V . Kumar. Robust multi-robot active target tracking against sensing and commu-nication attacks. In 2022 American Control Conference (ACC) , pages 4443–4450, 2022.[13] I. A. S ̧ucan, M. Moll, and L. E. Kavraki. The Open Motion Planning Library. IEEE Robotics& Automation Magazine , 19(4):72–82, December 2012.[14] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination ofminimum cost paths. IEEE Transactions on Systems Science and Cybernetics , 4(2):100–107,1968.9[15] K. Kandasamy, A. Krishnamurthy, J. Schneider, and B. Poczos. Parallelised bayesian optimi-sation via thompson sampling. In A. Storkey and F. Perez-Cruz, editors, Proceedings of theTwenty-First International Conference on Artificial Intelligence and Statistics , volume 84 ofProceedings of Machine Learning Research , pages 133–142. PMLR, 09–11 Apr 2018.[16] R. Ghods, W. J. Durkin, and J. Schneider. Multi-Agent active search using realistic Depth-Aware noise model. CoRR , abs/2011.04825, 2020.[17] R. Ghods, A. Banerjee, and J. Schneider. Decentralized multi-agent active search for sparsesignals. In C. de Campos and M. H. Maathuis, editors, Proceedings of the Thirty-SeventhConference on Uncertainty in Artificial Intelligence , volume 161 of Proceedings of MachineLearning Research , pages 696–706. PMLR, 27–30 Jul 2021.[18] N. A. Bakshi, T. Gupta, R. Ghods, and J. Schneider. Guts: Generalized uncertainty-awarethompson sampling for multi-agent active search. In IEEE International Conference onRobotics and Automation (ICRA) , 2023. To appear.[19] K. Kandasamy, W. Neiswanger, R. Zhang, A. Krishnamurthy, J. Schneider, and B. Poczos.Myopic posterior sampling for adaptive goal oriented design of experiments. In K. Chaudhuriand R. Salakhutdinov, editors, Proceedings of the 36th International Conference on MachineLearning , volume 97, pages 3222–3232. PMLR, 2019.[20] G. Hollinger, A. Kehagias, and S. Singh. Gsst: Anytime guaranteed search. Auton. Robots , 29(1):99–118, jul 2010. ISSN 0929-5593.[21] G. Hollinger, S. Singh, and A. Kehagias. Improving the efficiency of clearing with multi-agentteams. 
The International Journal of Robotics Research , 29(8):1088–1105, 2010.[22] A. Kehagias, G. Hollinger, and S. Singh. A graph search algorithm for indoor pursuit/evasion.Mathematical and Computer Modelling , 50(9):1305–1317, 2009. ISSN 0895-7177.[23] A. Kleiner and A. Kolling. Guaranteed search with large teams of unmanned aerial vehicles.In2013 IEEE International Conference on Robotics and Automation , pages 2977–2983, 2013.[24] R. J. Marcotte, A. Haggenmiller, G. Ferrer, and E. Olson. Probabilistic multi-robot search foran adversarial target.[25] D. Assaf and S. Zamir. Optimal Sequential Search: A Bayesian Approach. The Annals ofStatistics , 13(3):1213 – 1221, 1985.[26] F. Bourgault, A. G ̈oktogan, T. Furukawa, and H. F. Durrant-Whyte. Coordinated search for alost target in a bayesian world. Advanced Robotics , 18(10):979–1000, 2004.[27] E.-M. Wong, F. Bourgault, and T. Furukawa. Multi-vehicle bayesian search for multiple losttargets. In Proceedings of the 2005 IEEE International Conference on Robotics and Automa-tion, pages 3169–3174, 2005.[28] G. Kimeldorf and F. H. Smith. Binomial searching for a random number of multinomiallyhidden objects. Management Science , 25(11):1115–1126, 1979.[29] N. Roy, G. Gordon, and S. Thrun. Finding approximate pomdp solutions through belief com-pression. J. Artif. Int. Res. , 23(1):1–40, jan 2005. ISSN 1076-9757.[30] T. H. Chung, G. A. Hollinger, and V . Isler. Search and pursuit-evasion in mobile robotics: Asurvey. Auton. Robots , 31(4):299–316, 2011.[31] D. W. Schuldt and J. A. Kurucar. Efficient partitioning of space for multiple UAS search in anunobstructed environment. Int. J. Intell. Robot. Appl. , 2(1):98–109, 2018.10[32] B. Baker, I. Kanitscheider, T. Markov, Y . Wu, G. Powell, B. McGrew, and I. Mordatch. Emer-gent tool use from multi-agent autocurricula. In International Conference on Learning Repre-sentations , 2020.[33] A. Teng, D. DeMenthon, and L. Davis. Stealth terrain navigation. Systems, Man and Cyber-netics, IEEE Transactions on , 23:96 – 110, 02 1993.[34] A. Tews, G. Sukhatme, and M. Mataric. A multi-robot approach to stealthy navigation in thepresence of an observer. In IEEE International Conference on Robotics and Automation, 2004.Proceedings. ICRA ’04. 2004 , volume 3, pages 2379–2385 V ol.3, 2004.[35] B. McCue and National Defense University Press. U-boats in the bay of Biscay: An essay inoperations analysis . National Defense University Press, 1990.[36] W. Al Enezi and C. Verbrugge. Skeleton-based multi-agent opponent search. In 2021 IEEEConference on Games (CoG) , pages 1–8. IEEE, 2021.[37] D. Isla. Third eye crime: building a stealth game around occupancy maps. In Proceedingsof the Ninth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment ,page 206. AAAI Press, 2013.[38] G. Hollinger, S. Singh, J. Djugash, and A. Kehagias. Efficient multi-robot search for a movingtarget. Int. J. Rob. Res. , 28(2):201–219, 2009.[39] J. Buermann and J. Zhang. Multi-robot adversarial patrolling strategies via lattice paths. Arti-ficial Intelligence , 311:103769, 2022. ISSN 0004-3702.[40] R. Vidal, O. Shakernia, H. Kim, D. Shim, and S. Sastry. Probabilistic pursuit-evasion games:theory, implementation, and experimental evaluation. IEEE Transactions on Robotics andAutomation , 18(5):662–669, 2002.[41] J. Tisdale, Z. Kim, and J. K. Hedrick. Autonomous uav path planning and estimation. IEEERobotics & Automation Magazine , 16(2):35–42, 2009.[42] B. P. Gerkey, S. Thrun, and G. Gordon. 
Parallel stochastic hill- climbing with small teams. InL. E. Parker, F. E. Schneider, and A. C. Schultz, editors, Multi-Robot Systems. From Swarms toIntelligent Automata Volume III , pages 65–77, Dordrecht, 2005. Springer Netherlands. ISBN978-1-4020-3389-6.[43] M. Vieira, R. Govindan, and G. Sukhatme. Scalable and practical pursuit-evasion with net-worked robots. Intelligent Service Robotics , 2:247–263, 10 2009.[44] M. E. Tipping. Sparse bayesian learning and the relevance vector machine. J. Mach. Learn.Res., 1:211–244, sep 2001. ISSN 1532-4435.[45] D. Wipf and B. Rao. Sparse bayesian learning for basis selection. IEEE Transactions on SignalProcessing , 52(8):2153–2164, 2004.[46] T. Moon. The expectation-maximization algorithm. IEEE Signal Processing Magazine , 13(6):47–60, 1996.[47] Y . Ma, R. Garnett, and J. Schneider. Active search for sparse signals with region sensing.Proceedings of the AAAI Conference on Artificial Intelligence , 31(1), Feb. 2017.[48] J. Wang, G. Robinson, and K. White. Generating viewsheds without using sightlines. 66, 012000.[49] S. D. Roth. Ray casting for modeling solids. Computer Graphics and Image Processing , 18(2):109–144, 1982. ISSN 0146-664X.11[50] M. F. Goodchild and J. Lee. Coverage problems and visibility regions on topographic surfaces.Annals of Operations Research , 18(1):175–186, Dec 1989. ISSN 1572-9338.[51] M. R. Travis, G. H. Elsner, W. D. Iverson, and C. G. Johnson. VIEWIT: computation ofseen areas, slope, and aspect for land-use planning. General Technical Report PSW-GTR-11,Pacific Southwest Research Station, Forest Service, U.S. Department of Agriculture, Berkeley,CA, 1975.[52] P. Fisher. First experiments in viewshed uncertainty: The accuracy of the viewshed area.Photogrammetric Engineering & Remote Sensing , 57, 01 1991.[53] P. F. Fisher. Extending the applicability of viewsheds in landscape planning. 
Photogrammetric Engineering and Remote Sensing, 52:1297–1302, 1996.

A Appendix

A.1 Glossary of Notations

B ∈ R^{M1×M2}: Sparse matrix representing the locations of targets in a 2D grid
β ∈ R^M: Flattened version of B, sparsely populated vector
M = M1 M2: Total number of cells in the grid
y_t^j: Observation vector for robot j at timestep t
X_t^j: Sensing matrix for robot j at timestep t
Q: Total number of grid cells visible to a robot
b_t^j: Terrain- and depth-aware noise vector for robot j at timestep t
P(L_t^j): Penalty function penalizing robot j for showing itself to targets, given a path of traversal
l_t^j: One-hot vector indicating the location of a robot j
R(β̃, D_t^j, X̃): The reward term in the optimization objective
P(l̃): The stealth penalty term in the optimization objective, given the potential location of the robot
X^k: Sensing matrix for the k-th target
D_t^j: Set of observations available to robot j at timestep t
T: Total number of sensing actions by all agents
β̃: Sampled set of target locations
Γ: Diagonal matrix of hidden variables, estimated using data
γ_m: Variance for the m-th entry of β
a_m, b_m: Parameters of the inverse gamma prior on γ_m
μ: Mean of the posterior distribution p(β | D_t^j, Γ)
V: Variance of the posterior distribution p(β | D_t^j, Γ)
X: Matrix formed by stacking all sensing actions in D_t^j
Σ: Diagonal matrix of terrain-aware noise variances
λ: Hyperparameter for the reward function
β̂(D_t^j ∪ (X_{t+1}, y_{t+1})): Expected estimate of β using all available measurements and the next candidate measurement
k̂: Number of non-zero entries in β̂
k̃: Number of non-zero entries in β̃
I(β̃, β̂): Indicator function for comparing entries in β̂ and β̃
γ: Hyperparameter controlling the tradeoff between goal selection and the stealth penalty
μ̂_vis^i: Posterior mean for visibility risk
φ(z): Error function
K: The number of targets in a search run
J: The number of search agents participating in a search run
F: The number of targets found during a run

A.2 Algorithm Pseudo-code

The algorithm is summarized in Alg. 1.

Algorithm 1 STAR Algorithm
Assume: Sensing model (1), sparse signal β, J agents
Set: D_0^j ← ∅, L_0^j ← {x^j, y^j} ∀ j ∈ {1,...,J}, γ_m = 1 ∀ m ∈ {1,...,M}
for t = 1,...,T do
    Wait for an agent to finish; for the free agent j:
    Sample β̃ ∼ p(β | D_t^j, Γ) = N(μ, V) from (4)
    X_t, l_t = argmax_{X̃, l̃} R(β̃, D_t^j, X̃) − γ P(l̃) from (6) and (8)
    Observe y_t given action X_t
    Update D_{t+1}^j = D_t^j ∪ (X_t, y_t) (robot observations)
    Update L_{t+1}^j = L_t^j ∪ (l_t) (robot path)
    Share (X_t, y_t)
    Estimate Γ = diag([γ_1,...,γ_M]) using (5)
end for

A.3 Real Hardware Performance

The STAR algorithm has been tested on the physical systems, as shown in the demo video². In the main paper we ablated and assessed the performance of the STAR algorithm against the state of the art from an algorithmic standpoint to show statistically significant superiority. Here, we provide some statistics from running the algorithm on physical systems.

² https://youtu.be/Fs1lv4y6Nq8

Figure 6: Planning time vs. search region size.

Fig. 6 shows a plot of the planning time for one decision of the STAR algorithm against the size of the search space. The planning time in search regions under a sq. km is around 10-15 secs. At 2.5 sq. km (the search region size in the paper), it rises to over a minute. The compute on the robot is a Nuvo-8108GC with an Intel Xeon E-2278GEL (Coffee Lake R) 2.0 GHz processor. In practice, the robot may start planning its next decision slightly before it expects to arrive at its next goal location, so this planning time does not impact search performance.
To isolate such engineering optimisationsfrom algorithmic assessment we evaluated the algorithms on their sample complexity rather thanwall clock time.A.4 Terrain Visibility PriorSince our use case is outdoor spaces we use Depth Elevation Maps (DEMs) to represent the terrain(See Fig. 4) since it is more memory efficient than voxels.To determine what portion of the map is visible given location and direction of facing we use areference plane-based approach [48] which can compute the viewable region in the map given anypoint in constant time as opposed to ray casting methods [49, 50, 51, 52, 53] which takes variabletime. We assume that the topography remains unchanged over the course of the run; however, ourphysical systems are capable of dynamically updating the topography using point clouds generatedby stereo cameras. Hence, having a constant time algorithm for viewshed computation allows forefficient onboard updating of the visibility map if there are differences between the terrain prior andthe dynamic observations made by the robot on the ground.Once we have the viewable region from a given point on the map we discretise it and apply viewa-bility limits on in accordance with our physical system as shown in Fig. 3 and described ahead.14A.5 UGV Sensing Action modelWe use a array of 5MP RGB cameras with an effective lateral field-of-view (FOV) of 193◦for theground vehicles. This allows the perception system to pick up detections several hundred metresout.We model the sensing action model in the grid representation as a trapezium of fifteen cells alongthe bearing of the UGV as shown in Fig. 2a. Its full extent is upto 210 m−300min front of the robotsubject to occlusions. The motivation behind this is that beyond a certain distance even if the terrainis in line of sight, it is not possible to make accurate detections of targets as they are just a few pixelsin the image.A.6 Target Sensing Action ModelFig. 2b shows a representative example of the viewshed of the targets. Since we don’t have informa-tion on the direction of facing of the targets, we model the FOV such that targets see in all directionssubject to the topography and the 210 m−300mviewing limit but without depth aware noise. Fig. 3shows an example of the viewshed computed at an example location in the map desert mountainousmap (Fig. 1) assuming a 360◦FOV .A.7 Visibility Risk Aware Path PlanningSince our robot and target viewing models are symmetric, it implies that detecting a target is ac-companied by the target detecting the search agent, however being identified once does not meanthe task is over, there could be more targets to locate and known targets should be avoided for theremainder of the search. We expect to minimize the stealth penalty over the course of the run butdon’t expect it to be zero. As an aside, were we to employ asymmetric viewing models such thatviewing targets without being viewed was possible, we might aim to have zero risk policies but weoutline this for future work.In order for the search agents to respect the visibility risk map when path planning (See Fig. 1c), weuse the OMPL planner [13] on the physical system and for the realistic simulation and the A-starplanner [14] for our simplified simulations. Both planners can plan paths within time constraintsand subject to state costs, which in our case is the visibility risk map, and an occupancy map ofobstacles.15 |
a0mFRgadGO | Bootstrap Your Own Skills: Learning to Solve NewTasks with Large Language Model GuidanceJesse Zhang1, Jiahui Zhang1, Karl Pertsch1, Ziyi Liu1,Xiang Ren1,Minsuk Chang2,Shao-Hua Sun3,Joseph J. Lim41University of Southern California,2Google AI,3National Taiwan University,4KAISTjessez@usc.eduAbstract: We propose BOSS, an approach that automatically learns to solve newlong-horizon, complex, and meaningful tasks by growing a learned skill librarywith minimal supervision. Prior work in reinforcement learning requires expertsupervision, in the form of demonstrations or rich reward functions, to learn long-horizon tasks. Instead, our approach BOSS ( BOotstrapping your own SkillS)learns to accomplish new tasks by performing “skill bootstrapping,” where anagent with a set of primitive skills interacts with the environment to practice newskills without receiving reward feedback for tasks outside of the initial skill set.This bootstrapping phase is guided by large language models (LLMs) that informthe agent of meaningful skills to chain together. Through this process, BOSSbuilds a wide range of complex and useful behaviors from a basic set of primi-tive skills. We demonstrate through experiments in realistic household environ-ments that agents trained with our LLM-guided bootstrapping procedure outper-form those trained with na ̈ıve bootstrapping as well as prior unsupervised skillacquisition methods on zero-shot execution of unseen, long-horizon tasks in newenvironments. View website at clvrai.com/boss.1 IntroductionRobot learning aims to equip robots with the capability of learning and adapting to novel scenarios.Popular learning approaches like reinforcement learning (RL) excel at learning short-horizon taskssuch as pick-and-place [1, 2, 3], but they require dense supervision (e.g., demonstrations [4, 5, 6, 7]or frequent reward feedback [8, 9, 10]) to acquire long-horizon skills.In contrast, humans can learn complex tasks with much less supervision—take, for example, theprocess of learning to play tennis: we may initially practice individual skills like forehand andbackhand returns under close supervision of a coach, analogous to RL agents practicing simple pick-place skills using demonstrations or dense rewards. Yet importantly, in between coaching sessions,tennis players return to the tennis court and practice to combine the acquired basic skills into long-horizon gameplay without supervision from the coach. This allows them to develop a rich repertoireof tennis-playing skills independently and perform better during their next match.Can we enable agents to similarly practice and expand their skills without close human supervision?We introduce BOSS ( BOotstrapping your own SkillS), a framework for learning a rich repertoireof long-horizon skills with minimal human supervision (see Figure 1). Starting from a base setof acquired primitive skills, BOSS performs a skill bootstrapping phase in which it progressivelygrows its skill repertoire by practicing to chain skills into longer-horizon behaviors. BOSS enablesus to train generalist agents, starting from a repertoire of only tens of skills, to perform hundreds oflong-horizon tasks without additional human supervision.A crucial question during practice is which skills are meaningful to chain together: randomly chain-ing tennis moves does not lead to meaningful gameplay; similarly, random chains of pick-placemovements do not solve meaningful household tasks. 
Thus, in BOSS we propose to leverage therich knowledge captured in large language models (LLMs) to guide skill chaining: given the chain7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Skill LibraryPut 🥖 in ♨Pick up 🥖Pick up 🍎Serve 🥖Serve baked 🥖Practice in EnvironmentPick up 🥖LLMSample Initial SkillGuide Next Skill SelectionPolicyPut 🥖 in ♨LLMPolicyServe 🥖Update AgentPolicyName New SkillPick up 🥖Put 🥖 in ♨Serve 🥖LLMAdd New Skill to Library(a) Skill Bootstrapping(b) Target TaskExecutionExecute New TaskMake 🥞Policy withBootstrapped Skills CriticV(s,z) CriticV(s,z)Figure 1: BOSS learns to execute a large set of useful, long-horizon skills with minimal supervisionby performing LLM-guided skill bootstrapping .(a): The agent starts with an initial skill library.During bootstrapping, it practices chaining skills into new long-horizon behaviors using guidancefrom an LLM. The collected experience is used to update the policy. Newly discovered skill chainsare summarized with an LLM and added as new skills into the library for further bootstrapping.Thus, the agent’s skill repertoire grows over time. (b):After bootstrapping, we condition the policyon novel instructions and show execution in the environment using the bootstrapped skill repertoire.of executed skills so far, the LLM predicts a distribution over meaningful next skills to sample.Importantly, in contrast to existing approaches that leverage the knowledge captured in LLMs forlong-horizon task planning [11, 12, 13, 14], BOSS can use unsupervised environment interactionstopractice how to chain skills into long-horizon task executions; this practice is crucial especiallyif the target environment differs from the ones used to train the base skill set. This results in a morerobust policy that can compensate for accumulating errors from the initial skill repertoire.We validate the effectiveness of our proposed approach in simulated household environments fromthe ALFRED benchmark and on a real robot. Experimental results demonstrate that BOSS canpractice effectively with LLM guidance, allowing it to solve long-horizon household tasks in novelenvironments which prior LLM-based planning and unsupervised exploration approaches fail at.2 Preliminaries and Related WorkReinforcement Learning Reinforcement learning (RL) algorithms aim to learn a policy π(a|s)that maximizes the expected discounted return Ea∼π,P[PtγtR(st, at, st+1)]in a Markov DecisionProcess M= (S,A, P,R, γ), where SandAare state and action spaces, P:S × A × S → R+represents the transition probability distribution, R:S × A × S → Rdenotes the reward function,andγis the discount factor. Temporal-difference algorithms are a class of RL algorithms that alsolearn critic functions, denoted Vπ(s)orQπ(s, a), which represent future discounted returns whenfollowing the policy at state sor after taking action afrom state s, respectively [15]. Standard RLalgorithms struggle with learning long-horizon tasks and can be prohibitively sample-inefficient.Skill-based RL To solve long-horizon tasks, prior works have focused on pre-training skills , short-horizon behaviors that can be re-combined into long-horizon behaviors [9, 16, 17, 18, 19]. Theseskills can be represented as learned options [16, 18], sub-goal setting and reaching policies [20, 21],a set of discrete policies [22, 23], or continuous latent spaces that represent behaviors [9, 10, 24, 25,26]. Yet, most of these approaches need expert supervision (e.g., demonstrations [4, 5, 6, 7, 20, 21,27], frequent reward feedback [9, 10, 23]). 
In contrast, BOSS learns to execute long-horizon taskswith minimal human supervision via skill bootstrapping.Unsupervised RL To learn skills without human supervision, recent works have introduced manyunsupervised RL objectives, e.g., based on curiosity [28], contrallability [29, 30], and behavior orstate diversification [31, 32, 33, 34, 35]. Because these works learn skills from scratch and explorewithout supervision, they generally focus on locomotion tasks where most behaviors agents can ex-2plore, such as different running gaits, are already meaningful. Few works demonstrate learning ofmanipulation tasks, but either require hand-crafted state or action spaces [28] or remain constrainedto learning simple, short-horizon skills [36, 37]. BOSS makes two improvements to enable boot-strapping of long-horizon tasks: (1) We start from a base repertoire of language-conditioned skillsto enable coherent, long-horizon exploration. (2) We leverage an LLM to guide exploration towardsmeaningful skill-chains within the exponential number of possible long-horizon behaviors.Language in RL Prior works have employed language to parameterize rich skill sets to train multi-task RL agents [38, 39, 40, 41, 42, 43]. Recent progress in training LLMs has enabled approachesthat combine LLMs with pre-trained language-conditioned policies to perform open-loop planningover pre-trained skills [11, 12, 13, 14, 44]. These works do not perform any policy training orfinetuning when planning with the LLMs; but instead use the LLMs as top-down planners whoseplans are given to fixed low-level skill policies to execute. In contrast, BOSS pratices chainingbehaviors in the environment during skill bootstrapping and thus learns a more robust, closed-looppolicy. This leads to substantially higher success rate for executing long-horizon tasks.ELLM [45], LMA3 [46], and IMAGINE [47] are closest to our work. ELLM and LMA3 both use anLLM to generate tasks, with the former requiring a captioning model to reward agents and the latteradditionally using the LLM to hindsight label past agent trajectories for task completion; instead, weexpand upon a learned skill repertoire, allowing for building skill chains while automatically reward-ing the agent based on the completion of skills in the chain. Meanwhile, IMAGINE uses languageguidance to generate exploration goals, requiring a “social partner” that modifies the environmentaccording to desired goals. In realistic settings, this social partner requires extensive human effortto design. BOSS instead utilizes LLMs to propose goals in a target environment automatically.3 MethodOur method, BOSS ( BOotstrapping your own SkillS), automatically learns to solve new long-horizon, complex tasks by growing a learned skill library with minimal supervision. BOSS consistsof two phases: (1) it acquires a base repertoire of skills (Section 3.1) and then (2) it practices chain-ing these skills into long-horizon behaviors in the skill bootstrapping phase (Section 3.2). BOSS canthen zero-shot execute novel natural language instructions describing complex long-horizon tasks.3.1 Pre-training a Language-Conditioned Skill PolicyWe assume access to a dataset DL={τz1, τz2, τz3, ...,}where τzidenotes a trajectory of (s, a, s′, r)tuples and ziis a freeform language description of the trajectory. We also assume access to a sparsereward function for the primitive skills, e.g., an object detector that can detect if an object is placedin the correct location. 
3.2 Skill Bootstrapping

After learning the language-conditioned primitive skill policy, we perform skill bootstrapping—the agent practices by interacting with the environment, trying new skill chains, then adding them back into its skill repertoire for further bootstrapping. As a result, the agent learns increasingly long-horizon skills without requiring additional supervision beyond the initial set of skills.

Sampling initial skills. At the start of bootstrapping, the skill repertoire Z = {z1, z2, ...} is initialized to the set of pre-trained base skills. Upon initializing the agent in the environment at state s1, we must sample an initial skill. Intuitively, the skill we choose should be executable from s1, i.e., have a high chance of success. Therefore, in every bootstrapping episode, we sample the initial skill according to probabilities generated from the pre-trained value function, V(s1, z). We then try to execute the sampled skill until a timeout threshold is reached.

Guiding Skill Chaining via LLMs. If the first skill execution succeeds, the next step is constructing a longer-horizon behavior by chaining together the first skill with a sampled next skill. Naïvely choosing the next skill by, for example, sampling at random will likely result in a behavior that is not useful for downstream tasks. Even worse, the likelihood of picking a bad skill chain via random sampling increases linearly with the size of the skill repertoire and exponentially with the length of the skill chain. For a modestly sized repertoire with 20 skills and a chain length of 5, there are 20^5 = 3.2M possible skill chains, only a few of which are likely meaningful.

Thus, instead of randomly sampling subsequent skills, we propose to use large language models (LLMs) to guide skill selection. Prior work has demonstrated that modern LLMs capture relevant information about meaningful skill chains [11, 12, 14]. Yet, in contrast to prior top-down LLM planning methods, we explore a bottom-up approach to learning long-horizon tasks: by allowing our agent to iteratively sample skill chains and practice their execution in the environment, we train more robust long-horizon task policies that achieve higher empirical success rates, particularly when generalizing to unseen environments (see Section 4).

Figure 2: A shortened LLM prompt: "Predict the next skill from the following list: Pick up the mug; Turn on the lamp; Put the mug in the coffee machine; ... 1: Pick up the mug. 2:". See the full prompt in Appendix A.2.

To sample next skills, we prompt the LLM with the current skill repertoire and the chain of skills executed so far. For example, if the agent has just completed "Pick up the mug", we prompt the LLM with the list of skill annotations in Z and then the following prompt: "1. PICK UP THE MUG. 2." (see Figure 2). The LLM then proposes the next skill by generating text following the prompt. We then map this predicted next skill string back to the set of existing skills in Z by finding the nearest neighbor in Z to the proposed skill annotation in the embedding space of a pre-trained sentence embedding model [49]. To encourage diversity in the practiced skill chains, we repeat this process N times and sample the true next skill from the distribution of LLM-assigned token likelihoods. Finally, if the sampled skill is successfully executed, we repeat the same process for sampling the following skill.¹
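The proposal step just described can be sketched as follows. This is a simplified, hypothetical implementation: the real prompt includes the in-context examples of Appendix A.2, and generate_with_scores stands in for whatever LLM interface returns generations together with their average token probabilities. The embedding model name matches the one reported in Appendix B (all-mpnet-base-v2 [49]).

```python
# Sketch of the next-skill proposal: sample N continuations from an LLM, map each
# back to the closest skill annotation in the library with a sentence-embedding
# model, then sample one proposal weighted by the LLM's sequence likelihoods.
import random
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-mpnet-base-v2")

def propose_next_skill(generate_with_scores, skill_library, executed_skills, n_samples=10):
    prompt = ("Predict the next skill from the following list: "
              + "; ".join(skill_library) + "\n"
              + "".join(f"{i + 1}: {s}\n" for i, s in enumerate(executed_skills))
              + f"{len(executed_skills) + 1}:")
    generations, scores = generate_with_scores(prompt, n_samples)  # texts + avg token probs

    lib_emb = embedder.encode(skill_library, convert_to_tensor=True)
    proposals = []
    for text in generations:
        sims = util.cos_sim(embedder.encode(text, convert_to_tensor=True), lib_emb)[0]
        proposals.append(skill_library[int(sims.argmax())])        # nearest library skill

    total = sum(scores)
    return random.choices(proposals, weights=[s / total for s in scores], k=1)[0]
```

Appendix B.3 additionally prepends the main verb of each annotation before embedding to avoid noun-dominated matches; that step is omitted here for brevity.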
Learning new skills. Once an episode concludes, either because a skill times out or because a defined maximum skill chain length is reached, we add the collected data back into the replay buffer with a sparse reward of 1 for every completed skill. For example, if an attempted skill chain contains a total of 3 skills, then the maximum return of the entire trajectory is 3. We then continue policy training via the same offline RL algorithm used to learn the primitive skills—in our case, IQL [48]. Finally, to maximize data efficiency, we relabel the language instructions for the collected episode upon adding it to the replay buffer. Specifically, following prior work [42], we aggregate consecutive skills into composite skill instructions using the same LLM as for skill sampling. We then add the composite skill instruction and associated experience to the replay buffer and also add it to our skill repertoire for continued bootstrapping. We store new trajectories with both their lowest-level annotations and the LLM-generated composite instructions so the agent can fine-tune its base skills while learning longer-horizon skill chains online. To ensure the agent does not forget its initial skill repertoire, we sample data from the offline dataset D_L and new data at equal proportions in each batch.

Algorithm 1 BOSS Pseudocode
1: Train policy π on initial skill repertoire
2: for skill bootstrapping episode do
3:     Sample initial skill z and execute
4:     while not episode timeout do
5:         Sample next skill from LLM and execute
6:     Construct composite skill and add to repertoire
7:     Update policy π

In sum, we iterate through these three steps to train a policy during the skill bootstrapping phase: (1) Sampling initial skills using the value function. (2) Sampling next skills by prompting the LLM with skills executed so far. (3) Adding learned skills to the skill library and training on collected agent experience. Algorithm 1 presents a brief overview. The implementation details can be found in Appendix B, and Algorithm 2 in the Appendix describes the full algorithm.

¹ Note that we do not treat invalid LLM skill chain proposals, like asking the agent to "put keys in a safe" when it has not yet picked any keys up, in a special manner. If the proposal is poor, the agent will fail and the value of the skill will drop with training, making it unlikely to sample the skill chain again.
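Putting the three steps together, below is a compact, stub-based sketch of one bootstrapping episode in the spirit of Algorithm 1 (and Algorithm 2 in the Appendix). Everything passed in—env, policy, value_fn, rollout, sample_next_skill, summarize_chain, replay_buffer—is a placeholder, not the released implementation; the default maximum chain length of 2 follows the choice reported in Appendix B.3.

```python
# Stub-based sketch of one skill-bootstrapping episode: sample an initial skill with
# the value function, roll it out, ask the LLM for the next skill while executions
# keep succeeding, then relabel and store the experience and the new composite skill.
import numpy as np

def bootstrapping_episode(env, policy, value_fn, skill_library, replay_buffer,
                          rollout, sample_next_skill, summarize_chain,
                          max_chain_len=2):
    s = env.reset()
    # Initial skill ~ softmax over V(s, z): prefer skills likely executable from s.
    v = np.array([value_fn(s, z) for z in skill_library], dtype=float)
    p = np.exp(v - v.max()); p /= p.sum()
    chain, trajectories = [np.random.choice(skill_library, p=p)], []

    while len(trajectories) < max_chain_len:
        success, traj = rollout(env, policy, chain[-1])            # execute current skill
        trajectories.append(traj)
        if not success or len(trajectories) == max_chain_len:
            break
        chain.append(sample_next_skill(skill_library, chain))      # LLM-guided proposal

    executed = chain[:len(trajectories)]
    # Store primitive segments plus an LLM-summarized composite annotation; the sparse
    # per-skill reward of 1 is assumed to be recorded inside the rollout data.
    composite = summarize_chain(executed)
    replay_buffer.add(trajectories, labels=executed, composite_label=composite)
    if len(executed) > 1:
        skill_library.append(composite)   # grow the repertoire for later episodes
    return composite
```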
Figure 3: Environments. (a) The ALFRED environment is a benchmark for learning agents that can follow natural language instructions to fulfill household tasks. This illustration was drawn from Shridhar et al. [50] with permission. (b) Real-world Jaco arm: our real-world kitchen manipulation tabletop environment based on RGB image inputs.

4 Experimental Evaluation

The goal of our experiments is to test BOSS's ability to acquire long-horizon, complex, and meaningful behaviors. We compare to unsupervised RL and zero-shot planning methods in two challenging, image-based control environments: solving household tasks in the ALFRED simulator [50] and kitchen manipulation tasks with a real-world Jaco robot arm. Concretely, we aim to answer the following questions: (1) Can BOSS learn a rich repertoire of useful skills during skill bootstrapping? (2) How do BOSS's acquired skills compare to skills learned by unsupervised RL methods? (3) Can BOSS directly be applied on real robot hardware?

4.1 Experimental Setup

ALFRED Environment. We test our approach in the ALFRED simulator [50] (see Figure 3a), since its 100+ floorplans with many interactable objects provide a rich environment for learning numerous long-horizon household tasks. We leverage a modified version of the ALFRED simulator [?] that allows for online RL interactions via a gym interface with 300×300 egocentric RGB image observations. The action space consists of 12 discrete action choices (e.g., turn left, look up, pick up object), along with 82 discrete object types, first proposed by Pashevich et al. [51]. To train the skills in our initial skill library, we leverage the ALFRED dataset of 73k primitive skill demonstrations with language instructions. For bootstrapping we use four unseen floorplans. In each floorplan we define 10 evaluation tasks, each of which requires 2 to 8 primitive skills to complete.

Real-Robot Kitchen Manipulation. We evaluate our method with a real-robot manipulation setup in which a Kinova Jaco 2 robot arm needs to solve stylized kitchen tasks in a table-top environment (see Figure 3b). The observations consist of concatenated RGB images from a third-person and a wrist-mounted camera. The robot is controlled with continuous end-effector displacements and discrete gripper open/stay/close commands at a frequency of 10 Hz. To train the initial skills, we collect a dataset of 6k language-annotated primitive skill demonstrations via human teleoperation. We perform bootstrapping and evaluate the agents in a table setup with unseen object arrangements.

Training and Evaluation Procedure. We equip the policy with the initial primitive skill library by training it for 150 epochs on the respective pre-collected demonstration datasets using IQL [48] (see Section 3.1). We then perform 500,000 and 15,000 steps (∼17 min of robot interaction time) of online skill bootstrapping in the respective unseen evaluation environments of ALFRED and the real robot setup. Note that for ALFRED we train separate agents for each floorplan, mimicking a scenario in which an agent is dropped into a new household and acquires skills with minimal supervision. After bootstrapping, we evaluate the trained agents zero-shot on the held-out evaluation tasks by conditioning the policy on the respective language instruction. To perform well in this evaluation setting, an agent needs to acquire a large number of useful skills during online environment interactions.
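For reference, the zero-shot evaluation protocol just described amounts to a loop of the following shape. This is a schematic sketch under the assumption of a gym-style interface whose sparse reward marks each completed subtask; the reset/step/act signatures are placeholders rather than the benchmark's actual API.

```python
# Schematic zero-shot evaluation: condition the bootstrapped policy on a held-out
# natural-language task and count how many of its subtasks are completed.
def evaluate_task(env, policy, instruction, max_steps=200):
    s = env.reset(task=instruction)
    completed = 0
    for _ in range(max_steps):
        a = policy.act(s, instruction)        # language-conditioned action selection
        s, r, done, info = env.step(a)
        completed += int(r > 0)               # sparse reward marks a finished subtask
        if done:
            break
    return completed
```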
Baselines. We compare BOSS to prior works that can learn a wide range of skills with minimal supervision: (1) unsupervised RL approaches that, like BOSS, learn from environment interactions without additional feedback, and (2) large language model based planners that leverage the knowledge captured in large pre-trained language models to "bootstrap" given skill libraries into long-horizon behaviors. Concretely, we compare to the following approaches:

• CIC [52]: The SoTA method on the unsupervised RL benchmark [53]; it expands its skill library with a contrastive alignment objective during bootstrapping. For a fair comparison, we pre-train CIC's policy on the same primitive skill dataset used in BOSS before unsupervised bootstrapping.
• SayCan [12]: Leverages a pre-trained LLM to break down a given task into step-by-step instructions, i.e., "primitive skills", by ranking skills from a given library. We implement SayCan using the same primitive skill policy pre-trained via offline RL as in BOSS. We use the same LLM as our method, and adapt SayCan's LLM prompt for our environment. Notably, SayCan and similar LLM planning work have no mechanism for fine-tuning to new environments.
• SayCan+P: To evaluate the effects of online bootstrapping vs. top-down LLM planning in isolation, we evaluate a SayCan variant that uses our LLM-based skill proposal mechanism, which leverages the LLM to generate step-by-step instructions in place of SayCan's original skill ranking method. We found this to perform better than standard SayCan in our evaluation.
• SayCan+PF: SayCan+P on policies fine-tuned in the target environments for the same number of steps as BOSS by sampling single skills with the value function and learning to execute them. This isolates the effect of BOSS learning to chain skills in the target environments.

Additionally, we evaluate (1) an Oracle that finetunes the pre-trained primitive skill policy directly on the target tasks, serving as an upper bound, and (2) a pre-trained primitive skill policy without any bootstrapping (No Bootstrap), serving as a performance lower bound.

All methods utilize the same base primitive skill policy pre-trained on the same demonstration data. We implement a transformer policy and critic architecture based on Pashevich et al. [51] trained with the IQL algorithm [48]. All results reported are inter-quartile means and standard deviations over 5 seeds [54]. Finally, SayCan (and its variants) and BOSS all use the open-source, 13-billion-parameter LLaMA-13B LLM [55]. For more baseline implementation and training details, see Appendix B.

4.2 BOSS Bootstrapping Learns Useful Skills

Table 1: Inter-quartile means (IQMs) and standard deviations of oracle-normalized returns, i.e., number of solved subtasks, broken down by task length, across the ALFRED evaluation tasks. We also report oracle-normalized success rate in the last column. We do not report results for length 6 and 8 tasks since not even the oracle was able to learn these.

Method       | Length 2    | Length 3    | Length 4    | Average Return | Success
No Bootstrap | 0.03 ± 0.02 | 0.05 ± 0.07 | 0.08 ± 0.09 | 0.03 ± 0.01    | 0.00 ± 0.00
CIC [52]     | 0.02 ± 0.02 | 0.25 ± 0.08 | 0.18 ± 0.07 | 0.11 ± 0.01    | 0.00 ± 0.00
SayCan [12]  | 0.06 ± 0.02 | 0.14 ± 0.00 | 0.10 ± 0.12 | 0.06 ± 0.00    | 0.00 ± 0.00
SayCan+P     | 0.08 ± 0.04 | 0.28 ± 0.00 | 0.20 ± 0.15 | 0.12 ± 0.01    | 0.00 ± 0.00
SayCan+PF    | 0.64 ± 0.06 | 0.49 ± 0.20 | 0.59 ± 0.02 | 0.57 ± 0.05    | 0.00 ± 0.00
BOSS (ours)  | 0.47 ± 0.12 | 0.59 ± 0.13 | 0.81 ± 0.13 | 0.57 ± 0.06    | 0.57 ± 0.14
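As a side note on the reporting protocol, the inter-quartile mean used in Table 1 (following [54]) is a 25%-trimmed mean over the per-seed results; a minimal sketch, assuming the oracle-normalized returns have already been collected per seed:

```python
# The inter-quartile mean (IQM) discards the bottom and top 25% of per-seed results
# and averages the rest. Oracle normalization here is an assumption about how the
# inputs were pre-processed; the numbers in the usage line are made up.
import numpy as np
from scipy.stats import trim_mean

def iqm(per_seed_returns, oracle_return):
    normalized = np.asarray(per_seed_returns, dtype=float) / oracle_return
    return trim_mean(normalized, proportiontocut=0.25)   # mean of the middle 50%

print(iqm([1.2, 1.5, 1.9, 2.1, 2.4], oracle_return=3.0))  # ≈ 0.61
```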
ALFRED. Overall, BOSS achieves superior performance to all non-oracle baselines, with better oracle-normalized returns on the longer, length 3 and 4 tasks than the best baselines, and BOSS is the only method to achieve non-zero success rates across all task lengths. From Table 1, the gap between BOSS and the best baselines is largest on the length 4 tasks, indicating the benefit of BOSS' LLM-guided skill bootstrapping in learning difficult, longer-horizon tasks without task supervision. CIC can make some progress on some length 3 and 4 tasks, but its contrastive objective generally fails to finetune the primitive skills into meaningful long-horizon skills. SayCan+P performs better than SayCan, indicating that our proposal mechanism extracts a more meaningful distribution of skills from the LLM, but even SayCan+P falls far short of BOSS' performance, as it is not robust to the execution failures incurred by directly using the pre-trained policy in unseen floor plans. SayCan+PF performs better as it first fine-tunes its policies, but it still achieves a 0% success rate compared to BOSS' 57%. Additional analyses in Appendix C.1 demonstrate that 95.8% of all unsuccessful SayCan+P trajectories are caused by policy execution failures. SayCan+PF is only slightly better: 95.0% are caused by policy execution failures, indicating that naïve fine-tuning in the target environment is ineffective for solving long-horizon tasks. Since BOSS learns to finetune individual primitive skills and transition between skills using a closed-loop policy, it performs much better on complex, long-horizon language-specified tasks in unseen environments.

Figure 4: Left: The number of subtasks in skills executed during skill bootstrapping by BOSS in one of the unseen ALFRED floorplans. BOSS progressively learns longer skill chains throughout the course of training. Right: The number of newly acquired skills by BOSS throughout training.

Figure 5: Example skill chains (light gray) and new skill summaries (dark grey) learned by BOSS during skill bootstrapping. LLM guidance ensures meaningful skill chains and summaries. The depicted examples are:
• (1) Go to the area between the cabinets and the toilet (2) Pick up the empty toilet paper tube behind the toilet brush (3) Place the toilet paper tube upright to the left of the full toilet paper roll (4) Close the cabinet door → "Put the empty toilet paper tube next to the full toilet paper roll."
• (1) Take the apple on the right from the sink (2) Pick up the knife from the counter (3) Cut the apple into pieces (4) Put the apple on the right of the statue and in front of the salt → "Cut the apple and put it on the right of the statue."
• (1) Pick up the pillow off of the seat of the blue chair (2) Put the pillow vertically on the couch to the left of the newspaper → "Put the pillow on the couch next to the newspaper."
• (1) Pick up the white pencil on the desk (2) Place the white pencil on the desk near the books (3) Pick up the books from the bed (4) Turn on the lamp → "Place the white pencil on the desk next to the books and then look at the book from the bed under the lamp light."

We display qualitative examples of a length 2 and a length 3 task in Figure 10 in the appendix, where we can see that BOSS successfully completes the tasks whereas SayCan suffers from execution failures, getting stuck while attempting to manipulate objects, and CIC navigates around performing random behaviors (Figure 10a) or gets stuck navigating around objects (Figure 10b).
We show qualitative examples of learned skills in Figure 5 and perform additional experiments and analysis in Appendix C.1.

Table 2: Success rates, split by task length, across the 4 robot evaluation tasks in an unseen table arrangement.

Method          | Length 2    | Length 4
ProgPrompt [14] | 0.65 ± 0.15 | 0.00 ± 0.00
BOSS (ours)     | 0.50 ± 0.30 | 0.15 ± 0.05

Real Robot. In our real-world experiments, we compare BOSS to ProgPrompt [14], an LLM planning method similar to SayCan that has been extensively evaluated on real-world tabletop robot manipulation environments similar to ours. We also augment it with prompt examples similar to ours and our skill proposal mechanism. Here, we evaluate on 4 tasks, 2 of length 2 and 2 of length 4, after performing bootstrapping. Results in Table 2 demonstrate that both methods perform similarly on length 2 tasks, but only BOSS achieves a nonzero success rate on the more difficult length 4 tasks, as it is able to learn to chain together long-horizon skills in the new environment. See Appendix C.2 for more detailed task information.

4.2.1 Ablation Studies

To better analyze the effect of our core contribution, the usage of LLM guidance during skill bootstrapping, we compare to the following variants of our approach:

• BOSS-OPT1: BOSS bootstrapping with a weaker 1-billion-parameter LLM, OPT-1 [56].
• BOSS-Rand: An ablation of our approach BOSS that uses no LLM guidance during skill bootstrapping and simply selects the next skill at random from the current skill library.

Table 3: ALFRED ablation returns.

Method      | Length 2    | Length 3    | Length 4    | Average
BOSS (ours) | 0.47 ± 0.12 | 0.59 ± 0.13 | 0.81 ± 0.13 | 0.57 ± 0.06
BOSS-OPT1   | 0.39 ± 0.08 | 0.36 ± 0.07 | 0.56 ± 0.08 | 0.49 ± 0.07
BOSS-Rand   | 0.32 ± 0.03 | 0.29 ± 0.11 | 0.61 ± 0.16 | 0.43 ± 0.06

We report results in Table 3. The analysis shows the importance of accurate LLM guidance during skill bootstrapping for learning useful skills. Using an LLM with lower performance (OPT-1) results in degraded overall performance. Yet, bootstrapping without any LLM guidance performs even worse. Interestingly, the performance gap between BOSS and its variants widens for longer task lengths. Intuitively, the longer the task, the more possible other, less useful tasks of the same length could be learned by the agent during bootstrapping. Thus, accurate LLM guidance is particularly helpful for long tasks.

Figure 6: Skill library size during bootstrapping (skill library size vs. environment timesteps, for BOSS and BOSS-Rand).

To further analyze this, we compare the sizes of the learned skill libraries between BOSS bootstrapped with LLaMA-13B guidance vs. random skill selection (BOSS-Rand) in Figure 6. Perhaps surprisingly, the random skill chaining ablation learns more skills than BOSS – its skill library grows faster during bootstrapping. Yet, Table 3 shows that it has lower performance. This indicates that while BOSS-Rand learns many skills, it learns less meaningful skills. A qualitative analysis supports this intuition: many of the learned skills contain repetitions and meaningless skill chains. This underlines the importance of LLM guidance during skill bootstrapping. Furthermore, the positive correlation between the capability of the guidance LLM (1B → 13B parameters) and the evaluation task performance suggests that future, even more powerful LLMs can lead to even better skill bootstrapping.

5 Discussion

We propose BOSS, an approach that learns a diverse set of long-horizon tasks with minimal supervision via LLM-guided skill bootstrapping.
Starting from an initial library of skills, BOSS acquiresnew behaviors by practicing to chain skills while using LLMs to guide skill selection. We demon-strate in a complex household simulator and real robot manipulation tasks that BOSS can learn moreuseful skills during bootstrapping than prior methods.Limitations. While BOSS learns a large repertoire of skills with minimal supervision, it still haslimitations that prevent it from truly fulfilling the vision of agents autonomously acquiring skills innew environments. BOSS requires environment resets between bootstrapping episodes, which arecurrently performed by a human in our real world experiments. Also, we require success detectionfor each of the primitive skills during bootstrapping. Future research can investigate using advancesin reset-free RL [57, 58] to approach the goal of truly autonomous skill learning. Furthermore, BOSSgreedily proposes new skill chains one skill at a time, this greedy skill chaining process may not beoptimal for generating consistent long-horizon behaviors beyond a certain length. In future work,we plan to explore mechanisms to propose long-horizon tasks that are broken down to individualskills in conjunction with the greedy skill chaining of BOSS. Finally, BOSS is currently limited toskills that are combinations of skills in its initial skill library. Extending our work with unsupervisedRL [59, 52] techniques for learning new low-level skills is an exciting direction for future work.AcknowledgmentsWe thank Ishika Singh for her assistance with implementing and debugging ProgPrompt. This workwas supported by a USC Viterbi Fellowship, Institute of Information & Communications Technol-ogy Planning & Evaluation (IITP) grants (No.2019-0-00075, Artificial Intelligence Graduate SchoolProgram, KAIST; No.2022-0-00077, AI Technology Development for Commonsense Extraction,Reasoning, and Inference from Heterogeneous Data, No.2022-0-00984, Development of Artifi-cial Intelligence Technology for Personalized Plug-and-Play Explanation and Verification of Ex-planation), a National Research Foundation of Korea (NRF) grant (NRF-2021H1D3A2A03103683)funded by the Korean government (MSIT), the KAIST-NA VER hypercreative AI center, and Sam-sung Electronics Co., Ltd (IO220816-02015-01). Shao-Hua Sun was supported by the Yushan Fel-low Program by the Taiwan Ministry of Education and National Taiwan University.8References[1] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly,M. Kalakrishnan, V . Vanhoucke, and S. Levine. Scalable deep reinforcement learning forvision-based robotic manipulation. In Conference on Robot Learning , 2018.[2] D. Kalashnkov, J. Varley, Y . Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine, andK. Hausman. Mt-opt: Continuous multi-task robotic reinforcement learning at scale. arXiv ,2021.[3] A. X. Lee, C. Devin, Y . Zhou, T. Lampe, K. Bousmalis, J. T. Springenberg, A. Byravan,A. Abdolmaleki, N. Gileadi, D. Khosid, C. Fantacci, J. E. Chen, A. Raju, R. Jeong, M. Neunert,A. Laurens, S. Saliceti, F. Casarini, M. Riedmiller, R. Hadsell, and F. Nori. Beyond pick-and-place: Tackling robotic stacking of diverse shapes. In Conference on Robot Learning , 2021.[4] A. Gupta, V . Kumar, C. Lynch, S. Levine, and K. Hausman. Relay policy learning: Solvinglong horizon tasks via imitation and reinforcement learning. In Conference on Robot Learning ,2019.[5] K. Pertsch, Y . Lee, Y . Wu, and J. J. Lim. Demonstration-guided reinforcement learning withlearned skills. 
In Conference on Robot Learning , 2021.[6] F. Ebert, Y . Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. Finn, andS. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets,2021.[7] M. Heo, Y . Lee, D. Lee, and J. J. Lim. Furniturebench: Reproducible real-world benchmarkfor long-horizon complex manipulation. In Robotics: Science and Systems , 2023.[8] X. B. Peng, M. Chang, G. Zhang, P. Abbeel, and S. Levine. MCP: Learning composable hier-archical control with multiplicative compositional policies. In Neural Information ProcessingSystems , 2019.[9] K. Pertsch, Y . Lee, and J. J. Lim. Accelerating reinforcement learning with learned skill priors.InConference on Robot Learning , 2020.[10] A. Ajay, A. Kumar, P. Agrawal, S. Levine, and O. Nachum. Opal: Offline primitive discov-ery for accelerating offline reinforcement learning. In International Conference on LearningRepresentations , 2021.[11] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Ex-tracting actionable knowledge for embodied agents. arXiv preprint arXiv:2201.07207 , 2022.[12] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakr-ishnan, K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J.Ruano, K. Jeffrey, S. Jesmonth, N. Joshi, R. Julian, D. Kalashnikov, Y . Kuang, K.-H. Lee,S. Levine, Y . Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes,P. Sermanet, N. Sievers, C. Tan, A. Toshev, V . Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu,M. Yan, and A. Zeng. Do as i can and not as i say: Grounding language in robotic affordances.InarXiv preprint arXiv:2204.01691 , 2022.[13] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch,Y . Chebotar, P. Sermanet, N. Brown, T. Jackson, L. Luu, S. Levine, K. Hausman, and B. Ichter.Inner monologue: Embodied reasoning through planning with language models. In arXivpreprint arXiv:2207.05608 , 2022.[14] I. Singh, V . Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, andA. Garg. ProgPrompt: Generating situated robot task plans using large language models. InNeural Information Processing Systems , 2022.9[15] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction . MIT press, 2018.[16] R. S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework fortemporal abstraction in reinforcement learning. Artificial Intelligence , 112(1):181–211, 1999.ISSN 0004-3702.[17] M. Pickett and A. G. Barto. Policyblocks: An algorithm for creating useful macro-actions inreinforcement learning. In International Conference on Machine Learning , 2002.[18] P.-L. Bacon, J. Harb, and D. Precup. The option-critic architecture. In Association for theAdvancement of Artificial Intelligence , 2017.[19] T. Nam, S.-H. Sun, K. Pertsch, S. J. Hwang, and J. J. Lim. Skill-based meta-reinforcementlearning. In International Conference on Learning Representations , 2022.[20] A. Gupta, V . Kumar, C. Lynch, S. Levine, and K. Hausman. Relay policy learning: Solvinglong-horizon tasks via imitation and reinforcement learning, 2019.[21] A. Mandlekar, F. Ramos, B. Boots, S. Savarese, L. Fei-Fei, A. Garg, and D. Fox. Iris: Implicitreinforcement without interaction at scale for learning control from offline robot manipulationdata. In IEEE International Conference on Robotics and Automation , 2020.[22] S. Schaal. 
Dynamic movement primitives–a framework for motor control in humans and hu-manoid robotics. Adaptive Motion of Animals and Machines , 2006.[23] Y . Lee, S.-H. Sun, S. Somasundaram, E. Hu, and J. J. Lim. Composing complex skills bylearning transition policies with proximity reward induction. In International Conference onLearning Representations , 2019.[24] K. Hausman, J. T. Springenberg, Z. Wang, N. Heess, and M. Riedmiller. Learning an embed-ding space for transferable robot skills. In International Conference on Learning Representa-tions , 2018.[25] D. Trivedi, J. Zhang, S.-H. Sun, and J. J. Lim. Learning to synthesize programs as interpretableand generalizable policies. In Neural Information Processing Systems , 2021.[26] G.-T. Liu, E.-P. Hu, P.-J. Cheng, H.-Y . Lee, and S.-H. Sun. Hierarchical programmatic rein-forcement learning via learning to compose programs. In International Conference on MachineLearning , 2023.[27] L. X. Shi, J. J. Lim, and Y . Lee. Skill-based model-based reinforcement learning. In Conferenceon Robot Learning , 2022.[28] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning , 2017.[29] A. Sharma, S. Gu, S. Levine, V . Kumar, and K. Hausman. Dynamics-aware unsuperviseddiscovery of skills. arXiv , abs/1907.01657, 2019.[30] S. Park, K. Lee, Y . Lee, and P. Abbeel. Controllability-aware unsupervised skill discovery. InInternational Conference on Machine Learning , 2023.[31] J. Achiam, H. Edwards, D. Amodei, and P. Abbeel. Variational option discovery algorithms.arXiv , 2018.[32] B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is all you need: Learning skillswithout a reward function. In International Conference on Learning Representations , 2019.[33] D. Warde-Farley, T. V . de Wiele, T. Kulkarni, C. Ionescu, S. Hansen, and V . Mnih. Unsuper-vised control through non-parametric discriminative rewards. In International Conference onLearning Representations , 2019.10[34] K. Gregor, D. J. Rezende, and D. Wierstra. Variational intrinsic control. arXiv ,abs/1611.07507, 2016.[35] J. Zhang, H. Yu, and W. Xu. Hierarchical reinforcement learning by discovering intrinsicoptions. In International Conference on Learning Representations , 2021.[36] R. Sekar, O. Rybkin, K. Daniilidis, P. Abbeel, D. Hafner, and D. Pathak. Planning to explorevia self-supervised world models. In International Conference on Machine Learning , 2020.[37] R. Mendonca, O. Rybkin, K. Daniilidis, D. Hafner, and D. Pathak. Discovering and achievinggoals via world models. In Neural Information Processing Systems , 2021.[38] S.-H. Sun, T.-L. Wu, and J. J. Lim. Program guided agent. In International Conference onLearning Representations , 2020.[39] C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data. InRobotics: Science and Systems , 2021.[40] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learn-ing, 2021.[41] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. J. Joshi,R. Julian, D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manju-nath, I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao,M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. 
Singh, S. Sontakke, A. Stone, C. Tan, H. Tran,V . Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich. Rt-1:Robotics transformer for real-world control at scale, 2022.[42] J. Zhang, K. Pertsch, J. Zhang, and J. J. Lim. Sprint: Scalable policy pre-training via languageinstruction relabeling, 2023.[43] Z. Liu, J. Zhang, K. Asadi, Y . Liu, D. Zhao, S. Sabach, and R. Fakoor. Tail: Task-specificadapters for imitation learning with large pretrained models, 2023.[44] D. Shah, B. Osinski, B. Ichter, and S. Levine. Robotic Navigation with Large Pre-TrainedModels of Language, Vision, and Action. In Conference on Robot Learning , 2022.[45] Y . Du, O. Watkins, Z. Wang, C. Colas, T. Darrell, P. Abbeel, A. Gupta, and J. Andreas. Guidingpretraining in reinforcement learning with large language models. In International Conferenceon Machine Learning , 2023.[46] C. Colas, L. Teodorescu, P.-Y . Oudeyer, X. Yuan, and M.-A. C ˆot ́e. Augmenting autotelicagents with large language models. In Conference on Lifelong Learning Agents , 2023.[47] C. Colas, T. Karch, N. Lair, J.-M. Dussoux, C. Moulin-Frier, F. P. Dominey, and P.-Y . Oudeyer.Language as a cognitive tool to imagine goals in curiosity driven exploration. In Neural Infor-mation Processing Systems , 2020.[48] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit q-learning.InInternational Conference on Learning Representations , 2022.[49] N. Reimers and I. Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Empirical Methods in Natural Language Processing , 2019.[50] M. Shridhar, J. Thomason, D. Gordon, Y . Bisk, W. Han, R. Mottaghi, L. Zettlemoyer, andD. Fox. ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks.InComputer Vision and Pattern Recognition , 2020.11[51] A. Pashevich, C. Schmid, and C. Sun. Episodic Transformer for Vision-and-Language Navi-gation. In ICCV , 2021.[52] M. Laskin, H. Liu, X. B. Peng, D. Yarats, A. Rajeswaran, and P. Abbeel. CIC: Contrastiveintrinsic control for unsupervised skill discovery, 2022.[53] M. Laskin, D. Yarats, H. Liu, K. Lee, A. Zhan, K. Lu, C. Cang, L. Pinto, and P. Abbeel. Urlb:Unsupervised reinforcement learning benchmark, 2021.[54] R. Agarwal, M. Schwarzer, P. S. Castro, A. Courville, and M. G. Bellemare. Deep reinforce-ment learning at the edge of the statistical precipice. In Neural Information Processing Systems ,2021.[55] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozi `ere,N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. Llama:Open and efficient foundation language models, 2023.[56] S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V .Lin, T. Mihaylov, M. Ott, S. Shleifer, K. Shuster, D. Simig, P. S. Koura, A. Sridhar, T. Wang,and L. Zettlemoyer. Opt: Open pre-trained transformer language models. 2022.[57] A. Gupta, J. Yu, T. Z. Zhao, V . Kumar, A. Rovinsky, K. Xu, T. Devlin, and S. Levine. Reset-freereinforcement learning via multi-task learning: Learning dexterous manipulation behaviorswithout human intervention. In IEEE International Conference on Robotics and Automation .IEEE, 2021.[58] A. Sharma, A. Gupta, S. Levine, K. Hausman, and C. Finn. Autonomous reinforcement learn-ing via subgoal curricula. In Neural Information Processing Systems , 2021.[59] A. Sharma, S. Gu, S. Levine, V . Kumar, and K. Hausman. Dynamics-aware unsuperviseddiscovery of skills. 
In International Conference on Learning Representations , 2020.[60] X. B. Peng, A. Kumar, G. Zhang, and S. Levine. Advantage-weighted regression: Simple andscalable off-policy reinforcement learning, 2019.[61] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEEConference on Computer Vision and Pattern Recognition , 2016.[62] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. 2016.[63] P. J. Ball, L. Smith, I. Kostrikov, and S. Levine. Efficient online reinforcement learning withoffline data. In International Conference on Machine Learning , 2023.[64] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. In Conference on Robot Learning , 2022.[65] T. Z. Zhao, V . Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulationwith low-cost hardware. arXiv preprint arXiv:2304.13705 , 2023.[66] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y . Tay, W. Fedus, Y . Li, X. Wang, M. Dehghani,S. Brahma, A. Webson, S. S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, A. Castro-Ros,M. Pellat, K. Robinson, D. Valter, S. Narang, G. Mishra, A. Yu, V . Zhao, Y . Huang, A. Dai,H. Yu, S. Petrov, E. H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q. V . Le, and J. Wei.Scaling instruction-finetuned language models. 2022.[67] T. Dettmers, M. Lewis, Y . Belkada, and L. Zettlemoyer. Llm.int8(): 8-bit matrix multiplicationfor transformers at scale. arXiv preprint arXiv:2208.07339 , 2022.12AppendixAlgorithm 2 BOSS AlgorithmRequire: Dataset DLw/ language labels, LLM, Skill Library Z, Time limit T, max chain lengthM1:Pre-train policy π(a|s, z), value function V(s, z)onDLwith offline RL. ▷Section 3.12:while not converged do3: SKILL BOOTSTRAPPING (V, Z, LLM, π,DL,M,T) ▷Section 3.24:5:procedure SKILL BOOTSTRAPPING (V , Z, LLM, π,DL,M,T)6: s1←Reset environment7: RolloutData ←[]8: z←sample from discrete distribution with probsV(s, z 1), V(s, z 2), ..., V (s, z|Z|).9: i←010: Success ←True11: while i < M andSuccess do ▷If a rollout fails, break the loop.12: i←i+ 113: (Success ,τ)←Rollout π(·|s, z)in Environment for at most Tsteps.14: AddτtoRolloutData15: ifSuccess then16: z←SAMPLE NEXTSKILL (LLM, RO L L O U T DA T A,Z)17: UPDATE BUFFER ANDSKILL REPERTOIRE (DL,RO L L O U T DA T A, LLM)18: Train π,VonDLwith offline RL.19:20:procedure SAMPLE NEXTSKILL (LLM, RolloutData ,Z)21: AllSkills ←extract all skill annotations from Z.22: SkillChain ←extract executed primitive skills from RolloutData .23: Prompt ←construct prompt from AllSkills ,SkillChain .▷Prompt in Figure 8.24: ([ˆz1, ...,ˆzN],[p1, ..., p N])←Sample Ntext generations from LLM( Prompt ) with averagetoken probabilities p1, ..., p N.25: Find closest match in Zto each of ˆz1, ...,ˆzNin embedding space ▷Embedding model:all-mpnet-base-v2 from Reimers and Gurevych [49].26: z←sample the matches in Zfrom categorical distribution with parameters p1, ..., p N.27: return z28:29:procedure UPDATE BUFFER ANDSKILL REPERTOIRE (DL,RolloutData ,Z, LLM) ▷SeeAppendix B.3 for details.30: τ1, ..., τ k←extract primitive skill trajectories from RolloutData .31: forτiinτ1, ..., τ kdo32: DL← DL∪ {τi,zi} ▷Add trajectory to DLwith annotation zi.33: τ1:k←concatenate all trajectories together34: zLLM, 1:k←LLM (τ1:k)assign name by asking LLM summarize annotations of τ1:k.▷See Appendix A.2 for prompt.35: zconcat, 1:k←“{z1}.{z2}...{zk}.’ ▷Assign another label for the trajectory byconcatenating primitive skill annotations.36: DL← DL∪ {τLLM, 1:k, τconcat, 1:k} ▷Add to DLwith 
annotation zLLM, 1:kandzconcat, 1:k.37: AddzLLM, 1:kas a new skill to Z.13A Dataset and Environment DetailsA.1 ALFREDA.1.1 Dataset DetailsWe base our dataset and environment on the ALFRED benchmark [50]. ALFRED originally con-tains over 6000 full trajectories collected from an expert planner following a set of 7 high-level taskswith randomly sampled objects (e.g., “pick an object and heat it” ). Each trajectory has three crowd-sourced annotations, resulting in around 20k distinct language-annotated trajectories. We separatethese into only the primitive skill trajectories, resulting in about 141k language-annotated trajecto-ries. Following Zhang et al. [42], we merge navigation skills (e.g., “Walk to the bed” ) with theskill immediately following them as these navigation skills make up about half of the dataset, arealways performed before another skill, and are difficult to design online RL reward functions for thatwork across all house floor plans given only the information in the dataset for these skills. After thisprocessing step, the resulting dataset contains 73k language-annotated primitive skill trajectories.A.1.2 RL Environment DetailsWe modified ALFRED similarly to Zhang et al. [42], Pashevich et al. [51] to make it suitable forpolicy learning by modifying the action space to be fully discrete, with 12 discrete action choicesand 82 discrete object types.Furthermore, we rewrote reward functions for all primitive skill types (“CoolObject”, “PickupOb-ject”, “PutObject”, “HeatObject”, “ToggleObject”, “SliceObject”, “CleanObject”) so that rewardscan be computed independently of a reference expert trajectory. While our rewards depend on theground truth primitive skill type, no agents are allowed access to what the underlying true primitiveskill type is. All of our reward function are sparse, with 1 for a transition that completes primitiveskill and 0 for all other transitions.A.1.3 Evaluation TasksWe generate evaluation tasks by randomly sampling 10 tasks each for 4 unseen ALFRED floor plans,resulting in 40 total tasks unseen tasks requiring anywhere from 2-8 primitive skills to complete. Thetasks for each floor plan are sampled randomly from the V ALID -UNSEEN ALFRED dataset collectedin these plans with the specific object arrangements, and we use the high-level task language descrip-tions collected by humans for ALFRED as our task descriptions for language-conditioned zero-shotevaluation. See Figure 7 for a histogram of task lengths.2 3 4 5 6 7 8Length0246810121416CountHistogram of Task LengthsFigure 7: Task lengths regarding the number of primitive skills needed to chain together to solve thetask.14Examples of common household tasks and their descriptions:Task Steps: 1. Pick up the keys on the center table. 2. Put the keys in the box. 3. Pick up the box withkeys. 4. Put the box with keys on the sofa close to the newspaper.Task: Put the box with keys on the sofa.Task Steps: 1. Pick up the knife from in front of the tomato. 2. Cut the lettuce on the counter. 3. Set theknife down on the counter in front of the toaster. 4. Pick up a slice of the lettuce from the counter. 5. Putthe lettuce slice in the refrigerator. take the lettuce slice out of the refrigerator. 6. Set the lettuce slice onthe counter in front of the toaster.Task: Put a cooled slice of lettuce on the counter.Task Steps: 1. Pick up the book on the table, in front of the chair. 2. Place the book on the left cushionof the couch.Task: Put a book on the couch.Task Steps: 1. Pick up the fork from the table. 2. 
Put the fork in the sink and fill the sink with water, thenempty the water from the sink and remove the fork. 3. Put the fork in the drawer.Task: Put the cleaned fork in a drawer.Task Steps: 1. Take the box of tissues from the makeup vanity. 2. Put the tissues on the barred rack. 3.Take the box of tissues from the top of the toilet. 4. Put the tissues on the barred rack.Task: Put the box of tissues on the barred rack.Task Steps: 1. Pick up the glass from the sink. 2. Heat the glass in the microwave. 3. Put the glass on thewooden rack.Task: Put a heated glass on the wooden rack.Task Steps: 1. Pick up the box from the far side of the bed. 2. Hold the box and turn on the lamp.Tasks: Look at the box under the lamp light.Predict the next skill correctly by choosing from the following skills: [SKILL 1 IN LIBRARY], [SKILL2 IN LIBRARY], ...Task Steps: 1. [SKILL 1 EXECUTED SO FAR] 2. [SKILL 2 EXECUTED SO FAR] ... N.Figure 8: Prompt for the LLM for next skill proposal (Section 3.2). Text is generated after listingout all skills completed so far.A.2 Language Model PromptsWe use two prompts when using the LLM for two different purposes. The main purpose of theLLM is to propose a distribution over next skills to chain with currently executed skills during skillbootstrapping (Section 3.2). Thus, we pass skills in the given skill library Zinto the prompt andask it to predict the next skill. We also include a fixed set of 7 in-context examples from a randomsample of different tasks from the ALFRED training dataset. The prompt for bootstrapping is shownin Figure 8.We also generate summaries (see Section 3.2 and appendix Appendix B.3) for composite skill anno-tations with the LLM. These summaries are used to label newly chained longer-horizon skills beforeadding them back to the skill library. We show the prompt for this in Figure 9.B Training Implementation details and HyperparametersWe implement IQL [48] as the base offline RL algorithm to pre-train on primitive skill data for allmethods, baselines, and ablations, due to its strong offline and finetuning performance on a varietyof dense and sparse reward environments.The IQL policy is trained to maximize the following objective:eβ(Q(s,a)−V(s))logπ(a|s),which performs advantage-weighted regression [60] with an inverse temperature term β.QandVare trained on (s, a, s′, r, a′)tuples from the dataset rather than sampling a policy for a′to mitigate15Instructions: give a high-level description for the following steps describing common household tasks.Task Steps: 1. Pick up the keys on the center table. 2. Put the keys in the box. 3. Pick up the box withkeys. 4. Put the box with keys on the sofa close to the newspaper.Summary: Put the box with keys on the sofa.Task Steps: 1. Pick up the knife from in front of the tomato. 2. Cut the lettuce on the counter. 3. Set theknife down on the counter in front of the toaster. 4. Pick up a slice of the lettuce from the counter. 5. Putthe lettuce slice in the refrigerator. take the lettuce slice out of the refrigerator. 6. Set the lettuce slice onthe counter in front of the toaster.Summary: Put a cooled slice of lettuce on the counter.Task Steps: 1. Pick up the book on the table, in front of the chair. 2. Place the book on the left cushionof the couch.Summary: Put a book on the couch.Task Steps: 1. Pick up the fork from the table. 2. Put the fork in the sink and fill the sink with water, thenempty the water from the sink and remove the fork. 3. 
Put the fork in the drawer.Summary: Put the cleaned fork in a drawer.Task Steps: 1. Take the box of tissues from the makeup vanity. 2. Put the tissues on the barred rack. 3.Take the box of tissues from the top of the toilet. 4. Put the tissues on the barred rack.Summary: Put the box of tissues on the barred rack.Task Steps: 1. Pick up the glass from the sink. 2. Heat the glass in the microwave. 3. Put the glass on thewooden rack.Summary: Put a heated glass on the wooden rack.Task Steps: 1. Pick up the box from the far side of the bed. 2. Hold the box and turn on the lamp.Summary: Look at the box under the lamp light.Task Steps: 1. [SKILL 1] 2. [SKILL 2] 3. [SKILL 3] ...Summary:Figure 9: Prompt for the LLM to summarize completed skills into high-level composite annotations,following Zhang et al. [42].issues with critic function overestimation common in offline RL. We detail shared training and im-plementation details below, with method-specific information and hyperparameters in the followingsubsections.B.1 ALFRED EnvironmentWe implement the same observation and action space as Zhang et al. [42]. Details are listed below.Observation space. The observations given to agents are 300×300RGB images. For all methods,we first preprocess these images by sending them through a frozen ResNet-18 encoder [61] pre-trained on ImageNet, resulting in a 512×7×7observation.Action space. The agent chooses from 12 discrete low-level actions. There are 5 navigation ac-tions: MoveAhead ,RotateRight ,RotateLeft ,LookUp , andLookDown and 7 interactionactions: Pickup ,Put,Open ,Close ,ToggleOn ,ToggleOff , and Slice . For interactionactions the agent additionally selects one of 82 object types to interact with, as defined by Pashevichet al. [51]. In total, the action space consists of 5 + 7∗82 = 579 discrete action choices. Forall methods, due to the large discrete action space, we perform the same action masking as Zhanget al. [42] to prevent agents from taking actions that are not possible by using ground truth objectproperties given by the ALFRED simulator for each object in the scene. For example, we do notallow the agent to Close objects that aren’t closeable or ToggleOn objects that can’t be turnedon.16Policy and critic networks. We use the transformer architecture (and hyperparameters) used byEpisodic Transformers (ET) [51] for our policy and critic networks. We implement all critics (twoQfunctions and one V) with a shared backbone and separate output heads. Additionally, we useLayerNorms [62] in the MLP critic output heads as recommended by Ball et al. [63]. All networkscondition on tokenized representations of input language annotations.Hyperparameters. Hyperparameters were generally selected from tuning the Oracle baseline towork as best as possible, then carried over to all other methods. Shared hyperparameters for allmethods (where applicable) for pre-training on primitive skills are listed below. Any unlisted hyper-parameters or implementation details are carried over from Pashevich et al. [51]:Param ValueBatch Size 64# Training Epochs 150Learning Rate 1e-4Optimizer AdamWDropout Rate 0.1Weight Decay 0.1Discount γ 0.97Q Update Polyak Averaging Coefficient 0.005Policy and Q Update Period 1 per train iterIQL Advantage Clipping [0, 100]IQL Advantage Inverse Temperature β 5IQL Quantile τ 0.8Maximum Observation Context Length 21When fine-tuning policies (for Oracle, CIC, and BOSS), we keep hyperparameters the same. 
Wefine-tune one policy per floor plan (zero-shot evaluating on 10 tasks in each floor plan) in our AL-FRED task set so that the aggregated results are reported over 4 runs per seed. For methods that usea skill library (BOSS, Saycan, Saycan+P), all available primitive skills across all evaluation tasks ineach floor plan compose the starting skill library, resulting in anywhere from 15-40 available skillsdepending on the floor plan.Additionally, when finetuning the Oracle baseline along with BOSS and its ablations, we sample olddata from the offline dataset and newly collected data at equal proportions in the batch, followingsuggestions from [63]. We do not do this for CIC when finetuning with its unsupervised RL objectivebecause the language embeddings from the old data are not compatible with the online collected datalabeled with CIC-learned skill embeddings. Fine-tuning hyperparameters follow:Param Value# Initial Rollouts 50# Training Steps to Env Rollouts Ratio 15εinε-greedy action sampling 0.05Discrete action sampling True# Parallel Rollout Samplers 10B.2 Real Robot EnvironmentThe input observation from the environment includes environment RGB input and robot states.The RGB input consists of the third-person view RGB images from a Logitech Pro Webcam C920cropped to 224×224×3, and wrist view images from an Intel RealSense D435. We use a pretrainedR3M [64] model to get the latent representation for each view. The robot states include the robot’send-effector position, velocity, and gripper state. The end-effector position and velocity are twocontinuous vectors, and the gripper state is a one-hot vector, which presents OPEN, CLOSE, or NOTMOVE. We concatenate the RGB latent representations and robot states together as environmentstates.17The policy is language conditioned, and we use a pre-trained sentence encoder to encode the lan-guage annotation to a 384-dimensional latent vector. The pretrained sentence encoder we use isall-MiniLM-L12-v2 from the SentenceTransformers package [49].The total state input dimension is 2048 (third-person R3M) + 2048 (wrist R3M) + 15 (robot stateinput) + 384 (language latent representation) = 4495.Action space. The action space of the robot encompasses the difference in the end effector positionbetween each time step, along with discrete open and close signals for the gripper. These actions aretransmitted to the robot with 10HZ and interpreted as desired joint poses using PyBullet’s inversekinematics module.In line with [65], we adopt the Action Chunking method to train an autoregressive policy. Ourpolicy utilizes an LSTM model to predict the next 15 actions, given the initial observation as input,denoted as π(at:t+15|st). Both our Q and Value networks are recurrent as well, estimating rewardson a per-timestep basis for each action in the sequence. Similar to the policy, these networks onlyhave access to the observation preceding the action sequence initiation.Due to the gripper action space is discrete and imbalanced distributed in the dataset, we reweighgripper loss inversely proportionally to the number of examples in each class.B.3 Additional BOSS Implementation DetailsHere we continue discussion of BOSS in detail. In the main text in Section 3.2 we mention that weadd learned skills back to the agent’s skill repertoire and then train on collected experience gatheredfrom each rollout. Here, we detail exactly how we do that.Labeling new composite skills. 
Finally, after we have finished attempting a composite skill chain,we need a natural language description for it so we can train the language-conditioned policy on thisnew composite skill. We ask the LLM to generate high-level task descriptions of the annotationsof the two skills the agent has just attempted to chain together like proposed by Zhang et al. [42]for offline policy pre-training. Doing so will allow the agent to learn skills at a higher level oftext abstraction, allowing the agent to operate on more natural evaluation task specifications. Forexample, humans are more likely to ask an agent to “Make coffee” than to say “Get a coffee pod.Put the coffee pod in the machine. Fill it up with water...”We give the LLM a prompt similar to the one for generating next skills. For example, if our agent hasjust completed two skills: “Pick up the spoon” ,“Put the spoon on the counter” , we ask the LLMto summarize “1. PICK UP THE SPOON . 2. PUT THE SPOON ON THE COUNTER .”, and the LLMcan generate “put a spoon on the counter. ” We denote the generated language annotation for thiscombined skill composed of the annotations of z1andz2asz′. We then add z′as a new compositeskill to Zfor the agent to possibly sample from again.Training on new skill data. After the agent has finished a rollout in the environment, it trains onthe experience gathered. There are three types of data that we add to the agent’s replay buffer fromits rollout data:1. The trajectory of the attempted skill chain which is collected only if the entire first skillis successfully executed (regardless if it is a primitive skill or a chain of them) since onlythen will another skill be used for chaining. The label for this trajectory is produced by theLLM.2. The trajectory of the composite skill but with a label generated by concatenating the prim-itive skill annotations as a sequence of sentences of their language annotations. This tra-jectory ensures that the agent receives a description for the collected composite trajectorythat specifies the exact primitive skills that make it up, in order. This is useful because theLLM-generated high-level skill description may not describe certain steps. Those steps areexplicitly spelled out in this new label.183. Trajectories for all lowest-level primitive skills executed during the rollout. These corre-spond to the original set of skills the policy was equipped with and will help the policycontinue to finetune its ability to execute its original primitive skills.After the rollout, we add these trajectories to the agent’s replay buffer.Other details. When performing skill bootstrapping in the ALFRED environment, we set a maxtime limit ( Tin Algorithm 2) for 40 timesteps per primitive skill. For simplicity, we restrict M,the max number of skills to chain, to be 2during skill bootstrapping rollouts. We also restrict thesecond skill to be chained to only the set of primitive skills so that the agent can only learn newskill chains that are one primitive skill longer than the first sampled skill. Note that this does notrestrict the agent from sampling composite skills it has learned during bootstrapping as first skillsupon initialization.One final implementation detail is with respect to how we map LLM next skill proposals to existingskills in the skill library Z. 
We found that pre-trained sentence embedding models generally seemto put great emphasis on the nouns of skill annotation sentences in ALFRED, instead of the verb.Therefore, all sentence embeddings models we initially experimented with (up to the 11B parametermodel FLAN-T5-XXL [66]) would have a tendency to map LLM generations such as “Place theapple in the sink” to skills with different verbs as long as the nouns were the same, such as “Pickup the apple from the sink” . These skills are clearly very different, so this presented a problem tous initially. To solve this, we settled on using an NLP library2to extract the main verb of sentencesand then added that same verb as a prefix to each sentence before embedding with the sentenceembedding model. For example, “Place the apple in the sink” →“PLACE: Place the apple inthe sink. ” With this change, the aforementioned issue was addressed in most cases and we coulduse much smaller sentence embedding models ( all-mpnet-v2 from the SentenceTransformerspackage [49]).Training Time and Hardware Requirements We perform experiments on a server with 2 AMDEPYC 7763 64-Core Processors, and 8 RTX 3090 GPUs. Pre-training the policies takes around 10hours with just a single RTX 3090 and 4 CPU threads for parallel dataloading.Skill bootstrapping experiments require just 1 GPU with sufficient VRAM to run inference withour LLM, along with 4 available CPU threads for parallel dataloading and environment rollouts.In practice, a single RTX 3090 is sufficient for our experiments using LLaMA-13B with 8-bit in-ference [67] on ALFRED, requiring around 3-5 days of training, mainly due to the speed of theunderlying simulator used in ALFRED.B.4 CIC ImplementationFor fairness in our experimental comparison, we implement CIC [52] by using its objective to traina policy pre-trained on the same dataset as BOSS; thus, the CIC agent is first initialized with a setof sensible behaviors. Since CIC operates on a fixed latent space, we modified the critic and policyarchitectures so that they operate on fixed-length, 768-dimensional embeddings of language inputsfrom the same sentence embedding model used for skill bootstrapping [49] instead of on variablelength tokenized language representations.CIC-specific hyperparameters follow:2https://github.com/chartbeat-labs/textacy19Param ValueCIC K-means K 12CIC K-means avg TrueCIC Hidden Dim 1024CIC Latent Skill Dim 768CIC Temp 0.5CIC Skill Projection Layer True# Timesteps for each skill rollout before reset 200B.5 SayCan ImplementationWe implement SayCan [12] by combining the prompt from SayCan with ours. We use the samein-context examples except but convert them to a human-robot conversation. All other details arethe same, including the LLM that we use in this comparison (LLaMa-13b [55]). The Saycan promptfollows below:Robot: Hi there, I’m a robot operating in a house. Robot: You can ask me to do various tasksand I’ll tell you the sequence of actions I would do to accomplish your task.Human: How would you put the box with keys on the sofa?Robot: 1. Pick up the keys on the center table. 2. Put the keys in the box. 3. Pick up the boxwith keys. 4. Put the box with keys on the sofa close to the newspaper.Human: How would you put a cooled slice of lettuce on the counter?Robot: 1. Pick up the knife from in front of the tomato. 2. Cut the lettuce on the counter.3. Set the knife down on the counter in front of the toaster. 4. Pick up a slice of the lettucefrom the counter. 5. Put the lettuce slice in the refrigerator. 
take the lettuce slice out of therefrigerator. 6. Set the lettuce slice on the counter in front of the toaster.Human: How would you put a book on the couch?Robot: 1. Pick up the book on the table, in front of the chair. 2. Place the book on the leftcushion of the couch.Human: How would you put the cleaned fork in a drawer?Robot: 1. Pick up the fork from the table. 2. Put the fork in the sink and fill the sink withwater, then empty the water from the sink and remove the fork. 3. Put the fork in the drawer.Human: How would you put the box of tissues on the barred rack?Robot: 1. Take the box of tissues from the makeup vanity. 2. Put the tissues on the barredrack. 3. Take the box of tissues from the top of the toilet. 4. Put the tissues on the barred rack.Human: How would you put a heated glass on the wooden rack?Robot: 1. Pick up the glass from the sink. 2. Heat the glass in the microwave. 3. Put the glasson the wooden rack.Human: How would you look at the box under the lamp light?Robot: 1. Pick up the box from the far side of the bed. 2. Hold the box and turn on the lamp.Predict the next skill correctly by choosing from the following skills: [SKILL 1 IN LIBRARY],[SKILL 2 IN LIBRARY], ...Human: How would you [HIGH LEVEL TASK DESCRIPTION]?Robot: 1. [SKILL 1 EXECUTED SO FAR] 2. [SKILL 2 EXECUTED SO FAR] ... N.20BOSSSAYCAN + PCICTask: Put a clean bar of soap on the counter.CompletedSubtask3/30/30/3(a) Length 3 Task ExampleBOSSSAYCAN + PCICTask: Pick up the disc and turn on the lamp on the desk.CompletedSubtask2/20/20/2(b) Length 2 Task ExampleFigure 10: Qualitative visualizations of zero-shot evaluation rollouts. See the plans SayCan+Pgenerated for these two tasks at the top of Figure 12.B.6 ProgPrompt ImplementationProgPrompt [14] converts natural language queries to code and executes the code on a real robot.After consulting with the authors, we converted the examples in our prompt to one suitable forProgPrompt by converting task descriptions into a code representation by converting spaces intounderscores, e.g., “Pick up the milk” into def pick_up_the_milk() . Then, to translate codecommands into commands suitable for our pre-trained policy, we prompt ProgPrompt to outputpick_and_place( object ,object )style code commands that we convert into two separatepick and place natural language commands in the same format as the instructions used for pre-training the policy. We then execute these instructions on the real robot in sequence.21Task: Clean the black bowl and put in the gray plate.BOSSCompletedTasks4/4Figure 11: Example of a BOSS rollout after skill bootstrapping on task 4: “Clean the black bowland put it in the gray plate.” BOSS is able to complete all 4 tasks in this rollout after performingskill bootstrapping.Task: Put a clean bar of soap on the counter. (Execution Fail)GROUND TRUTH1. Pick up the bar of soap.2. Put the bar of soap in the sink, turnthe water on and then off and thenpick up the bar of soap.3. Put the soap down in between the twosinks.SAYCAN+P G ENERATED PLAN1. Pick up the bar of soap.Task: Pick up the disc and turn on the lamp. (Execution Fail)GROUND TRUTH1. Pick up the disc on the desk.2. Turn on the lamp on the desk.SAYCAN+P G ENERATED PLAN1. Pick up the disc on the desk.Task: Examine a bowl by the lamp. (Planning Fail)GROUND TRUTH1. Pick up the bowl on the desk.2. Turn on the lamp.SAYCAN+P G ENERATED PLAN1. Pick up the bowl on the desk.2. Pick up the bowl on the desk.Task: Put cooked apple slice on a counter. (Planning Fail)GROUND TRUTH1. 
Pick up the butter knife that is in frontof the bowl on the counter.2. Cut the apple that is in the garbagecan into slices.3. Put the knife in the garbage can.4. Pick up a slice of apple that is in thegarbage can.5. Put the apple in the microwave andturn it on to cook, remove the cookedapple from the microwave.6. Put the slice of apple on the counterto the right of the statue.SAYCAN+P G ENERATED PLAN1. Pick up a slice of apple that is in thegarbage can.Figure 12: Example plans from SayCan+P [12] evaluated on EVAL INSTRUCT . SayCan+P errorsmainly come from policy execution failures.22C Additional ResultsC.1 ALFRED ResultsSayCan Performance Analysis. Here, we analyze the performance of the SayCan baselines ingreat detail to determine how andwhy they perform poorly. SayCan errors occur for two reasons:(1) Planning errors in which the LLM fails to output the correct low-level instruction based on thehigh level task description, and (2) Policy execution errors in which the policy fails to execute thetask correctly, given the correct instruction.Qualitative examples of BOSS compared to SayCan+P and CIC are shown in Figure 10, where wesee that SayCan+P is unable to solve either task. Why is this? The first two plans in Figure 12 cor-respond to the top two tasks in Figure 10. As we can see, SayCan+P generated the correct first stepbut the policy failed to execute the skill as SayCan does not fine-tune policies in the environment.While Figure 12 demonstrates that SayCan+P can make partial progress towards certain tasks, itrelies on zero-shot LLM execution over fixed policies and therefore does not fine-tune the policiesin the environment nor learn to chain them together so that the policy is robust enough to transitionbetween skills in new settings.Table 4: Comparison of SayCan andSayCan+P MethodsMethodFailure Rate (%)Planning ExecutionSayCan 57.5 42.5SayCan+P 4.2 95.8SayCan+PF 5.0 95.0We analyze the overall proportions of policy executionfailures and planning failures for the SayCan baselinesin Table 4. We see that SayCan mostly fails at planning(57.5% of the time) while SayCan+P, using BOSS’ skillproposal mechanism, mainly fails at execution. Mean-while, SayCan+PF performs similarly to SayCan+P, indi-cating that na ̈ıve fine-tuning does not greatly improve thesuccess rate of the final plans.SayCan+BOSS. Here, we test one more method which combines the advantages of top-downLLM planning methods like SayCan with BOSS’ ability to enable agents to learn how to chaintogether skills directly in the target environment. We evaluate SayCan+BOSS, a baseline whichbreaks down high-level task instructions using SayCan and then issues the commands to BOSSagents after they have performed skill bootstrapping in the target environments. Results in thebelow table indicate that this baseline performs much better than BOSSalone, indicating that BOSS’LLM-guided skill bootstrapping enables it to learn robust policies that can even be combined withplanners to better execute the given plans than na ̈ıve fine-tuning with SayCan+PF. 
Yet if there is no powerful LLM available at test time, BOSS alone still performs very well.

Method               Length 2       Length 3       Length 4       Avg. Return    Avg. Success
No Bootstrap         0.03 ± 0.02    0.05 ± 0.07    0.08 ± 0.09    0.03 ± 0.01    0.00 ± 0.00
CIC [52]             0.02 ± 0.02    0.25 ± 0.08    0.18 ± 0.07    0.11 ± 0.01    0.00 ± 0.00
SayCan [12]          0.06 ± 0.02    0.14 ± 0.00    0.10 ± 0.12    0.06 ± 0.00    0.00 ± 0.00
SayCan + P           0.08 ± 0.04    0.28 ± 0.00    0.20 ± 0.15    0.12 ± 0.01    0.00 ± 0.00
SayCan + PF          0.64 ± 0.06    0.49 ± 0.20    0.59 ± 0.02    0.57 ± 0.05    0.00 ± 0.00
BOSS (ours)          0.47 ± 0.12    0.59 ± 0.13    0.81 ± 0.13    0.57 ± 0.06    0.57 ± 0.14
SayCan+BOSS (ours)   0.84 ± 0.16    0.87 ± 0.18    0.96 ± 0.13    0.84 ± 0.06    1.02 ± 0.12
(The first three columns are grouped under "Evaluation Task Length" and the last two under "Average" in the original layout.)

C.2 Real Robot Results

We evaluate on 4 tasks, detailed below, in the environment setup shown in Figure 11.

1. Clean the black bowl (length 2): (1) Pick up the black bowl, (2) put it in the sink.
2. Put the black bowl in the dish rack (length 2): (1) Pick up the black bowl, (2) put it in the dish rack.
3. Clean the black bowl and put it in the dish rack (length 4): (1) Pick up the black bowl, (2) put it in the sink, (3) pick up the black bowl, (4) put it in the dish rack.
4. Clean the black bowl and put it in the gray plate (length 4): (1) Pick up the black bowl, (2) put it in the sink, (3) pick up the black bowl, (4) put it in the plate.

Table 5: Full returns and success rates for real robot evaluation comparisons.
Task   ProgPrompt return   ProgPrompt success rate   BOSS return   BOSS success rate
1      1.6 ± 0.80          0.8                       1.6 ± 0.8     0.8
2      1.0 ± 1.00          0.5                       0.8 ± 0.75    0.2
3      0.9 ± 0.78          0.0                       1.7 ± 1.1     0.1
4      2.0 ± 1.2           0.0                       2.2 ± 0.98    0.2

We report full results in Table 5. |
HEIRj51lcS | Polybot: Training One PolicyAcross Robots While Embracing VariabilityJonathan Yang, Dorsa Sadigh, Chelsea FinnStanford Universityjyang27@cs.stanford.eduAbstract: Reusing large datasets is crucial to scale vision-based robotic manip-ulators to everyday scenarios due to the high cost of collecting robotic datasets.However, robotic platforms possess varying control schemes, camera viewpoints,kinematic configurations, and end-effector morphologies, posing significant chal-lenges when transferring manipulation skills from one platform to another. Totackle this problem, we propose a set of key design decisions to train a single pol-icy for deployment on multiple robotic platforms. Our framework first aligns theobservation and action spaces of our policy across embodiments via utilizing wristcameras and a unified, but modular codebase. To bridge the remaining domainshift, we align our policy’s internal representations across embodiments throughcontrastive learning. We evaluate our method on a dataset collected over 60 hoursspanning 6 tasks and 3 robots with varying joint configurations and sizes: the Wid-owX 250S, the Franka Emika Panda, and the Sawyer. Our results demonstrate sig-nificant improvements in success rate and sample efficiency for our policy whenusing new task data collected on a different robot, validating our proposed de-sign decisions. More details and videos can be found on our project website:https://sites.google.com/view/polybot-multirobotKeywords: vision-based manipulation, multi-robot generalization1 IntroductionLeveraging large datasets is essential to learning widely generalizable models in computer vision[1] and natural language processing [2]. In robotic manipulation, a promising avenue of researchlies in the collection of similar extensive, in-domain datasets with the aspiration that they bestowcomparable benefits. However, while past datasets have demonstrated good generalizability withinthe same hardware setup, applying them to different robotic configurations has proven difficult [3].This challenge stems from four sources of variation: control scheme, camera viewpoint, kinematicconfiguration, and end-effector morphology. Each of these factors can vary significantly acrossrobotic setups, leading to a large domain shift when transferring data collected on one robot platformto another. In this paper, we study the problem of how to mitigate this issue and effectively leveragerobotic data across different platforms, making a stride toward learning widely-applicable policies.In an effort to bridge the aforementioned domain gap, prior works have made advancements inenabling transfer across a subset of the factors of variation. Early works have studied cross-embodiment transfer across kinematic configurations from low-dimensional observations [4, 5].More recently, efforts have shifted towards utilizing high-dimensional image observations, enablingtransfer across robotic hardware with a fixed camera angle for 3-DoF tasks [6] and across end-effectors with a fixed embodiment [7]. Unlike these works, we do not constrain the camera view-point, embodiment, or low-level controller to be fixed. Instead, we propose several new designchoices to align the observation and action spaces across robots, such as using wrist-mounted cam-eras and a shared inverse kinematics solver. 
Each of these choices greatly mitigate the domain shiftacross embodiments without compromising the generality of the robotic setup.We integrate these design choices into a single framework that aligns the input ,output , and internalrepresentation spaces of our policy across embodiments. Our choice of utilizing front-mounted wrist7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Our framework for generalization across multiple robots. We first standardize ourobservation space using front-mounted wrist cameras and our action space using a shared higher-level control environment. We then align our policy’s internal representations using contrastivelearning then finetune these representations to learn robot-specific dynamics.cameras aligns the input (or observation) space by naturally reducing the visual variation betweenembodiments. This allows us to remove assumptions about collecting data only under a specific fixedangle. In order to align the output space of our policy, we employ a shared inverse kinematics solver,while allowing the lower-level controller to vary. Although we would ideally completely unify theoutput space by using a single abstract action space across robots, this is infeasible due to disparitiesin action interpretation arising from hardware discrepancies. We instead learn a multiheaded policywith robot-specific heads that capture separate dynamics. Finally, we exploit a consistent, low-dimensional proprioceptive state signal to align our policy’s internal representations. This facilitatesgeneralization across the remaining factors that may cause a domain shift.Our main contribution is a pipeline for learning end-to-end closed-loop policies that can reuse andbenefit from data collected on other robotic platforms. We empirically demonstrate that our method,Polybot: Cross-Robot ADaptation and ALignmEnt We first show that given a shared dataset oftasks, data from other robots for a new similar task can be transferred in a zero-shot fashion. Inaddition, providing as few as 5 demonstrations allows our system to achieve an average of >70%success rate on difficult tasks which cannot be learned without data from other robots. We thenshow that aligning internal representations across robots can help transfer across remaining domainshifts, with our method having an average of 19% higher success rate over an ablation withoutrepresentation alignment. Finally, we show that our multiheaded training method achieves highsuccess on 6-DoF tasks outperforming naive blocking controllers that achieve no success at all.2 Related WorkLearning from large datasets. The study of utilizing diverse datasets to scale robotic learningis currently gaining momentum in the research community. Examples of large-scale real-worldrobot datasets include single-embodiment datasets [8, 9, 3, 10, 11, 12] and simulation benchmarks[13, 14, 15, 16]. However, these methods have typically focused on transfer with a single em-bodiment. To maximally reuse data from various sources, there have additionally been efforts tolearn generalizable representations from sources other than robot data. One line of work is learn-ing from human videos, including unstructured egocentric videos [17, 18, 19, 20, 21, 22, 23, 24],egocentric videos collected with a parallel gripper device [10], and in-the-wild videos [25, 26, 27].Apart from human videos, there are works that use representations from large-scale image datasets[28, 29, 30, 31], Youtube videos [32], and multi-source data from the Internet [33]. 
Although thesemethods can improve sample-efficiency and generalizability, they are often focused on learning vi-sual representations that can be fine-tuned for learning robot policies. Instead, we focus directly onlearning visuomotor policies from robot data.Learning across embodiments. A number of works have studied learning policies that transferacross embodiments. Some works in the past have focused on smaller aspects of robotic transfer,2including across 2-dimensional kinematic configurations [4], robotic models in simulation [5, 34],end-effector morphologies [7], dynamics [35, 36, 37, 38], and camera viewpoints [39]. A few worksexist that focus on the task of transferring between robotic hardware [6, 40, 41]. General naviga-tion models (GNM) demonstrates cross-embodiment transfer in navigation for wheeled vehicles,quadrupeds, and drones [40]. Hu et al. study transfer to new manipulators via a robot-aware visualforesight model trained on observations from fixed exterior camera by computing a mask [6]. Incontrast to these works, we focus on transfer across a wide range of high-dimensional observations,diverse robotic control schemes, and more complex 6-DoF tasks in the low-data regime.Transfer learning. Cross-robot transfer is closely related to the general area of transfer learning,where there is a large body of work on task generalization [42, 43, 44, 45, 46, 47, 48, 49, 33]and sim2real generalization [39, 50, 51, 52, 53, 54, 55]. These methods either only use a singleembodiment or largely ignore cross-embodiment differences. In addition, our work is closely relatedto the area of domain adaptation. Previous works include using GANs [56, 57, 55, 26], domain-adversarial learning [58, 59, 60], contrastive learning [17, 21], and learning system-invariant featurerepresentations [61, 62, 63] to transfer between two related domains. Our method focuses on thedomain adaptation problem across robots by exploiting a low-dimensional proprioceptive signal.3 Multi-Robot Generalization Through Domain AlignmentOur goal is to maximally reuse robotic datasets collected from one setup when deploying policies onanother. Let Dnr={{(p0, o0, a0),(p1, o1, a1), . . . , (pT, oT, aT)}}be a dataset containing demon-strations of robot rcompleting task n.p∈ Prdenotes the robot’s proprioceptive end-effector pose,o∈ Ordenotes the image observation, and a∈ Ardenotes the action. We assume that there existsa shared dataset ̃D:=Sn≤N,r≤RDnrof experience for Ndifferent tasks and Rdifferent robots.Then, given a dataset for a new task on some robotic platform DN+1j,1≤j≤R, we would like tolearn a policy πN+1k(a|o), k̸=jthat completes the same task on robot k.In order to reduce the domain gap between robots, we align the observation space, action space,and internal representations of our policy. In an ideal world, this would allow us to train a jointmultitask policy ̃π( ̃a|o, z)onDN+1j∪ ̃Dwhere ̃a∈ ̃Ais a shared action with similar interpre-tation across robots. Prior work [40] demonstrates that training cross-embodiment policies with aunified abstract action space can facilitate learning cross-robot features by coupling similar stateswith similar signals. However, discrepancies in action interpretation across manipulators makethis approach infeasible, so we instead train a task-conditioned multiheaded policy with Rheads:πr(a| ̃f(o), z), r≤R, a∈Arwhere ̃fis a shared encoder and zis a one-hot task encoding. Fig. 5and Fig. 
6 in the Appendix depict this architecture.3.1 Aligning the Observation SpaceIdeally, an aligned observation space ̃Owould have the property that o1∈Dnjis similar to o2∈Dnkifs1is similar to s2, where s1ands2denote the ground truth states of o1ando2. This correspondsto ̃f(o1)∼ ̃f(o2)ifs1∼s2. To align this space as much as possible while maintaining generalityof our robotic setup, we employ 3D-printed wrist camera mounts. The cameras are mounted directlyin front of the robot and positioned to capture the end-effector within their field of view. However,because of variability across robots, the wrist camera mounts are not the same across robots, nordo we standardize the camera angle. Appendix A.1 shows the locations of the wrist and exteriorcameras. Utilizing a wrist camera greatly simplifies the range of variation of camera positioning totwo dimensions: the height of the camera with respect to the tip of the end-effector, and the angle ofthe camera. This is in contrast with exterior cameras, which require six dimensions to fully specify.In addition, wrist cameras provide natural robustness to distribution shifts in the environment [64].One noticeable example of this is invariance to visual differences in robotic arms, since only theend-effector is in view as shown in the Observation Alignment section of Fig. 1.3.2 Aligning the Action SpaceTo ensure a consistent action interpretation across robots, we use a shared upper-level environment.Fig. 1 depicts our control stack under the Action Alignment section. For an action at∈ Ar, we use ashared upper-level environment, responsible for processing atand converting it into a commanded3Figure 2: Internal Representation Alignment. This figure depicts two trajectories across differentrobots for the same task. Our contrastive pretraining approach maps observations with similar pro-prioceptive state with respect to the grasped book and cabinet together. The green lines representexample pairs of observations mapped together, while the red line represents an example pair ofobservations whose embeddings are pushed apart.delta pose target ∆pct. Then the sum of the robot’s current pose and desired target pose, pt+ ∆pct, istransmitted to a robot-specific controller. For modularization, each robot-specific controller cr,1≤r≤R, exposes a small, shared API to set pose targets and query end-effector poses, which iscalled by the upper-level environment. addition, we utilize a shared inverse kinematics solver, whichprocesses these pose targets into joint commands. Since inverse kinematics is an underspecifiedproblem, different solvers can cause large discrepancies in control. Our choice of a shared solverminimizes inconsistencies in interpretation of pose commands by providing a more-standardizedtarget pose-to-joint mapping. In addition, it aligns the coordinate frame of the robots’ actions, byusing a consistent definition of a robot’s pose with respect to its base frame. Further details of ourimplementations of these controllers and inverse kinematics solvers are provided in Appendix A.4.While the above design decisions successfully aligns the poses of the robot pt∈ Priand ensuresconsistent targets pt+∆pctare transmitted to each robot controller, it should be noted that the result-ing achieved pose at the next timestep pt+1is not necessarily aligned. 
This discrepancy arises from the fact that the trajectory an end-effector follows to reach its target can differ based on the specific robot controller $c_{r_i}$ and the characteristics of the robot's hardware, even when employing a shared inverse kinematics solver. For example, consider continuous or non-blocking controllers, which interrupt the robot with a new command at regular intervals of $\Delta t$. Due to movement limitations imposed by the robot's kinematic configuration, the resulting trajectory will be highly nonlinear given the commanded pose $p^c_t$ and the current state. Consequently, even with a standardized control stack, the displacement $\Delta p^c_t$ proves insufficient for learning a shared action space with a consistent learning signal, because its interpretation differs per robot.

One attempt to circumvent this issue would be to use a blocking controller, which waits for the robot to reach its target pose before issuing a new command. We would first relabel the actions in our replay buffer as the change in achieved poses $p$ and train a blocking controller to reach these poses. Since teleoperation with blocking controllers is difficult, we would instead use continuous control to collect the trajectory $\{(p_0, o_0, a_0), (p_1, o_1, a_1), \ldots, (p_T, o_T, a_T)\}$. We would then construct a new dataset $\{(p_0, o_0, \Delta p_0), (p_n, o_n, \Delta p_n), \ldots, (p_{kn}, o_{kn}, \Delta p_{kn})\}$, where $\Delta p_{kn} = p_{(k+1)n} - p_{kn}$. However, we find that even with blocking controllers, $p_t + \Delta p^c_t$ can have significant error from $p_{t+1}$. Due to limited degrees of freedom of the hardware and inaccuracies in the controller, not all $\Delta p_{t+1}$ actions are easily reachable from the current state. As a result, one robot may have significant difficulty following trajectories from other robots for tasks that require more complex 6-DoF motion. Due to these challenges, we instead use separate heads $\pi_r(a \mid \tilde{f}(o), z)$ to learn each robot's individual dynamics. By doing so, our policy benefits from shared visual features and overall movement directions, while enabling each head to learn the specific means to reach the desired goals.

3.3 Aligning Internal Representations

Aligning the internal representations of our policy can allow it to harness the advantages of training with a consistent signal, even in the absence of a unified action space. Our use of a shared kinematics solver provides a consistent proprioceptive signal $p_t$ with respect to the robot's base. As a result, two (theoretical) trajectories with the exact same motion of the end-effector, $T_1 \in D^n_{r_1}$ and $T_2 \in D^n_{r_2}$, will differ only by some translation (i.e., for all timesteps $t$ and $p_{r_1,t} \in T_1$, $p_{r_2,t} \in T_2$, we have $p_{r_1,t} - p_{r_2,t} = k$ for some constant $k$). Even if we do not directly act on this signal when rolling out our policy, we can still exploit it to learn better features from our observations. With this in mind, we propose to pretrain robot-agnostic features, then fine-tune to learn robot-specific controls.

In order to pretrain using the shared pose signal, we use a contrastive method that maps together similar states across trajectories and robots. We define the notion of "state similarity" by computing the changes in pose between a state and a predefined set of "fixed states" in the trajectory. Fixed states are defined as states in a trajectory that have a similar notion of task completion. For example, in quasi-static environments, these "fixed states" can be defined as states where a subtask is completed, since all successful demonstrations must contain this state. More specifically, consider a trajectory $\tau := \{(p_0, o_0, a_0), (p_1, o_1, a_1), \ldots, (p_T, o_T, a_T)\}$ and a set of fixed poses $p_f := \{p_{t_f}\}$. We define the difference between a trajectory state and a fixed state by
$$d(p_i, p_{t_f})^{xyz} = p_i^{xyz} - p_{t_f}^{xyz}, \qquad d(p_i, p_{t_f})^{quat} = p_{t_f}^{quat}\,(p_i^{quat})^{-1} \qquad (1)$$
where $p_i^{xyz}$ is the proprioceptive Cartesian position of the state and $p_i^{quat}$ is the orientation of the state. Since we may have more than one fixed state per trajectory, we additionally define a closest fixed-state difference $d(p_i) := d(p_i, p_{t_f})$, where $t_f$ is the first timestep greater than or equal to $i$ that corresponds to a fixed state. After randomly sampling an anchor batch $A$, for each state-observation pair $(p_a, o_a) \in A$, we uniformly sample corresponding positive and negative states $(p_+, o_+) \sim U(P_{p_a})$ and $(p_-, o_-) \sim U(N_{p_a})$ by thresholding the distances between closest fixed-state differences:
$$P_{p_a} := \{(p_+, o_+) : \|d(p_a)^{xyz} - d(p_+)^{xyz}\|_2^2 < \varepsilon_{xyz},\ D_g(d(p_a)^{quat}, d(p_+)^{quat}) < \varepsilon_{quat}\} \qquad (2)$$
$$N_{p_a} := \{(p_-, o_-) : \|d(p_a)^{xyz} - d(p_-)^{xyz}\|_2^2 > \varepsilon_{xyz},\ D_g(d(p_a)^{quat}, d(p_-)^{quat}) > \varepsilon_{quat}\} \qquad (3)$$
We define $D_g(p, q)$ as $\cos^{-1}(2\langle p, q\rangle^2 - 1)$, the geodesic distance between the two quaternions $p$ and $q$. We finally use a triplet contrastive loss to pretrain our policy: $L(o_a, o_+, o_-) = \max\big(0,\ m + \|\tilde{f}_\theta(o_a) - \tilde{f}_\theta(o_+)\|_2^2 - \|\tilde{f}_\theta(o_a) - \tilde{f}_\theta(o_-)\|_2^2\big)$, where $\tilde{f}_\theta$ is the shared encoder parameterized by $\theta$ (a sketch of this sampling and loss is given at the end of this subsection). These embeddings explicitly encourage mapping similar states together across trajectories and robots. This is achieved by sampling states from other rollouts in the positive buffer that correspond to similar poses with respect to a fixed pose. Fig. 2 depicts example positive and negative pairs across two trajectories from different robots.

By finetuning our policy using these embeddings, we can learn robot-specific elements built upon robot-agnostic characteristics. To accomplish this, we train individualized dynamics modules $\pi_r(a \mid \tilde{f}(o), z)$ for each robot using a multi-headed training approach. Each head corresponds to a distinct action space that may vary across robots due to disparities in action interpretation. This is depicted in Fig. 1 by arrows denoting separate actions per robot. By adopting this technique, we circumvent potential challenges associated with trajectory matching involving blocking controllers, facilitating transferability across more intricate tasks that necessitate greater degrees of freedom.
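The following is a minimal sketch of the fixed-state-relative sampling criterion and triplet loss described above. The quaternion convention ([x, y, z, w]), the threshold values, and the array layout (positions of shape (T+1, 3), quaternions of shape (T+1, 4)) are assumptions not stated in the text; the margin m = 0.5 follows Appendix A.6, and the distances follow Eqs. (1)-(3).

    import numpy as np
    import torch.nn.functional as F

    def quat_conjugate(q):
        # Inverse of a unit quaternion in [x, y, z, w] convention.
        return np.array([-q[0], -q[1], -q[2], q[3]])

    def quat_multiply(a, b):
        # Hamilton product in [x, y, z, w] convention.
        x1, y1, z1, w1 = a
        x2, y2, z2, w2 = b
        return np.array([
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
            w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        ])

    def geodesic(q1, q2):
        # D_g(p, q) = arccos(2 <p, q>^2 - 1)
        dot = float(np.clip(np.dot(q1, q2), -1.0, 1.0))
        return float(np.arccos(np.clip(2.0 * dot ** 2 - 1.0, -1.0, 1.0)))

    def closest_fixed_state_diff(positions, quats, fixed_timesteps, i):
        # d(p_i): offset to the first fixed state at t_f >= i (Eq. 1).
        # Assumes at least one fixed state at or after timestep i (e.g., the final state).
        t_f = min(t for t in fixed_timesteps if t >= i)
        d_xyz = positions[i] - positions[t_f]
        d_quat = quat_multiply(quats[t_f], quat_conjugate(quats[i]))
        return d_xyz, d_quat

    def is_positive(d_a, d_b, eps_xyz=1e-3, eps_quat=0.1):   # thresholds are assumptions
        # Eq. (2): both position and orientation differences fall below the thresholds.
        return (np.sum((d_a[0] - d_b[0]) ** 2) < eps_xyz
                and geodesic(d_a[1], d_b[1]) < eps_quat)

    def is_negative(d_a, d_b, eps_xyz=1e-3, eps_quat=0.1):
        # Eq. (3): both differences exceed the thresholds.
        return (np.sum((d_a[0] - d_b[0]) ** 2) > eps_xyz
                and geodesic(d_a[1], d_b[1]) > eps_quat)

    def triplet_loss(f_a, f_pos, f_neg, m=0.5):
        # L(o_a, o_+, o_-) = max(0, m + ||f(o_a) - f(o_+)||^2 - ||f(o_a) - f(o_-)||^2)
        d_pos = (f_a - f_pos).pow(2).sum(dim=-1)
        d_neg = (f_a - f_neg).pow(2).sum(dim=-1)
        return F.relu(m + d_pos - d_neg).mean()

In practice, anchors would be sampled from the replay buffer, positives and negatives selected with the two predicates above, and the shared encoder trained on batches of the resulting triplets before the robot-specific heads are fine-tuned.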
4 Experiment Setup

We aim to test the following questions: (1) Does leveraging datasets from other robotic platforms enable zero-shot transfer and increase sample efficiency when learning a new task? (2) Can aligning internal representations help bridge the domain gap across robotic platforms? (3) How does our choice of multiheaded training over robot-specific dynamics compare to using a shared action space through a blocking controller? (4) How does our decision of utilizing wrist cameras compare to the past approaches of collecting data with exterior cameras? Videos of experiments can be found on our project website: https://sites.google.com/view/cradle-multirobot

Robotic platforms. We evaluate our method with the WidowX 250S, Franka Emika Panda, and Sawyer robot arms. All three of the robots are controlled with 6-DoF delta joint position control. For each robot, we collect 64 by 64 image observations from two sources: an exterior camera pointing at the robot, and a wrist camera mounted on the arm itself.

Figure 3: Pick/Place Tasks. The left column contains the shared pick/place task, while the other columns contain the new distractor and new object variants. For zero-shot evaluation, we include data for the shared task across all robots.
For few-shot, we also in-clude 5demonstrations of a variant forone robot.Evaluation tasks. We assume access to a shared buffer con-taining experience for at least one task on all three robots.Then, we collect data for a target task on two robots that wewant to transfer to a third. For zero-shot evaluation, we directlytrain on this data and evaluate on the third robot. We also testsample-efficiency of learning a new task with few-shot experi-ments, where we teleoperate 5demonstrations for the new taskon the third robot.To effectively leverage data from prior tasks to solve new tasks,the prior tasks must share structural similarity with the newtasks. With this in mind, we evaluate the robot on variantsfor two types of tasks: standard pick/place, and shelf manip-ulation. In the pick/place task, the robot needs to pick up amarker and drop it into a white container from a variety of po-sitions. In the shelf manipulation task, the robot rearrangesa book from a container onto a shelf. The first set of taskswas chosen to evaluate multi-robot transfer in environmentswith simple dynamics and greater similarity in control fromone robot to another. The second set evaluates transfer for en-vironments that require greater degrees of motion. Each of thetasks is evaluated on 10starting locations for the objects.For each type of task, we test for both scene generalization andtask generalization . We define scenegeneralization as generalization to a task for which there exists an mapping of observations to apreviously seen task. More formally, given an optimal policy π∗(a|o)for a previous task T1, a newtaskT2only differs with T1by scene if there is a function f:O→Osuch that π∗(a|f(o))solvesT2. Task generalization is more general than scene generalization, applying to new tasks which aresimilar to an old one, but may require different actions and controls to solve. The following are thevariations that we evaluate on:Scenario 1 (S1): New Distractor Pick/Place : Pick/place with a distractor object in the backgroundScenario 2 (S2): New Object Pick/Place : Pick/place with a banana instead of a markerScenario 3 (S3): New Container Pick/Place : Pick/place putting a pen into a cup. This requires arotation motion that is not seen in the shared dataset.Scenario 4 (S4): New Orientation Shelf : Shelf Manipulation with a reversed orientation of the orig-inal container that contains the books. The motion to grasp and place the book is the same.Scenario 5 (S5): New Compartment Shelf : Shelf Manipulation putting a book into a new compart-ment on the bottomComparison methods. For few-shot transfer, we compare our main method (denoted are Con-trastive + Multiheaded orPolybot to two different baselines. The first is Naive Multi-Robot Training ,which trains a task-conditioned multiheaded policy on exterior camera angles without contrastivepretraining. We also evaluate the Single Robot Training baseline, which only contains demonstra-tions for the target robot. We also consider two different ablations for our method: task-conditionedmultiheaded training without constrastive pretraining denoted as Ours w/o Contr. , and constrastivepretraining with a blocking controller denoted as Contr. + Blocking .Dataset collection. Details are provided in Appendix A.2.5 Experimental Results6Robot Method New Distr. (S1) New Obj. (S2) New Cont. (S3) New Orient. (S4) New Comp. 
(S5)
(Columns S1-S3 are Pick and Place variants; columns S4-S5 are Shelf variants.)
Franka   Polybot             0.9   0.8   0.9   1.0   0.9
Franka   Naive Multi-Robot   0.4   0.3   0.3   0.0   0.0
Franka   Single Robot        0.2   0.2   0.0   0.0   0.0
Sawyer   Polybot             0.9   0.9   0.7   0.9   0.7
Sawyer   Naive Multi-Robot   0.3   0.2   0.2   0.0   0.0
Sawyer   Single Robot        0.2   0.1   0.0   0.0   0.0
WidowX   Polybot             0.9   1.0   0.7   0.8   0.7
WidowX   Naive Multi-Robot   0.4   0.1   0.2   0.0   0.0
WidowX   Single Robot        0.3   0.2   0.0   0.0   0.0
Table 1: Few-shot multi-robot transfer results. Given 5 demonstrations on a new task, Polybot performs significantly better than a baseline without data from other robots. In addition, our task-conditioned multiheaded policy enables the transfer of multi-robot data for shelf manipulation tasks, where a blocking controller fails.

Robot    Method              S1    S2    S3    S4    S5
Franka   Polybot             0.4   0.6   0.0   0.4   0.0
Franka   Contr. + Blocking   0.3   0.3   0.0   0.0   0.0
Sawyer   Polybot             0.6   0.5   0.0   0.4   0.0
Sawyer   Contr. + Blocking   0.6   0.3   0.0   0.0   0.0
WidowX   Polybot             0.4   0.4   0.0   0.5   0.0
WidowX   Contr. + Blocking   0.4   0.3   0.0   0.0   0.0
Table 2: Zero-shot results. Polybot can learn a new task with high structural similarity to tasks in a shared multi-robot buffer given only data from other robots.

Utilizing data from other robots significantly improves few-shot generalization performance on a new task. In Table 1, the success rate for Polybot demonstrates significant improvement on all tasks and all robots over single-robot training. On the Franka, the results show an average of 0.56 higher success rate on Pick/Place (0.9, 0.8, 1.0 versus 0.4, 0.3, 0.3) and 0.95 higher success rate on Shelf Manipulation (1.0, 0.9 versus 0.0, 0.0). This indicates that Polybot can effectively utilize data from other robots to learn a new task with high sample efficiency. Notably, single-robot training fails to get any success on Shelf Manipulation. Qualitatively, we observe that this policy quickly falls out-of-distribution and is unable to grasp the book. For tasks with high structural similarity to those in the shared buffer, such as New Distractor Pick/Place, training without other robot data has nonzero performance, likely due to the variation in scenes we collect in our shared dataset. However, 5 demonstrations are not enough to cover the entire distribution of object positions for a new task, leading to suboptimal performance.

Polybot facilitates better transfer than naive task-conditioned multiheaded training with exterior cameras. Table 1 shows significantly higher success rates for Polybot over naive multi-robot training (0.9, 0.8, 0.9, 1.0, 1.0 versus 0.4, 0.3, 0.3, 0.0, 0.0 for the Franka). This suggests that task-conditioned multiheaded training on exterior cameras struggles to utilize data from other robots, validating our hypothesis that a standardized observation space is crucial for this transfer to occur. For Shelf Manipulation scenarios, naive multi-robot training fails to achieve any success. For Pick/Place scenarios, this method does have some success, but it is likely due to the similarities between the new task and other tasks from the same robot.

Utilizing data from other robots allows for zero-shot generalization performance on a new task. For tasks that have higher structural similarity to scenes in the robot's previous data, such as the scene generalization tasks described in the experiment setup section, we see good success rates for zero-shot multi-robot transfer. Table 2 showcases this: Polybot has success rates of 0.4, 0.6, 0.4 and the blocking controller has success rates of 0.3, 0.6, 0.4 on the New Distractor Pick/Place task. Without data from other robots, the success rate would be near 0 because the new task would not be present in the replay buffer.
For Scenario 4, or New Orientation Shelf, Polybot has nonzero performance while the blocking method does not. This is because, while both methods are able to pick up the book from the reversed bookshelf, the blocking controller struggles with placing the book onto the shelf.

Aligning the internal representations of the observations between robots leads to learning more generalizable features. Table 3 compares the performance of Polybot with an ablation of the contrastive pretraining phase. Aligning the internal representation of the policy causes an average 19% increase in performance over a baseline without contrastive pretraining. This suggests that aligning the internal representation of the policy assists in learning features that generalize across robots. We hypothesize that this is due to training on an aligned proprioceptive signal across robots. Notably, multiheaded training alone seems to have reasonable performance on all tasks, albeit lower than our method.

Robot    Method               S1    S2    S3    S4    S5
Franka   Polybot              0.9   0.8   0.9   1.0   0.9
Franka   Polybot w/o Contr.   0.8   0.5   0.7   0.6   0.7
Sawyer   Polybot              0.9   0.9   0.7   0.9   0.7
Sawyer   Polybot w/o Contr.   0.7   0.5   0.4   0.6   0.7
WidowX   Polybot              0.9   1.0   0.7   0.8   0.7
WidowX   Polybot w/o Contr.   0.8   0.7   0.7   0.6   0.7
Table 3: Ablation: ours vs. ours without contrastive. Our contrastive pretraining and multiheaded finetuning approach provides an average of 19% improvement on few-shot transfer for a new task over regular multiheaded training.

Multiheaded policies can better learn from 6-DoF, cross-robot demonstrations over blocking controllers. Table 4 shows that on the Shelf Manipulation tasks, Polybot achieves high success rates of 1.0 and 0.9 on the Franka compared to a zero success rate for the blocking controller. In addition, on New Container Pick/Place, Polybot has a 0.9 success rate over the 0.0 with blocking. This large discrepancy is explained by the inability of one robot to precisely imitate another robot. For instance, variations in the length of the wrist link can lead to large discrepancies in the radius of the rotation necessary to perform the Shelf Manipulation task. To verify this, we provide the error over timesteps when imitating each task in the Appendix. Notably, the blocking controller has high performance on S1 and S2 due to lower discrepancy between robots for translational movement.

Robot    Method              S1    S2    S3    S4    S5
Franka   Polybot             0.9   0.8   0.9   1.0   0.9
Franka   Contr. + Blocking   0.8   0.6   0.0   0.0   0.0
Sawyer   Polybot             0.9   0.9   0.7   0.9   0.7
Sawyer   Contr. + Blocking   0.9   0.9   0.0   0.0   0.0
WidowX   Polybot             0.9   1.0   0.7   0.8   0.7
WidowX   Contr. + Blocking   0.8   0.9   0.0   0.0   0.0
Table 4: Ablation: ours vs. contrastive + blocking. Although a blocking controller has similar few-shot performance to Polybot on simple Pick/Place variants, it struggles with tasks that require 6-DoF motion.

6 Discussion

Summary. We have developed a method, Polybot, that efficiently learns new tasks using data collected on other robots. This is enabled through careful design decisions driven by a fundamental observation: transferring data across domains requires aligning the domains as much as possible without making assumptions that limit their applicability. For example, Polybot uses wrist cameras, which can be mounted on a wide range of robots while exhibiting significantly less variation than exterior cameras, even if the mounting position is not fixed. In addition, Polybot uses a shared higher-level action representation and varying lower-level controller, which can align the policy's actions while accommodating the diversity of robotic setups. Finally, our choice of contrastive loss can align internal representations across robots.
Despite the simplicity of each design decision, theircombination sufficiently aligns our multi-embodiment dataset to enable cross-embodiment transfer.Polybot achieves over a 70% success rate on all tasks, outperforming naive multi-robot training.Limitations and Future Work. One limitation of our approach is that our method requires a shareddataset between robots to learn their correspondences. As a result, our method is not able to transferpolicies to a new robot with no demonstrations. Our method also does not allow for zero-shottransfer on tasks with different motion than seen in the shared dataset. In addition, the scope ofour evaluation has been on parallel-jaw robotic manipulators. Generalization can become moredifficult with a more diverse set of end-effectors, although our method does not preclude this typeof transfer. Our use of egocentric cameras may lead to difficulties in partially observable settings,such as settings without a clear view of the end-effector. Finally, our representation alignment relieson a scalable method to compute fixed states across all trajectories, which may not apply for allmanipulation tasks. In the future, we plan to scale our datasets, as we believe that can allow formore new tasks, camera-viewpoints, and end-effectors to be in-distribution.8AcknowledgmentsWe thank Tony Zhao, Moojin Kim, and Alexander Khazatsky for the numerous discussions aboutreal-world robot learning and Kyle Hsu, Hengyuan Yu, and Suvir Mirchandani for their helpfulfeedback. This research was supported by the Office of Naval Research grants N00014-22-1-2621and N00014-22-1-2293.References[1] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchi-cal image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition ,pages 248–255, 2009. doi:10.1109/CVPR.2009.5206848.[2] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving language understandingby generative pre-training. 2018.[3] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, andC. Finn. Robonet: Large-scale multi-robot learning. In Conference on Robot Learning , 2019.[4] C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine. Learning modular neural networkpolicies for multi-task and multi-robot transfer. In International Conference on Robotics andAutomation , 2016.[5] T. Chen, A. Murali, and A. Gupta. Hardware conditioned policies for multi-robot transferlearning, 2019.[6] E. S. Hu, K. Huang, O. Rybkin, and D. Jayaraman. Know thyself: Transferable visual controlpolicies through robot-awareness. In International Conference on Learning Representations ,2022.[7] G. Salhotra, xI Chun Arthur Liu, and G. Sukhatme. Bridging action space mismatch in learningfrom demonstrations. 2023.[8] P. Sharma, L. Mohan, L. Pinto, and A. Gupta. Multiple interactions made easy (mime): Largescale demonstrations data for imitation, 2018.[9] A. Mandlekar, Y . Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta,E. Orbay, S. Savarese, and L. Fei-Fei. Roboturk: A crowdsourcing platform for robotic skilllearning through imitation. 2018.[10] S. Young, D. Gandhi, S. Tulsiani, A. Gupta, P. Abbeel, and L. Pinto. Visual imitation madeeasy. 2020.[11] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learn-ing, 2021.[12] F. Ebert, Y . Yang, K. Schmeckpeper, B. Bucher, G. Georgakis, K. Daniilidis, C. 
Finn, andS. Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets.InRobotics: Science and Systems , 2022.[13] T. Yu, D. Quillen, Z. He, R. Julian, A. Narayan, H. Shively, A. Bellathur, K. Hausman, C. Finn,and S. Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforce-ment learning. In Conference on Robot Learning , 2019.[14] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. Rlbench: The robot learning benchmark &learning environment. 2019.[15] Y . Zhu, J. Wong, A. Mandlekar, R. Martín-Martín, A. Joshi, S. Nasiriany, and Y . Zhu. robo-suite: A modular simulation framework and benchmark for robot learning. In arXiv preprintarXiv:2009.12293 , 2020.9[16] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4rl: Datasets for deep data-drivenreinforcement learning. 2021.[17] P. Sermanet, C. Lynch, Y . Chebotar, J. Hsu, E. Jang, S. Schaal, S. Levine, and G. Brain.Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 1134–1141, 2018. doi:10.1109/ICRA.2018.8462891.[18] K. Schmeckpeper, O. Rybkin, K. Daniilidis, S. Levine, and C. Finn. Reinforcement learningwith videos: Combining offline observations with interaction, 2021.[19] I. Radosavovic, T. Xiao, S. James, P. Abbeel, J. Malik, and T. Darrell. Real-world robotlearning with masked visual pre-training. 2022.[20] K. Grauman et al. Ego4d: Around the world in 3,000 hours of egocentric video, 2022.[21] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. In Conference on Robot Learning , 2022.[22] Y . J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V . Kumar, and A. Zhang. Vip: Towardsuniversal visual reward and representation via value-implicit pre-training. In InternationalConference on Learning Representations , 2023.[23] A. Majumdar, K. Yadav, S. Arnaud, Y . J. Ma, C. Chen, S. Silwal, A. Jain, V .-P. Berges,P. Abbeel, J. Malik, D. Batra, Y . Lin, O. Maksymets, A. Rajeswaran, and F. Meier. Whereare we in the search for an artificial visual cortex for embodied intelligence?, 2023.[24] S. Karamcheti, S. Nair, A. S. Chen, T. Kollar, C. Finn, D. Sadigh, and P. Liang. Language-driven representation learning for robotics. 2023.[25] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from "in-the-wild" human videos. 2021.[26] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. 2022.[27] L. Shao, T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg. Concept2robot: Learning manipula-tion concepts from instructions and human demonstrations. 2020.[28] R. Shah and V . Kumar. Rrl: Resnet as representation for reinforcement learning. 2021.[29] L. Yen-Chen, A. Zeng, S. Song, P. Isola, and T.-Y . Lin. Learning to see before learning to act:Visual pre-training for manipulation. 2021.[30] M. Sharma, C. Fantacci, Y . Zhou, S. Koppula, N. Heess, J. Scholz, and Y . Aytar. Losslessadaptation of pretrained vision models for robotic manipulation. 2023.[31] C. Wang, X. Luo, K. Ross, and D. Li. Vrl3: A data-driven framework for visual deep rein-forcement learning. 2023.[32] M. Chang, A. Gupta, and S. Gupta. Semantic visual navigation by watching youtube videos.2020.[33] S. Reed et al. A generalist agent. Transactions on Machine Learning Research , 2022.[34] H. You, T. Yang, Y . Zheng, J. Hao, and E. Taylor, Matthew. Cross-domain adaptive transferreinforcementlearning based on state-action correspondence. In J. 
Cussens and K. Zhang, editors, Proceed-ings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence , volume 180 ofProceedings of Machine Learning Research , pages 2299–2309. PMLR, 01–05 Aug 2022.10[35] P. Christiano, Z. Shah, I. Mordatch, J. Schneider, T. Blackwell, J. Tobin, P. Abbeel, andW. Zaremba. Transfer from simulation to real world through learning deep inverse dynam-ics model. In CoRR , 2016.[36] W. Yu, J. Tan, C. K. Liu, and G. Turk. Preparing for the unknown: Learning a universal policywith online system identification. In CoRR , 2017.[37] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of robotic con-trol with dynamics randomization. In International Conference on Robotics and Automation .IEEE, 2018.[38] Q. Zhang, T. Xiao, A. A. Efros, L. Pinto, and X. Wang. Learning cross-domain correspondencefor control with dynamics cycle-consistency, 2021.[39] F. Sadeghi, A. Toshev, E. Jang, and S. Levine. Sim2real view invariant visual servoing byrecurrent control. In International Conference on Robotics and Automation , 2017.[40] D. Shah, A. Sridhar, A. Bhorkar, N. Hirose, and S. Levine. Gnm: A general navigation modelto drive any robot. In International Conference on Robotics and Automation , 2023.[41] K. Bousmalis, G. Vezzani, D. Rao, C. Devin, A. X. Lee, M. Bauza, T. Davchev, Y . Zhou,A. Gupta, A. Raju, A. Laurens, C. Fantacci, V . Dalibard, M. Zambelli, M. Martins, R. Pevce-viciute, M. Blokzijl, M. Denil, N. Batchelor, T. Lampe, E. Parisotto, K. ̇Zołna, S. Reed, S. G.Colmenarejo, J. Scholz, A. Abdolmaleki, O. Groth, J.-B. Regli, O. Sushkov, T. Rothörl, J. E.Chen, Y . Aytar, D. Barker, J. Ortiz, M. Riedmiller, J. T. Springenberg, R. Hadsell, F. Nori, andN. Heess. Robocat: A self-improving foundation agent for robotic manipulation, 2023.[42] E. Parisotto, J. L. Ba, and R. Salakhutdinov. Actor-mimic: Deep multitask and transfer rein-forcement learning, 2016.[43] Y . Duan, M. Andrychowicz, B. C. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, andW. Zaremba. One-shot imitation learning, 2017.[44] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine. One-shot visual imitation learning viameta-learning, 2017.[45] K. Hausman, J. T. Springenberg, Z. Wang, N. Heess, and M. Riedmiller. Learning an embed-ding space for transferable robot skills. In International Conference on Learning Representa-tions , 2019.[46] Z. Xu, K. Wu, Z. Che, J. Tang, and J. Ye. Knowledge transfer in multi-task deep reinforcementlearning for continuous control, 2020.[47] S. Sodhani, A. Zhang, and J. Pineau. Multi-task reinforcement learning with context-basedrepresentations. In International Conference on Machine Learning , 2021.[48] A. Brohan et al. Rt-1: Robotics transformer for real-world control at scale, 2022.[49] M. Ahn et al. Do as i can, not as i say: Grounding language in robotic affordances. 2022.[50] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomizationfor transferring deep neural networks from simulation to the real world. 2017.[51] Z.-W. Hong, C. Yu-Ming, S.-Y . Su, T.-Y . Shann, Y .-H. Chang, H.-K. Yang, B. H.-L. Ho, C.-C.Tu, Y .-C. Chang, T.-C. Hsiao, H.-W. Hsiao, S.-P. Lai, and C.-Y . Lee. Virtual-to-real: Learningto control in visual semantic segmentation. In International Joint Conferences on ArtificialIntelligence , 2018.[52] A. A. Rusu, M. Vecerik, T. Rothörl, N. Heess, R. Pascanu, and R. Hadsell. Sim-to-real robotlearning from pixels with progressive nets, 2018.11[53] Y . Chebotar, A. Handa, V . Makoviychuk, M. 
Macklin, J. Issac, N. Ratliff, and D. Fox. Closingthe sim-to-real loop: Adapting simulation randomization with real world experience. In 2019International Conference on Robotics and Automation (ICRA) , pages 8973–8979, 2019. doi:10.1109/ICRA.2019.8793789.[54] M. Kaspar, J. D. M. Osorio, and J. Bock. Sim2real transfer for reinforcement learning withoutdynamics randomization, 2020.[55] D. Ho, K. Rao, Z. Xu, E. Jang, M. Khansari, and Y . Bai. Retinagan: An object-aware approachto sim-to-real transfer, 2021.[56] K. Bousmalis, A. Irpan, P. Wohlhart, Y . Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz,P. Pastor, K. Konolige, S. Levine, and V . Vanhoucke. Using simulation and domain adaptationto improve efficiency of deep robotic grasping. 2017.[57] J.-Y . Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In 2017 IEEE International Conference on Computer Vision(ICCV) , pages 2242–2251, 2017. doi:10.1109/ICCV .2017.244.[58] Y . Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, andV . Lempitsky. Domain-adversarial training of neural networks, 2016.[59] K. Fang, Y . Bai, S. Hinterstoisser, S. Savarese, and M. Kalakrishnan. Multi-task domainadaptation for deep learning of instance grasping from simulation. In 2018 IEEE Inter-national Conference on Robotics and Automation (ICRA) , pages 3516–3523, 2018. doi:10.1109/ICRA.2018.8461041.[60] M. Xu, M. Islam, C. M. Lim, and H. Ren. Learning domain adaptation with model calibrationfor surgical report generation in robotic surgery. In International Conference on Robotics andAutomation , 2021.[61] A. Gupta, C. Devin, Y . Liu, P. Abbeel, and S. Levine. Learning invariant feature spaces totransfer skills with reinforcement learning, 2017.[62] N. H. Kim, Z. Xie, and M. van de Panne. Learning to correspond dynamical systems, 2020.[63] S. J. Wang and A. M. Johnson. Domain adaptation using system invariant dynamics models.In S. J. Wang and A. M. Johnson, editors, Proceedings of the 3rd Conference on Learningfor Dynamics and Control , volume 144 of Proceedings of Machine Learning Research , pages1130–1141. PMLR, 07 – 08 June 2021. URL https://proceedings.mlr.press/v144/wang21c.html .[64] K. Hsu, M. J. Kim, R. Rafailov, J. Wu, and C. Finn. Vision-based manipulators need to alsosee from their hands, 2022.12A AppendixFurther details and videos of experiments can be found on our project website: https://sites.google.com/view/polybot-multirobotA.1 Robotic SetupFigure 4: Our robotic setups. For each robot, we collect data with both a wrist camera and exteriorcamera. The cameras are Logitech C920s and Zeds. Although these cameras do have slight differ-ences in brightness and contrast, this does not seem to affect results.A.2 Dataset CollectionWe collect two types of datasets: a shared dataset containing data from similar tasks for all threerobots and a target dataset containing a new task from one robot platform which we want to transferto other platforms. For the purposes of evaluation, our shared dataset consists of the original taskswe defined above. For each variant, we collect data on 3diverse scenes and backgrounds to ensurethat the resulting policies have some degree of robustness to changes in the environment. In order toprovide diversity to visual observations, we use cups, plates, and wallpapers with intricate patternsin the background. 
By collecting our datasets for each task over 3 variations of this scene, we ensure that our policy is robust to changes in lighting conditions. Overall, our dataset contains 6 tasks over 3 robots with 3 different backgrounds per task and 50 demonstrations per (scene, robot, background) combination, collected over the course of 60 hours.

A.3 Higher-Level Environment Details

Our higher-level environment consists of a shared image server and action processor between robots. We use delta position control as our action space. This is parameterized by a 7-dimensional vector consisting of 3 translational dimensions, 3 rotational dimensions, and 1 dimension indicating the percentage to close the parallel end-effector. The code for processing an action before sending it to the lower-level controller is shown below:

    def step(self, action):
        start_time = time.time()
        # Process action
        assert len(action) == (self.DoF + 1)
        assert (action.max() <= 1) and (action.min() >= -1)
        pos_action, angle_action, gripper = self._format_action(action)
        # Clip the commanded deltas to velocity limits, then form an absolute pose target
        lin_vel, rot_vel = self._limit_velocity(pos_action, angle_action)
        desired_pos = self._curr_pos + lin_vel
        desired_angle = add_angles(rot_vel, self._curr_angle)
        self._update_robot(desired_pos, desired_angle, gripper)
        # Sleep the remainder of the control period to maintain a fixed frequency of self.hz
        comp_time = time.time() - start_time
        sleep_left = max(0, (1 / self.hz) - comp_time)
        time.sleep(sleep_left)

Given a delta position, angle, and gripper command, our environment first normalizes and clips the commands to ensure that large actions are not sent to the robot. Then, we add the delta position to our current pose and the delta angle to our current angle. We pass the position and angle into our lower-level robot controller.

A.4 Robot-Specific Controller Details

Each robot-specific controller provides the following API to the higher-level environment:

    def update_pose(pos, angle):
    def update_joints(joints):
    def update_gripper(close_percentage):
    def get_joint_positions():
    def get_joint_velocities():
    def get_gripper_state():
    def get_ee_pose():

The functions update_pose, update_joints, and update_gripper set targets for moving the robot. For each lower-level controller, we use a shared inverse kinematics solver to take the target poses in update_pose and convert them into joint targets. For simplicity, we use a PyBullet-based solver and URDF model specifications for each of our robots to compute target joint positions from Cartesian poses. When computing inverse kinematics through PyBullet, we manually set the joint limits of each robot, since the solver does not automatically consider these limits by itself. We also use the IK solver to give us joint positions, joint velocities, gripper states, and end-effector poses. This allows us to use a standardized coordinate frame with respect to the robot's base to get a robot's Cartesian coordinates.

For each robot, we implement two controllers: a blocking version and a non-blocking version. The blocking controller waits for an entire movement command to finish before executing the next command.
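As a rough illustration, a blocking update can be layered on top of the shared controller API listed above by polling the end-effector pose until the target is reached (the non-blocking variant, described next, instead keeps re-sending targets at a fixed rate). This is only a sketch: the tolerances, the timeout, and the assumption that get_ee_pose() returns a (position, angle) pair are ours, not taken from the implementation.

    import time
    import numpy as np

    def update_pose_blocking(controller, pos, angle, pos_tol=5e-3, ang_tol=2e-2, timeout=5.0):
        # Send one pose target, then wait until the measured end-effector pose is within
        # tolerance of the target (or until the timeout expires).
        controller.update_pose(pos, angle)
        start = time.time()
        while time.time() - start < timeout:
            cur_pos, cur_angle = controller.get_ee_pose()   # assumed return format
            if (np.linalg.norm(np.asarray(cur_pos) - np.asarray(pos)) < pos_tol
                    and np.linalg.norm(np.asarray(cur_angle) - np.asarray(angle)) < ang_tol):
                return True
            time.sleep(0.01)
        return False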
For each robot, we implement two controllers: a blocking version and a nonblocking version. The blocking controller waits for an entire movement command to finish before executing the next command. Meanwhile, the nonblocking (continuous) controller continuously interrupts the robot with a new target pose every fixed period of time.

A.5 Network Architecture
Figure 5: Our encoder architecture. We parameterize our encoder as a CNN. The convolutional layers are flattened and then fed into two MLP layers to get a representation z. In order to learn correspondence between robots, we train this encoder with a contrastive loss. We use random crop and color jitter as image augmentations for our encoder.

Figure 6: Our decoder architecture. The output of our encoder z is concatenated with a one-hot task index and fed into the decoder. This task index specifies a task either in the shared buffer, or a new task which we want to achieve. After passing the input through two MLP layers, we feed it into three robot-specific heads, one for each of the robots we are evaluating on.

    Attribute                    Value
    Input Width                  64
    Input Height                 64
    Input Channels               3
    Kernel Sizes                 [3, 3, 3]
    Number of Channels           [16, 16, 16]
    Strides                      [1, 1, 1]
    Paddings                     [1, 1, 1]
    Pool Type                    Max 2D
    Pool Sizes                   [2, 2, 1]
    Pool Strides                 [2, 2, 1]
    Pool Paddings                [0, 0, 0]
    Image Augmentation           Random Crops / Color Jitter
    Image Augmentation Padding   4

Table 5: CNN hyperparameters for our policy encoder. Our CNN uses 64 by 64 images, which pass through 3 convolutional layers. Each layer has a 3 by 3 kernel with 16 channels. We augment our architecture with random crop and color jitter.

A.6 Contrastive Learning Details
We train our encoder with a triplet loss of margin m = 0.5 (a short PyTorch sketch of this objective is given at the end of this subsection):

    L(o_a, o^+, o^-) = \max\left(0,\ m + \lVert \tilde{f}_\theta(o_a) - \tilde{f}_\theta(o^+) \rVert_2^2 - \lVert \tilde{f}_\theta(o_a) - \tilde{f}_\theta(o^-) \rVert_2^2 \right)

We provide nearest-neighbor lookups for our robots below (Figure 7). We first embed the left image via our encoder. Then, we embed all observations in a dataset for a different robot. For example, in the top-left image, we use the shelf manipulation dataset with only Franka data. Then, we compute the embedding with the closest l2 distance from the embedding of the left image. Note that our method also aligns trajectories from the same robot.

    Behavior cloning:
    Hyperparameter                         Value
    Batch Size                             64
    Number of Gradient Updates Per Epoch   1000
    Learning Rate                          3E-4
    Optimizer                              Adam

    Contrastive learning:
    Hyperparameter                         Value
    Batch Size                             64
    Number of Gradient Updates Per Epoch   1000
    Learning Rate                          1E-4
    Optimizer                              Adam

Table 6: Hyperparameters. The first table contains hyperparameters for behavior cloning, and the second table contains hyperparameters for contrastive learning.

Figure 7: Contrastive Nearest Neighbors. This figure shows nearest-neighbor examples across the three robots for embeddings from our pretrained encoder. These examples are computed for both shelf and pick/place trajectories.
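As a concrete illustration of the contrastive objective above, here is a minimal PyTorch sketch of the squared-distance triplet loss with margin m = 0.5. The encoder and the dummy observation batches are placeholders (the real encoder is the CNN of Table 5, and positives/negatives come from aligned trajectories), so this is a sketch under stated assumptions rather than the exact training code.

    import torch
    import torch.nn.functional as F

    def triplet_loss(f_anchor, f_pos, f_neg, margin=0.5):
        """Squared-L2 triplet loss: max(0, m + ||f_a - f_+||^2 - ||f_a - f_-||^2), averaged over the batch."""
        d_pos = (f_anchor - f_pos).pow(2).sum(dim=-1)
        d_neg = (f_anchor - f_neg).pow(2).sum(dim=-1)
        return F.relu(margin + d_pos - d_neg).mean()

    # Placeholder encoder producing 64-dimensional embeddings from 3x64x64 observations.
    encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 64))
    o_a, o_pos, o_neg = (torch.rand(8, 3, 64, 64) for _ in range(3))  # dummy observation batches
    loss = triplet_loss(encoder(o_a), encoder(o_pos), encoder(o_neg))
    loss.backward()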
A.7 Shelf Tasks
Figure 8: Shelf Tasks. The original shelf task consists of placing the book in the top compartment. The first target task requires doing the same from a reversed book container, while the second task requires placing the book in the lower compartment. The tasks in the first column are part of the shared dataset, while the second and third are target tasks to test transfer.

A.8 Error between Commanded Delta Pose Target and Achieved Delta Pose
The following figures depict the l2 norm between the translational components of the commanded delta pose targets and the achieved delta poses for demonstration trajectories across the 3 robots. At each timestep, the environment receives a commanded delta pose target, which gets added to the robot's current pose and then sent to the lower-level controller. Although the controller defines a trajectory to reach this target pose, it may not reach the pose due to errors in the inverse kinematics solver and limitations on movement imposed by the hardware. We plot the error at each timestep across a trajectory from a Pick/Place task and one from a Shelf Manipulation task. As expected, the WidowX has the highest average error, followed by the Sawyer and then the Franka. This error varies wildly between robots and timesteps, causing the commanded delta pose to be highly unpredictable from the achieved delta pose.

Figure 9: Action interpretation error for the Franka.
Figure 10: Action interpretation error for the Sawyer.
Figure 11: Action interpretation error for the WidowX.

A.9 Ablation: Wrist Camera Variation
In order to more comprehensively evaluate the effect of wrist-camera variation, we 3D-printed 2 new wrist-camera mounts for the WidowX 250S. We then collected data for the shared Pick/Place task as well as the New Distractor Pick/Place task variant. The results are shown below.

Viewpoint 1 (V1): Original Mount: The original camera mount used on the WidowX 250S for our experiments. The mount is 20 degrees from vertical.
Viewpoint 2 (V2): Original Mount + Masked Gripper: The original camera mount used in the WidowX 250S experiments, with the bottom part of the image masked out.
Viewpoint 3 (V3): 35-Degree Mount: A wrist-camera viewpoint that is 35 degrees from vertical.
Viewpoint 4 (V4): 50-Degree Mount: A wrist-camera viewpoint that is 50 degrees from vertical.
Viewpoint 5 (V5): 50-Degree Mount + Change in Height: The 50-degree wrist-camera mount is placed lower on the WidowX.

Figure 12: Our wrist camera mounts. The figure depicts the original, 35-degree, and 50-degree mounts.

Figure 13: Our wrist camera viewpoints. This figure depicts the viewpoints we used for our ablations. The Lego block is in the same place directly under the gripper for all mounts. Different wrist camera angles can cause the same object to appear in different parts of an egocentric image.

    Method    V1    V2    V3    V4    V5
    CRADLE    0.8   0.8   0.9   0.8   0.7

Table 7: Ablation: Wrist Camera Viewpoint. The table depicts the few-shot performance on the New Distractor Pick/Place task with the new viewpoints. Each variation was evaluated 10 times on a wide variety of angles.

Our results show a consistently high success rate on the target task for the viewpoint variations we evaluated. Notably, even though the location of the object in the image changes, CRADLE is still able to learn good correspondences and transfer experience to these camera angles. We expect that by training on a larger shared dataset with more variation in camera angle, we will be able to finetune directly on new camera viewpoints without requiring shared data.

Interestingly, the policy performs well on Pick/Place even with the bottom part of the image cropped out. In order to make these experiments work, we had to include the robot's proprioceptive information alongside the 1-hot task ID and latent variable as input to the decoder (a sketch of this input construction is given below). This allows the state to maintain full observability on the Pick/Place tasks. With more than 1 viewpoint that does not contain the gripper, the policy may have problems determining the camera angle from the image alone, because one can achieve the same image either by changing the camera angle or by moving the robot's end-effector.
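To make this decoder input concrete, the following is a minimal sketch of concatenating the image embedding, one-hot task ID, latent variable, and proprioceptive state before the robot-specific heads (Figure 6). All dimensions and module names are illustrative placeholders, not the exact architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    Z_DIM, NUM_TASKS, LATENT_DIM, PROPRIO_DIM, ACTION_DIM = 64, 6, 8, 7, 7  # placeholder sizes

    decoder_trunk = nn.Sequential(
        nn.Linear(Z_DIM + NUM_TASKS + LATENT_DIM + PROPRIO_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
    )
    # One action head per robot (e.g., Franka, Sawyer, WidowX).
    robot_heads = nn.ModuleList([nn.Linear(256, ACTION_DIM) for _ in range(3)])

    def decode_action(z, task_id, latent, proprio, robot_idx):
        """z: (B, Z_DIM) image embedding; task_id: (B,) integer task index; latent, proprio: (B, .)."""
        task_onehot = F.one_hot(task_id, NUM_TASKS).float()
        x = torch.cat([z, task_onehot, latent, proprio], dim=-1)
        return robot_heads[robot_idx](decoder_trunk(x))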
A.10 Ablation: End-Effector Size Variation
In order to consider the effect of variations in end-effector size, we ran experiments on Pick/Place tasks with a larger and a smaller gripper. Although we reuse data collected with the larger gripper, we collect new data on the shared Pick/Place task as well as a small number of demonstrations for each task variant using the smaller gripper. Similar to our main experiments, we finetune our policy with 5 demonstrations per task variant. In order to succeed, the robot has to slightly adapt its actions to transfer robotic data collected with the larger gripper to the smaller one. If incorrect features are transferred across these two settings, the policy may attempt to grasp the object too early.

Figure 14: Egocentric Viewpoints for the Larger/Smaller Grippers. The fingers on the left are larger 3D-printed variations of the smaller ones designed by Trossen Robotics on the right.

    Method    S1    S2    S3
    CRADLE    0.9   1.0   0.7

Table 8: Ablation: End-Effector Size Variation. CRADLE achieved an average of 87% success on few-shot generalization to a new gripper.

Our experiments show that CRADLE has a high success rate on Pick/Place tasks with both the small and large grippers. These results are in line with the results we see from transfer across the Franka's, Sawyer's, and WidowX's end-effectors. Since the grippers have different sizes and widths, the policy needs to learn how to adapt to their individual constraints in order to achieve effective transfer performance.

A.11 Ablation: Joint Egocentric and Exterior Training
Although our settings do not require partial observability, to provide evidence that CRADLE is able to transfer information with joint egocentric and exterior training, we ran New Distractor Pick/Place experiments with joint egocentric and exterior camera training. In order to process this information, we stack the two viewpoints together channel-wise before passing them through the convolutional neural network encoder. The following table describes the results:

    Method    S1    S2    S3
    CRADLE    0.7   0.7   0.6

Table 9: Ablation: Joint Egocentric and Exterior Training. The table depicts the performance on the New Distractor Pick/Place task with joint egocentric and exterior training. Each variation was evaluated 10 times.

Although CRADLE is able to achieve a success rate above 60% with both the egocentric and exterior cameras, we see that its performance suffers. We believe that this is due to difficulties aligning the third-person observations. Although wrist-camera perspectives look similar across embodiments, third-person perspectives can vary based on a robot's appearance. We believe that by pretraining with more
HANDLOOM: Learned Tracing of One-Dimensional Objects for Inspection and Manipulation
Vainavi Viswanath*1, Kaushik Shivakumar*1, Mallika Parulekar†1, Jainil Ajmera†1, Justin Kerr1, Jeffrey Ichnowski2, Richard Cheng3, Thomas Kollar3, Ken Goldberg1
* equal contribution, † equal contribution
vainaviv@berkeley.edu, kaushiks@berkeley.edu
1 University of California, Berkeley. 2 Carnegie Mellon University. 3 Toyota Research Institute.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Abstract: Tracing – estimating the spatial state of – long deformable linear objects such as cables, threads, hoses, or ropes, is useful for a broad range of tasks in homes, retail, factories, construction, transportation, and healthcare. For long deformable linear objects (DLOs or simply cables) with many (over 25) crossings, we present HANDLOOM (Heterogeneous Autoregressive Learned Deformable Linear Object Observation and Manipulation), a learning-based algorithm that fits a trace to a greyscale image of cables. We evaluate HANDLOOM on semi-planar DLO configurations where each crossing involves at most 2 segments. HANDLOOM makes use of neural networks trained with 30,000 simulated examples and 568 real examples to autoregressively estimate traces of cables and classify crossings. Experiments find that in settings with multiple identical cables, HANDLOOM can trace each cable with 80% accuracy. In single-cable images, HANDLOOM can trace and identify knots with 77% accuracy. When HANDLOOM is incorporated into a bimanual robot system, it enables state-based imitation of knot tying with 80% accuracy, and it successfully untangles 64% of cable configurations across 3 levels of difficulty. Additionally, HANDLOOM demonstrates generalization to knot types and materials (rubber, cloth rope) not present in the training dataset with 85% accuracy. Supplementary material, including all code and an annotated dataset of RGB-D images of cables along with ground-truth traces, is at https://sites.google.com/view/cable-tracing.

Keywords: state estimation, deformable manipulation

1 Introduction
Tracing long one-dimensional objects such as cables has many applications in robotics. However, this is a challenging task as depth images are prone to noise, and estimating cable state from greyscale images alone is difficult since cables often fall into complex configurations with many crossings. Long cables can also contain a significant amount of free cable (slack), which can occlude and inhibit the perception of knots and crossings.
Prior cable tracing research uses a combination of learned and analytic methods to trace a cable, but is limited to at most 3 crossings [1, 2]. Previous cable manipulation work bypasses state estimation with object detection and keypoint selection networks for task-specific points based on geometric patterns [3, 4, 5, 6, 7, 8]. Sundaresan et al. [9] employ dense descriptors for cable state estimation in knot tying but focus on thick, short cables with loose overhand knots. However, this work tackles the challenges of long cables in semi-planar configurations (at most 2 cable segments per crossing), which often contain dense arrangements with over 25 crossings and near-parallel cable segments.
Figure 1: Example of HANDLOOM on a single knotted cable with 16 crossings: HANDLOOM includes two networks. One network (pink at the top) is trained to predict the next point in the trace given a prior context window, and the other network (blue at the top) is trained to classify over- and under-crossings. During inference, HANDLOOM 1) uses the pink network to autoregressively find the most likely trace (illustrated with a rainbow gradient, from violet to purple, depicting the path of the cable) and 2) performs crossing recognition using the blue network to obtain the full state of the cable (3), where red circles indicate overcrossings and blue circles indicate undercrossings. 4) The state estimate from HANDLOOM can be used for inspection and manipulation.

Although works like Lui and Saxena [10] and Huang et al. [11] also perform cable state estimation, they use analytic methods and do not address the challenging cases mentioned above which we consider. While analytic methods require heuristics to score and select from traces, learning to predict cable traces directly from images avoids this problem.
This work considers long cables (up to 3 meters in length) in semi-planar configurations, i.e. where each crossing includes at most 2 cable segments. These cable configurations may include knots within a single cable (e.g. overhand, bowline, etc.) or between multiple cables (e.g. carrick bend, sheet bend, etc.). This paper contributes:
1. Heterogeneous Autoregressive Learned Deformable Linear Object Observation and Manipulation (HANDLOOM), shown in Figure 1: a tracing algorithm for long deformable linear objects with up to 25 or more crossings.
2. Novel methods for cable inspection, state-based imitation, knot detection, and autonomous untangling for cables using HANDLOOM.
3. Data from physical robot experiments that suggest HANDLOOM can correctly trace long DLOs unseen during training with 85% accuracy, trace and segment a single cable in multi-cable settings with 80% accuracy, and detect knots with 77% accuracy. Robot experiments suggest HANDLOOM in a physical system for untangling semi-planar knots achieves 64% untangling success in under 8 minutes and, in learning from demonstrations, achieves 80% success.
4. Publicly available code and data (and data generation code) for HANDLOOM: https://github.com/vainaviv/handloom.

2 Related Work
2.1 Deformable Object Manipulation
Recent advancements in deformable manipulation include cable untangling algorithms [5, 6, 7, 10], fabric smoothing and folding techniques [12, 13, 14, 15, 16, 17, 18], and object placement in bags [19, 20]. Methods for autonomously manipulating deformable objects range from model-free to model-based, the latter of which estimates the state of the object for subsequent planning.
Model-free approaches include reinforcement or self-supervised learning for fabric smoothing and folding [21, 22, 23, 24] and straightening curved ropes [22], or directly imitating human actions [12, 25]. Research by Grannen et al. [6] and Sundaresan et al. [7] employs learning-based keypoint detection to untangle isolated knots without state estimation. Viswanath et al. [3] and Shivakumar et al. [4] extend this approach to long cables (3 m) with a learned knot detection pipeline. However, scaling to arbitrary knot types requires impractical amounts of human labels across many conceivable knot types.
Our study focuses on state estimation to address this limitation.Model-based methods for deformable objects employ methods to estimate the state as well asdynamics. Work on cable manipulation includes that by Sundaresan et al. [9], who use dense de-scriptors for goal-conditioned manipulation. Descriptors have also been applied to fabric smoothing[14], as well as visual dynamics models for non-knotted cables [26, 27] and fabric [15, 26, 28].Other approaches include learning visual models for manipulation [29], iterative refinement of dy-namic actions [30], and using approximate state dynamics with a learned error function [31]. Fusingpoint clouds across time has shown success in tracking segments of cable provided they are not tan-gled on themselves [32, 33]. Works from Lui and Saxena [10] and Huang et al. [11] estimate splinesof cables before untangling them, and we discuss these methods among others below.2.2 Tracing Deformable Linear ObjectsPrior tracing methods for LDOs, including in our prior work [4], primarily employ analytic ap-proaches [2, 10, 11, 34]. Other works, Jackson et al. [35] and Padoy and Hager [36], optimizesplines to trace surgical threads. Very recently, Kicki et al. [2] use a primarily analytic method forfast state estimation and tracking of short cable segments with 0-1 crossings. In contrast to theseworks, this work focuses on longer cables with a greater variety of configurations including thosewith over 25 crossings and twisted cable segments. Other prior work attempts to identify the loca-tions of crossings [37] in cluttered scenes of cables but does not fully estimate cable state.Several prior works approach sub-parts of the problem using learning. Lui and Saxena [10] uselearning to identify weights on different criteria to score traces. Huang et al. [11] focus on classifyingcrossings for thick cables. Song et al. [8] predict entire traces for short cables as gradient mapsbut lack quantitative results for cables and evaluation of long cables with tight knots. Yan et al.[38] use self-supervised learning to iteratively estimate splines in a coarse-to-fine manner. In veryrecent work, Caporali et al. [1, 39] use learned embeddings to match cable segments on oppositesides of analytically identified crossings, in scenes with 3 or fewer total crossings, while we testHANDLOOM on complex configurations containing over 25 crossings.3 Problem StatementWorkspace and Assumptions: The workspace is defined by an (x, y, z )coordinate system witha fixed overhead camera facing the surface that outputs grayscale images. We assume that 1) thegreyscale image includes only cables (no obstacles), 2) each cable is visually distinguishable fromthe background, 3) each cable has at least one endpoint visible, and 4) the configuration is semi-planar, meaning each crossing contains at most 2 cable segments. We define each cable state fori= 1, ..., l cables to be θi(s) ={(x(s), y(s), z(s))}where sis an arc-length parameter that ranges[0,1], representing the normalized length of the cable. Here, (x(s), y(s), z(s))is the location ofa cable point at a normalized arc length of sfrom the cable’s first endpoint. We also define therange of θ(s)—that is, the set of all points on a cable at time t—to be Ct. We assume all cables arevisually distinguishable from the background and that the background is monochrome. 
Additional assumptions we make for the manipulation tasks are stated in Section 5.4.2.
Objective: The objective of HANDLOOM is to estimate the cable state – a pixel-wise trace as a function of s of one specified cable indicating all over- and under-crossings.

4 Algorithm
HANDLOOM (Figure 1) includes a learned cable tracer that estimates the cable's path through the image and a crossing classifier with a correction method that refines predictions.

Figure 2: Cropped images of simulated and real cables for training HANDLOOM: On the left are simulated cable crops augmented with Gaussian noise, brightness, and sharpening to match the real images (right).

4.1 Learned Cable Tracer Model
We break the problem of cable tracing down into steps, where each step generates a probability distribution for the next point based on prior points. To achieve this, we employ a learned model that takes an image crop and trace points from previous iterations and predicts a heatmap representing the probability distribution of the next pixel's location. By operating on local information within crops, the model mitigates overfitting and facilitates sim-to-real transfer, focusing on local characteristics rather than global visual and geometric attributes like knots.
More formally, representing the grayscale image as I, and each trace s_{i,\mathrm{tot}} in the image as a sequence of pixels s_{i,0}, s_{i,1}, \ldots, s_{i,n}, we break the probability distribution over traces conditioned on the image into smaller, tractable pieces using the chain rule of probability. f_\theta is a learned neural network, \mathrm{crop}(I, p) is a crop of image I centered at pixel p, and k is the context length.

    P(s_{i,\mathrm{tot}} \mid s_{i,0}, I) = \prod_{j=1}^{n} P(s_{i,j} \mid s_{i,0}, \ldots, s_{i,j-1}, I) \approx \prod_{j=1}^{n} f_\theta(s_{i,j} \mid s_{i,j-k}, \ldots, s_{i,j-1}, \mathrm{crop}(I, s_{i,j-1}))

4.1.1 Dataset and Model Training
To train the crossing classifier, we simulate a diverse range of crossing configurations to generate a dataset. We use Blender [40] to create 30,000 simulated grayscale images that closely resemble real observations (Fig. 2). Cable configurations are produced through three methods with random Bezier curves: (1) selecting points outside a small exclusion radius around the current point, (2) intentionally creating near-parallel segments, and (3) constraining specific cable segments to achieve a dense and knot-like appearance in certain spatial regions. The curves are colored white and have slightly randomized thicknesses.
We randomly sample image crops along the cable of interest, with a focus on cable crossings (representing 95% of samples). The simulated images are augmented with pixel-wise Gaussian noise with standard deviation of 6, brightness with standard deviation of 5, and sharpening to imitate the appearance of real cables. Additionally, we include a smaller dataset of 568 hand-labeled real cable crop images, sampled such that it comprises approximately 20% of the training examples. Training employs the Adam optimizer [41] with pixelwise binary cross-entropy loss, using a batch size of 64 and a learning rate of 10^{-5}.

4.1.2 Model Architecture and Inference
We use the UNet architecture [42]. We choose trace points spaced approximately 12 pixels apart, chosen by grid search, balancing between adding context and reducing overfitting.
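To make the autoregressive factorization above concrete, the following is a minimal sketch of the inference loop that repeatedly crops around the last trace point, queries the learned heatmap model, and appends the argmax pixel. The model object, the encoding of previous points (a gradient line in the paper, simple marked pixels here), the border handling, and the stopping conditions are simplified placeholders, not the released implementation.

    import numpy as np
    import torch

    CROP, CONTEXT = 64, 3  # crop size and context length, as in Section 4.1

    def crop_around(image, center, size=CROP):
        """Return a (size, size) crop of a zero-padded grayscale image centered at `center` (row, col)."""
        padded = np.pad(image, size // 2, mode="constant")
        r, c = int(center[0]), int(center[1])
        return padded[r:r + size, c:c + size]

    def trace(image, init_points, model, max_steps=200):
        """Autoregressively extend a trace from a few initial points (e.g., from an analytic tracer)."""
        points = list(init_points)
        for _ in range(max_steps):
            center = points[-1]
            crop = crop_around(image, center).astype(np.float32) / 255.0
            # Encode the previous CONTEXT trace points as an extra channel alongside the grayscale crop.
            context = np.zeros_like(crop)
            for i, pnt in enumerate(points[-CONTEXT:]):
                r = int(pnt[0] - center[0] + CROP // 2)
                c = int(pnt[1] - center[1] + CROP // 2)
                if 0 <= r < CROP and 0 <= c < CROP:
                    context[r, c] = (i + 1) / CONTEXT
            x = torch.from_numpy(np.stack([context, crop, crop])).float()[None]  # (1, 3, CROP, CROP)
            with torch.no_grad():
                heatmap = model(x)[0, 0]  # assumed (CROP, CROP) map of next-point likelihoods
            r, c = np.unravel_index(torch.argmax(heatmap).item(), (CROP, CROP))
            next_point = (center[0] + r - CROP // 2, center[1] + c - CROP // 2)
            points.append(next_point)
            # Real stopping criteria: leaving the workspace or reaching a detected endpoint.
            if not (0 <= next_point[0] < image.shape[0] and 0 <= next_point[1] < image.shape[1]):
                break
        return points

Note that the actual system additionally pre-rotates each crop so that the last two trace points are horizontal before querying the network, as described next.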
We further bal-ance context and overfitting by tuning the crop size ( 64×64) and the number of previous points fedinto the model (3) through grid search.Naively training the model for cable tracing in all possible initial directions leads to poor perfor-mance, as it would require learning rotational equivariance from data, reducing data efficiency. Toovercome this, we pre-rotate the input image, aligning the last two trace points horizontally andensuring the trace always moves left to right, optimizing the model’s capacity for predicting the nexttrace point. We then rotate the output heatmap back to the original orientation.4During inference, initializing the trace requires an input of a single start pixel along the cable (inpractice, one endpoint). We use an analytic tracer as in [4] to trace approximately 4 trace pointsand use these points to initialize the learned tracer, which requires multiple previous trace points topredict the next point along the trace.The tracer autoregressively applies the learned model to extend the trace. The network receives acropped overhead image centered on the last predicted trace point ( 64×64pixels). Previous tracepoints are fused into a gradient line (shown in Figure 1), forming one channel of the input image. Theother two channels contain an identical grayscale image. The model outputs a heatmap ( 64×64×1)indicating the likelihood of each pixel being the next trace point. We greedily select the argmax ofthis heatmap as the next point in the trace. This process continues until leaving the visible workspaceor reaching an endpoint, which is known using similar learned detectors as Shivakumar et al. [4].4.2 Over/Undercrossing Predictor4.2.1 Data and Model InputWe use 20×20simulated and real over/undercrossing crops. The 568 real images are oversampledsuch that they are seen 20% of the time during training. Augmentation methods applied to thecable tracer are also used here to mimic the appearance of real cable crops. The network receivesa20×20×3input crop. The first channel encodes trace points fused into a line, representing thesegment of interest. To exploit rotational invariance, the crop is rotated to ensure the segment’s firstand last points are horizontal. The second channel consists of a Gaussian heatmap centered at thetarget crossing position, providing positional information to handle dense configurations. The thirdchannel encodes the grayscale image of the crop.4.2.2 Model Architecture and InferenceWe use a ResNet-34 classification model with a sigmoid activation to predict scores between 0 and1. The model is trained using binary cross-entropy loss. We determine the binary classificationusing a threshold of 0.275 (explanation for this number is in the appendix Section 7.1.1). Thealgorithm uses the fact that each crossing is encountered twice to correct errors in the classifier’spredictions, favoring the higher confidence detection and updating the probability of the crossingto (1−the original value), storing the confidences for subsequent tasks. The learned cable tracer,over/undercrossing predictor, and crossing correction method combined together result in a fullcable state estimator: HANDLOOM.4.3 Using HANDLOOM in Downstream ApplicationsTracing in Multi-cable Settings: We apply HANDLOOM to inspection in multi-cable settings fortasks like locating the power adapter of a cable tangled with other visually similar cables given theendpoint. 
HANDLOOM is fed an endpoint and returns a trace to the adapter connected to it.Learning from Demonstrations: We use HANDLOOM to enable physical, state-based knot tyingin the presence of distractor cables, which is much more challenging for a policy operating on RGBobservations rather than underlying state. Demonstrations are pick-place actions, parameterized byabsolute arc-length or arc-length relative to crossings. More details are in Appendix Section 7.3.Robot Cable Untangling: For cable untangling, HANDLOOM is combined with analytic knotdetection, untangling point detection techniques, and bi-manual robot manipulation primitives tocreate a system for robot untangling. Appendix Section 7.2 contains more details.5 Physical ExperimentsWe evaluate HANDLOOM on 1) tracing cables unseen during training, 2) cable inspection in multi-cable settings, 3) learning knot tying from demonstrations, 4) knot detection, and 5) untangling.The workspace has a 1 m ×0.75 m foam-padded, black surface, a bimanual ABB YuMi robot, andan overhead Photoneo PhoXi camera with 773×1032×4RGB-D observations. HANDLOOM isfed only the RGB image, but the depth data is used for grasping. The PhoXi outputs the same values5Table 1: Generalization of HANDLOOM to Different Cable TypesCable Reference Length (m) Color Texture Physical Properties HANDLOOM Succ. RateTR (trained with) 2.74 White/gray Braided Slightly stiff 6/81 2.09 Gray Rubbery Slightly thicker than TR 7/82 4.68 Yellow with black text Rubbery and plastic Very stiff 8/83 2.08 Tan Rubbery Highly elastic 7/84 1.79 Bright red Braided Flimsy 6/85 4.61 White Braided Flimsy 6/8across all 3, making the observations grayscale and depth. See Appendix, Section 7.4 for details onfailure modes for each experiment.5.1 Using HANDLOOM for Tracing Cables Unseen During TrainingFor this perception experiment, the workspace contains a single cable with one of the followingknots: overhand, figure-eight, overhand honda, or bowline. Here, we provide HANDLOOM withthe two endpoints to test the tracer in isolation, independent of endpoint detection. We report asuccess if the progression of the trace correctly follows the path of the cable without deviating.Results in Table 1 show HANDLOOM can generalize to cables with varying appearances, textures,lengths, and physical properties. Cables can also be seen in Appendix Figure 9. HANDLOOMperforms comparably on cable TR (the cable it was trained with) as it does on the other cables.5.2 Using HANDLOOM for Cable Inspection in Multi-Cable SettingsTier A2 Tier A3 Tier A1 Figure 3: Multi-cable tracing : here are 3 pairs of images. The left of each pair illustrates an example from thetier and the right is the successful trace. The traces from left to right encounter 24, 13, and 28 crossings.For cable inspection, the workspace contains a power strip. Attached to the power strip are threewhite MacBook adapters with two 3 m USB C-to-C cables and one 2 m USB-C to MagSafe 3 cable(shown in Figure 3). The goal is to provide the trace of a cable and identify the relevant adaptorgiven the endpoint. 
We evaluate perception in multi-cable settings across 3 tiers of difficulty.1.Tier A1 : No knots; cables are dropped onto the workspace, one at a time.2.Tier A2 : Each cable is tied with a single knot (figure-eight, overhand, overhand honda,bowline, linked overhand, or figure-eight honda) measuring 5-10 cm in diameter, and sub-sequently dropped onto the workspace one by one.3.Tier A3 : Similar to tier A2 but contains the following 2-cable knots (square, carrick bend,and sheet bend) with up to three knots in the scene.Across all 3 tiers, we assume the cable of interest cannot exit and re-enter the workspace and thatcrossings must be semi-planar. Additionally, we pass in the locations of all three adapters to thetracer and an endpoint to start tracing from. To account for noise in the input images, we take 3images of each configuration. We count a success if a majority (2 of the 3 images) have the correcttrace (reaching their corresponding adapter); otherwise, we report a failure.We compare the performance of the learned tracer from HANDLOOM against an analytic tracerfrom Shivakumar et al. [4] as a baseline, using scoring rules inspired by Lui and Saxena [10] andKeipour et al. [34]. The analytic tracer explores potential paths and selects the most likely trace basedon a scoring metric [4], prioritizing paths that reach an endpoint, have minimal angle deviations,and have high coverage scores. Table 2 shows that the learned tracer significantly outperforms thebaseline analytic tracer on all 3 tiers of difficulty with a total of 80% success across the tiers.65.3 Using HANDLOOM for Physical Robot Knot Tying from DemonstrationsKnot-tying demonstrations are conducted on a nylon rope of length 147 cm and a diameter of 7mm. We tune our demonstrations to work on plain backgrounds; however, during rollouts, we adddistractor cables that intersect the cable to be tied. These distractor cables are of identical types tothe cable of interest; thus, manipulating the correct points requires accurate cable state estimationfrom HANDLOOM. Although these cables are thicker, more twisted, and less stiff than the cableHANDLOOM is trained with, HANDLOOM is able to generalize, tracing the cable successfully in13 out of 15 cases. We evaluate the policy executed by the YuMi bimanual robot on the following:Tier B1: 0 distractors, Tier B2: 1 distractor, and Tier B3: 2 distractors. We count a success whena knot has been tied (i.e. lifting the endpoints results in a knot’s presence).Table 4 show 86% tracing success and 80% robotic knot tying success across the 3 tiers, suggestingHANDLOOM can apply policies learned from real world demonstrations, even with distractors.5.4 Using HANDLOOM for Knot Detection and Physical Robot UntanglingTo test HANDLOOM applied to the knot detection and physical untangling task, we use a single 3m white, braided USB-A to micro-USB cable.5.4.1 Knot DetectionThe details on the state-based knot detection method, which uses the sequence of over- and under-crossings to identify knots, can be found in the Appendix Section 7.2.2.Category 3 Tier C1 Tier C2 Tier C3 Tier D1 Tier D2 Tier D3 Figure 4: Starting configurations for the tiers for HANDLOOM experiments and robot untangling experi-ments. Here is the crossing count for these examples from left to right: 5, 10, 6, 4, 9, and 14.We evaluate HANDLOOM on 3 tiers of cable configurations, shown in Figure 4. The ordering of thecategories for these experiments does not indicate varying difficulty. 
Rather, they are 3 categories ofknot configurations to test HANDLOOM on.1.Tier C1 : Loose (35-40 cm in diameter) figure-eight, overhand, overhand honda, bowline,linked overhand, and figure-eight honda knots.2.Tier C2 : Dense (5-10 cm in diameter) figure-eight, overhand, overhand honda, bowline,linked overhand, and figure-eight honda knots.3.Tier C3 : Fake knots (trivial configurations positioned to appear knot-like from afar).We evaluate HANDLOOM on the following 3 baselines on the 3 tiers.1. SGTM 2.0 [4] perception system: using a Mask R-CNN model trained on overhand andfigure-eight knots for knot detection.2. HANDLOOM (-LT): replacing the L earned T racer with the same analytic tracer from Shiv-akumar et al. [4] as described in Section 5.2 combined with the crossing identification.3. HANDLOOM (-CC): using the learned tracer and crossing identification scheme to do knotdetection without C rossing C ancellation, covered in the appendix (Section 7.2.3).We report the success rate of each of these algorithms as follows: if a knot is present, the algorithmis successful if it correctly detects the first knot (i.e. labels its first undercrossing); if there are noknots, the algorithm is successful if it correctly detects that there are no knots.As summarized in Table 3, knot detection using HANDLOOM considerably outperforms SGTM2.0, HANDLOOM (-LT), and HANDLOOM (-CC) on tiers C1 and C3. SGTM 2.0 marginallyoutperforms HANDLOOM in tier C2. This is because SGTM 2.0’s Mask R-CNN is trained ondense overhand and figure-eight knots, which are visually similar to the knots in tier C2.7Table 2: Multi-Cable TracingTier Analytic LearnedA1 3/30 26/30A2 2/30 23/30A3 1/30 23/30Learned (HANDLOOM) compared to ananalytic tracer from Shivakumar et al. [4].Table 3: Knot Detection ExperimentsTier SGTM 2.0 HL (-LT) HL (-CC) HLC1 2/30 14/30 20/30 24/30C2 28/30 8/30 21/30 26/30C3 12/30 14/30 0/30 19/30HL = HANDLOOM. HANDLOOM outperforms the baseline andablations on all tiers except tier C2. Explanation provided inSection 5.4.1.Table 4: Learning From DemosTier Corr. Trace Succ.B1 5/5 5/5B2 4/5 4/5B3 4/5 3/5Corr. Trace = correct trace, or perceptionresult success. Succ. = perception andmanipulation success.Table 5: Robot Untangling Experiments (90 total trials)Tier D1 Tier D2 Tier D3SGTM 2.0 HL SGTM 2.0 HL SGTM 2.0 HLKnot 1 Succ. 11/15 12/15 6/15 11/15 9/15 14/15Knot 2 Succ. - - - - 2/15 6/15Verif. Rate 11/11 8/12 6/6 6/11 1/2 2/6Knot 1 Time (min) 1.1±0.1 2.1±0.3 3.5±0.7 3.9±1.1 1.8±0.4 2.0±0.4Knot 2 Time (min) - - - - 3.1±1.2 7.5±1.6Verif. Time (min) 5.7±0.9 6.1±1.4 6.4±1.8 10.1±0.7 5.4 9.6±1.5HL = HANDLOOM. Across all 3 tiers, HANDLOOM outperforms SGTM 2.0 onuntangling success. SGTM 2.0, however, outperforms HANDLOOM on verification.Details provided in Section 5.4.25.4.2 Physical Robot UntanglingFor the untangling system based on HANDLOOM, we compare performance against SGTM 2.0 [4],the current state-of-the-art algorithm for untangling long cables, using the same 15-minute timeouton each rollout and the same metrics for comparison. 
We evaluate HANDLOOM deployed on theABB YuMi bimanual robot in untangling performance on the following 3 levels of difficulty (Figure4), where all knots are upward of 10 cm in diameter:Tier D1: Cable with overhand, figure-eight, or overhand honda knot; total crossings ≤6.Tier D2: Cable with bowline, linked overhand, or figure-eight honda knot; total crossings ∈[6,10).Tier D3: Cable with 2 knots (1 each from tiers D1 and D2); total crossings ∈[10,15).Table 5 shows that our HANDLOOM-based untangling system achieves a higher untangling successrate (29/45) than SGTM 2.0 (19/45) across 3 tiers of difficulty, although SGTM 2.0 is faster. Theslower speed of HANDLOOM is attributed to the requirement of a full cable trace for knot detectionwhich is inhibited by the fact that the cable is 3 ×as long as the width of the workspace, leading toadditional reveal moves before performing an action or verifying termination. On the other hand,SGTM 2.0 does not account for the cable exiting the workspace, benefiting speed, but failing todetect off-workspace knots, leading to premature endings of rollouts without fully untangling.6 Limitations and Future WorkHANDLOOM assumes cables are not occluded by large, non-cable objects, are in semi-planar con-figurations, and are distinguishable from the background, which is one uniform color. Additionally,HANDLOOM is not trained on cables with a wide range of physical (e.g. thickness) and material(elasticity, bending radius) properties, such as loose string or rubbery tubing; and certain empiricalexperimentation suggests that for significantly thicker ropes and threads that form “kinks”, perfor-mance of HANDLOOM suffers due to the out-of-distribution nature of these cases. Future workwill aim to mitigate these problems and also generalize across more broadly varying backgroundsand properties of cable. Another area of work will involve sampling multiple likely traces from themodel and using resultant uncertainty as a signal downstream as a means of mitigating noise.8AcknowledgmentsThis research was performed at the AUTOLAB at UC Berkeley in affiliation with the BerkeleyAI Research (BAIR) Lab. The authors were supported in part by donations from Toyota ResearchInstitute.References[1] A. Caporali, R. Zanella, D. D. Greogrio, and G. Palli. Ariadne+: Deep learning–based aug-mented framework for the instance segmentation of wires. IEEE Transactions on IndustrialInformatics , 18(12):8607–8617, 2022. doi:10.1109/TII.2022.3154477.[2] P. Kicki, A. Szymko, and K. Walas. Dloftbs – fast tracking of deformable linear objects withb-splines. In 2023 IEEE International Conference on Robotics and Automation , 2023.[3] V . Viswanath, K. Shivakumar, J. Kerr, B. Thananjeyan, E. Novoseller, J. Ichnowski, A. Es-contrela, M. Laskey, J. E. Gonzalez, and K. Goldberg. Autonomously untangling long cables.Robotics: Science and Systems (RSS) , 2022.[4] K. Shivakumar, V . Viswanath, A. Gu, Y . Avigal, J. Kerr, J. Ichnowski, R. Cheng, T. Kollar, andK. Goldberg. Sgtm 2.0: Autonomously untangling long cables using interactive perception.arXiv preprint arXiv:2209.13706 , 2022.[5] V . Viswanath, J. Grannen, P. Sundaresan, B. Thananjeyan, A. Balakrishna, E. Novoseller,J. Ichnowski, M. Laskey, J. E. Gonzalez, and K. Goldberg. Disentangling dense multi-cableknots. Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS) , 2021.[6] J. Grannen, P. Sundaresan, B. Thananjeyan, J. Ichnowski, A. Balakrishna, M. Hwang,V . Viswanath, M. Laskey, J. E. Gonzalez, and K. Goldberg. 
Untangling dense knots by learningtask-relevant keypoints. Conference on Robot Learning , 2020.[7] P. Sundaresan, J. Grannen, B. Thananjeyan, A. Balakrishna, J. Ichnowski, E. Novoseller,M. Hwang, M. Laskey, J. E. Gonzalez, and K. Goldberg. Untangling dense non-planar knotsby learning manipulation features and recovery policies. Proc. Robotics: Science and Systems(RSS) , 2021.[8] Y . Song, K. Yang, X. Jiang, and Y . Liu. Vision based topological state recognition for de-formable linear object untangling conducted in unknown background. In 2019 IEEE In-ternational Conference on Robotics and Biomimetics (ROBIO) , pages 790–795, 2019. doi:10.1109/ROBIO49542.2019.8961652.[9] P. Sundaresan, J. Grannen, B. Thananjeyan, A. Balakrishna, M. Laskey, K. Stone, J. E. Gon-zalez, and K. Goldberg. Learning rope manipulation policies using dense object descriptorstrained on synthetic depth data. In Proc. IEEE Int. Conf. Robotics and Automation (ICRA) ,2020.[10] W. H. Lui and A. Saxena. Tangled: Learning to untangle ropes with RGB-D perception. In2013 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems , pages 837–844. IEEE, 2013.[11] X. Huang, D. Chen, Y . Guo, X. Jiang, and Y . Liu. Untangling multiple deformable linearobjects in unknown quantities with complex backgrounds. IEEE Transactions on AutomationScience and Engineering , pages 1–13, 2023. doi:10.1109/TASE.2023.3233949.[12] D. Seita, A. Ganapathi, R. Hoque, M. Hwang, E. Cen, A. K. Tanwani, A. Balakrishna,B. Thananjeyan, J. Ichnowski, N. Jamali, et al. Deep imitation learning of sequential fab-ric smoothing from an algorithmic supervisor. In Proc. IEEE/RSJ Int. Conf. on IntelligentRobots and Systems (IROS) , 2020.9[13] T. Weng, S. M. Bajracharya, Y . Wang, K. Agrawal, and D. Held. Fabricflownet: Bimanualcloth manipulation with a flow-based policy. In Conference on Robot Learning , pages 192–202. PMLR, 2022.[14] A. Ganapathi, P. Sundaresan, B. Thananjeyan, A. Balakrishna, D. Seita, J. Grannen, M. Hwang,R. Hoque, J. E. Gonzalez, N. Jamali, et al. Learning to smooth and fold real fabric using denseobject descriptors trained on synthetic color images. In Proc. IEEE Int. Conf. Robotics andAutomation (ICRA) , 2021.[15] R. Hoque, D. Seita, A. Balakrishna, A. Ganapathi, A. K. Tanwani, N. Jamali, K. Yamane,S. Iba, and K. Goldberg. Visuospatial foresight for multi-step, multi-task fabric manipulation.InProc. Robotics: Science and Systems (RSS) , 2020.[16] T. Kollar, M. Laskey, K. Stone, B. Thananjeyan, and M. Tjersland. Simnet: Enabling robustunknown object manipulation from pure synthetic data via stereo. In Conference on RobotLearning , pages 938–948. PMLR, 2022.[17] B. Thananjeyan, J. Kerr, H. Huang, J. E. Gonzalez, and K. Goldberg. All you need is luv:Unsupervised collection of labeled images using invisible uv fluorescent indicators, 2022. URLhttps://arxiv.org/abs/2203.04566 .[18] R. Hoque, K. Shivakumar, S. Aeron, G. Deza, A. Ganapathi, A. Wong, J. Lee, A. Zeng, V . Van-houcke, and K. Goldberg. Learning to fold real garments with one arm: A case study in cloud-based robotics research. In 2022 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 251–257, 2022. doi:10.1109/IROS47612.2022.9981253.[19] D. Seita, P. Florence, J. Tompson, E. Coumans, V . Sindhwani, K. Goldberg, and A. Zeng.Learning to rearrange deformable cables, fabrics, and bags with goal-conditioned transporternetworks. In Proc. IEEE Int. Conf. Robotics and Automation (ICRA) , 2021.[20] L. Y . Chen, B. Shi, D. Seita, R. Cheng, T. Kollar, D. Held, and K. 
Goldberg. Autobag: Learningto open plastic bags and insert objects, 2022. URL https://arxiv.org/abs/2210.17217 .[21] J. Matas, S. James, and A. J. Davison. Sim-to-real reinforcement learning for deformableobject manipulation. In Conf. on Robot Learning (CoRL) , 2018.[22] Y . Wu, W. Yan, T. Kurutach, L. Pinto, and P. Abbeel. Learning to manipulate deformableobjects without demonstrations. arXiv preprint arXiv:1910.13439 , 2019.[23] R. Lee, D. Ward, A. Cosgun, V . Dasagi, P. Corke, and J. Leitner. Learning arbitrary-goal fabricfolding with one hour of real robot experience. In Conf. on Robot Learning (CoRL) , 2020.[24] Y . Avigal, L. Berscheid, T. Asfour, T. Kr ̈oger, and K. Goldberg. Speedfolding: Learningefficient bimanual folding of garments, 2022. URL https://arxiv.org/abs/2208.10552 .[25] J. Schulman, J. Ho, C. Lee, and P. Abbeel. Learning from demonstrations through the use ofnon-rigid registration. In International Symposium of Robotics Research , 2013.[26] W. Yan, A. Vangipuram, P. Abbeel, and L. Pinto. Learning predictive representations fordeformable objects using contrastive estimation. In Conf. on Robot Learning (CoRL) , 2020.[27] A. Wang, T. Kurutach, K. Liu, P. Abbeel, and A. Tamar. Learning robotic manipulation throughvisual planning and acting. Robotics: Science and Systems (RSS) , 2019.[28] X. Lin, Y . Wang, Z. Huang, and D. Held. Learning visible connectivity dynamics for clothsmoothing. In Conference on Robot Learning , pages 256–266. PMLR, 2022.[29] A. Nair, D. Chen, P. Agrawal, P. Isola, P. Abbeel, J. Malik, and S. Levine. Combining self-supervised learning and imitation for vision-based rope manipulation. CoRR , abs/1703.02018,2017. URL http://arxiv.org/abs/1703.02018 .10[30] C. Chi, B. Burchfiel, E. Cousineau, S. Feng, and S. Song. Iterative residual policy for goal-conditioned dynamic manipulation of deformable objects. In Proceedings of Robotics: Scienceand Systems (RSS) , 2022.[31] D. McConachie, T. Power, P. Mitrano, and D. Berenson. Learning when to trust a dynamicsmodel for planning in reduced state spaces. CoRR , abs/2001.11051, 2020. URL https://arxiv.org/abs/2001.11051 .[32] J. Schulman, A. Lee, J. Ho, and P. Abbeel. Tracking deformable objects with point clouds.In2013 IEEE International Conference on Robotics and Automation , pages 1130–1137, 2013.doi:10.1109/ICRA.2013.6630714.[33] T. Tang and M. Tomizuka. Track deformable objects from point clouds with structure preservedregistration. The International Journal of Robotics Research , 41(6):599–614, 2022. doi:10.1177/0278364919841431. URL https://doi.org/10.1177/0278364919841431 .[34] A. Keipour, M. Bandari, and S. Schaal. Deformable one-dimensional object detection forrouting and manipulation. CoRR , abs/2201.06775, 2022. URL https://arxiv.org/abs/2201.06775 .[35] R. C. Jackson, R. Yuan, D.-L. Chow, W. S. Newman, and M. C. C ̧ avus ̧o ̆glu. Real-time visualtracking of dynamic surgical suture threads. IEEE Transactions on Automation Science andEngineering , 15(3):1078–1090, 2018. doi:10.1109/TASE.2017.2726689.[36] N. Padoy and G. Hager. Deformable tracking of textured curvilinear objects. In Proceedings ofthe British Machine Vision Conference , pages 5.1–5.11. BMV A Press, 2012. ISBN 1-901725-46-4. doi:http://dx.doi.org/10.5244/C.26.5.[37] P. Parmar. Use of computer vision to detect tangles in tangled objects. In 2013 IEEE SecondInternational Conference on Image Information Processing (ICIIP-2013) . IEEE, dec 2013.doi:10.1109/iciip.2013.6707551.[38] M. Yan, Y . Zhu, N. Jin, and J. Bohg. 
Self-supervised learning of state estimation for manip-ulating deformable linear objects. IEEE Robotics and Automation Letters , 5(2):2372–2379,2020. doi:10.1109/LRA.2020.2969931.[39] A. Caporali, K. Galassi, R. Zanella, and G. Palli. Fastdlo: Fast deformable linear objectsinstance segmentation. IEEE Robotics and Automation Letters , 7(4):9075–9082, 2022. doi:10.1109/LRA.2022.3189791.[40] B. O. Community. Blender - a 3D modelling and rendering package . Blender Foundation,Stichting Blender Foundation, Amsterdam, 2018. URL http://www.blender.org .[41] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In InternationalConference on Learning Representations , 2015.[42] P. Iakubovskii. Segmentation models pytorch. https://github.com/qubvel/segmentation_models.pytorch , 2019.[43] K. Reidemeister. Knot theory . BCS Associates, 1983.7 Appendix7.1 Details on HANDLOOM Methods7.1.1 Over/Undercrossing PredictorModel Architecture and Inference: The binary classification threshold of 0.275 is determinedby testing accuracy on a held-out validation set of 75 images on threshold values in the range111 2 Figure 5: Reidemeister Moves and Crossing Cancellation : Left of part 1 depicts Reidemeister Move II. Rightof part 1 depicts Reidemeister Move I. Part 2 shows that by algorithmically applying Reidemeister Moves IIand I, we can cancel trivial loops, even if they visually appear as knots.[0.05,0.95]at intervals of 0.05. Scores <0.275 indicate undercrossing predictions and scores≥0.275indicate overcrossing predictions. We output the raw prediction score and a scaled confi-dence value (0.5 to 1) indicating the classifier’s probability.7.2 Details on Robot Untangling using HANDLOOM7.2.1 Knot DefinitionConsider a pair of points p1andp2on the cable path at time twith ( p1, p2∈ Ct). Knot theory strictlyoperates with closed loops, so to form a loop with the current setup, we construct an imaginarycable segment with no crossings joining p1top2[43]. This imaginary cable segment passes abovethe manipulation surface to complete the loop between p1andp2(“p1→p2loop”). A knot existsbetween p1andp2at time tif no combination of Reidemeister moves I, II (both shown in Figure 5),and III can simplify the p1→p2loop to an unknot, i.e. a crossing-free loop. In this paper, we aim tountangle semi-planar knots. For convenience, we define an indicator function k(s) : [0,1]→ {0,1}which is 1 if the point θ(s)lies between any such points p1andp2, and 0 otherwise.Based on the above knot definition, this objective is to remove all knots, such thatRk(s)10= 0.In other words, the cable, if treated as a closed loop from the endpoints, can be deformed into anunknot. We measure the success rate of the system at removing knots, as well as the time taken toremove these knots.7.2.2 State DefinitionWe construct line segments between consecutive points on the trace outputted by the learned cabletracer (Section 4.1). Crossings are located at the points of intersection of these line segments. 
Weuse the crossing classifier (Section 4.2) to estimate whether these crossings are over/undercrossings.We also implement probabilistic crossing correction with the aim of rectifying classification errors,as we describe in Section 4.2.2.We denote the sequence of corrected crossings, in the order that they are encountered in the trace,byX= (c1, ..., c n), where nis the total number of crossings and c1, ..., c nrepresent the crossingsalong the trace.7.2.3 Crossing CancellationCrossing cancellation allows for the simplification of cable structure by removing non-essentialcrossings, shown in Figure 5. It allows the system to filter out some trivial configurations as Rei-demeister moves maintain knot equivalence [43]. We cancel all pairs of consecutive crossings ( ci,ci+1) inXfor some j) that meet any of the following conditions:•Reidemeister I: ciandci+1are at the same location, or•Reidemeister II: ciandci+1are at the same set of locations as cjandcj+1(cj, cj+1∈ X).Additionally, ciandci+1are either both overcrossings or both undercrossings. We alsocancel ( cj, cj+1) in this case.12Knot Detection & Topology Cage-Pinch Points Cage Pinch Knot Figure 6: Knot Detection and Cage Pinch Point Selection : The left image shows using crossing cancellationrules from knot theory, the knot detection algorithm analytically determines where the knot begins in the cable.The right image shows the survey process for selecting the cap pinch points.We algorithmically perform alternating Reidemeister moves I and II as described. We iterativelyapply this step on the subsequence obtained until there are no such pairs left. We denote the finalsubsequence, where no more crossings can be canceled, by X′.7.2.4 Knot DetectionWe say that a subsequence of X′,Kij= (ci, ..., c j), defines a potential knot if:•ciis an undercrossing, and•cjis an overcrossing at the same location, and• at least one intermediate crossing, i.e. crossing in X′that is not ciorcj, is an overcrossing.The first invariant is a result of the fact that all overcrossings preceding the first undercrossing (asseen from an endpoint) are removable. We can derive this by connecting both endpoints from abovevia an imaginary cable (as in Section 7.2.1): all such overcrossings can be removed by manipulatingthe loop formed. The second invariant results from the fact that a cable cannot be knotted without aclosed loop of crossings. The third and final invariant can be obtained by noting that a configurationwhere all intermediate crossings are undercrossings reduces to the unknot via the application ofthe 3 Reidemeister moves. Therefore, for a knot to exist, it must have at least one intermediateovercrossing.Notably, these conditions are necessary, but not sufficient, to identify knots. However, they improvethe likelihood of bypassing trivial configurations and detecting knots. This increases the system’sefficiency by enabling it to focus its actions on potential knots.7.2.5 Algorithmic Cage-Pinch Point DetectionAs per the definition introduced in Section 7.2.4, given knot Kij= (ci, ..., c j),ciandcjdefine thesegments that encompass the knot where ciis an undercrossing and cjis an overcrossing for thesame crossing. The pinch point is located on the overcrossing cable segment, intended to increasespace for the section of cable and endpoint being pulled through. The cage point is located on theundercrossing cable segment. 
To determine the pinch point, we search from crossing cu1to crossingcu2.cu1is the previous undercrossing in the knot closest in the trace to j.u2> j andcu2is thenext undercrossing after the knot. We search in this region and select the most graspable region topinch at, where graspability ( G) is defined by the number of pixels that correspond to a cable withina given crop and a requirement of sufficient distance from all crossings ci. To determine the cagepoint, we search from crossing citockwhere i < k < j andckis the next undercrossing in the knotclosest in the trace to ci. We similarly select the most graspable point. If no points in the search13space for either the cage or pinch point are graspable, meaning G <TwhereTis an experimentallyderived threshold value, we continue to step along the trace from cu2for pinch and from ckfor cageuntilG≥ T. This search process is shown in Figure 6.7.2.6 Manipulation PrimitivesWe use the same primitives as in SGTM 2.0 (Sliding and Grasping for Tangle Manipulation 2.0) [4]to implement HANDLOOM as shown in Figure 7 for untangling long cables. We add a perturbationmove.Cage-Pinch Dilation: We use cage-pinch grippers as in Viswanath et al. [3]. We have one grippercage grasp the cable, allowing the cable to slide between the gripper fingers but not slip out. Theother gripper pinch grasps the cable, holding the cable firmly in place. This is crucial for preventingknots in series from colliding and tightening during untangling. The partial version of this moveintroduced by Shivakumar et al. [4] separates the grippers to a small, fixed distance of 5 cm.Reveal Moves: First, we detect endpoints using a Mask R-CNN object detection model. If bothendpoints are visible, the robot performs an Endpoint Separation Move by grasping at the two end-points and then pulling them apart and upwards, away from the workspace, allowing gravity to helpremove loops before placing the cable back on the workspace. If both endpoints are not visible, therobot performs an Exposure Move . This is when it pulls in cable segments exiting the workspace.Building on prior work, we add a focus on where this move is applied. While tracing, if we detectthe trace hits the edge, we perform an exposure move at the point where the trace exits the image.Perturbation Move: If an endpoint or the cable segment near an endpoint has distracting cablesegments nearby, making it difficult for the analytic tracer to trace, we perturb it by grasping it andtranslating in the x-y plane by uniformly random displacement in a 10cm×10cm square in order toseparate it from slack.Knot Endpoint Detection Certain Tracer Initialization Perturb Endpoint No Knots Analyze Topology Partial Dilation Exposure Move Dilation Uncertain Left Workspace Re-Encountered Trace Hit Endpoint (Re)start Figure 7: Untangling Algorithm with HANDLOOM : We first detect the endpoints and initialize the tracerwith start points. If we are not able to obtain start points, we perturb the endpoint and try again. Next, wetrace. While tracing, if the cable exits the workspace, we pull the cable towards the center of the workspace. Ifthe tracer gets confused and begins retracing a knot region, we perform a partial cage-pinch dilation that willloosen the knot, intended to make the configuration easier to trace on the next iteration. If the trace is ableto successfully complete, we analyze the topology. If there are no knots, we are done. 
If there are knots, weperform a cage-pinch dilation and return to the first step.7.2.7 Cable Untangling SystemCombining HANDLOOM and the manipulation primitives from Section 7.2.6, the cable untanglingalgorithm works as follows: First, detect endpoints and initialize the learned tracer with 6 steps ofthe analytic tracer. If HANDLOOM is unable to get these initialization points, perturb the endpointfrom which we are tracing and return to the endpoint detect step. Otherwise, during tracing, ifthe cable leaves the workspace, perform an exposure move. If the trace fails and begins retracingitself, which can happen in denser knots, perform a partial cage-pinch dilation as in [4]. If the tracecompletes and reaches the other endpoint, analyze the topology. If knots are present, determine the14Table 6: Tracing on Unseen Cables ResultsCable Reference TR 1 2 3 4 5 Avg.Tracing Success Rate 6/8 7/8 8/8 7/8 6/8 6/8 40/48=83%Failures (I) 2 (I) 1 (II) 1 (I) 1, (III) 1 (II) 1, (III) 1cage-pinch points for it, apply a cage-pinch dilation move to them, and repeat the pipeline. If noknots are present, the cable is considered to be untangled. The entire system is depicted in Figure 7.7.3 Details on Knot Tying ExperimentsKnot Tying Demo 121Knot Tying With Two Distractor Cables 2Completed Overhand Knot Figure 8: Using HANDLOOM for Learning from Demos: The left (blue panel) displays the single humandemonstration, indicating the pick and place points for tying an overhand knot. The right (pink panel) showsthis demonstration successfully applied to the cable in a different configuration with 2 other distractor cables inthe scene. The first step of the demonstration is achieved through an arc length relative action while the secondstep is achieved through a crossing relative action.When performing state-based imitation, each of the pick and place points pifrom the demonstrationis parameterized in the following way: 1) find the point along the trace, T, closest to the chosenpoint ˆpiwith index jinT, 2) find the displacement di=pi−ˆpiin the local trace-aligned coordinatesystem of ˆpi, 3) in memory, for point pi, store di, arc length of ˆpi(Pjx=1Tx−Tx−1), and the indexvalue of the crossing in the list of crossings just before ˆpi.When rolling out a policy using this demonstration, there are two ways to do so: 1) relative to the arclength along the cable, or 2) relative to the fraction of the arc length between the 2 crossing indices.The way to do so is to find the point on the cable with the same arc length as ˆpifrom the demo or thefractional arc length between the same 2 crossing indices, depending on the type of demonstration.Then, apply diin the correct trace-aligned coordinate system. An example demonstration is shownin Figure 8.7.4 Experiments Failure Mode Analysis7.4.1 Using HANDLOOM for Tracing Cables Unseen During Training(1) Retraces previously traced cable (went in a loop).(2) Missteps onto a parallel cable.(3) Skips a loop.Figure 9 shows the cables tested on. The most common failure mode is (I), retracing previouslytraced cable. This is commonly observed in cases with near parallel segments or in dense loop areaswithin a knot.7.4.2 Using HANDLOOM for Cable Inspection in Multi-Cable Settings(I) Misstep in the trace, i.e. 
the trace did not reach any adapter.(II) The trace reaches the wrong adapter.15Figure 9: Cables for Tracing Unseen Cables ExperimentTable 7: Multi-Cable Tracing ResultsAnalytic LearnedTier A1 3/30 27/30Tier A2 2/30 23/30Tier A3 1/30 23/30Failures (I) 3, (II) 45, (III) 36 (I) 14, (II) 1, (III) 2(III) The trace reaches the correct adapter but is an incorrect trace.The most common failure mode for the learned tracer, especially in Tier A3, is (I). One reason forsuch failures is the presence of multiple twists along the cable path (particularly in Tier A3 setups,which contain more complex inter-cable knot configurations). The tracer is also prone to deviatingfrom the correct path on encountering parallel cable segments. In Tier A2, we observe two instancesof failure mode (III), where the trace was almost entirely correct in that it reached the correct adapterbut skipped a section of the cable.The most common failure modes across all tiers for the analytic tracer are (II) and (III). The analytictracer particularly struggles in regions of close parallel cable segments and twists. As a result of thescoring metric, 87 of the 90 paths that we test reach an adapter; however, 45/90 paths did not reachthe correct adapter. Even for traces that reach the correct adapter, the trace is incorrect, jumping toother cables and skipping sections of the true cable path.7.4.3 Using HANDLOOM for Physical Robot Knot Tying from Demonstrations(1) Trace missteps onto a parallel cable.(2) Cable shifted during manipulation, not resulting in a knot at the end.Failure mode (1) occurs when the distractor cable creates near parallel sections to the cable ofinterest for knot tying, causing the trace to misstep. Failure mode (2) occurs when the manipulationsometimes slightly perturbs the rest of the cable’s position while moving one point of the cable,causing the end configuration to not be a knot, as intended.7.4.4 Using HANDLOOM for Knot Detection(A) The system fails to detect a knot that is present—a false negative.(B) The system detects a knot where there is no knot present—a false positive.(C) The tracer retraces previously traced regions of cable.(D) The crossing classification and correction schemes fail to infer the correct cable topology.(E) The knot detection algorithm does not fully isolate the knot, also getting surrounding trivialloops.(F) The trace skips a section of the true cable path.(G) The trace is incorrect in regions containing a series of close parallel crossings.Table 8: Learning From DemosSucc. Rate FailuresTier B1 5/5 -Tier B2 4/5 (1) 1Tier B3 4/5 (1) 1, (2) 116Table 9: HANDLOOM ExperimentsSGTM 2.0 HANDLOOM (-LT) HANDLOOM (-CC) HANDLOOMTier C1 2/30 14/30 20/30 24/30Tier C2 28/30 8/30 21/30 26/30Tier C3 12/30 14/30 0/30 19/30Failures (A) 30, (B) 18 (D) 11, (F) 7 (B) 38, (C) 5, (B) 11, (D) 8(G) 24, (H) 11 (E) 6 (F) 1Table 10: HANDLOOM and Physical Robot Experiments (90 total trials)Tier D1 Tier D2 Tier D3SGTM 2.0 HANDLOOM SGTM 2.0 HANDLOOM SGTM 2.0 HANDLOOMKnot 1 Succ. 11/15 12/15 6/15 11/15 9/15 14/15Knot 2 Succ. - - - - 2/15 6/15Verif. Rate 11/11 8/12 6/6 6/11 1/2 2/6Knot 1 Time (min) 1.1±0.1 2.1±0.3 3.5±0.7 3.9±1.1 1.8±0.4 2.0±0.4Knot 2 Time (min) - - - - 3.1±1.2 7.5±1.6Verif. 
Time (min) 5.7±0.9 6.1±1.4 6.4±1.8 10.1±0.7 5.4 9.6±1.5Failures (7) 4 (1) 2, (2) 1 (1) 3, (5) 6 (2) 2, (4) 1 (1) 3, (2) 3, (5) 3 (1) 2, (2) 3(1) 2, (2) 1 (1) 3, (5) 6 (5) 1 (6) 2, (7) 2 (3) 1, (6) 3(H) The tracer takes an incorrect turn, jumping to another cable segment.For SGTM 2.0, the most common failure modes are (A) and (B), where it misses knots or incorrectlyidentifies knots when they are out of distribution. For HANDLOOM (-LT), the most common failuremodes are (F), (G), and (H). All 3 failures are trace-related and result in knots going undetected orbeing incorrectly detected. For HANDLOOM (-CC), the most common failure modes are (B) and(E). This is because HANDLOOM (-CC) is unable to distinguish between trivial loops and knotswithout the crossing cancellation scheme. By the same token, HANDLOOM (-CC) is also unableto fully isolate a knot from surrounding trivial loops. For HANDLOOM, the most common failuremode is (B). However, this is a derivative of failure mode (D), which is present in HANDLOOM(-LT), HANDLOOM (-CC), and HANDLOOM. Crossing classification is a common failure modeacross all systems and is a bottleneck for accurate knot detection. In line with this observation, wehope to dig deeper into accurate crossing classification in future work.7.4.5 Using HANDLOOM for Physical Robot Untangling(1) Incorrect actions create a complex knot.(2) The system misses a grasp on tight knots.(3) The cable falls off the workspace.(4) The cable drapes on the robot, creating an irrecoverable configuration.(5) False termination.(6) Manipulation failure.(7) Timeout.The main failure modes in HANDLOOM are (1), (2), and (6). Due to incorrect cable topologyestimates, failure mode (1) occurs: a bad action causes the cable to fall into complex, irrecoverablestates. Additionally, due to the limitations of the cage-pinch dilation and endpoint separation moves,knots sometimes get tighter during the process of untangling. While the perception system is stillable to perceive the knot and select correct grasp points, the robot grippers bump the tight knot,moving the entire knot and causing missed grasps (2). Lastly, we experience manipulation failureswhile attempting some grasps as the YuMi has a conservative controller (6). We hope to resolvethese hardware issues in future work.The main failure modes in SGTM 2.0 are (5) and (7). Perception experiments indicate that SGTM2.0 has both false positives and false negatives for cable configurations that are out of distribution.(5) occurs when out-of-distribution knots go undetected. (7) occurs when trivial loops are identifiedas knots, preventing the algorithm from terminating.17 |
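Returning to the demonstration parameterization of Section 7.3, the arc-length bookkeeping used to store and replay pick/place points can be sketched as follows. This is a minimal sketch that assumes the trace T is an (N, 2) array of pixel coordinates; the helper names are illustrative and not the authors' implementation.

```python
import numpy as np

def cumulative_arc_length(trace: np.ndarray) -> np.ndarray:
    """Cumulative arc length along an (N, 2) trace: sum over x of ||T_x - T_{x-1}||."""
    seg_lengths = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    return np.concatenate([[0.0], np.cumsum(seg_lengths)])

def point_at_arc_length(trace: np.ndarray, s: float) -> np.ndarray:
    """Return the trace point whose cumulative arc length is closest to the stored value s."""
    arc = cumulative_arc_length(trace)
    return trace[int(np.argmin(np.abs(arc - s)))]
```

At rollout time, the stored arc length (or fractional arc length between two crossing indices) selects a point on the new trace, and the stored local displacement is then applied in that point's trace-aligned frame.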
93qz1k6_6h | Dexterous Functional GraspingAnanye Agarwal Shagun Uppal Kenneth Shaw Deepak PathakCarnegie Mellon UniversityFigure 1: We use a single policy trained in simulation to pickup and grasp objects like hammers, drills, saucepan,staplers and screwdriver in different positions and orientations. An affordance model based on matching DINOv2features is used to localize the object and move above the relevant region of the object. A blind reactive policythen picks up the object and moves it inside the palm to a firm grasp so that post-grasp motions like drilling,hammering, etc can be executed. Videos at https://dexfunc.github.io/ .Abstract: While there have been significant strides in dexterous manipulation, mostof it is limited to benchmark tasks like in-hand reorientation which are of limitedutility in the real world. The main benefit of dexterous hands over two-fingeredones is their ability to pickup tools and other objects (including thin ones) and graspthem firmly in order to apply force. However, this task requires both a complexunderstanding of functional affordances as well as precise low-level control. Whileprior work obtains affordances from human data this approach doesn’t scale to low-level control. Similarly, simulation training cannot give the robot an understandingof real-world semantics. In this paper, we aim to combine the best of both worldsto accomplish functional grasping for in-the-wild objects. We use a modularapproach. First, affordances are obtained by matching corresponding regions ofdifferent objects and then a low-level policy trained in sim is run to grasp it. Wepropose a novel application of eigengrasps to reduce the search space of RL usinga small amount of human data and find that it leads to more stable and physicallyrealistic motion. We find that eigengrasp action space beats baselines in simulationand outperforms hardcoded grasping in real and matches or outperforms a trainedhuman teleoperator. Videos at https://dexfunc.github.io/ .Keywords: Functional Grasping, Tool Manipulation, Sim2real1 IntroductionThe human hand has played a pivotal role in the development of intelligence – dexterity enabledhumans to develop and use tools which in turn necessitated the development of cognitive intelligence.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.[1,2,3,4,5] Dexterous manipulation is central to the day-to-day activities performed by humansranging from tasks like writing, typing, lifting, eating, or tool use to perform end tasks. In contrast,the majority of robot learning research still relies on using two-fingered grippers (usually paralleljaws) or suction cups which makes them restricted in terms of the kind of objects that can be graspedand how they can be grasped. For instance, grasping a hammer using a parallel jaw gripper is notonly challenging but also inherently unstable due to the center of mass of the hammer being close tothe head, which makes it impossible to use it for the hammering function it is intended for. Althoughthere are lots of recent works in learning control of dexterous hands, they are either limited to simplegrasping or the tasks of in-hand reorientation [ 6,7,8,9,10,11] which ignore the functional aspect ofpicking the object for tool use.This paper investigates the problem of functional grasping of such complex daily life objects using alow-cost dexterous multi-fingered hand. For instance, consider the sequence of events that take placewhen one uses a hammer. 
First, the hammer must be detected and localized in the environment. Next,one must position their hand in a suitable pose perpendicular to the handle such that a suitable grasppose may be initiated. A hammer may be feasibly grasped from both the hammer or the head andchoosing the correct pose (also known as pre-grasp pose ) requires an understanding of how hammerswork. Next, the actual grasping motion is executed which is a high-dimensional closed-loop operationinvolving first picking up the hammer from the table and then moving it with respect to the handinto a firm power grasp. Power grasp is essential to ensure the stability of the hammer during usage.Once this is done, the arm can then execute the hammering motion while the hand holds it stably(post-grasp trajectory ). Notably, the act of functional grasping, which is almost a muscle memory forhumans, is not just a control problem but lies at the intersection of perception, reasoning, and control.How to do it seamlessly in a robot is the focus of our work.Inspired by the above example, we approach the problem of functional grasping in three stages:predicting pre-grasp, learning low-level control of grasping, post-grasp trajectory. Out of thesestages, visual reasoning is the critical piece of the first and third stage, while the second stage can beperformed blind using proprioception as long as the pre-grasp pose is reasonable. To obtain the pre-grasp pose, we use a one-shot affordance model that gives pre-grasp keypoints for different objects indifferent orientations by finding correspondences across objects. To obtain these correspondences, weleverage a pretrained DinoV2 model [ 12] which is trained using self-supervised learning on internetimages. This allows us to generalize across object instances. However, a more challenging problemis how to learn the low-level control for functional grasping the task itself.We take a sim2real approach for the grasping motion in our approach. Prior approaches to sim2realhave shown remarkable success for in-hand reorientation [ 7,6] and locomotion [ 13,14,15,16]. How-ever, we observe that directly applying prior sim2real methods that have shown success in locomotionor reorientation yields unrealistic finger-gaiting results in simulation that are not transferrable to thereal world. This is because grasping tools typically involve continuous surface contacts and highforces while maintaining the grasping pose – challenges which pose a significant sim2real gap andare nontrivial to engineer reward for. We introduce an action compression scheme to leverage a smallamount of human demo data to reduce the action space of the hand from 16 to 9 and constrain it tooutput physically realistic poses. We evaluate our approach across 7 complex tasks in both the realworld and simulation and find that our approach is able to make significant progress towards thismajor challenge of dexterous functional grasping as illustrated in Figure 1.2 Method: Dexterous Functional GraspingIn this paper, we aim to combine the best of both human data and large scale simulation trainingto accomplish dexterous functional grasping in the real world. Given an object to grasp we use anaffordance model to predict a plausible functional grasp pose for the hand. Then, we train a blindpickup policy to pickup the object and then grasp it tightly so that the arm may execute the post-grasptrajectory. Our method is divided into three phases - the pre-grasp, grasp and post-grasp (see Fig. 
2)2High-dimensional action spaceEigen Grasps∏Policy GraspxPre-graspOne-shot Affordance Correspondences for functional pre-graspsPost-graspTele-operationMotion CapturePassive dataFigure 2: To get the pre-grasp pose we use a one-shot affordance model. After annotating one object we areable to get affordances for other objects in that category via feature matching. Given a new object, the arm ismoved to that point and oriented perpendicular to the principal component of the object mask. The sim2realpickup policy is then executed and moves the object into a power grasp. After this, a post grasp trajectory can besafely executed.In the pre-grasp phase, an affordance model outputs a region of interest of the object and we use thelocal object geometry around that region to compute a reasonable pre-grasp pose. We train a sim2realpolicy to execute robust grasps for pickup. However, in contrast to two fingered manipulation orlocomotion where simple reward functions suffice, in the complex high-dimensional dexterous caseit is easy to fall into local minima or execute poses in simulation that are not realizable in the realworld. We therefore use human data to extract a lower-dimensional subspace of the full action spaceand run RL inside the restricted action space. Empirically, this leads to physically plausible posesthat can transfer to real and stabler RL training.2.1 Pre-grasp pose from affordancesAn affordance describes a region of interest on the object that is relevant for the purpose of using it.This usually cannot be inferred from object geometry alone and depends upon the intended properuse of the object. For instance, by just looking at the geometry or by computing grasp metrics wecould conclude that grabbing a hammer from the head or handle are both equally valid ways of usingit. However, because we have seen other people use it we know that the correct usage is to grab thehandle. This problem has been studied in the literature and one approach is to use human data in theform of videos, demos to obtain annotations for affordances. However, these are either not scalableor too noisy to enable zero-shot dexterous grasping.Another approach is to leverage the fact that affordances across objects usually correspond. For allhammers, no matter the type the hammering, affordance will always be associated with the handle.This implies that feature correspondence can be used in a one-shot fashion to obtain affordances. Inparticular, we use Hadjivelichkov et al. [17], where for each object category we annotate one imagefrom the internet with its affordance mask. To obtain the affordance mask for a new object instancewe simply match DINO-ViT features to find the region which matches the specified mask. Since themask may bleed across the object boundary we take its intersection with the segment obtained usingDETIC [ 18]. Taking the center of the resulting mask gives us the keypoint (ximg, yimg)in image spacecorresponding to the pre-grasp position. To get the zimg, we project to the points (ximg, yimg)into thealigned depth image and then transform by camera intrinsics and extrinsics to get the correspondingpoint in the coordinate frame of the robot (xrobot, yrobot, zrobot). To get the correct hand orientation qwe use the object mask obtained from DETIC and take the angle perpendicular to its largest principalcomponent. Since there are three cameras, one each along x, y, z axes (Fig. 7) we repeat this process3for each camera and pick the angle that has the highest affordance matching score (see Fig. ??). 
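The two geometric steps above, back-projecting the affordance keypoint through the aligned depth image and taking the hand angle perpendicular to the mask's largest principal component, can be sketched as follows. The function names and calibration conventions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pixel_to_robot_frame(u, v, depth_m, K, T_robot_cam):
    """Back-project pixel (u, v) with aligned depth (meters) into the robot base frame.
    K: 3x3 camera intrinsics; T_robot_cam: 4x4 camera-to-robot transform."""
    p_cam = depth_m * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    p_robot = T_robot_cam @ np.append(p_cam, 1.0)
    return p_robot[:3]

def grasp_yaw_from_mask(mask):
    """Hand yaw chosen perpendicular to the largest principal component of the object mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    major_axis = vt[0]  # direction of largest variance in the mask
    return np.arctan2(major_axis[1], major_axis[0]) + np.pi / 2.0
```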
Thisallows us to grasp objects in any direction, like upright drills and glasses.Given the pregrasp pose, we first move the hand to a point at fixed offset (xrobot, yrobot, zrobot) +δvwhere δvis a fixed offset along the chosen grasp axis. We then move the finger joints to a pre-grasppose with the joint positions midway between their joint limits. We found that same pre-grasp poseto work well across objects since our policy learns to adapt to the inaccuracies in the pre-grasp.2.2 Sim2real for dexterous graspingOnce the robot is in a plausible pre-grasp pose it must execute the grasp action which involves usingthe fingers to grip the object and then moving it into a stable grasp pose. This requires high frequencyclosed-loop control. Further, this is typically a locally reactive behavior which can be accomplishedusing proprioception alone. Indeed, once we move our hand close to the object we wish to graspwe can usually pick it up even if we close our eyes. However, the challenge is that learning highfrequency closed loop behavior typically requires a lot of interaction data which is missing fromhuman videos and infeasible to scale via demos. In the past, sim2real has had remarkable successesin locomotion and in-hand dexterous manipulation in learning robust and reactive policies and wepropose to use this method here.Dexterous manipulation however presents a unique challenge because of its high-dimensional nature.It is easy for the hand to enter physically inconsistent poses or experience self collisions. Further, RLin high dimensional action spaces is unstable or sample efficient. We propose to leverage a smallamount of human data to restrict the action space to physically realistic poses.Eigengrasp action space A small number of human demos are often used to guide RL towardsreasonable solutions like offline RL [ 19], DAPG [ 20]. However, the main problem with these is thatthey fail to learn optimal behavior from highly suboptimal demos. Further, the coverage of the demodata may be very poor which can artificially restrict the exploration space of the RL algorithm. Wepropose a simple alternative to these approaches which works from a few demos and can discoveroptimal behaviors even from suboptimal data. Our insight is that we have a very weak constraint onthe behavior of the RL policy. We only care that the individual hand poses are realistic and not somuch about the exact sequence in which they occur. We can therefore restrict the action space suchthat only realistic hand poses are possible.In particular, suppose we are given a mocap dataset D={τ1, . . . , τ n}where τi= (x1, . . . , xk)andxi∈R16is a set of joints angles of the 16 dof hand. We perform PCA on the set of all hand posesto get 9 eigenvectors e1, . . . , emwhere m= 9. These vectors are called eigengrasps [ 21] and havebeen classically used in grasp synthesis approaches. Here, we instead use it as a compressed actionspace for RL. Our policy predicts m-dimensional actions π(ot) =at∈Rm. The raw joint angles arethen computed as a linear combination of eigenvectors (at)1e1+. . .+ (at)kek. This transformationreduces the action dimension of the RL problem and decreases sample complexity in addition toenforcing realism. It also exploits the property that the convex combination of any two realistic handposes is also likely to be realistic. Thus, doing PCA (as opposed to training a generative model)allows the policy to output hand poses that were not seen in the dataset. 
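A minimal sketch of this eigengrasp mapping is shown below, assuming the mocap hand poses are stacked into an (N, 16) array; the helper names are illustrative.

```python
import numpy as np

def compute_eigengrasps(poses: np.ndarray, m: int = 9) -> np.ndarray:
    """PCA over an (N, 16) array of mocap hand poses -> (m, 16) eigengrasp basis."""
    centered = poses - poses.mean(axis=0)
    # right singular vectors are the principal directions, ordered by explained variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:m]

def action_to_joint_angles(action: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Map an m-dim policy action to 16 joint angles as a linear combination of eigengrasps."""
    return action @ basis
```

In practice the reconstructed joint angles would still be clipped to the hand's joint limits before being sent to the low-level controller.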
Empirically, we find that this stabilizes training and minimizes variation between different random seeds.

Rewards. We train our policy to lift objects off the ground and then firmly grasp them in the hand. We find that a simple reward function combining two terms, $r_{\text{threshold}}$ and $r_{\text{hand-obj}}$, is enough. The first is a binary signal incentivizing the policy to pick up the object, $r_{\text{threshold}}(t) = \mathbb{I}\left[(r_{\text{obj}}(t))_z \geq 0.04\,\text{cm}\right]$, and the second is a sum of exponentials and an L2 distance incentivizing the object to stay close to the palm of the hand,
$$r_{\text{hand-obj}}(t) = \sum_{i=1}^{3} \exp\left(-\frac{\|r_{\text{obj}} - r_{\text{hand}}\|}{d_i}\right) - 4\,\|r_{\text{obj}} - r_{\text{hand}}\|$$
where $d_1 = 10$ cm, $d_2 = 5$ cm, and $d_3 = 1$ cm. The overall reward function is $r(t) = r_{\text{hand-obj}}(t) + 0.1 \cdot r_{\text{threshold}}(t) + 1$. Due to the eigengrasp parameterization, we do not need any additional reward-shaping terms.

Policy Architecture. We use a recurrent policy that maps observations $o_t \in \mathbb{R}^{16}$ to actions $a_t \in \mathbb{R}^{9}$. A stateful policy is able to adapt to changes in environment dynamics better than a feedforward one. This allows our robot to adapt to slight errors in the pre-grasp pose from the affordance model. The policy observes the 7-dimensional target pose (position, quaternion) of the end-effector and the 16 joint angle positions of the hand.

Training environment. We want our policy to be robust to different surface properties and geometries and to grasp objects firmly. We therefore domain randomize the physical properties of the object, robot, and simulation environment. We procedurally generate a set of hammers in simulation with randomized physical parameters. The hand is initialized in a rough pre-grasp pose with hand joint angles zeroed out, which corresponds to a neutral relaxed pose for the hand. The end-effector pose is initialized to be close to the real-world pose obtained from the affordance model. The arm is kept close to the ground for 1 s to allow the grasp to execute and is then spun around in a circle. Episodes are terminated if the hand-object distance exceeds 20 cm. This spinning motion produces tight grasps, and we see emergent behavior where the hand adjusts its grasp in response to changes in orientation. We also randomize physical properties of the simulation and add Gaussian noise to observations and actions to simulate actuator noise (see Tab. 4).

2.3 Post-grasp trajectory
Once the object or tool is firmly grasped, since it is mounted on a 6-dof arm, it can be moved arbitrarily in space to accomplish tasks such as screwing, hammering, and drilling. During training and evaluation we use either motion capture trajectories or a set of keypoints that we interpolate between, but in principle these could be obtained from other sources such as internet video or third-person imitation.

3 Experimental Setup
We demonstrate the performance of our method on a variety of objects, both similar and very dissimilar to the training objects: stapler, drill (light and heavy), saucepan, and hammer (light and heavy). In our real-world experiments, we aim to understand the reliability and efficiency of our method relative to an expert teleop oracle (20 hours) and a hardcoded grasping primitive. The former acts as an upper bound on the performance of the hardware, while the latter is designed to show that large-scale sim training yields a more robust policy than a hardcoded grasp. In simulation, we test the effectiveness of our restricted action space and policy architecture.
First,we compare against an unconstrained baseline that operates in the full 16 dimensional action space.Second, we compare against a policy that operates in the latent space of a VAE trained on the mocapdataset. Unlike our method, since a V AE is a generative model it can only output hand poses seen inthe dataset and cannot extrapolate to new ones. Finally, we compare to a feedforward version of ourmethod where the RNN policy is replaced by a feedforward one. This is designed to test whetherrecurrence helps in adaptation to domain randomization.We experimentally validate the pre-grasp affordance matching [ 17] part of our pipeline separately.We compare against CLIPort [ 22] and CLIPSeg [ 23], two CLIP-based affordance prediction methods.CLIPort uses demonstration data to learn the correct affordances in a supervised fashion. CLIPSeguses CLIP text and image features to zero-shot segment an object given a text prompt.5Average Reward Success RateHammer Drill Screwdriver Hammer Drill ScrewdriverUnconstrained 213 .40±169 .37 102 .12±36.12 121 .28±96.05 0.60±0.55 0 .09±0.11 0 .46±0.45V AE 140 .60±109 .24 83 .34±43.32 117 .25±76.26 0.30±0.44 0 .08±0.18 0 .25±0.41Feed-forward 232 .80±175 .59 104 .61±44.84 153 .19±105 .830.60±0.54 0 .21±0.19 0 .56±0.52Ours 327 .40±11 .61 129 .03±22 .58 211 .13±11 .141.00±0.00 0 .23±0.16 0 .95±0.10Table 1: We measure the average reward and success rate of the trained policy in simulation. For each methodwe train a policy to hold the object close to the palm while arm spins. A success is counted when the arm doesnot drop the object at anytime. We see that our method outperforms the baselines and has significantly lessvariation between the runs. This is likely because the restricted action space makes the exploration problemeasier and the physically plausible poses help keep the motion smooth. Each policy was trained randomizedhammer but still generalizes to other different objects.3.1 HardwareWe use the xarm6 with our own custom hand pictured in Fig. 7. The arm has 6 actuated joints, whilethe hand has 16 joints, four on each digit (three fingers and one thumb). An overhead calibratedD435 camera facing downward is used to obtain masks and affordance regions. The hand consists ofDynamixel servos mounted in a special kinematic structure designed to maximize dexterity [ 24]. Weuse an overhead D435 camera to obtain pre-grasp end-effector poses. Both the arm and the hand runat 30Hz. To teleoperate the hand and collect human demos for eigengrasps we use a Manus VR glovewith SteamVR lighthouses which gives fingertip and hand positions which are then retargeted to ourhand as in Figure 8.3.2 Implementation DetailsWe use IsaacGym [ 25] as a simulator with IsaacGymEnvs for the environments and rl games as thereinforcement learning library. The policy contains a layer-normed GRU with 256 as the hiddenstate followed by an MLP with hidden states 512, 256, 128. The policy is trained using PPO withbackpropagation through time truncated at 32 timesteps. We run 8192 environments in parallel andtrain for 400 epochs.4 Results and Analysis4.1 Simulation ResultsWe train each baseline and our method for 400 epochs over 5 seeds. We find that ours beats all othermethods primarily because it is stable with respect to the seed whereas the other baselines fluctuatewidely in performance across seeds resulting in a high standard deviation and lower average overallperformance. Note that our method also perfectly solves the training task for all seeds. 
This is likelydue to a combination of two factors (a) the restricted action space nearly halves the action dimension(from 16 to 9), since the search space scales exponentially with action dimension this cuts down thespace significantly and it is more likely that the algorithm discovers optimal behavior regardless ofseed, and (b) since each hand pose is realistic and doesn’t have self-collisions it leads to smootherand more predictable dynamics in simulation allowing the policy to learn better.The RNN policy is also better and more stable than the feedforward variant as reported in Table 1.This is because (a) an RNN can use the hidden state to adapt to domain randomization (b) since thehand hardware does not output joint velocities, the feedforward policy has no idea of how fast thefingers are moving which can hinder performance. The RNN on the other hand is able to implicitlycapture velocity of joints in the hidden state and this helps it to learn better.4.2 Real World ResultsWe choose a variety of objects to compare against – hammer (light and heavy), saucepan, drill(light and heavy), stapler and screwdriver. Of these, hammer and saucepan are quite similar to the6Hammer (unseen) Spatula (seen) Frying Pan (seen)Pick success IoU Pick success IoU Pick success IoUCLIPort 2/10 0.034 6/10 0.15 7/10 0.15ClipSeg 1/10 0.05 2/10 0.06 1/10 0.014Ours 9/10 0.33 8/10 0.23 7/10 0.17Table 3: We compare our affordance matching against CLIPort and CLIPSeg in terms of pick success rateand IoU between the predicted and ground truth affordance (human-annotated). We use the simulated CLIPortdataset for both unseen and seen objects. Our method outperforms CLIPort on both seen and unseen categories.CLIPSeg fails because it does not capture object parts such as the handle of the hammer.training distribution because of the handle geometry while the drill, stapler and screwdriver havesubstantially different geometry. The heavy drill is especially challenging because of its narrow gripand unbalanced weight distribution. We run 10 trials per object per baseline in the real world (seeTable 2). For all objects except the saucepan we execute a post-grasp trajectory where the object ispicked up and waved around to test the strength of the grasp. For the saucepan we simply pick it upsince waving it around is a safety hazard. During each trial, the orientation is randomized in the range[−π, π]and position is randomized in the entire workspace 1m×0.5m, the affordance model is runand the hand is moved to the pre-grasp pose. Videos at https://dexfunc.github.io/ .Success Rate ↑Teleop Oracle Hardcoded OursHammer (heavy) 0.5 0.0 0.8Hammer (light) 0.6 0.3 0.9Sauce pan 0.9 0.3 0.9Drill (heavy) 0.9 0.2 0.5Drill (light) 0.9 0.3 0.8Stapler 0.9 0.3 1.0Screwdriver 0.5 0.0 0.7Table 2: We show functional grasping for a varied setof objects. We compare to a hardcoded pinch grasp anda trained teleoperator with a VR glove. The hardcodedbaseline fails since the fingers push the object behind.Our method is able to beat the teleop oracle on challeng-ing objects such as screwdriver, stapler and hammer.We obtain the hardcoded baseline by interpo-lating between the fully open and fully closedeigengrasp over 1s. This leads to the handquickly snapping shut before the arm rises up.We find that this baseline performs poorly andgets zero success rate on many objects, espe-cially thin ones. This is because in order tosuccessfully grasp the object the thumb mustretract closer to the palm. 
However, the timingof this is crucial, if the thumb retracts too earlythen the object flies back away from the hand.This is the most common failure case of thisbaseline that we observe. The hardcoded graspsucceeds for tall objects like an upright stapleror if the object happens to be in a favorable poseat the time of grasping.The teleop oracle baseline was carried out with a Manus VR glove with the joints mapped one toone to the robot hand (ignoring the human pinky). This was teleoperated by a trained user ( 20 hoursof experience). This was intended to serve as an upper bound of hardware capability. We find ourmethod matches or slightly lags behind the oracle for drill (light) and saucepan. Surprisingly, forstapler, screwdriver and both hammers it even exceeds the oracle baseline. This is because theseobjects are heavy and sit close to the ground and require very swift and forceful motion which is alsovery precise in order to be successfully picked out. This is very hard to execute reliably for a humanbeing, whereas our policy is able to do it well. We also find that our method is able to complete thetask in a shorter time for the same reason.4.3 Affordance AnalysisWe experimentally validate the pre-grasp affordance matching part of our pipeline separately. Wecompare against CLIPort [ 22] and CLIPSeg [ 23] in terms of both pick success rate and IoU betweenthe predicted and ground truth affordance (human-annotated). We run evaluation on the simulatedCLIPort dataset for both unseen and seen objects (Table. 3). For our method, we annotate oneexemplar from each category. To obtain affordance from CLIPSeg we prompt with the relevant partof the object such as “hammer handle”. Note that Spatula and Frying Pan are present in the CLIPorttraining data while hammer is a new category.7Our method outperforms CLIPort on both seen and unseen categories. We observe that CLIPSeg failsto localize objects or is not able to capture the functional part of the object and only has understandingof the entire object as a whole (Fig. 6). While CLIPort is able to localize objects better but oftenpredicts bounding boxes that are not functionally correct (such as the pan part of the sauce instead ofthe handle in Fig. 6).5 Related WorkIn-hand dexterous manipulation: Dexterity in humans is the ability to manipulate objects withintheir hand’s workspace [ 26,27,28]. Accordingly, in-hand reorientation has remained a standard, yetchallenging task in robotics to imitate a human’s dexterity. In recent years, there has been a surgeof interest in this field and sim2real approaches have shown some success at reorienting objects[7,29,9,30,8] and also manipulating them [ 6,31]. Other works bypass sim and directly learnin-hand manipulation through trial and error in the real world [ 11,10]. Some other works use humandemos to guide RL [20] and others directly use demos to learn policies [32].Dexterous grasping: While in-hand reorientation is an important task most of the uses of adexterous hand involve grasping objects in different poses. Because of the large degrees of freedom,grasp synthesis is significantly more challenging. The classical approach is to use optimization[21,33,34]. This approach is still used today with the form or force closure objective [ 35,36,37].Some methods use the contact between the object and the hand as a way to learn proper grasping[38,39,40,41]. A V AE can be trained on these generated poses to learn a function that maps fromobject to grasp pose [ 36,42]. 
Recent works leverage differentiable simulation to synthesize stablegrasp poses [ 43]. Other works don’t decouple this problem into a grasp synthesis phase and learn itend-to-end in simulation [44], from demonstrations [45, 46, 32] or teleoperation [47, 48].Functional Grasping: While simulation can be a powerful tool to optimize grasp metrics, func-tional affordances are usually human data since there may be more than one physically valid grasppose but only one functionally valid one that allows one to use the object properly. Some ap-proaches rely on clean annotations or motion capture datasets [ 49,50,51,52] for hand object contact[53,54,55,56,57]. Some papers learn affordances from human images or video [ 58,59] directlyor through retargeting. These can however be noisy since they rely on hand pose detectors such as[60,61] which are often noisy and difficult to learn from directly [ 45]. Some recent work in this areabegin to target functional grasping using large scale datasets as a prior [62, 63, 64].6 Limitations and ConclusionWe show that combining semantic information from models trained on internet data with the robust-ness of low-level control trained in simulation can yield functional grasps for a large range of objects.We show that using eigengrasps to restrict the action space of RL leads to policies that transfer betterand are physically realistic. This leads to policies that are better to deploy in the real world on robothand hardware.The main failure case of our policy in the real world is due to incorrect pre-grasps from the affordancemodel. In particular, if the pre-grasp is such that the knuckle of the thumb joint lies over the objectthen the grasp fails since the hand cannot get the thumb around the object. One way to address thislimitation is to equip the robot with local field of view around the wrist such that it can finetune itsgrasp even if the affordance model is incorrect.Our method currently does not leverage joint pose information from the affordance model. While wefound this to not be necessary in the set of objects we have, it might be useful in the case of morefine-grained manipulation such as picking up very thin objects like coins or credit cards.87 AcknowledgementsWe would like to thank Russell Mendonca, Shikhar Bahl and Murtaza Dalal for fruitful discussions.KS is supported by NSF Graduate Research Fellowship under Grant No. DGE2140739. This work issupported by ONR N00014-22-1-2096 and the DARPA Machine Common Sense grant.References[1]K. Libertus, A. S. Joh, and A. W. Needham. Motor training at 3 months affects object exploration12 months later. Developmental Science , 19(6):1058–1066, 2016.[2]E. J. Gibson. Exploratory behavior in the development of perceiving, acting, and the acquiringof knowledge. Annual review of psychology , 39(1):1–42, 1988.[3]K. E. Adolph and S. E. Berger. Motor Development , chapter 4. John Wiley & Sons, Ltd,2007. ISBN 9780470147658. doi:https://doi.org/10.1002/9780470147658.chpsy0204. URLhttps://onlinelibrary.wiley.com/doi/abs/10.1002/9780470147658.chpsy0204 .[4]T. Bruce. Learning through play, for babies, toddlers and young children . Hachette UK, 2012.[5]R. A. Cortes, A. E. Green, R. F. Barr, and R. M. Ryan. Fine motor skills during early childhoodpredict visuospatial deductive reasoning in adolescence. Developmental Psychology , 2022.[6]O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron,M. Plappert, G. Powell, A. Ray, et al. Learning dexterous in-hand manipulation. 
The Interna-tional Journal of Robotics Research , 39(1):3–20, 2020.[7]T. Chen, M. Tippur, S. Wu, V . Kumar, E. Adelson, and P. Agrawal. Visual dexterity: In-handdexterous manipulation from depth. arXiv preprint arXiv:2211.11744 , 2022.[8]A. Handa, A. Allshire, V . Makoviychuk, A. Petrenko, R. Singh, J. Liu, D. Makoviichuk,K. Van Wyk, A. Zhurkevich, B. Sundaralingam, et al. Dextreme: Transfer of agile in-handmanipulation from simulation to reality. arXiv preprint arXiv:2210.13702 , 2022.[9]Z.-H. Yin, B. Huang, Y . Qin, Q. Chen, and X. Wang. Rotating without seeing: Towards in-handdexterity through touch. Robotics: Science and Systems , 2023.[10] A. Nair, A. Gupta, M. Dalal, and S. Levine. Awac: Accelerating online reinforcement learningwith offline datasets. arXiv preprint arXiv:2006.09359 , 2020.[11] A. Nagabandi, K. Konolige, S. Levine, and V . Kumar. Deep dynamics models for learningdexterous manipulation. In Conference on Robot Learning , pages 1101–1112. PMLR, 2020.[12] M. Oquab, T. Darcet, T. Moutakanni, H. V . V o, M. Szafraniec, V . Khalidov, P. Fernandez,D. Haziza, F. Massa, A. El-Nouby, R. Howes, P.-Y . Huang, H. Xu, V . Sharma, S.-W. Li,W. Galuba, M. Rabbat, M. Assran, N. Ballas, G. Synnaeve, I. Misra, H. Jegou, J. Mairal,P. Labatut, A. Joulin, and P. Bojanowski. Dinov2: Learning robust visual features withoutsupervision, 2023.[13] A. Agarwal, A. Kumar, J. Malik, and D. Pathak. Legged locomotion in challenging terrainsusing egocentric vision. CoRL , 2022.[14] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning robustperceptive locomotion for quadrupedal robots in the wild. Science Robotics , 7(62):eabk2822,2022.[15] G. B. Margolis and P. Agrawal. Walk these ways: Tuning robot control for generalization withmultiplicity of behavior. In K. Liu, D. Kulic, and J. Ichnowski, editors, Proceedings of The 6thConference on Robot Learning , volume 205 of Proceedings of Machine Learning Research ,pages 22–31. PMLR, 14–18 Dec 2023. URL https://proceedings.mlr.press/v205/margolis23a.html .9[16] A. Kumar, Z. Fu, D. Pathak, and J. Malik. Rma: Rapid motor adaptation for legged robots. RSS,2021.[17] D. Hadjivelichkov, S. Zwane, M. P. Deisenroth, L. de Agapito, and D. Kanoulas. One-shottransfer of affordance regions? affcorrs! In Conference on Robot Learning , 2022.[18] X. Zhou, R. Girdhar, A. Joulin, P. Kr ̈ahenb ̈uhl, and I. Misra. Detecting twenty-thousand classesusing image-level supervision. In ECCV , 2022.[19] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative q-learning for offline reinforcementlearning, 2020.[20] A. Rajeswaran, V . Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine.Learning complex dexterous manipulation with deep reinforcement learning and demonstrations.arXiv preprint arXiv:1709.10087 , 2017.[21] M. Ciocarlie, C. Goldfeder, and P. Allen. Dexterous grasping via eigengrasps: A low-dimensional approach to a high-complexity problem. In Robotics: Science and systemsmanipulation workshop-sensing and adapting to the real world , 2007.[22] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipula-tion. In Conference on Robot Learning , pages 894–906. PMLR, 2022.[23] T. L ̈uddecke and A. Ecker. Image segmentation using text and image prompts. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 7086–7096,2022.[24] K. Shaw, A. Agarwal, and D. Pathak. Leap hand: Low-cost, efficient, and anthropomorphichand for robot learning. 2023.[25] V . Makoviychuk, L. 
Wawrzyniak, Y . Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin,A. Allshire, A. Handa, et al. Isaac gym: High performance gpu-based physics simulation forrobot learning. arXiv preprint arXiv:2108.10470 , 2021.[26] R. R. Ma and A. M. Dollar. On dexterity and dexterous manipulation. In 2011 15th InternationalConference on Advanced Robotics (ICAR) , pages 1–7. IEEE, 2011.[27] N. Kamakura, M. Matsuo, H. Ishii, F. Mitsuboshi, and Y . Miura. Patterns of static prehension innormal hands. The American journal of occupational therapy , 34(7):437–445, 1980.[28] C. L. MacKenzie and T. Iberall. The grasping hand . Elsevier, 1994.[29] H. Qi, A. Kumar, R. Calandra, Y . Ma, and J. Malik. In-Hand Object Rotation via Rapid MotorAdaptation. In Conference on Robot Learning (CoRL) , 2022.[30] I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino,M. Plappert, G. Powell, R. Ribas, et al. Solving rubik’s cube with a robot hand. arXiv preprintarXiv:1910.07113 , 2019.[31] Y . Qin, B. Huang, Z.-H. Yin, H. Su, and X. Wang. Dexpoint: Generalizable point cloud rein-forcement learning for sim-to-real dexterous manipulation. In Conference on Robot Learning ,2022.[32] S. P. Arunachalam, S. Silwal, B. Evans, and L. Pinto. Dexterous imitation made easy: A learning-based framework for efficient dexterous manipulation. arXiv preprint arXiv:2203.13251 , 2022.[33] A. Miller and P. Allen. Graspit! a versatile simulator for robotic grasping. IEEE Robotics &Automation Magazine , 11(4):110–122, 2004. doi:10.1109/MRA.2004.1371616.10[34] D. Berenson and S. S. Srinivasa. Grasp synthesis in cluttered environments for dexteroushands. In Humanoids 2008-8th IEEE-RAS International Conference on Humanoid Robots ,pages 189–196. IEEE, 2008.[35] R. Wang, J. Zhang, J. Chen, Y . Xu, P. Li, T. Liu, and H. Wang. Dexgraspnet: A large-scale robotic dexterous grasp dataset for general objects based on simulation. arXiv preprintarXiv:2210.02697 , 2022.[36] P. Li, T. Liu, Y . Li, Y . Geng, Y . Zhu, Y . Yang, and S. Huang. Gendexgrasp: Generalizabledexterous grasping. arXiv preprint arXiv:2210.00722 , 2022.[37] K. M. Lynch and F. C. Park. Modern robotics . Cambridge University Press, 2017.[38] P. Grady, C. Tang, C. D. Twigg, M. V o, S. Brahmbhatt, and C. C. Kemp. ContactOpt: Optimizingcontact to improve grasps. In Conference on Computer Vision and Pattern Recognition (CVPR) ,2021.[39] P. Mandikal and K. Grauman. Learning dexterous grasping with object-centric visual affor-dances. In 2021 IEEE International Conference on Robotics and Automation (ICRA) , pages6169–6176, 2021. doi:10.1109/ICRA48506.2021.9561802.[40] S. Brahmbhatt, C. Ham, C. C. Kemp, and J. Hays. Contactdb: Analyzing and predicting graspcontact via thermal imaging. In Proceedings of the IEEE/CVF Conference on Computer Visionand Pattern Recognition (CVPR) , June 2019.[41] S. Brahmbhatt, A. Handa, J. Hays, and D. Fox. Contactgrasp: Functional multi-finger graspsynthesis from contact. In 2019 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 2386–2393, 2019. doi:10.1109/IROS40897.2019.8967960.[42] Y . Xu, W. Wan, J. Zhang, H. Liu, Z. Shan, H. Shen, R. Wang, H. Geng, Y . Weng, J. Chen, T. Liu,L. Yi, and H. Wang. Unidexgrasp: Universal robotic dexterous grasping via learning diverseproposal generation and goal-conditioned policy. In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition (CVPR) , pages 4737–4746, June 2023.[43] D. Turpin, L. Wang, E. Heiden, Y .-C. Chen, M. Macklin, S. Tsogkas, S. 
Dickinson, andA. Garg. Grasp’d: Differentiable contact-rich grasp synthesis for multi-fingered hands. InComputer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27,2022, Proceedings, Part VI , pages 201–221. Springer, 2022.[44] Y . Qin, B. Huang, Z.-H. Yin, H. Su, and X. Wang. Generalizable point cloud reinforcementlearning for sim-to-real dexterous manipulation. In Deep Reinforcement Learning WorkshopNeurIPS 2022 .[45] K. Shaw, S. Bahl, and D. Pathak. VideoDex: Learning Dexterity from Internet Videos. InConference on Robot Learning (CoRL) , 2022.[46] Y . Qin, Y .-H. Wu, S. Liu, H. Jiang, R. Yang, Y . Fu, and X. Wang. Dexmv: Imitation learning fordexterous manipulation from human videos. In Computer Vision–ECCV 2022: 17th EuropeanConference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIX , pages 570–587.Springer, 2022.[47] A. Sivakumar, K. Shaw, and D. Pathak. Robotic telekinesis: Learning a robotic hand imitatorby watching humans on youtube, 2022.[48] A. Handa, K. Van Wyk, W. Yang, J. Liang, Y .-W. Chao, Q. Wan, S. Birchfield, N. Ratliff, andD. Fox. Dexpilot: Vision-based teleoperation of dexterous robotic hand-arm system. In 2020IEEE International Conference on Robotics and Automation (ICRA) , pages 9164–9170. IEEE,2020.11[49] Z. Fan, O. Taheri, D. Tzionas, M. Kocabas, M. Kaufmann, M. J. Black, and O. Hilliges.ARCTIC: A dataset for dexterous bimanual hand-object manipulation. In Proceedings IEEEConference on Computer Vision and Pattern Recognition (CVPR) , 2023.[50] R. Goyal, S. Ebrahimi Kahou, V . Michalski, J. Materzynska, S. Westphal, H. Kim, V . Haenel,I. Fruend, P. Yianilos, M. Mueller-Freitag, et al. The” something something” video databasefor learning and evaluating visual common sense. In Proceedings of the IEEE internationalconference on computer vision , pages 5842–5850, 2017.[51] C. Zimmermann, D. Ceylan, J. Yang, B. Russell, M. Argus, and T. Brox. Freihand: A datasetfor markerless capture of hand pose and shape from single rgb images. In Proceedings of theIEEE/CVF International Conference on Computer Vision , pages 813–822, 2019.[52] O. Taheri, N. Ghorbani, M. J. Black, and D. Tzionas. GRAB: A dataset of whole-bodyhuman grasping of objects. In European Conference on Computer Vision (ECCV) , 2020. URLhttps://grab.is.tue.mpg.de .[53] S. Brahmbhatt, C. Ham, C. C. Kemp, and J. Hays. Contactdb: Analyzing and predicting graspcontact via thermal imaging. In Proceedings of the IEEE/CVF conference on computer visionand pattern recognition , pages 8709–8719, 2019.[54] Y . Liu, Y . Liu, C. Jiang, K. Lyu, W. Wan, H. Shen, B. Liang, Z. Fu, H. Wang, and L. Yi. Hoi4d:A 4d egocentric dataset for category-level human-object interaction, 2022.[55] O. Taheri, N. Ghorbani, M. J. Black, and D. Tzionas. Grab: A dataset of whole-body humangrasping of objects. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow,UK, August 23–28, 2020, Proceedings, Part IV 16 , pages 581–600. Springer, 2020.[56] S. Dasari, A. Gupta, and V . Kumar. Learning dexterous manipulation from exemplar objecttrajectories and pre-grasps. In IEEE International Conference on Robotics and Automation2023 , 2023.[57] A. Patel, A. Wang, I. Radosavovic, and J. Malik. Learning to imitate object interactions frominternet videos, 2022.[58] S. Bahl, R. Mendonca, L. Chen, U. Jain, and D. Pathak. Affordances from human videos as aversatile representation for robotics. 2023.[59] Y . Ye, X. Li, A. Gupta, S. D. Mello, S. Birchfield, J. Song, S. Tulsiani, and S. Liu. 
Affordancediffusion: Synthesizing hand-object interactions. In CVPR , 2023.[60] Y . Rong, T. Shiratori, and H. Joo. Frankmocap: A monocular 3d whole-body pose estimationsystem via regression and integration. In Proceedings of the IEEE/CVF International Conferenceon Computer Vision , pages 1749–1759, 2021.[61] A. Mittal, A. Zisserman, and P. H. Torr. Hand detection using multiple proposals. In Bmvc ,volume 2, page 5, 2011.[62] Z. Q. Chen, K. Van Wyk, Y .-W. Chao, W. Yang, A. Mousavian, A. Gupta, and D. Fox. Learningrobust real-world dexterous grasping policies via implicit shape augmentation. arXiv preprintarXiv:2210.13638 , 2022.[63] J. Ye, J. Wang, B. Huang, Y . Qin, and X. Wang. Learning continuous grasping function witha dexterous hand from human demonstrations. IEEE Robotics and Automation Letters , 8(5):2882–2889, 2023.[64] S. Brahmbhatt, A. Handa, J. Hays, and D. Fox. Contactgrasp: Functional multi-finger graspsynthesis from contact. In 2019 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 2386–2393. IEEE, 2019.12A Grasping along multiple axesIn some cases, an object may be kept upright and a top-down angle of approach does not work. Todeal with these cases, we setup three cameras along each axis (Fig. 7) and run affordance matchingfor each one. We finally pick the axis that has the highest score and move the hand along that axis tothe pre-grasp pose. See Fig. 3, 4 for a vizualization. Empirically, we find that the confidence score isindeed always highest for the correct direction of approach.Figure 3: Affordance prediction for an upright drill from multiple angles. The best angle of approach is fromthe side and that is also the angle with highest affordance score. Our system picks this angle and executes agrasp.Figure 4: Affordance prediction for an upright mug from multiple angles. Our system picks the side angle withhighest affordance score and executes a grasp.13B Training curves in simulationFigure 5: Training curves for baselines in simulation. Each baseline is run over 5 seeds. We see that oursoutperforms the other baselines and also is more stable with respect to the seed. This is because of the lowerdimensional action space.C Qualitative results for affordance predictionHammerSaucepanSpatulaGround TruthCLIP-SegCLIPortOursFigure 6: Qualitative comparisons of the affordance prediction from our method and CLIP-Seg, CLIPort.Overall, our method produces predictions that are more functionally aligned. CLIP-Seg is a zero-shot methodand fails to localize the object correctly in many cases. CLIPort is able to localize the object but predicts grasppoints that are not functional, for instance it predicts a bounding box around the head of the saucepan in additionto the handle.14D Hardware SetupTop ViewFront ViewSide ViewFigure 7: Hardware setup with LEAP hand mounted on xarm6 with one D435 along each axis.Figure 8: (left) the Manus VR glove we use to teleoperate our hand (right) the hand in the retargeted pose.E Domain RandomizationFor robustness, we domain randomize physics parameters as shown in Tab. 4.Name Rangeobject scale [0.8,1.2]object mass scaling [0.5,1.5]Friction coefficient [0.7,1.3]stiffness scaling [0.75,1.5]damping scaling [0.3,3.0]Table 4: domain randomization in simulation15 |
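As a usage sketch of the ranges in Table 4, one way to draw a fresh set of physics scalings per episode is shown below; the dictionary keys and function name are illustrative assumptions, not the authors' IsaacGym code.

```python
import numpy as np

def sample_domain_randomization(rng: np.random.Generator) -> dict:
    """Sample one set of physics scalings from the ranges listed in Table 4."""
    return {
        "object_scale":      rng.uniform(0.8, 1.2),
        "object_mass_scale": rng.uniform(0.5, 1.5),
        "friction_coeff":    rng.uniform(0.7, 1.3),
        "stiffness_scale":   rng.uniform(0.75, 1.5),
        "damping_scale":     rng.uniform(0.3, 3.0),
    }
```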
gFXVysXh48K | Efficient Sim-to-real Transfer of Contact-RichManipulation Skills with Online Admittance ResidualLearningXiang Zhang∗, Changhao Wang∗, Lingfeng Sun, Zheng Wu, Xinghao Zhu, Masayoshi TomizukaDepartment of Mechanical EngineeringUniversity of California at Berkeley, United StatesRobot Skill AdmittanceControllerOffline PhaseAdmittanceControllerAdmittanceOptimizationOnline PhaseSkill Learning in Simulation (Sec 3.1) Admittance Learning on Real Robot (Sec 3.2)Position/V elocityControllerRobot/Environment(a) (b)(c)Figure 1: As shown in (a), we propose a robust contact-rich manipulation skill learning frameworkthat offline learns the robot motion and compliance control parameters in the simulation and onlineadapts to the real world. The structure of the admittance controller is depicted in (b). Our frameworkdemonstrates robustness in sim-to-real transfer and generalizability to diverse real-world tasks in (c)Abstract: Learning contact-rich manipulation skills is essential. Such skills re-quire the robots to interact with the environment with feasible manipulation tra-jectories and suitable compliance control parameters to enable safe and stablecontact. However, learning these skills is challenging due to data inefficiencyin the real world and the sim-to-real gap in simulation. In this paper, we in-troduce a hybrid offline-online framework to learn robust manipulation skills.We employ model-free reinforcement learning for the offline phase to obtainthe robot motion and compliance control parameters in simulation with domainrandomization. Subsequently, in the online phase, we learn the residual of thecompliance control parameters to maximize robot performance-related criteriawith force sensor measurements in real time. To demonstrate the effectivenessand robustness of our approach, we provide comparative results against exist-ing methods for assembly, pivoting, and screwing tasks. Videos are available athttps://sites.google.com/view/admitlearn.Keywords: Contact-rich Manipulation, Admittance Control, Sim-to-real Transfer∗Equal Contribution. Correspondence: {xiang zhang 98, changhaowang }@berkeley.edu7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.1 IntroductionContact-rich manipulation is common in a wide range of robotic applications, including assem-bly [1, 2, 3, 4, 5, 6, 7, 8], object pivoting [9, 10, 11], grasping[12, 13, 14], and pushing [15, 16]. Toaccomplish these tasks, robots need to learn both the manipulation trajectory and the force controlparameters. The manipulation trajectory guides the robot toward completing the task while phys-ically engaging with the environment, whereas the force control parameters regulate the contactforce. Incorrect control parameters can lead to oscillations and excessive contact forces that maydamage the robot or the environment.Past works have tackled the contact-rich skill-learning problem in different ways. First, the ma-jority of previous works [2, 10, 6, 7, 3, 17, 9, 18, 19] focus on learning the manipulation trajec-tories and rely on human experts to manually tune force control parameters. While this simplifi-cation has demonstrated remarkable performance in many applications, letting human labor tunecontrol parameters is still inconvenient. Furthermore, the tuned parameter for one task may notgeneralize well to other task settings with different kinematic or dynamic properties. For exam-ple, assembly tasks with different clearances will require different control parameters. 
Another lineof work deals with this problem by jointly learning the robot’s motion and force control parame-ters [20, 21, 22, 23, 24, 25, 4, 26, 8]. Such learning processes can be conducted in both real-worldand simulation. However, learning such skills on real robots is time-consuming and may damage therobot or environment. Learning in simulation is efficient and safe, however, the learned control pa-rameters may be difficult to transfer to real robots due to the sim-to-real gap, and directly deployingthe learned control parameters may cause damage to the robot.In this paper, we focus on transferring robotic manipulation skills. We notice that the manipulationtrajectory is more related to the kinematic properties, such as size and shape, which have a smallersim-to-real gap and can be transferred directly, as demonstrated by previous works [9, 2, 10, 3].However, simulating the contact dynamics proves to be challenging, primarily due to its sensitivityto various parameters, including surface stiffness and friction [27]. This sensitivity will result in alarge sim-to-real gap and affects the learned compliance control parameters. Inspired by the aboveanalysis, we propose a framework to learn robot manipulation skills that can transfer to the realworld. As depicted in Fig. 1(a), the framework contains two phases: skill learning in simulation andadmittance adaptation on the real robot. We use model-free RL [28, 29] to learn the robot’s motionwith domain randomization to enhance the robustness for direct transfer. The compliance controlparameters are learned at the same time and serve as an initialization to online admittance learn-ing. During online execution, we iteratively learn the residual of the admittance control parametersby optimizing the future robot trajectory smoothness and task completion criterion. We conductreal-world experiments on three typical contact-rich manipulation tasks: assembly, pivoting, andscrewing. Our proposed framework achieves efficient transfer from simulation to the real world.Furthermore, it shows excellent generalization ability in tasks with different kinematic or dynamicproperties, as shown in Fig. 1(c). Comparison and ablation studies are provided to demonstrate theeffectiveness of the framework.2 Related Works2.1 Sim-to-real Transfer in Robot Contact-Rich ManipulationContact-rich manipulation tasks involve the interaction between robots and the environment throughphysical contact. In recent years, there has been a growing trend in utilizing simulation environmentssuch as MuJoCo [30], Bullet [31], and IsaacGym [17] to learn and train robots for these tasks. Thesesimulation environments offer advantages in terms of safety, scalability, and cost-effectiveness. Nev-ertheless, the sim-to-real gap remains a significant challenge.To address the gap, various approacheshave been explored, including system identification, transfer learning, domain randomization, andonline adaptation. System identification approaches [32, 33] involves the calibration of simulationparameters to improve accuracy and align the simulation with real-world dynamics. Transfer learn-2ing methods [34, 35, 25] aim to fine-tune skills learned in simulation for application in real-worldscenarios. Domain randomization techniques [10, 9, 25, 36] are employed to create diverse environ-ments with varying properties, enabling the learning of robust skills for better generalization. 
Insteadof collecting large datasets in the real world, online adaptation methods [37, 38, 39, 40, 41, 42] utilizereal sensor measurements to optimize a residual policy/model or directly update the policy networkin real-time. Tang et al. [43] further improves the sim-to-real transfer performance by combiningthe above techniques with a modified objective function design for insertion tasks.2.2 Learning Variable Impedance/Admittance ControlCompliance control [44], such as impedance and admittance control, enables robots to behave as amass-spring-damping system. Tuning the compliance control parameters is crucial for stabilizingthe robot and accomplishing manipulation tasks. However, manual tuning can be time-consuming.To address this issue, learning-based approaches have been applied to automatically learn the controlparameters. Previous methods have focused on learning compliance control parameters either fromexpert demonstrations [20, 21, 22] or through reinforcement learning (RL) [23, 24, 25, 4, 26, 45] toacquire gain-changing policies. [21, 4, 20, 23, 25] propose to directly collect data in the real world.However, it is time-consuming to collect the data. Authors in [22, 26] have demonstrated successin directly transferring the learned control parameters from the simulation to the real world. Nev-ertheless, their applications are limited to simple tasks, such as waypoint tracking and whiteboardwiping.3 Proposed ApproachWe focus on learning robust contact-rich manipulation skills that can achieve efficient sim-to-realtransfer. We define the skill as π(xd, P|s), which generates both the robot desired trajectory xdandthe compliance control parameters Pgiven the current state s.We use Cartesian space admittance control as the compliance controller. As shown in Fig. 1(b), theadmittance control takes in the desired trajectory [xd, ̇xd, ̈xd]∈R18, and the external force/torqueFext∈R6measured on the robot end-effector and outputs the compliance trajectory [xc, ̇xc, ̈xc]∈R18to the position/velocity controller according to the mass-spring-damping dynamics [44]:M( ̈xc− ̈xd) +D( ̇xc− ̇xd) +K(xc−xd) =Fext (1)where M, K, D are the robot inertia, stiffness, and damping matrices, respectively. We assumeM, K, D is diagonal for simplicity and P={M, K, D }as the collection of all control parameters.To achieve this goal, We propose an offline-online framework for learning contact-rich manipulationskills as depicted in Fig. 1(a). In the offline phase, we employ the model-free RL with domainrandomization to learn the robot motion and the initial guess of compliance control parameters fromthe simulation (Section 3.1). In the online phase, we execute the offline-learned motions on the realrobot and learn the residual compliance control parameters by optimizing the future robot trajectorysmoothness and task completion criteria(Section 3.2).3.1 Learning offline contact-rich manipulation skillsWe utilize model-free RL to learn contact-rich manipulation skills in MuJoCo simulation [30]. Theproblem is modeled as a Markov decision process {S, A, R, P, γ }where Sis the state space, Ais the action space, Ris a reward function, Pdenotes the state-transition probability, and γis thediscount factor. For each timestep t, the agent is at the state st∈S, executes an action at∈A, andreceives a scalar reward rt. 
The next state is computed by the transition probability p(st+1|st, at).Our goal is to learn a policy π(a|s)that maximizes the expected future return E[Ptγtrt].Specifically, we focus on learning robot skills for three contact-rich tasks: assembly, pivoting, andscrewing. In these tasks, the robot needs to utilize the contact to either align the peg and hole3or continuously push and pivot the object, which makes them suitable testbeds for our proposedframework. The detailed task setups can be found below:Assembly Task: The goal is to align the peg with the hole and then insert it.State space : The state space s∈R18contains peg pose sp∈R6(position and Euler angles) relativeto the hole, peg velocity vp∈R6, and the external force measured on the robot wrist Fext∈R6.Action space : The action a∈R12consists of the end-effector velocity command vd∈R6and thediagonal elements of the stiffness matrix k∈R6. To simplify the training, the robot inertia Mis fixed to diag(1,1,1,0.1,0.1,0.1), and the damping matrix D=diag(d1,···, d6)is computedaccording to the critical damping condition di= 2√miki, i={1,2,3,4,5,6}.Reward function : The reward function is defined as r(s) = 10(1−∥spos−sdpos∥2), where spos∈R3isthe peg position and sdpos∈R3is the nominal hole location. The exponential function encouragessuccessful insertion by providing a high reward.Pivoting Task: The goal is to gradually push the object to a stand-up pose against the wall.State space : The state s∈R12consists of the robot pose sp∈R6and the external force Fext∈R6.Action space : For simplicity, we consider a 2d pivoting problem: the robot can only move in theX, Z direction. The robot action a∈R4contains the velocity command in X, Z direction and thecorresponding stiffness parameter.Reward function : We use the rotational distance between the goal orientation Rgoaland cur-rent object orientation Ras the cost and define the reward function as r=π2−d,withd=arccos0.5(Tr(RgoalRT)−1), which computes the distance of two rotation matrices between RandRgoal. The constant termπ2simply shifts the initial reward to 0. This reward encourages therobot to push the object to the stand-up orientation.We use domain randomization on the robot’s initial pose and contact force to improve the robustnessof the learned skills. The implementation details can be found in Appendix. B.1.3.2 Online Optimization-Based Admittance LearningWe have learned a policy that can perform contact-rich manipulation tasks in simulation. However,the sim-to-real gap may prevent us from directly transferring the learned skills to the real world. Ourgoal is to adapt the offline learned skills, especially the admittance control parameters, with onlinedata in real time. Instead of retraining skills with real-world data, we propose locally updatingthe control parameters using the latest contact force measurements during online execution. Weformulate online learning as an optimization problem that optimizes the residual control parametersto achieve smooth trajectory and task completion criteria while respecting the interaction dynamicsbetween the robot and the environment. We will describe the optimization constraints, objectivefunction, and overall online learning algorithm in Section 3.2.1, 3.2.2, and 3.2.3, respectively.3.2.1 Optimization ConstraintsRobot dynamics constraint: Admittance control enables robot to behave as a mass-spring-dampingsystem as shown in Eq. 1. 
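As a concrete reference for the admittance law in Eq. (1) and the critical-damping rule $d_i = 2\sqrt{m_i k_i}$ used in the action space above, the following is a minimal numerical sketch. It is an illustration rather than the authors' controller: the function names, the explicit Euler integration, the time step, and the placeholder gains are assumptions.

```python
# Minimal sketch (not the authors' implementation): one discrete-time step of the
# Cartesian admittance law in Eq. (1), M(xc'' - xd'') + D(xc' - xd') + K(xc - xd) = F_ext,
# with diagonal M, K and D chosen by the critical-damping rule d_i = 2*sqrt(m_i * k_i).
import numpy as np

def critically_damped_D(m_diag, k_diag):
    """Damping diagonal from the critical-damping condition d_i = 2*sqrt(m_i*k_i)."""
    return 2.0 * np.sqrt(np.asarray(m_diag) * np.asarray(k_diag))

def admittance_step(x_c, xd_c, x_d, xd_d, xdd_d, F_ext, m_diag, k_diag, d_diag, dt):
    """Integrate the compliant trajectory (x_c, x_c') for one control period dt.

    All quantities are 6-D (position + orientation error coordinates); the matrices
    are assumed diagonal, so element-wise division implements M^{-1}.
    """
    e, e_dot = x_c - x_d, xd_c - xd_d
    # e'' = M^{-1} (F_ext - D e' - K e), rearranged from Eq. (1)
    e_ddot = (F_ext - d_diag * e_dot - k_diag * e) / m_diag
    xdd_c = xdd_d + e_ddot
    xd_c_next = xd_c + xdd_c * dt          # semi-implicit Euler integration (assumption)
    x_c_next = x_c + xd_c_next * dt
    return x_c_next, xd_c_next

# Example with placeholder values (fixed inertia taken from the assembly setup above;
# the stiffness value is purely illustrative, not a tuned parameter).
m = np.array([1, 1, 1, 0.1, 0.1, 0.1])
k = np.full(6, 200.0)
d = critically_damped_D(m, k)
x_c, xd_c = np.zeros(6), np.zeros(6)
x_c, xd_c = admittance_step(x_c, xd_c, np.zeros(6), np.zeros(6), np.zeros(6),
                            F_ext=np.array([0, 0, -5.0, 0, 0, 0]),
                            m_diag=m, k_diag=k, d_diag=d, dt=0.01)
```

In practice, the compliant trajectory $[x_c, \dot{x}_c, \ddot{x}_c]$ produced this way would be passed to the position/velocity controller, as in Fig. 1(b).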
We consider the robot state $x = [e, \dot{e}]$, where $e = x_c - x_d$, and we can obtain the robot dynamics constraint in state-space form:
$\dot{x} = \begin{bmatrix} \dot{e} \\ \ddot{e} \end{bmatrix} = f(x, F_{ext}, u) = \begin{bmatrix} \dot{e} \\ -M^{-1}D\dot{e} - M^{-1}Ke + M^{-1}F_{ext} \end{bmatrix}$  (2)
where the optimization variable $u = [m_1^{-1}, \ldots, m_6^{-1}, k'_1, \ldots, k'_6, d'_1, \ldots, d'_6]^T$ collects the diagonal elements of $M^{-1}$, $K' = M^{-1}K$, and $D' = M^{-1}D$. The robot states $e, \dot{e}$ can be directly accessed, and $F_{ext}$ is the external force that should be modeled from the environment dynamics.
Contact force estimation: Modeling the contact force explicitly is difficult because the contact point and mode can change dramatically during manipulation. Therefore, we propose to estimate the contact force online using the force/torque sensor measurements. In our experiments, we utilize a simple but effective record & replay strategy, where we record a sequence of force measurements $\{F_{ext}^0, \ldots, F_{ext}^T\}$ within a time window $[0, T]$ and replay them during the optimization. There are other approaches for force estimation, such as using analytical contact models [46, 47] or numerically learning the contact force by model fitting. However, we found in our experiments that the record & replay strategy performs better. We provide analyses in Appendix C.2.
Stability constraint: To ensure stability of admittance control, we need the admittance parameters to be positive-definite. Therefore, we constrain the optimization variable $u$ to be positive.
3.2.2 Objective Function Design
We want to optimize the admittance parameters to establish stable contact and successfully achieve the task. Previous work [48] introduces the FITAVE objective $\int_0^T t\,|\dot{e}(t)|\,dt$ to effectively generate smooth and stable contact by regulating the robot's future velocity error. In addition, the ITAE objective $\int_0^{+\infty} t\,|e(t)|\,dt$ in [49] minimizes the position error to ensure the robot tracks the desired trajectory and finishes the task. We combine those two functions as our objective:
$C(x) = \int_0^T t\,\big[\,w\,|e(t)| + (1-w)\,|\dot{e}(t)|\,\big]\,dt$  (3)
where $w \in \mathbb{R}$ is a weight scalar that balances the trajectory smoothness and task completion criteria.
3.2.3 Online Admittance Learning
The optimization formulation is shown in Eq. 4. We optimize the residual admittance parameters $\delta u$, with $u_{init}$ obtained from the offline learned skill.
$\min_{\delta u}\; C(x)\quad \text{s.t.}\quad \dot{x} = f(x, F_{ext}, u_{init} + \delta u),\quad F_{ext} \leftarrow \text{record \& replay},\quad u_{init} + \delta u > 0$  (4)
We illustrate the online admittance learning procedure in Alg. 1. In the online phase, we execute the skill learned offline on the real robot and record the contact force at each time step. Every $T$ seconds, the online optimization uses the recorded force measurements, the current robot state, and the admittance parameters learned offline to update the admittance parameter residual. The process runs in a closed-loop manner to complete the desired task robustly.
$u = u_{init} + \delta u^*,\quad M = \mathrm{diag}\{m_1, \cdots, m_6\},\quad K = M \cdot \mathrm{diag}\{k'_1, \cdots, k'_6\},\quad D = M \cdot \mathrm{diag}\{d'_1, \cdots, d'_6\}$  (5)
Algorithm 1: Online Admittance Residual Learning
Require: $u_{init}$ from the offline policy $\pi(a|s)$, current robot state $x$
1: while task not terminated do
2:   if every $T$ seconds then
3:     $\delta u^* \leftarrow$ admittance optimization in (4)
4:     $M, K, D \leftarrow$ recover admittance parameters from (5)
5:   end if
6:   $\{F_{ext}\} \leftarrow$ record force sensor data
7: end while
Figure 2: (a) Snapshots of the learned policy in simulation. (b) Snapshots of the proposed approach for sim-to-real transfer. (c)(d) Forces and control parameter profiles for both the learned and proposed approaches in the real world.
The proposed approachcan adjust the parameters to get the best performance in real-time.4 Experiment ResultsWe conduct experiments on three contact-rich manipulation tasks, peg-in-hole assembly, pivoting,and screwing, to evaluate: 1) the robustness of sim-to-real transfer and 2) the generalizability ofdifferent task settings. We provide comparison results with two baselines in the assembly and pivot-ing tasks: 1) Direct Transfer : directly sim-to-real transfer both the learned robot trajectory and thecontrol parameters [26], 2) Manual Tune : transfer learned trajectory with manually tuned controlparameters [2]. We consider three metrics for evaluation: 1) success rate indicates the robustnessof transfer, 2) completion time for successful trials denotes the efficiency of the skills, and 3) maxcontact force shows the safety. The screwing experiments further demonstrate the robustness of ourmethod for solving complex manipulation tasks.4.1 Skill Learning in SimulationWe use Soft Actor-Critic [28] to learn manipulation skills in simulation.2During the evaluation,the learned assembly and pivoting skills both achieved a 100% success rate. Fig. 2(a) shows thesnapshots of the learned assembly skills. The robot learns to search for the exact hole location onthe hole surface with a learned variable admittance policy and smoothly inserts the peg into the hole.For the learned pivoting skill, the robot pushes the object against the wall and gradually pivots it tothe target pose with suitable frictional force.4.2 Sim-to-Real TransferWe evaluate the sim-to-real transfer performance on the same task. In the real world, the task setup,such as the object and robot geometry, is identical to the simulation. We mainly focus on evaluatingthe effect of the sim-to-real gap on robot/environment dynamics.2We also tested other RL algorithms like DDPG [50] and TD3 [28]. As a result, all the methods are able tolearn a policy and have similar performance when transferring to the real robot. Details can be found on ourwebsite.6Figure 3: Snapshots of using the proposed approach to generalize to various task settings. Thesnapshots and videos of the baseline methods are available on our website.We first apply the offline learned skill to the real world. From the experiments, we notice that theDirect Transfer baseline fails to produce safe and stable interactions. As depicted in Fig 2(c), forpeg-in-hole assembly tasks, the peg bounces on the hole surface and generates large contact forces,making the assembly task almost impossible to complete. Similarly, in the pivoting task, the robotcannot make stable contact with the object and provide enough frictional force for pivoting.Then we examine whether the learned robot motion is valid with manually tuned control parameters.As shown in Tab. 1, the Manual Tune baseline can achieve a 100% success rate for both tasks. Thissupports our hypothesis and previous works that the manipulation trajectory is directly transferablewith suitable control parameters to address the sim-to-real gap.However, manual tuning requires extensive human labor. We want to evaluate whether the proposedonline admittance learning framework can perform similarly without any tuning. Table 1 presentsan overview of the sim-to-real transfer results. For all experiments, the weight parameter wof theproposed approach was consistently set to 0.4. Notably, our proposed method achieves a 100%success rate in the assembly task, along with a 90% success rate in the pivoting task. 
Furthermore,it achieves these results while exhibiting shorter completion times than the other two baselines.We also investigate the contact force and the adjusted admittance parameters during the manipula-tion, shown in Fig. 2(c)(d). Initially, the robot establishes contact with the environment using theoffline learned parameters, resulting in a large applied force. In the subsequent update cycle, theproposed method effectively adjusts the parameters by decreasing Kand increasing D, enabling therobot to interact smoothly with the environment and reduce the contact force. Later, it increases Kand decreases Dto suitable values to finish the task more efficiently.Assembly Task Pivoting TaskSucc. Rate Time (s) Max F (N) Succ. Rate Time (s) Max F (N)Proposed 10/10 19 .0±11.2 23 .6±6.3 9/10 25 .6±2.1 20 .1±4.1Manual 10/10 28 .1±8.6 10 .3±2.2 10/10 25 .3±3.6 9 .2±0.6Direct 3/10 39 .0±12.8 63 .7±6.8 0/10 N/A 30.7±4.6Table 1: Success rate evaluation in real-world experiments.4.3 Generalization to Different Task SettingsThe aforementioned experiments highlight the ability of the proposed framework to achieve sim-to-real transfer within the same task setting. In this section, we aim to explore the generalizationcapabilities of the proposed approach across different task settings, which may involve distinct kine-matic and dynamic properties. For two baselines, we directly use the manually tuned or learnedcontrol parameters of the training object for new tasks.7Triangle (gap = 1mm) Pentagon (gap = 1mm) Ethernet (gap = 0.17mm) Waterproof (gap = 0.21mm)Succ. Rate Time (s) Succ. Rate Time (s) Succ. Rate Time (s) Succ. Rate Time (s)Proposed 10/10 15 .9±6.2 10/10 20 .1±8.9 9/10 42 .1±13.7 9/10 37 .8±17.7Manual 8/10 43 .±17.0 9/10 38 .0±18.0 1/10 78 .0±0.0 0/10 N/ADirect 0/10 N/A 1/10 7 .0±0.0 0/10 N/A 0/10 N/AAdapter [L=8.8 cm, w=69g] Eraser [L=12.2 cm, w=36g] Pocky Short [L=7.9 cm, w=76g] Pocky Long [L=14.8 cm, w=76g]Succ. Rate Time (s) Succ. Rate Time (s) Succ. Rate Time (s) Succ. Rate Time (s)Proposed 8/10 25 .0±4.8 9/10 28 .4±2.7 8/10 12 .9±1.7 7/10 31 .8±11.0Manual 0/10 N/A 10/10 30 .0±1.0 1/10 19 .0±0.0 1/10 40 .0±0.0Direct 0/10 N/A 0/10 N/A 0/10 N/A 0/10 N/ATable 2: Generalization performance to different assembly tasks ( Top) and pivoting tasks ( Below ).Peg-in-hole assembly: We test various assembly tasks, including polygon-shaped peg-holes suchas triangles and pentagons, as well as real-world socket connectors like Ethernet and waterproofconnectors. These tasks are visualized in Fig. 1(c). The outcomes of our experiments are outlined inTable 2. Our proposed method achieves 100% success rates on the polygon shapes and a commend-able 90 %success rate on Ethernet and waterproof connectors. Moreover, the completion time ofthe proposed method is much shorter than other baselines. The Manual Tune baseline also achievesdecent success rates on the polygon shapes as it is similar to the scenario in that we tune the param-eters. However, for the socket connectors, due their tighter fit and irregular shapes, substantial forceis required for insertion (approximately 15N for Ethernet and 40N for waterproof connectors) andManual Tune baseline cannot accomplish these two tasks.Pivoting: Similarly, we conduct a series of pivoting experiments on various objects, encompassingdiverse geometries and weights as shown in Table 2. Remarkably, our proposed approach exhibitsrobust generalization capabilities across all tasks, achieving a success rate exceeding 70%. 
However,when relying solely on manually tuned parameters, the ability to pivot an object is limited to theeraser that has a similar length to the trained object and is the lightest object in the test set. Asthe object geometry and weight diverge significantly, the manually tuned parameters often fail toestablish stable contact with the object and exert sufficient force to initiate successful pivoting.Screwing: We conducted experiments on a more challenging robot screwing task to further validateour method. Its primary challenge is to precisely align the bolt with a nut and then smoothly securethem together. To address this, we employed the assembly skills previously learned for aligningthe bolt and nut and then used a manually-designed rotation primitive to complete the screwing.Throughout the process, online admittance learning continually optimizes the admittance controller.Impressively, our approach allowed the robot to consistently and reliably align and secure the nutand bolt. We executed this task five times, achieving a 100% success rate. The detailed settings canbe found in Appendix. D.2 and the experiment videos are available on our website.5 Conclusion and LimitationsThis paper proposes a contact-rich skill-learning framework for sim-to-real transfer. It consists oftwo main components: skill learning in simulation during the offline phase and admittance learningon the real robot during online execution. These components work together to enable the robot toacquire the necessary skills in simulation and optimize admittance control parameters for safe andstable interactions with the real-world environment. We evaluate the performance of our frameworkin three contact-rich manipulation tasks: assembly, pivoting, and screwing. Our approach achievespromising success rates in both tasks and demonstrates great generalizability across various tasks.However, there are some limitations of our proposed framework: 1) Our method refines the policyduring execution, which means that initially, a sub-optimal policy is used to make contact with theenvironment. As a result, this scheme may not be suitable for contact with fragile objects. 2) Weassume a simplified problem setup where the object is pre-grasped. However, real-world tasks mayrequire the robots to first pick the object and then do the following manipulation tasks [43]. 3)We currently online learn/optimize the diagonal elements of admittance parameters. We’d like toconsider learning other elements as suggested in [45].8AcknowledgmentsWe gratefully acknowledge reviewers for the valuable feedback, and we extend our thanks to theFANUC Advanced Research Laboratory for their insightful discussions on robot hardware and con-trol.References[1] T. Inoue, G. De Magistris, A. Munawar, T. Yokoya, and R. Tachibana. Deep reinforcementlearning for high precision assembly tasks. In 2017 IEEE/RSJ Int. Conf. on Intelligent Robotsand Syst. (IROS) , pages 819–825. IEEE, 2017.[2] X. Zhang, S. Jin, C. Wang, X. Zhu, and M. Tomizuka. Learning insertion primitives withdiscrete-continuous hybrid action space for robotic assembly tasks. In 2022 InternationalConference on Robotics and Automation (ICRA) , pages 9881–9887. IEEE, 2022.[3] N. Vuong, H. Pham, and Q.-C. Pham. Learning sequences of manipulation primitives forrobotic assembly. In 2021 IEEE International Conference on Robotics and Automation (ICRA) ,pages 4086–4092. IEEE, 2021.[4] J. Luo, E. Solowjow, C. Wen, J. A. Ojea, A. M. Agogino, A. Tamar, and P. Abbeel. 
Reinforce-ment learning on variable impedance controller for high-precision robotic assembly. In 2019International Conference on Robotics and Automation (ICRA) , pages 3080–3087. IEEE, 2019.[5] S. Jin, X. Zhu, C. Wang, and M. Tomizuka. Contact pose identification for peg-in-hole assem-bly under uncertainties. In 2021 American Control Conference (ACC) , pages 48–53. IEEE,2021.[6] Z. Wu, Y . Xie, W. Lian, C. Wang, Y . Guo, J. Chen, S. Schaal, and M. Tomizuka. Zero-shotpolicy transfer with disentangled task representation of meta-reinforcement learning. In 2023IEEE International Conference on Robotics and Automation (ICRA) , pages 7169–7175. IEEE,2023.[7] Z. Wu, W. Lian, C. Wang, M. Li, S. Schaal, and M. Tomizuka. Prim-lafd: A framework tolearn and adapt primitive-based skills from demonstrations for insertion tasks. arXiv preprintarXiv:2212.00955 , 2022.[8] J. Seo, N. P. S. Prakash, X. Zhang, C. Wang, J. Choi, M. Tomizuka, and R. Horowitz. Robotmanipulation task learning by leveraging se (3) group invariance and equivariance. arXivpreprint arXiv:2308.14984 , 2023.[9] W. Zhou and D. Held. Learning to grasp the ungraspable with emergent extrinsic dexterity. InICRA 2022 Workshop: Reinforcement Learning for Contact-Rich Manipulation , 2022. URLhttps://openreview.net/forum?id=Zrp4wpa9lqh .[10] X. Zhang, S. Jain, B. Huang, M. Tomizuka, and D. Romeres. Learning generalizable pivotingskills. arXiv preprint arXiv:2305.02554 , 2023.[11] Y . Shirai, D. K. Jha, A. Raghunathan, and D. Romeres. Chance-constrained optimization incontact-rich systems for robust manipulation. arXiv preprint arXiv:2203.02616 , 2022.[12] X. Zhu, L. Sun, Y . Fan, and M. Tomizuka. 6-dof contrastive grasp proposal network. In 2021IEEE International Conference on Robotics and Automation (ICRA) , pages 6371–6377. IEEE,2021.[13] Y . Fan, X. Zhu, and M. Tomizuka. Optimization model for planning precision grasps withmulti-fingered hands. In 2019 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 1548–1554. IEEE, 2019.9[14] X. Zhu, Y . Zhou, Y . Fan, L. Sun, J. Chen, and M. Tomizuka. Learn to grasp with less su-pervision: A data-efficient maximum likelihood grasp sampling loss. In 2022 InternationalConference on Robotics and Automation (ICRA) , pages 721–727. IEEE, 2022.[15] J. St ̈uber, C. Zito, and R. Stolkin. Let’s push things forward: A survey on robot pushing.Frontiers in Robotics and AI , page 8, 2020.[16] K. Hausman, J. T. Springenberg, Z. Wang, N. Heess, and M. Riedmiller. Learning an embed-ding space for transferable robot skills. In International Conference on Learning Representa-tions , 2018.[17] Y . Narang, K. Storey, I. Akinola, M. Macklin, P. Reist, L. Wawrzyniak, Y . Guo, A. Mora-vanszky, G. State, M. Lu, et al. Factory: Fast contact for robotic assembly. arXiv preprintarXiv:2205.03532 , 2022.[18] X. Zhu, W. Lian, B. Yuan, C. D. Freeman, and M. Tomizuka. Allowing safe contact in roboticgoal-reaching: Planning and tracking in operational and null spaces. IEEE International Con-ference on Robotics and Automation (ICRA), 2023.[19] M. Huo, M. Ding, C. Xu, T. Tian, X. Zhu, Y . Mu, L. Sun, M. Tomizuka, and W. Zhan. Human-oriented representation learning for robotic manipulation. arXiv preprint arXiv:2310.03023 ,2023.[20] L. Peternel, T. Petri ˇc, and J. Babi ˇc. Human-in-the-loop approach for teaching robot assemblytasks using impedance control interface. In 2015 IEEE int. conf. on robotics and automation(ICRA) , pages 1497–1502. IEEE, 2015.[21] F. J. Abu-Dakka, L. Rozo, and D. G. Caldwell. 
Force-based learning of variable impedanceskills for robotic manipulation. In 2018 IEEE-RAS 18th Int. Conf. on Humanoid Robots (Hu-manoids) , pages 1–9. IEEE, 2018.[22] X. Zhang, L. Sun, Z. Kuang, and M. Tomizuka. Learning variable impedance control viainverse reinforcement learning for force-related tasks. IEEE Robotics and Automation Letters ,6(2):2225–2232, 2021.[23] J. Buchli, F. Stulp, E. Theodorou, and S. Schaal. Learning variable impedance control. TheInt. J. of Robotics Research , 30(7):820–833, 2011.[24] J. Rey, K. Kronander, F. Farshidian, J. Buchli, and A. Billard. Learning motions from demon-strations and rewards with time-invariant dynamical systems based policies. AutonomousRobots , 42(1):45–64, 2018.[25] C. C. Beltran-Hernandez, D. Petit, I. G. Ramirez-Alpizar, and K. Harada. Variable compliancecontrol for robotic peg-in-hole assembly: A deep-reinforcement-learning approach. AppliedSciences , 10(19):6923, 2020.[26] R. Mart ́ın-Mart ́ın, M. A. Lee, R. Gardner, S. Savarese, J. Bohg, and A. Garg. Variableimpedance control in end-effector space: An action space for reinforcement learning incontact-rich tasks. arXiv preprint arXiv:1906.08880 , 2019.[27] M. Parmar, M. Halm, and M. Posa. Fundamental challenges in deep learning for stiff contactdynamics. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 5181–5188. IEEE, 2021.[28] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V . Kumar, H. Zhu, A. Gupta,P. Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905 ,2018.[29] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.10[30] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In 2012IEEE/RSJ Int. Conf. on Intelligent Robots and Syst. , pages 5026–5033. IEEE, 2012.[31] E. Coumans and Y . Bai. Pybullet, a python module for physics simulation for games, roboticsand machine learning. http://pybullet.org , 2016–2019.[32] L. Ljung. System identification . Springer, 1998.[33] V . Lim, H. Huang, L. Y . Chen, J. Wang, J. Ichnowski, D. Seita, M. Laskey, and K. Goldberg.Real2sim2real: Self-supervised learning of physical single-step dynamic actions for planarrobot casting. In 2022 International Conference on Robotics and Automation (ICRA) , pages8282–8289, 2022. doi:10.1109/ICRA46639.2022.9811651.[34] J. Li, X. Liu, B. Zhu, J. Jiao, M. Tomizuka, C. Tang, and W. Zhan. Guided online dis-tillation: Promoting safe reinforcement learning by offline demonstration. arXiv preprintarXiv:2309.09408 , 2023.[35] R. Chitnis, S. Tulsiani, S. Gupta, and A. Gupta. Efficient bimanual manipulation using learnedtask schemas. In 2020 IEEE International Conference on Robotics and Automation (ICRA) ,pages 1149–1155. IEEE, 2020.[36] Y . Chebotar, A. Handa, V . Makoviychuk, M. Macklin, J. Issac, N. Ratliff, and D. Fox. Closingthe sim-to-real loop: Adapting simulation randomization with real world experience. In 2019International Conference on Robotics and Automation (ICRA) , pages 8973–8979. IEEE, 2019.[37] C. Wang, Y . Zhang, X. Zhang, Z. Wu, X. Zhu, S. Jin, T. Tang, and M. Tomizuka. Offline-online learning of deformation model for cable manipulation with graph neural networks. IEEERobotics and Automation Letters , 7(2):5544–5551, 2022.[38] M. Yu, K. Lv, C. Wang, M. Tomizuka, and X. Li. 
A coarse-to-fine framework for dual-armmanipulation of deformable linear objects with whole-body obstacle avoidance. In 2023 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 10153–10159. IEEE,2023.[39] S. Jin, C. Wang, and M. Tomizuka. Robust deformation model approximation for robotic cablemanipulation. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 6586–6593. IEEE, 2019.[40] C. Chi, B. Burchfiel, E. Cousineau, S. Feng, and S. Song. Iterative residual policy: for goal-conditioned dynamic manipulation of deformable objects. arXiv preprint arXiv:2203.00663 ,2022.[41] Y . Sun, W. L. Ubellacker, W.-L. Ma, X. Zhang, C. Wang, N. V . Csomay-Shanklin,M. Tomizuka, K. Sreenath, and A. D. Ames. Online learning of unknown dynamics for model-based controllers in legged locomotion. IEEE Robotics and Automation Letters , 6(4):8442–8449, 2021.[42] A. Kumar, Z. Fu, D. Pathak, and J. Malik. Rma: Rapid motor adaptation for legged robots.arXiv preprint arXiv:2107.04034 , 2021.[43] B. Tang, M. A. Lin, I. Akinola, A. Handa, G. S. Sukhatme, F. Ramos, D. Fox, and Y . Narang.Industreal: Transferring contact-rich assembly tasks from simulation to reality. arXiv preprintarXiv:2305.17110 , 2023.[44] C. Ott, R. Mukherjee, and Y . Nakamura. Unified impedance and admittance control. In 2010IEEE international conference on robotics and automation , pages 554–561. IEEE, 2010.[45] S. Kozlovsky, E. Newman, and M. Zacksenhouse. Reinforcement learning of impedance poli-cies for peg-in-hole tasks: Role of asymmetric matrices. IEEE Robotics and Automation Let-ters, 7(4):10898–10905, 2022.11[46] N. Doshi, O. Taylor, and A. Rodriguez. Manipulation of unknown objects via contact con-figuration regulation. In 2022 International Conference on Robotics and Automation (ICRA) ,pages 2693–2699. IEEE, 2022.[47] J. Zhou, M. T. Mason, R. Paolini, and D. Bagnell. A convex polynomial model for planarsliding mechanics: theory, application, and experimental validation. The International Journalof Robotics Research , 37(2-3):249–265, 2018.[48] C. Wang, X. Zhang, Z. Kuang, and M. Tomizuka. Safe online gain optimization for cartesianspace variable impedance control. In 2022 IEEE 18th International Conference on AutomationScience and Engineering (CASE) , pages 751–757. IEEE, 2022.[49] F. G. Martins. Tuning pid controllers using the itae criterion. International Journal of Engi-neering Education , 21(5):867, 2005.[50] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y . Tassa, D. Silver, and D. Wierstra.Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 , 2015.[51] J. J. Craig. Introduction to robotics . Pearson Educacion, 2006.[52] Rail-Berkeley. Rail-berkeley/rlkit: Collection of reinforcement learning algorithms. URLhttps://github.com/rail-berkeley/rlkit .[53] J. Li, L. Sun, J. Chen, M. Tomizuka, and W. Zhan. A safe hierarchical planning frameworkfor complex driving scenarios based on reinforcement learning. In 2021 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 2660–2666. IEEE, 2021.[54] J. Li, C. Tang, M. Tomizuka, and W. Zhan. Dealing with the unknown: Pessimistic offlinereinforcement learning. In Conference on Robot Learning , pages 1455–1464. PMLR, 2022.[55] J. Li, C. Tang, M. Tomizuka, and W. Zhan. Hierarchical planning through goal-conditionedoffline reinforcement learning. IEEE Robotics and Automation Letters , 7(4):10216–10223,2022.[56] C. Wang, J. Bingham, and M. Tomizuka. 
Trajectory splitting: A distributed formulation forcollision avoiding trajectory optimization. In 2021 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 8113–8120. IEEE, 2021.[57] C. Wang, H.-C. Lin, S. Jin, X. Zhu, L. Sun, and M. Tomizuka. Bpomp: A bilevel path op-timization formulation for motion planning. In 2022 American Control Conference (ACC) ,pages 1891–1897. IEEE, 2022.12AppendicesA Task SetupA.1 Simulation DetailsWe build the simulation environment using MuJoCo [30] simulation for learning robot contact-richmanipulation skills. In the simulation, we include the model of the FANUC LRMate 200iD robot,and each of the joints is controlled with motor torque command. We incorporated an F/T sensorasset on the robot’s wrist to measure the contact force. For the low-level controller, we employedcomputed torque control [51] to track the compliant trajectory xcand ̇xcderived from the admittancecontroller. The simulation time step was set to 0.01s. Further details regarding the assembly andpivoting setups are outlined below:Assembly: This task involves aligning a square-shaped peg with a hole. The edge length of the pegis4cm, and there is a clearance of 2mm between the peg and hole. The friction coefficient betweenthe peg and hole is configured as 0.3.Pivoting: In this task, the objective is to reorient a rectangular object against a rigid wall. Thesimulated object has dimensions of 10×10×2.6cm3. A friction coefficient of 0.7is assigned toall objects in the simulation.A.2 Real Robot Experiment SetupThe real robot setup is visualized in Fig. 4. We utilized FANUC LRMate 200iD industrial robotas the test bed for our real-world experiments. The end-effector pose, and velocity are obtainedfrom the joint encoders. The end-effector pose, and velocity are obtained from forward kinematics.The contact force is measured by an ATI Mini45 Force/Torque sensor mounted on the robot’s wrist.The low-level position/velocity controller is achieved via a Positional-Integral (PI) control law withfeed-forward terms to cancel gravity and friction. The controller is implemented in Matlab SimulinkReal-Time and runs on 1KHz . The admittance controller we use takes in the desired robot motionxdand optimized admittance control parameters Pand outputs the command robot motion xcto thelow-level position/velocity controller. The robot motion xdis directly sent from an Ubuntu computerwith a User Datagram Protocol(UDP) in 125Hz. Similarly, the initial control parameters Pare sentfrom the Ubuntu computer and optimized in MATLAB with a built-in SQP solver.SimulinkMATLABSwitchRobotOnlineAdmittanceLearningUbuntu OfflineRLPolicyPolicyMotionCommandRobotControlRobotStateInfoAdmittanceMotorTorqueContactForceFigure 4: Real robot experiment setup.13B Simulation Training DetailsB.1 Domain Randomization Details for Contact-rich TasksIn both the assembly and pivoting tasks, we introduced Gaussian noise with a mean of zero anda standard deviation of 0.2Nto the FT sensor readings as measurement noise. Additionally, weapplied a clipping operation to the collected contact force, limiting it to the range of ±10Nforregulation purposes. To enhance the robustness of the learned skills, we incorporated randomizationinto the robot’s initial pose.For the assembly task, the robot’s initial pose was uniformly sampled from a range of[±30mm,±30mm, 30±5mm]along the X,Y, and Zaxes, respectively. 
As for the pivot-ing task, the range for the initial pose was set to [150±30mm, 5±5mm]along the XandZaxesrelative to the rigid wall.B.2 RL Training DetailsWe use the Soft Actor Critic [28] with implementation in RLkit [52] to learn robot manipulationskills in simulation. The hyperparameter selections are summarized in Table. 3.Hyperparameters Assembly PivotingLearning rate - Policy 1e-3 1e-4Learning rate - Q function 1e-4 3e-4Networks [128,128] MLP [128,128] MLPBatch size 4096 4096Soft target update ( τ) 5e-3 5e-3Discount factor ( γ) 0.95 0.9Replay buffer size 1e6 1e6max path length 20 40eval steps per epoch 100 400expl steps per epoch 500 2000Table 3: Hyperparameters for RL trainingC Discussion on Proposed ApproachC.1 Discussion on the Necessity of Learning the Compliance Control ParametersWe consider the manipulation policy for contact-rich manipulation tasks to contain a manipulationtrajectory and the corresponding compliance control parameters.The main difference between ‘contact-rich’ manipulation and regular manipulation tasks is howmuch force the robot exerts on the environment. The more force the robot applies, the more force ithas to withstand. For contact-rich manipulation, the robot desired trajectory often has to penetratethe object with its end-effector to generate enough force for the task. For example, to wipe a table,the robot has to push its end-effector below the table surface. Since the robot is a rigid object, itneeds a compliance controller to regulate its behavior and prevent potential damage. Compared toa position/velocity controller that might not need to tune the PID gains frequently, a compliancecontrol is very sensitive [48] to the change of environment or task goals. It thus requires carefultuning of the parameters for each task. Therefore, for contact-rich manipulation, a suitable policyshould be matched with the appropriate compliance control parameters to achieve the task smoothly.C.2 Discussion on Approaches for Modeling Contact ForceA key component in our online admittance learning is the dynamics constraint, as shown below: ̇x= ̇e ̈e=f(x, Fext, u) = ̇e−M−1D ̇e−M−1Ke+M−1Fext(6)14where we want to regulate the future robot behavior based on the current robot state and the externalforce Fext. In optimization, when we change the admittance parameter M,K, and D, the robotmotion will change, and the external force that the environment gives to the robot will change aswell. Thus, a robust way to model the external force Fextis crucial in our online admittance learning.To estimate or approximate the contact force in real time, we compare four approaches:•record & replay : We record the force/torque from the most recent measurements within atime window and directly use the pre-recorded data as Fextin the optimization.•hybrid impulse dynamics : We use Eq. 6 with Fext= 0 when there is no contact. For thecontact, we model it implicitly as M ̇x−=γM ̇x+, where ̇x−and ̇x+are the robot end-effector velocities before and after the contact. By online fitting the γ, we can optimizethese hybrid dynamics to calculate the optimal parameters.•analytical contact model with online parameter fitting : We model the contact explicitlyusing analytical models and fit the necessary parameters using online data, following [46,47].•contact force fitting : We fit a contact force model using online force sensor measurements.However, the hybrid impulse dynamics approach is not suitable for our requirements. As shownin Fig. 
5, the contact force profile in contact-rich manipulation indicates that the robot maintainscontact with the environment for most of the time. Therefore, neglecting the entire contact processand modeling it implicitly is not appropriate for our applications.Similarly, analytical contact model with online parameter fitting does not fit our scenarios either.Although it has been successful in some pivoting tasks, it relies on the quasi-static assumption thatdoes not hold in our scenario. One of the main challenges of transferring the admittance parametersis to avoid the robot bouncing on the object. Moreover, the analytical model assumes point or slidingcontact modes, which may be hard to generalize to different tasks, such as assembly.Figure 5: Performance of online force fitting (in zaxis). In every time window, we collect theforce/torque measurements and use the least square to fit the force model Fext(x, ̇x) =a(t)x(t) +b(t) ̇x(t) +c(t). On the left, it shows the linear model can fit the force profile locally. However, itcan be extremely challenging to generalize to the next time window, as shown on the right.Finally, for contact force fitting , we assume a linear (spring-damping) contact force model: Fext=a(t)x(t) +b(t) ̇x(t) +c(t)within a short time window. We use the least square to estimate theparameters a,b, and cin real-time. Fig. 5 shows an example of fitting results. It can fit the forceprofile well in a short time window. However, as we need to apply the model learned in the previoustime window to the next step, the generalization ability is poor as it is hard to capture the peak ofthe force profile. Experiment videos comparing the performance of contact force fitting andrecord& replay are available on our website. We can observe that the contact force fitting method cannotstabilize the robot during contact.15Figure 6: Ablation on the weight parameter. The left figure shows the completion time and successrate with respect to different w, and the right figure shows the contact force.C.3 Ablation: Objective Weight SelectionIn this subsection, we would like to study the effect of weight selection. We evaluated differentweight parameters on both the assembly and pivoting tasks. The results are depicted in Figure 6. Forthe assembly task, all proposed method variations achieve a 100% success rate except for w <= 0.2.Smaller weight parameters tend to prioritize trajectory smoothness, which may not provide sufficientcontact force for successful insertion. On the other hand, in pivoting tasks, larger weight valuesled to a decrease in the success rate. It is because larger weight values prioritize task completion,potentially leading to a failure in establishing a stable initial contact for pivoting. These observationsalign with the objective design motivation. Based on our findings, selecting the parameter 0.4strikesa good balance between both objectives and yields the best overall performance.D Baseline ResultsD.1 Sim-to-real TransferFigure 7: Snapshots of baseline approaches for the sim-to-real experiment. The control parameterslearned in the simulation will result in a large contact force and makes the robot bounce on thesurface, which will, in turn, result in failures of the tasks.Here we provide snapshots of the baseline methods: Direct Transfer andManual Tune . As intro-duced in the paper, Direct Transfer baseline utilizes the offline learned policy and directly applies itto the real robot without fine-tuning as [26] did. 
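Referring back to the contact force fitting variant discussed in Appendix C.2 above: a minimal least-squares sketch of the per-axis linear spring-damping model $F_{ext} \approx a\,x + b\,\dot{x} + c$ is shown below. The function names and synthetic data are our own assumptions, not the authors' code; as reported around Fig. 5, such a locally fitted model generalizes poorly to the next time window compared with record & replay.

```python
# Minimal sketch (illustration, not the authors' code) of the "contact force fitting"
# variant from Appendix C.2: within a short time window, fit the linear spring-damping
# model F_ext ~ a*x + b*xdot + c per axis by ordinary least squares.
import numpy as np

def fit_linear_contact_model(x_win, xdot_win, f_win):
    """x_win, xdot_win, f_win: arrays of shape (T, 6) collected over one window.
    Returns per-axis coefficients a, b, c (each of shape (6,))."""
    a, b, c = np.zeros(6), np.zeros(6), np.zeros(6)
    for i in range(6):
        A = np.stack([x_win[:, i], xdot_win[:, i], np.ones(len(x_win))], axis=1)
        coef, *_ = np.linalg.lstsq(A, f_win[:, i], rcond=None)
        a[i], b[i], c[i] = coef
    return a, b, c

def predict_contact_force(a, b, c, x, xdot):
    return a * x + b * xdot + c

# Example with synthetic data; in practice the window comes from the wrist F/T sensor.
T = 50
x_win = np.random.randn(T, 6) * 1e-3
xdot_win = np.random.randn(T, 6) * 1e-2
f_win = 500.0 * x_win + 20.0 * xdot_win + 0.1   # synthetic "ground truth" model
a, b, c = fit_linear_contact_model(x_win, xdot_win, f_win)
f_hat = predict_contact_force(a, b, c, x_win[-1], xdot_win[-1])
```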
We hope the domain randomization on object posi-tion and force information can provide good generalizability and make it robust and transferable toreal robots.However, as shown in Fig. 7, direct applying the learned policy cannot achieve both tasks success-fully. The main problem comes from the learned admittance control parameters. Where in thesimulation, applying such parameters to the robot will not result in the robot bouncing on the object.In contrast, it can enable the robot to finish the task very efficiently. However, in the real world, suchcontrol parameters will result in large contact force and oscillation behaviors of the robot, which inturn, let the robot fails to establish stable contact with the object and finish the task.For the Manual Tune baseline, we carefully tune the admittance parameters for each task in orderto make the robot achieve smooth behavior during the contact. Table. 4 summarizes the parameters.16Figure 8: Real-world manipulation tasksFigure 9: Snapshots of the screwing taskAs shown in Fig. 7, the manually tuned baseline can successfully achieve the task. However, sinceit requires human tuning, it is not time-consuming and task-dependent. A practical problem ofmanually tuning the control parameters is the need of trying various combinations of parameters.During this process, it is dangerous to let the robot interact with the environment and may causedamage to both the object and the robot.Tuned Admittance Parameters Assembly PivotingEnd-effector Mass M(kg) [3,3,3] [4 ,4,4]End-effector Inertia I(kgm2) [2,2,2] [2 ,2,2]Position Stiffness K(N/m ) [200,200,200] [300 ,300,300]Orientation Stiffness K(Nm/rad )[200,200,200] [200 ,200,200]Position Damping D(Ns/m ) [300,300,300] [300 ,300,300]Orientation Damping D(Nms/rad )[250,250,250] [250 ,250,250]Table 4: Manually tuned admittance control parameters for the experiments.D.2 Robot Screwing TaskFor the robot screwing task, we first execute the assembly policy that is learned previously for 8stepsto align the bolt and nut. Then, we continuously apply a rotational motion which rotates the bolt by20◦in the yaw direction while pushing down the bolt to screw the bolt to the nut. We conductedexperiments on an M8 bolt-nut assembly task for five times achieving a 100% success rate. Theexperiment videos are available on our website. The snapshots of the screwing task are depicted inFig. 9.17Figure 10: Snapshots of directly using the learned policy to generalize to various task settings. Thesnapshots and videos of the baseline methods are available on our website.D.3 Generalization to Different Task SettingsIn order to evaluate the generalization performance to different tasks, we conducted tests on variousvariations of tasks as depicted in Fig. 8. For assembly, these tasks included polygon-shaped pegholes, such as triangular peg-holes with an edge size of 51.4mm and a clearance of 1.4mm, as wellas pentagon peg-holes with an edge size of 57.8mm and a clearance of 1.3mm. Additionally, weperformed experiments on standard electric connectors, such as Ethernet and waterproof connectors,for further assessment.Regarding the pivoting task, we expanded the test set to include different objects. These objectsconsisted of an adapter with dimensions of 8.8∗4.1∗2.6cm3and a weight of 69g, an eraserwith dimensions of 12.2∗4.8∗3.0cm3and a weight of 36g, and a pocky with dimensions of14.8∗7.9∗2.3cm3and a weight of 76g.The snapshots of the Direct Transfer andManual Tune baselines can be seen in Fig.10 and 11,respectively. 
As observed in the sim-to-real experiments, the Direct Transfer baseline struggles toachieve stability during manipulation, resulting in failures when attempting to assemble or pivotobjects of different shapes. On the other hand, the Manual Tune baseline demonstrates high successrates when dealing with polygon-shaped peg-holes and when pivoting the eraser. This success canbe attributed to the similarity in geometric or dynamic properties between the learned object andthese specific test objects. However, the Manual Tune baseline fails to generalize its performance toobjects with significant differences, as illustrated in Fig.11(c) and (d).E Current Limitations and Future ImprovementsAs we discussed in the paper, our current framework has three main limitations:It assumes that the task settings in geometry are similar from training to testing. It uses a simplestrategy for estimating the contact force. It has a relatively low update frequency and may not besuitable for manipulating fragile objects.To address the first limitation, we plan to use meta-learning to learn the manipulation trajectorythat can generalize well to different task settings. Meta-learning has been shown to be effectivein generalizing the learned trajectory to various scenarios, and we believe that combining meta-learning and our proposed online residual admittance learning can bridge the sim-to-real gap formany contact-rich manipulation tasks. Safe reinforcement learning [53, 54, 55] is another domainthat we’d like to explore, as it can provide safety guarantees of the learned policy to enable a saferand smoother initial policy.18Figure 11: Snapshots of directly using learned trajectory and the manually tuned admittance controlparameters to generalize to various task settings. The snapshots and videos of the baseline methodsare available on our website.For the second limitation, we are interested in exploring and experimenting with the analytical con-tact model approach as discussed in the Appendix. Using an analytical model and estimating thekey parameters online may improve the performance. However, finding a general contact model ora method that can switch between different models will be the focus of our future work.The last limitation is related to the time window size for online force/torque sensor data collection.We will try different time window sizes and increase the update frequency to enhance the adaptationperformance in our future work. We also plan to incorporate recent advances in optimization toenable faster computation efficiency [56, 57].19 |
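As a compact reading of the online phase described in Section 3.2 above (Eqs. (2)-(5) and Algorithm 1), the sketch below rolls out the error dynamics under a replayed force window and minimizes a discretized version of the objective in Eq. (3) over the residual parameters, subject to $u_{init} + \delta u > 0$. It is a simplified, hypothetical reimplementation: the real system uses a MATLAB SQP solver, whereas SciPy's SLSQP is used here as a stand-in; per-axis absolute errors are summed in place of the scalar $|e(t)|$; $w = 0.4$ follows the value reported in the experiments, while all other numerical values are placeholders.

```python
# Minimal sketch (assumptions flagged above, not the authors' implementation) of the
# online residual step in Eq. (4) / Algorithm 1.
import numpy as np
from scipy.optimize import minimize

def rollout_objective(u, e0, edot0, F_window, dt, w):
    """u = [1/m_1..6, k'_1..6, d'_1..6]; integrate the error dynamics of Eq. (2) under
    the replayed forces and accumulate C = sum_t t*(w*|e| + (1-w)*|e'|)*dt, a
    discretization of Eq. (3)."""
    inv_m, k_p, d_p = u[:6], u[6:12], u[12:18]
    e, edot, cost = e0.copy(), edot0.copy(), 0.0
    for t_idx, F in enumerate(F_window):
        eddot = -d_p * edot - k_p * e + inv_m * F   # Eq. (2) with K' = M^{-1}K, D' = M^{-1}D
        edot = edot + eddot * dt
        e = e + edot * dt
        t = (t_idx + 1) * dt
        cost += t * (w * np.abs(e) + (1.0 - w) * np.abs(edot)).sum() * dt
    return cost

def optimize_residual(u_init, e0, edot0, F_window, dt=0.01, w=0.4, eps=1e-3):
    obj = lambda du: rollout_objective(u_init + du, e0, edot0, F_window, dt, w)
    # positivity of u_init + du expressed as element-wise bounds on du
    bounds = [(-ui + eps, None) for ui in u_init]
    res = minimize(obj, x0=np.zeros_like(u_init), method="SLSQP", bounds=bounds)
    return res.x  # du*

# Illustrative call with placeholder values; the real force window comes from the F/T sensor.
u_init = np.concatenate([1.0 / np.array([1, 1, 1, 0.1, 0.1, 0.1]),
                         np.full(6, 200.0), np.full(6, 30.0)])
du_star = optimize_residual(u_init, e0=np.zeros(6), edot0=np.zeros(6),
                            F_window=np.zeros((100, 6)))
```

The recovered $M$, $K$, $D$ would then be reassembled from $u_{init} + \delta u^*$ as in Eq. (5) before the next update cycle.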
TgJ8vJUVUBR | TraCo: Learning Virtual Traffic Coordinator forCooperation with Multi-Agent ReinforcementLearningWeiwei LiuHuzhou Institute of Zhejiang UniversityZhejiang University Chinalww623@zju.edu.cnWei JingNetease Fuxi Robotics China21wjing@gmail.comLingping GaoAutonomous Driving LabAlibaba DAMO Academy Chinaglp.dlut@gmail.comKe GuoAlibaba DAMO AcademyUniversity of Hong Kong Chinakguo@cs.hku.hkGang XuCollege of Control Science and EngineeringZhejiang University Chinawuuya@zju.edu.cnYong LiuCollege of Control Science and EngineeringZhejiang University Chinayongliu@iipc.zju.edu.cnAbstract: Multi-agent reinforcement learning (MARL) has emerged as a populartechnique in diverse domains due to its ability to automate system controller de-sign and facilitate continuous intelligence learning. For instance, traffic flow is of-ten trained with MARL to enable intelligent simulations for autonomous driving.However, The existing MARL algorithm only characterizes the relative degree ofeach agent’s contribution to the team, and cannot express the contribution that theteam needs from the agent. Especially in the field of autonomous driving, the teamchanges over time, and the agent needs to act directly according to the needs of theteam. To address these limitations, we propose an innovative method inspired byrealistic traffic coordinators called the Traffic Coordinator Network (TraCo). Ourapproach leverages a combination of cross-attention and counterfactual advantagefunction, allowing us to extract distinctive characteristics of domain agents andaccurately quantify the contribution that a team needs from an agent. Throughexperiments conducted on four traffic tasks, we demonstrate that our method out-performs existing approaches, yielding superior performance. Furthermore, ourapproach enables the emergence of rich and diverse social behaviors among vehi-cles within the traffic flow.Keywords: autonomous driving, multi-agent reinforcement learning, counterfac-tual reasoning1 IntroductionThere are numerous Multi-Agent Systems (MAS) [1] present in nature and human society. Self-Driven Particles (SDP) [2] have been proposed to describe these systems. In SDP, each agentinteracts with its surrounding agents, considers the interests of others while pursuing its owngoals, and ultimately exhibits complex collective behavior. For instance, traffic flow used in au-tonomous driving is often considered a typical example of SDP [3]. While early SDP mod-els were relatively simple, some have been based on philosophical ideas, such as the Belief-7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Desire-Intention (BDI) model [4, 5], which improves the reasoning and decision-making abil-ities of agents. However, traditional control methods can struggle to describe actual groupbehavior due to the complexity of reasoning required for artificially designed rules [6, 7].Figure 1: Example of traffic flow managed by the trafficcoordinator: the vehicle operates based on its driving ca-pabilities and follows the commands issued by the trafficcoordinator.Multi-agent reinforcement learning(MARL) [8, 9, 10] has emerged as apromising algorithm for learning con-trollers to simulate SDP behaviors. Inthe SDP of autonomous driving trafficflow, reward decomposition is partic-ularly crucial. Each agent must strikea delicate balance between its owninterests and the team’s, which servesas the foundation of appropriate socialbehavior. 
However, achieving thisbalance is a challenging problem [11],especially in complex scenarios suchas navigating busy intersections. Evenexperienced drivers may require theassistance of a traffic coordinator tosafely navigate such environments, asvehicle interaction is constantly changing, making it difficult to measure each vehicle’s contributionfrom its perspective. Thus, agents must take practical actions that prioritize the team’s interests.Though previous approaches [12, 13] have attempted to evaluate an agent’s contribution to theteam, they often fail to address the reward balancing problem and perform poorly in traffic flowsimulation applications.We propose a novel benefit trade-off scheme inspired by the task of traffic coordination. Unlikeexisting schemes, we start from the needs of the team’s interests and make the agent act directlyaccording to the team’s interests. As shown in Figure 1, the cars at the intersection need to follow thetraffic coordinator’s orders and their driving capabilities. To implement this scheme, we introducethe Traffic Coordinator (TraCo) network. TraCo learns the local interactions within a dynamicnumber of surrounding agents, through the Cross-Attention mechanism [14]. Therefore, TraCois able to issue traffic order embedding based on the importance of surrounding agents relativeto the feature correlation of ego agents, without altering the network structure. The agents thenact according to the received orders and their states. In addition, We design the CounterfactualAdvantage Function (CAF) to measure the impact of the team’s orders on the team’s interests, andthe agent’s advantage function to measure its ability to act.The main contributions are summarized as follows:• We propose the TraCo network to capture the interactions better and evaluate the rewarddecomposition of agents from the perspective of team benefits. TraCo uses a virtual trafficcoordinator with a cross-attention mechanism to capture features and interactions, and issueTraCo commands within the traffic scene, thereby assisting agents in generating strategiesto act according to the interests of the team.• We incorporate CAF in TraCo to measure the impact of TraCo’s commands to the agenton the team’s interests. This allows agents to act directly in line with the team’s interests,while also promoting TraCo’s ability to move beyond the limitations of a feature extractionnetwork, and thus significantly improve the performance of the simulated traffic flow.• We conduct extensive experiments. The results show that TraCo performs superior in mul-tiple metrics, and the agents exhibit diverse social behaviors.22 Related Work2.1 MARL and Value DecompositionIn a multi-agent system, the agents share the same reward function. As a result, the rewards theyobtain may not accurately reflect their behaviors, leading to inaccurate policy updates. This is knownas the credit assignment problem [11] in MARL.In discrete action spaces, value decomposition is a solution. VDN [15] uses a simple summationto calculate the joint action-value function for decomposing the team’s reward signal to each agent.Compared with IQL[16], this centralized training method can guarantee the optimality of the jointaction value function to a certain extent. Nevertheless, the simple summation significantly limitsthe fitting ability of the joint action-value function. QMIX [13] improves the fitting process ofsimple summation in VDN to non-linear fitting subject to monotonic constraints. 
QTRAN [17]introduces an intermediate action-value function that approximates the real action-value functionand then decomposes the intermediate action-value function, which avoids the monotonic constraintof QMIX. In addition, given the limitation of the QMIX action-value function fitting ability withmonotonic constraints, the agent cannot explore the entire joint action space. MA VEN [18] sharesa hidden variable in the value function of each agent, and uses the mutual information obtained bymaximizing the trajectory information of the agent and the hidden variable information to increasethe divergence of the policy and make the actions more diverse.For continuous action state spaces, existing work often learns the joint state-value function directly.One such algorithm that does this is MADDPG [19], which extends the DDPG [20] algorithm tothe multi-agent system. MADDPG adopts a scheme of centralized training and decentralized ex-ecution, where the joint value function is computed and then used to evaluate the policy of eachagent. Another algorithm that follows a similar path is MAPPO [21], which extends PPO [22] tothe multi-agent domain. Interestingly, MAPPO requires only a minimal hyperparameter search toachieve comparable performance to state-of-the-art algorithms. Both MADDPG and MAPPO im-plicitly solve the multi-agent credit assignment problem. In contrast, COMA [23] tackles the creditassignment problem head-on by using a counterfactual baseline to evaluate the contribution of eachagent to the team. In addition, simple independent learning[24] that solely pursues self-interest maymake traffic vehicles aggressive and display irrationally selfish behavior. Unlike existing algorithmsthat measure their contribution to the team starting from the agent, distributing the team’s needsdirectly to the agents could be an interesting solution.2.2 Autonomous Traffic FlowCollecting data solely from the real world for autonomous driving is impractical and expensive.Therefore, traffic flow simulations have emerged as a popular alternative for modeling vehicle inter-actions. Traditional approaches [25] employed predetermined rules to control traffic flow. Recentresearch has introduced RL as a way to control vehicles in traffic flow. For example, CityFlow [26]employs RL algorithms to study traffic flow on a city-wide scale. RL has also been used to trainindividual vehicles [27] in controlled environments and examine vehicle-to-vehicle social interac-tions [28]. Additionally, SMARTS [29] studies the interaction capabilities between agents in differ-ent environments. Nevertheless, the study of interactions between agents in diverse environmentsremains an area of interest. In this context, our research investigates four common autonomous driv-ing scenarios, with a focus on applying the traffic coordinator network in continuous action spaces.Our approach starts with teams, which directly ask agents to act in favor of the team, explicitly ad-dressing the trade-off between team and self-interest, which allows us to model the traffic flow moreefficiently than previous work.3TrafficCoordinatorFCCross-AttenTraCoIterationActorActor ActorminusNative RenderFCMSE LossFigure 2: The architecture of the Traffic Coordinator network. The left image showcases thesimulated environment, with the third-person perspective visible in the bottom left corner. The ob-servations of agents around agent iare denoted as {ok,···, op}, while oirepresents the observationof agent i. 
We incorporate two evaluation networks ($Critic_s$ and $Critic_{tot}$) to fit the individual and joint state-value functions, respectively. Equations (10) and (11) illustrate how the joint action-value function is computed from the joint state-value function, after which an MSE loss is used to train the action-value network.

3 Methods

Autonomous vehicles must remain vigilant to avoid collisions while reaching their destinations. In reality, drivers drive carefully under the guidance of a traffic coordinator, who provides clear driving directions based on the surrounding situation and thereby reduces driving difficulty. This section provides a detailed overview of the TraCo network, which enhances the social behavior of traffic flow. To ensure that compliance with TraCo's orders improves the team's interest, we utilize the counterfactual advantage function. TraCo differs from traditional centralized algorithms in that it reduces the information-processing burden on each agent while providing clear, centralized vehicle guidance.

3.1 Traffic Coordinator

As shown in Figure 2, we introduce a virtual agent, the traffic coordinator, to model the traffic flow. The traffic coordinator has in-domain observations and distributes an order vector $z_i \in \mathbb{R}^{d_z}$ to each agent, where $z_i$ is the order issued to agent $i$ and $d_z$ is the order dimension. The orders issued by the traffic coordinator network are written as $z = \{z_i \mid i \in N\}$, where $N$ is the number of agents. The function $f$, parameterized by $\varphi$, generates the orders: $z_i \sim f_\varphi(o_i, s_i)$, where $o_i$ is the observation of agent $i$ and $s_i$ aggregates the information of the agents in the domain, as illustrated in Figure 2. Upon receiving the order $z_i$, each agent $i$ acts according to its own observation $o_i$. In an episode, the traffic coordinator observes the in-domain state, computes, and distributes orders $z_t$ to the agents. At each step, agent $i$ acts based on its individual observation and order $z_i$: $p_i = \pi(o_i, z_i)$, $a_i \sim p_i$. The objective function follows PPO:

$$L(\theta) = \sum_{i=1}^{n} \min\!\left( \frac{\pi_\theta(a_i \mid o_i, z_i)}{\pi_{\theta'}(a_i \mid o_i, z_i)} A_i,\; \mathrm{clip}\!\left( \frac{\pi_\theta(a_i \mid o_i, z_i)}{\pi_{\theta'}(a_i \mid o_i, z_i)},\, 1-\varepsilon,\, 1+\varepsilon \right) A_i \right), \quad (1)$$

Since information closer to a vehicle tends to be more important, we use a cross-attention network [14] in the traffic coordinator to extract the relevant interactions between agents, as shown in Figure 2.

3.2 Counterfactual Advantage Function

Updating the policy solely with Equation (1) reduces the traffic coordinator network to a feature extractor: it issues orders that merely summarize in-domain features, and the agent's action network makes its decisions based on those features. The advantage function remains the standard one:

$$A(o_i, z_i, a_i) = Q(o_i, z_i, a_i) - V(o_i, z_i) = \mathbb{E}_{s_{t+1} \sim p(s_{t+1} \mid s_t, a_t)}\!\left[ r(s_t) + \gamma V^{\pi}(s_{t+1}) - V^{\pi}(s_t) \right] \quad (2)$$

However, this approach fails to address the critical interest-balancing problem: traditional TD errors consider only the agent's own reward, so each agent's contribution to the team is still not measured. Inspired by difference rewards [30], we therefore design a counterfactual reward that measures how each agent's compliance with the team's order affects the team's interest:

$$r_{D_i} = r_{tot}\big(o_i, a_i(z_i), o_{-i}, a_{-i}\big) - r_{tot}\big(o_i, a_i(z_{-i}), o_{-i}, a_{-i}\big), \quad (3)$$

Here $o_{-i}$ and $a_{-i}$ denote the joint observations and joint actions of all agents other than agent $i$, and $a_i(z_{-i})$ denotes the action agent $i$ would take based only on its own observation, i.e., without a traffic coordinator order. $r_{tot}$ is the team reward in the domain, $r_{tot} = r_{s_k} + \cdots + r_{s_p}$, $\{k, \ldots, p \in \text{domain } i\}$.
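To make the order-generation step of Section 3.1 concrete, the following is a minimal sketch of how the coordinator's cross-attention module and the order-conditioned actor could be implemented. It is not the authors' implementation; the module names, layer sizes, and the use of PyTorch's `nn.MultiheadAttention` are assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): a cross-attention traffic coordinator
# that maps the ego observation o_i and a variable number of in-domain neighbor observations
# {o_k, ..., o_p} to an order vector z_i, plus the actor pi(a_i | o_i, z_i).
import torch
import torch.nn as nn

class TrafficCoordinator(nn.Module):
    def __init__(self, obs_dim: int, d_model: int = 64, d_z: int = 16, n_heads: int = 4):
        super().__init__()
        self.embed_ego = nn.Linear(obs_dim, d_model)   # query: ego agent
        self.embed_nbr = nn.Linear(obs_dim, d_model)   # keys/values: in-domain neighbors
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.order_head = nn.Linear(d_model, d_z)      # outputs the order z_i

    def forward(self, o_i: torch.Tensor, s_i: torch.Tensor, nbr_mask: torch.Tensor = None):
        # o_i: (B, obs_dim); s_i: (B, K, obs_dim), K neighbors; nbr_mask: (B, K) bool, True = padded
        q = self.embed_ego(o_i).unsqueeze(1)           # (B, 1, d_model)
        kv = self.embed_nbr(s_i)                       # (B, K, d_model)
        # Attention weights reflect each neighbor's importance relative to the ego agent.
        ctx, _ = self.cross_attn(q, kv, kv, key_padding_mask=nbr_mask)
        return self.order_head(ctx.squeeze(1))         # z_i: (B, d_z)

class Actor(nn.Module):
    """Agent policy: acts on its own observation plus the received order."""
    def __init__(self, obs_dim: int, d_z: int = 16, act_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + d_z, 128), nn.Tanh(),
            nn.Linear(128, act_dim), nn.Tanh(),        # e.g., steering and acceleration
        )

    def forward(self, o_i: torch.Tensor, z_i: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([o_i, z_i], dim=-1))
```

The key-padding mask is what lets the same module handle a dynamic number of surrounding agents without altering the network structure.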
With the counterfactual reward of Equation (3), the reward for agent $i$ is:

$$r_i = r_{s_i} + r_{D_i}, \quad (4)$$

so agent $i$'s reward combines its own reward with the team's reward. According to Equation (3), obtaining $r_{D_i}$ requires executing actions both with and without the traffic coordinator's orders so that the environment returns different reward signals, which in turn requires a model of the environment. This process is clearly too expensive. Prior work [31] suggests using function approximation instead of simulators to estimate difference rewards.

Similar to COMA [23], we design a centralized critic that estimates the joint action-value function $Q_{tot}$ of all agents in the domain. For each agent $i$, we can then use $Q_{tot}$ to compute a Counterfactual Advantage Function (CAF), keeping the actions $a_{-i}$ of the other agents fixed:

$$A_{D_i}(s, a) = Q_{tot}\big(s_i, a(z_i)\big) - Q_{tot}\big(s_i, a(z_{-i})\big) \quad (5)$$

In Equation (5), the centralized critic is employed to reason about counterfactuals: it considers the scenario in which only agent $i$'s action changes and computes the counterfactual advantage $A_{D_i}(s_i, a_i)$, which reflects agent $i$'s contribution to the team. This enables learning directly from the agent's experience without relying on an additional environment model.

The advantage function of agent $i$ then becomes:

$$A_i = A_{s_i} + \alpha\, A_{D_i} \quad (6)$$

$$A_{s_i} = Q_i(o_i, a_i) - V_i(o_i) = \mathbb{E}\big[ r_{s_i} + \gamma V_i^{\pi}(s_{t+1}) - V_i^{\pi}(s_t) \big] \quad (7)$$

where $\alpha$ is the coefficient of the counterfactual advantage function and $A_{s_i}$ is the advantage agent $i$ obtains from its own reward function. Please see the appendix for additional methods and pseudocode.

4 Experiments

4.1 Baseline Algorithms and Experiment Setup

Our baseline algorithms are IPPO [24], MFPO [32], and CoPO [3]. IPPO uses PPO as an independent learner; MFPO encodes the states of surrounding agents as a mean state, which is used as an auxiliary input to the value function; CoPO1 splits each agent's learning signal into its own reward and the average reward of the surrounding agents, with a global reward controlling the proportion between the two.

We ran experiments using RLlib [33] with the aforementioned environments and algorithms on 4 Nvidia GeForce RTX 2080Ti GPUs. Each trial was trained for over 1 million environment steps, equivalent to approximately 55 hours in a real-time traffic system, or over 2,000 hours of individual driving experience assuming an average of 40 vehicles running at the same time. For an introduction to the experimental scenarios and tasks, please refer to the appendix.

4.2 Results

Figure 3: Performance comparison of TraCo and the baseline algorithms (CoPO, IPPO, MFPO) in the four task scenarios: Bottleneck, Tollgate, Parking Lot, and Intersection. Note that the time-interval step of the features extracted by the traffic coordinator network is 1.

In Figure 3, we compare the success rates of our TraCo algorithm to those of the baseline algorithms across all tasks. Thanks to the virtual traffic coordinator's orders, TraCo outperforms the baselines in three tasks and performs comparably to them only in the Intersection task. Notably, in the Tollgate task, which involves the most agents, TraCo outperforms the strong baseline CoPO by a significant margin. As shown in Figure 5, this task requires agents to exhibit active cooperative behaviors, such as queuing and giving way, and strong interaction abilities among agents. Populations generated by the other algorithms fail to exhibit such behavior, leading to congestion.
Interestingly, IPPO performs comparably to MFPO and even outperforms CoPO in the ParkingLot task, despite MFPO and CoPO having more intra-domain information. This is because the valueestimated by IPPO’s critic network includes noise perturbations that improve the algorithm’s explo-ration performance. Additionally, the Parking Lot task is continually changing due to communityfactors. Simply averaging or concatenating neighbors’ states as an additional input to the valuefunction makes training unstable, a phenomenon also observed in MARL from StarCraft [24, 34].1https://github.com/metadriverse/metadrive-benchmark/tree/main/MARL; As described in the repo link,CoPO measures performance using the maximum value of each set of experimental data with different ran-dom seed. However, for more comprehensive comparison, we use the average performance across 8 randomseeds in this study.6CoPOIPPOMFPOTraCoCoPOIPPOMFPOTraCoCoPOIPPOMFPOTraCoCoPOIPPOMFPOTraCo*OUFSTFDUJPO5PMMHBUF #PUUMFOFDL1BSLJOH-PUFigure 4: Performance comparison among TraCo and baseline algorithms with three metrics.Give wayQueueing WaitingRushingReversingSlow movingEnteringQueueingQueueingCutting inWaitingCutting inBypassingFigure 5: Visualization of the social behavior of the population with TraCo. The social behavior ofeach agent is denoted with black dots, while the subsequent trajectory of each vehicle is indicatedby a decreasing intensity of color, with brighter colors representing more recent steps.Figure 4 depicts the lidar chart with three metrics after normalization. Despite the comparablesuccess rate, TraCo outperforms baseline algorithms on safety and efficiency metrics. To renderthe vehicle running track, MateDrive employs PyGame. Figure 5 illustrates that TraCo generatespopulations that exhibit social behaviors, such as reversing, cutting in line, queuing, waiting, andfollowing, in all four tasks to complete their goals. This demonstrates that the vehicle selects differ-ent driving styles based on the situation and simulates a range of interactive behaviors in the trafficsystem.4.3 GeneralizationTo evaluate the generalization ability, we vary the initial number of agents in the test phase to deter-mine their converged policies. Figure 6 illustrates that as the number of agents increases, the popu-lation success rate decreases due to road congestion and a higher likelihood of collisions. However,we observe that having too few agents did not improve the algorithm’s performance in the Intersec-tion task. We suspect that in the multi-agent algorithm, each agent’s policy may overfit the behavior7Bottleneck TollgateParking Lot IntersectionCoPOIPPOMFPOTraCoCoPOIPPOMFPOTraCoCoPOIPPOMFPOTraCoCoPOIPPOMFPOTraCoFigure 6: Success rate for different initial numbers of vehicles at test time. The gray vertical linerepresents the initial number of vehicles during training, from which the algorithm policy is trained.of other agents, resulting in failure. This overfitting may occur because a reduced number of agentsleads to fewer encountered situations, which can limit the model’s ability to generalize. Additionally,we find that inputting the reward distribution coefficient, which is learned during CoPO training, asprior knowledge into the agent observation may interfere with the generalization ability of the algo-rithm once the number of agents changes during the test phase. Notably, TraCo outperforms baselinealgorithms, even when the number of agents in the population changes. 
This is because TraCo usesa cross-attention network to process the dynamic number of agent information in the domain, allow-ing its model to adapt to the community environment of the dynamic number of agents. Finally, wehave designed ablation experiments, which are detailed in the Appendix.5 LimitationTraCo has demonstrated its ability in traffic flow simulation by facilitating team instructions, whileagents exhibit complex social behaviors. However, there are still gaps in its ability to replicatereal-world vehicle behaviors, which can be attributed to the random exploration of reinforcementlearning. To address this shortcoming, we plan to integrate real data into our approach in the future,in order to constrain vehicle behavior and improve overall performance. In addition, TraCo stilllacks the ability to handle traffic lights, which may limit its application to certain urban drivingscenarios.6 ConclusionsWe present a novel approach to model traffic flow using the Traffic Coordinator Network (TraCo)with the Counterfactual Advantage Function (CAF) and an attention mechanism. TraCo modelsreal traffic coordination to enhance vehicle decision-making unlike traditional feature extractionnetworks. Our experiments demonstrate that TraCo-trained vehicles exhibit lower collision rates andhigher success rates than baseline models while demonstrating a diverse range of social behaviors.7 AcknowledgmentThis work is supported by NSFC 62088101 Autonomous Intelligent Unmanned Systems.8References[1] A. Dorri, S. S. Kanhere, and R. Jurdak. Multi-agent systems: A survey. Ieee Access , 6:28573–28593, 2018.[2] T. Vicsek, A. Czir ́ok, E. Ben-Jacob, I. Cohen, and O. Shochet. Novel type of phase transitionin a system of self-driven particles. Physical review letters , 75(6):1226, 1995.[3] Z. Peng, Q. Li, K. M. Hui, C. Liu, and B. Zhou. Learning to simulate self-driven particlessystem with coordinated policy optimization. Advances in Neural Information ProcessingSystems , 34:10784–10797, 2021.[4] M. Georgeff, B. Pell, M. Pollack, M. Tambe, and M. Wooldridge. The belief-desire-intentionmodel of agency. In Intelligent Agents V: Agents Theories, Architectures, and Languages: 5thInternational Workshop, ATAL’98 Paris, France, July 4–7, 1998 Proceedings 5 , pages 1–10.Springer, 1999.[5] G. I. Simari and S. D. Parsons. Markov Decision Processes and the Belief-Desire-IntentionModel: Bridging the Gap for Autonomous Agents . Springer Science & Business Media, 2011.[6] A. Czir ́ok and T. Vicsek. Collective behavior of interacting self-propelled particles. PhysicaA: Statistical Mechanics and its Applications , 281(1-4):17–29, 2000.[7] E. Bertin, M. Droz, and G. Gr ́egoire. Hydrodynamic equations for self-propelled particles:microscopic derivation and stability analysis. Journal of Physics A: Mathematical and Theo-retical , 42(44):445001, 2009.[8] O. Vinyals, I. Babuschkin, W. M. Czarnecki, M. Mathieu, A. Dudzik, J. Chung, D. H. Choi,R. Powell, T. Ewalds, P. Georgiev, et al. Grandmaster level in starcraft ii using multi-agentreinforcement learning. Nature , 575(7782):350–354, 2019.[9] S. Bhalla, S. Ganapathi Subramanian, and M. Crowley. Deep multi agent reinforcement learn-ing for autonomous driving. In Canadian Conference on Artificial Intelligence , pages 67–78.Springer, 2020.[10] A. Malus, D. Kozjek, et al. Real-time order dispatching for a fleet of autonomous mobilerobots using multi-agent reinforcement learning. CIRP annals , 69(1):397–400, 2020.[11] M. Zhou, Z. Liu, P. Sui, Y . Li, and Y . Y . Chung. 
Learning implicit credit assignment forcooperative multi-agent reinforcement learning. Advances in neural information processingsystems , 33:11853–11864, 2020.[12] T. Rashid, G. Farquhar, B. Peng, and S. Whiteson. Weighted qmix: Expanding monotonicvalue function factorisation for deep multi-agent reinforcement learning. Advances in neuralinformation processing systems , 33:10199–10210, 2020.[13] T. Rashid, M. Samvelyan, C. S. De Witt, G. Farquhar, J. Foerster, and S. Whiteson. Mono-tonic value function factorisation for deep multi-agent reinforcement learning. The Journal ofMachine Learning Research , 21(1):7234–7284, 2020.[14] Z. Huang, X. Wang, L. Huang, C. Huang, Y . Wei, and W. Liu. Ccnet: Criss-cross attention forsemantic segmentation. In Proceedings of the IEEE/CVF international conference on computervision , pages 603–612, 2019.[15] P. Sunehag, G. Lever, A. Gruslys, W. M. Czarnecki, V . Zambaldi, M. Jaderberg, M. Lanctot,N. Sonnerat, J. Z. Leibo, K. Tuyls, et al. Value-decomposition networks for cooperative multi-agent learning. arXiv preprint arXiv:1706.05296 , 2017.[16] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, and R. Vicente.Multiagent cooperation and competition with deep reinforcement learning. PloS one , 12(4):e0172395, 2017.9[17] K. Son, D. Kim, W. J. Kang, D. E. Hostallero, and Y . Yi. Qtran: Learning to factorize withtransformation for cooperative multi-agent reinforcement learning. In International conferenceon machine learning , pages 5887–5896. PMLR, 2019.[18] A. Mahajan, T. Rashid, M. Samvelyan, and S. Whiteson. Maven: Multi-agent variationalexploration. Advances in Neural Information Processing Systems , 32, 2019.[19] R. Lowe, Y . I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in neural information pro-cessing systems , 30, 2017.[20] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y . Tassa, D. Silver, and D. Wierstra.Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 , 2015.[21] C. Yu, A. Velu, E. Vinitsky, J. Gao, Y . Wang, A. Bayen, and Y . Wu. The surprising effectivenessof ppo in cooperative multi-agent games. In Thirty-sixth Conference on Neural InformationProcessing Systems Datasets and Benchmarks Track .[22] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[23] J. Foerster, G. Farquhar, T. Afouras, N. Nardelli, and S. Whiteson. Counterfactual multi-agentpolicy gradients. In Proceedings of the AAAI conference on artificial intelligence , volume 32,2018.[24] C. S. de Witt, T. Gupta, D. Makoviichuk, V . Makoviychuk, P. H. Torr, M. Sun, and S. Whiteson.Is independent learning all you need in the starcraft multi-agent challenge? arXiv preprintarXiv:2011.09533 , 2020.[25] S. E. Shladover. Review of the state of development of advanced vehicle control systems(avcs). Vehicle System Dynamics , 24(6-7):551–595, 1995.[26] H. Zhang, S. Feng, C. Liu, Y . Ding, Y . Zhu, Z. Zhou, W. Zhang, Y . Yu, H. Jin, and Z. Li.Cityflow: A multi-agent reinforcement learning environment for large scale city traffic sce-nario. In The world wide web conference , pages 3620–3624, 2019.[27] A. Kendall, J. Hawke, D. Janz, P. Mazur, D. Reda, J.-M. Allen, V .-D. Lam, A. Bewley, andA. Shah. Learning to drive in a day. In 2019 International Conference on Robotics andAutomation (ICRA) , pages 8248–8254. IEEE, 2019.[28] S. 
Shalev-Shwartz, S. Shammah, and A. Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.
[29] M. Zhou, J. Luo, J. Villella, Y. Yang, D. Rusu, J. Miao, W. Zhang, M. Alban, I. Fadakar, Z. Chen, et al. SMARTS: Scalable multi-agent reinforcement learning training school for autonomous driving. arXiv preprint arXiv:2010.09776, 2020.
[30] D. H. Wolpert and K. Tumer. Optimal payoff functions for members of collectives. In Modeling Complexity in Economic and Social Systems, pages 355–369. World Scientific, 2002.
[31] M. K. Colby, W. J. Curran, and K. Tumer. Approximating difference evaluations with local information. In AAMAS, pages 1659–1660, 2015.
[32] Y. Yang, R. Luo, M. Li, M. Zhou, W. Zhang, and J. Wang. Mean field multi-agent reinforcement learning. In International Conference on Machine Learning, pages 5571–5580. PMLR, 2018.
[33] E. Liang, R. Liaw, R. Nishihara, P. Moritz, R. Fox, K. Goldberg, J. Gonzalez, M. Jordan, and I. Stoica. RLlib: Abstractions for distributed reinforcement learning. In International Conference on Machine Learning, pages 3053–3062. PMLR, 2018.
[34] J. Hu, S. Hu, and S.-w. Liao. Policy regularization via noisy advantage values for cooperative multi-agent actor-critic methods. arXiv preprint arXiv:2106.14334, 2021.
[35] R. Varma. Picking loss functions: a comparison between MSE, cross entropy, and hinge loss, 2018.
[36] Q. Li, Z. Peng, L. Feng, Q. Zhang, Z. Xue, and B. Zhou. MetaDrive: Composing diverse driving scenarios for generalizable reinforcement learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.

8 Appendix

8.1 Method Supplement

To compute the counterfactual advantage function $A_{D_i}$, the joint action-value function $Q_{tot}$ must be estimated, although PPO only estimates the state-value function $V_{tot}$. We have:

$$V^{\pi,\gamma}_{tot}(s_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t:\infty}}\!\left[ \sum_{l=0}^{\infty} \gamma^{l}\, r^{tot}_{t+l} \right], \quad (8)$$

and:

$$Q^{\pi,\gamma}_{tot}(s_t, a_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t+1:\infty}}\!\left[ \sum_{l=0}^{\infty} \gamma^{l}\, r^{tot}_{t+l} \right]. \quad (9)$$

From Equations (8) and (9), we get:

$$Q^{\pi,\gamma}_{tot}(s_t, a_t) = r^{tot}_{t} + \gamma\, \mathbb{E}_{s_{t+2:\infty},\, a_{t+1:\infty}}\!\left[ \sum_{l=0}^{\infty} \gamma^{l}\, r^{tot}_{t+1+l} \right] = r^{tot}_{t} + \gamma\, V^{\pi,\gamma}_{tot}(s_{t+1}) \quad (10)$$

In this way, we can obtain the action-value function $Q_{tot}$ from the state-value function $V_{tot}$ within the PPO architecture. However, since state-value functions cannot evaluate agent actions, we design an action-value network whose regression target approximates Equation (10). To train this network, we use the Mean Square Error (MSE) loss [35]:

$$\mathrm{MSE} = \frac{1}{b}\sum_{i=1}^{b}\big(y_i - y'_i\big)^2 = \frac{1}{b}\sum_{i=1}^{b}\Big(Q^{net}_{tot} - r^{tot}_{t} - \gamma\, V^{\pi,\gamma}_{tot}(s_{t+1})\Big)^2, \quad (11)$$

where $b$ denotes the batch size. Algorithm 1 in the appendix shows the overall process of TraCo. The partial derivative of the counterfactual advantage function with respect to the instruction $z_i$ under which agent $i$ acts is:

$$\frac{\partial}{\partial z_i} A_{D_i} = \frac{\partial}{\partial z_i} Q\big(s_i, a(z_i)\big) - \frac{\partial}{\partial z_i} Q\big(s_i, a(z_{-i})\big) = \frac{\partial}{\partial z_i} Q\big(s_i, a(z_i)\big) \quad (12)$$

This equation reveals that agent $i$'s utility under counterfactual instructions aligns with the global learning objective, and that maximizing the counterfactual reward can enhance the joint action-value function.
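As a concrete illustration of Equations (5), (10), and (11), the following is a minimal sketch of how the centralized Q network could be regressed onto the PPO-style target and then used to form the counterfactual advantage. It assumes a critic `q_net` that takes the concatenated state and joint action; all helper names and shapes are illustrative, not the authors' code.

```python
# Minimal sketch (assumptions, not the authors' code) of Equations (5), (10), and (11):
# the centralized Q network is regressed onto r_tot + gamma * V_tot(s'), and the
# counterfactual advantage compares joint actions taken with and without the order z_i.
import torch
import torch.nn as nn
import torch.nn.functional as F

def q_tot_loss(q_net: nn.Module, v_tot_next: torch.Tensor, s: torch.Tensor,
               a: torch.Tensor, r_tot: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Eq. (11): MSE between Q_net(s, a) and the target r_tot + gamma * V_tot(s_{t+1})."""
    q_pred = q_net(torch.cat([s, a], dim=-1)).squeeze(-1)   # (B,)
    target = r_tot + gamma * v_tot_next                     # v_tot_next = V_tot(s_{t+1})
    return F.mse_loss(q_pred, target.detach())

def counterfactual_advantage(q_net: nn.Module, s: torch.Tensor,
                             a_with_order: torch.Tensor,
                             a_without_order: torch.Tensor) -> torch.Tensor:
    """Eq. (5): A_Di = Q_tot(s, a(z_i)) - Q_tot(s, a(z_{-i})), other agents' actions held fixed."""
    with torch.no_grad():
        q_with = q_net(torch.cat([s, a_with_order], dim=-1)).squeeze(-1)
        q_without = q_net(torch.cat([s, a_without_order], dim=-1)).squeeze(-1)
    return q_with - q_without

# Eq. (6): the advantage used in the PPO update then combines the agent's own advantage
# with the counterfactual term, A_i = A_si + alpha * A_Di.
```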
Consequently, the agent’s advantage function can be decomposed into its individualand team contributions, as shown in Equation (6), which completes the value decomposition opera-tion.11Algorithm 1 TraCo for agent i1:Input: Randomly initialize TraCo, actor and critic network f,πandVwith weights φ,θπandθv2:forepisode=1, Tdo3: Get agents’ observations {o1,···, on}4: Getsi={ok,···, op}according to the distance5: Compute zi=f(oi, si), ai=π(oi, zi)6: Compute counterfactual advantage function ADiaccording to equations (5, 10, 11)7: Compute Aiaccording to equation (6)8: Update with PPO rules9:end for8.2 Experiment Platform and ScenariosWe use MetaDrive [36] as a simulator, which is capable for generating infinite scenarios with var-ious road maps and traffic settings to enable generalizable RL. In our setup, we use current state,navigation info, and surrounding data encoded in a vector of 72 lidar-like measurements as agentobservations, while the policy output is the acceleration and steering of the vehicle. As the mutualinfluence between vehicles decreases with distance, we define the in-domain state for the trafficcoordinator as the information splicing of different vehicles within a 40-meter radius of the ego-vehicle.As shown in Figure 5, we benchmark our method in four common autonomous driving tasks, whichare described in detail as follows:Bottleneck : The Bottleneck is to set up a narrow bottleneck lane between the eight lanes, forcingvehicles to give way and queue up to pass. The environment is initialized with 20 cars.Tollgate : The Tollgate environment models the real-world behavior of vehicles passing through atollgate, where agents are required to wait for a permission signal for 3 seconds before continuing.Failure to comply with this rule results in a failed episode. The environment is initialized with 40cars.Parking lot : The parking lot scenario in our simulation consists of 8 parking spaces. Spawn pointsfor vehicles are scattered both within and outside the parking lot, leading to simultaneous entry andexit of vehicles and thereby increasing the level of difficulty. The environment is initialized with 10cars.Intersection : At an unprotected intersection scenario, vehicles are required to negotiate and judgethe potential intentions of other parties in order to complete the task. The environment is initializedwith 30 cars.In this paper, we use three indicators to evaluate the performance of multi-agent algorithms. suc-cess rate is the ratio of vehicles successfully reaching the destination, safety is the vehicle non-collision rate, efficiency >= 0 indicates the difference between successes and failures in a unit oftime(Nsuccess −Nfailure )/T. Vehicles may travel at low speeds for the safety of driving, but thisis not conducive to the effective passage of vehicles.8.3 Ablation StudiesIn our previous experiments, we employed the traffic coordinator network solely as a feature ex-traction network, without considering the counterfactual advantage function. Therefore, it is crucialto verify the validity of this function. As illustrated in Figure 7, TraCo w/o CAF performs worsethan TraCo w/ CAF in all four autonomous driving tasks. This is because the traffic coordinatornetwork, when equipped with a counterfactual advantage function, not only extracts in-domain fea-tures but also evaluates the agent’s behavior based on these features. 
This evaluation allows for the measurement of the agent's contribution to itself and to the surrounding team, effectively addressing the interest-balancing problem.

Figure 7: Performance comparison of TraCo with and without the counterfactual advantage function (TraCo w/ CAF vs. TraCo w/o CAF) in the Bottleneck, Tollgate, Parking Lot, and Intersection tasks.

Table 1: The traffic coordinator network re-issues the command z according to the current situation at different time intervals; the command z remains unchanged within each interval. Each scenario reports Success Rate, Efficiency, and Safety.

          | Bottleneck              | Tollgate                | Parking Lot             | Intersection
          | Succ.      Eff.  Safety | Succ.      Eff.  Safety | Succ.      Eff.  Safety | Succ.      Eff.  Safety
TraCo/1   | 0.36±0.13  0.26  0.36   | 0.36±0.19  0.22  0.38   | 0.27±0.04  0.21  0.27   | 0.73±0.05  0.51  0.73
TraCo/2   | 0.37±0.09  0.27  0.37   | 0.32±0.15  0.18  0.34   | 0.15±0.07  0.11  0.16   | 0.72±0.01  0.51  0.72
TraCo/4   | 0.38±0.07  0.28  0.38   | 0.17±0.16  0.09  0.20   | 0.14±0.06  0.11  0.15   | 0.72±0.03  0.51  0.72
TraCo/6   | 0.42±0.09  0.30  0.42   | 0.25±0.19  0.14  0.28   | 0.17±0.04  0.13  0.17   | 0.73±0.04  0.51  0.73
TraCo/8   | 0.34±0.12  0.25  0.34   | 0.29±0.18  0.16  0.31   | 0.21±0.04  0.16  0.21   | 0.74±0.02  0.53  0.74

Taking inspiration from the behavior of real-life traffic coordinators, who issue commands based on vehicle behavior and intersection information at time intervals rather than directing vehicles continuously, we designed different time intervals at which the Traffic Coordinator network (TraCo) extracts features and re-issues its commands. As shown in Table 1, our experiments reveal that in complex traffic environments such as Tollgate and Parking Lot, where obstacles are numerous, roads are congested, and the behavior of in-domain agents is difficult to predict, frequent direction is necessary to ensure good vehicle decision-making. However, in the Bottleneck and Intersection tasks, where each vehicle's purpose is clear and its behavior is more predictable, frequent direction may interfere with the agent's decision-making. In such cases, an appropriate time interval can enhance the consistency of the agent's behavior.
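Since the ablation above varies how often orders are re-issued, a minimal sketch of the corresponding per-episode rollout (in the spirit of Algorithm 1) is given below. It reuses the coordinator and actor modules sketched earlier; `env` (a gym-style multi-agent interface returning per-agent dictionaries) and `nearest_neighbors` are hypothetical placeholders, not part of the paper's codebase.

```python
# Minimal sketch (assumptions, not the authors' code) of the per-episode rollout in
# Algorithm 1, with the Table 1 ablation made explicit: the coordinator re-issues the
# order z_i only every `interval` steps and keeps it fixed in between (TraCo/interval).
import torch

def rollout_episode(env, coordinator, actor, interval: int = 1, radius: float = 40.0):
    obs = env.reset()                      # hypothetical API: dict of agent id -> observation
    orders = {}                            # cached orders z_i, reused between re-issues
    trajectory, done, t = [], False, 0
    while not done:
        if t % interval == 0:              # re-issue orders at the chosen time interval
            for i, o_i in obs.items():
                s_i = nearest_neighbors(obs, ego=i, radius=radius)   # in-domain observations
                orders[i] = coordinator(torch.as_tensor(o_i)[None],
                                        torch.as_tensor(s_i)[None]).squeeze(0)
        actions = {i: actor(torch.as_tensor(o_i)[None], orders[i][None]).squeeze(0)
                   for i, o_i in obs.items()}
        obs_next, rewards, done, info = env.step(actions)            # gym-style step (assumed)
        trajectory.append((obs, dict(orders), actions, rewards))     # later used for PPO/CAF updates
        obs, t = obs_next, t + 1
    return trajectory
```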
9GRE34K0SB | AdaptSim: Task-Driven Simulation Adaptationfor Sim-to-Real TransferAllen Z. Ren1, Hongkai Dai2, Benjamin Burchfiel2, Anirudha Majumdar11Princeton University,2Toyota Research InstituteAbstract: Simulation parameter settings such as contact models and object geometry ap-proximations are critical to training robust manipulation policies capable of transferringfrom simulation to real-world deployment. There is often an irreducible gap betweensimulation and reality: attempting to match the dynamics between simulation and realitymay be infeasible and may not lead to policies that perform well in reality for a specifictask. We propose AdaptSim, a new task-driven adaptation framework for sim-to-realtransfer that aims to optimize task performance in target (real) environments. First, wemeta-learn an adaptation policy in simulation using reinforcement learning for adjustingthe simulation parameter distribution based on the current policy’s performance in atarget environment. We then perform iterative real-world adaptation by inferring newsimulation parameter distributions for policy training. Our extensive simulation andhardware experiments demonstrate AdaptSim achieving 1-3x asymptotic performanceand∼2x real data efficiency when adapting to different environments, compared tomethods based on Sys-ID and directly training the task policy in target environments.11 IntroductionLearning robust and generalizable policies for real-world manipulation tasks typically requires a substantialamount of training data. Since using real data exclusively can be very expensive or even infeasible, we oftenresort to training mostly in simulation. This raises the question: how should we specify simulation parame-ters to maximize performance in the real world while minimizing the amount of real-world data we require?Figure 1: AdaptSim iteratively improves taskperformance in dynamic scooping task under“irreducible” sim-to-real gap.A popular method is to perform domain randomization[1,2,3,4]: train a policy using a wide range of differentsimulation parameters in the hope that the policy can thus han-dle possible real-world variations in dynamics or observations.However, the trained policy may achieve good average per-formance, but perform poorly in a particular real environment.There has been work in performing system identification(Sys-ID) for providing a point or a distributional estimate ofparameters that best matches the robot or environment dy-namics exhibited in real-world data. This estimation can beperformed using either a single iteration [ 5] or multiple ones[6]. These adaptive domain randomization techniques allowtraining policies suited to specific target environments.While simple objects such as a box and its properties like theinertia can be well-modeled, there is a substantial amount of“irreducible” sim-to-real gap in many settings such as contact-rich manipulation tasks. Consider the taskof using a cooking spatula to dynamically scoop up small pieces of food from a table (Fig. 1). The exactgeometry of the pieces and spatula is difficult to specify, deformations such as the spatula bending againstthe table are not yet maturely implemented in simulators, and contact models such as point contact havebeen known to poorly approximate the complex real-world contact behavior [ 7] in these settings. 
In this1Webpage: https://irom-lab.github.io/AdaptSim/7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.case, real environments are out-of-domain (OOD) from simulation, and performing Sys-ID of simulationparameters might fail to train a useful policy for the real world due to this inherent irreducible gap.Contributions. In this work we take a task-driven approach: instead of trying to align the simulation withreal-world dynamics, we focus on finding simulation parameters such that the resulting policy optimizestask performance. Such an approach can lead to policies that achieve high reward in the real world evenwith an irreducible sim-to-real gap. We consider settings where the robot has access to a simulator betweeniterations of real-world interactions, allowing it to observe real-world dynamics and adapt the simulatoraccordingly with the goal of improving task performance in reality. We propose AdaptSim — a two-phaseframework where (i) an adaptation policy that updates simulation parameters is first meta-trained usingreinforcement learning in simulation, and (ii) then deployed on the real environment iteratively. Trainingthe adaptation policy to maximize task reward enhances the efficiency of real data usage by identifying onlytask-relevant simulation parameters and helps trained policies better generalize to OOD (real) environments.We demonstrate our approach achieving 1-3x asymptotic performance and ∼2x real data efficiency inOOD environments in three robotic tasks including two that involve contact-rich manipulation, comparedto methods based on Sys-ID and directly training the task policy in target environments.2 Related WorkSim-to-real transfer in robotics has been primarily addressed using Domain Randomization (DR) techniques[1,8,9,10,11,12,13] that inject noise in simulation parameters related to visuals, dynamics, and actuations.Below we summarize techniques that better adapt to real environments.Sys-ID domain adaptation. Inspired by classical work in Sys-ID [ 14,15], there has been a popular lineof work identifying simulation parameters that match the robot and environment dynamics in the realenvironment. BayesSim [ 6] and follow-up work [ 16,17] apply Bayesian inference to iteratively searchfor a posterior distribution of the simulation parameters based on simulation and real-world trajectories.However, these methods consider relatively well-modeled environment parameterizations such as objectmass or friction coefficient during planar contact; Sys-ID approaches can be brittle when the simulationdoes not closely approximate the real world [13, 18].Task-driven domain adaptation. AdaptSim better fits within a different line of work that aims to findsimulation parameters that maximize the task reward in target environments. Muratore et al. [ 19] applyBayesian Optimization (BO) to optimize parameters such as pendulum pole mass and joint dampingcoefficient in a real pendulum swing-up task. Other work focus on adapting to simulated domains only[20,21,22]. One major drawback of these methods is that they require a large number of rollouts in targetenvironments (e.g., 150 in [ 19]), which is very time-consuming for many tasks requiring human reset.AdaptSim meta-learns adaptation strategies in simulation and requires only a few real rollouts for inference(e.g.,20 in our pushing experiments).3 Problem FormulationEnvironment. In simulation, we consider a space Ωthat parameterizes quantities such as friction coeffi-cients and dimensions of geometric primitives. 
Let Edenote a distribution of sim parameters with supportonΩ. Denote a single sim environment E∈Ωand a real environment Er.Task Policy and Trajectory. We denote a task policy π∈Π:O→A that maps the robot’s observation ottoaction at. Running it in an environment results in a state-action trajectory τ(π;E):[0,T]×Π×E→S×Awith time horizon T. The trajectory is also subject to an initial state distribution. We specify tasks forthe robot using a reward function ( e.g.,pushing some object to a specific location on the table), and letR(τ)∈[0,1]denote the normalized cumulative reward accrued by a trajectory. We let R(π;E)denote thereward of running the task policy πin the environment E, in expectation over the initial state distribution.Goal. Our eventual goal is to find a task policy that maximizes the task performance in a real environmentEr. Instead of directly searching for the policy, we search for the best sim parameter distribution Efortraining πin the following bi-level optimization objective:supER(π∗E;Er),where π∗E:= supπEE∼E[R(π;E)], (1)2Figure 2: AdaptSim consists of two phases: (1) meta-training an adaptation policy in sim by maximizing task rewardon randomly sampled simulated target environments; (2) iteratively adapting simulation parameter distributions basedon real trajectories. The upper-right illustration shows that using only a few real trajectories, the task policy is adaptedto push the bottle closer to the target location (yellow cross).the optimal task policy for E. Performing the outer level of (1)requires interactions with Er(the realworld); we allow a small budget of such interactions. We emphasize that the objective above identifiesthe optimal distribution of simulation parameters for maximizing task performance, unlike objectives thatattempt to match the dynamics between simulation and reality.4 ApproachOne way to solve (1)is to perform blackbox optimization on Eby evaluating R(π∗E;Er)[19], whichrequires a large budget of real trajectories (see results in Sec. 6). AdaptSim instead amortizes the expensiveouter loop to simulation: it solves (1)for many simulated environments, learns the mapping to the solutions,and then infers the solution for Er. There are two phases (Fig. 2):1)Meta-learn the adaptation policy in sim : randomly sample target environments Es∈Ωin sim, andthen train an “adaptation” policy f:(E,τ)7→∆Eusing RL to maximize task reward in Es, by updatingthe sim parameter distribution (and the corresponding task policies) in iterations.2)Iteratively adapt sim parameters with real data : given a real environment Er, iteratively infer bettersim parameter distributions using the trained fand a few real trajectories; the task policy is iterativelyfine-tuned in sim to improve task reward with the updated parameter distribution.4.1 Phase 1: meta-learning the adaptation policy in simIn order to correctly infer simulation parameters for an unseen real environment at test time, we first trainthe adaptation policy to infer better parameters for many simulated target environments. This phase happensentirely in simulation. Formally, we model the problem as a partially-observable contextual bandit [23].Definition 1 A Simulation-Adaptation Contextual Bandit (SA-CB) is specified by a tuple (Ω,T,P,R):•Ωis the space of contexts. Each context corresponds to a simulated target environment Es; the context isnot directly observable.•Tis the space of partial observations of the context. 
Each observation corresponds to a trajectory observed by running the task policy in a given context.
•P is the space of actions. An action corresponds to choosing a sim parameter distribution E.
•R is the reward associated with choosing an action in a particular context (i.e., the reward R(π∗_E; E_s) of the task policy π∗_E trained with E when deployed in the target environment E_s).

It may be difficult to infer the optimal E ∈ P using a single iteration of interactions with the target environment — if the current task policy fails badly in the target environment, the interaction may reveal little information. Thus, we iteratively apply incremental changes to E, with the parameter distribution initialized as E_{i=0}. Solving the SA-CB (using techniques that we detail below), we meta-learn an adaptation policy f(E, τ) to maximize:

$$\mathbb{E}_{E_s \sim U_\Omega}\, \mathbb{E}_{\mathcal{E}_0 \sim U_{\mathcal{P}}} \sum_{i=0}^{I} \gamma^{i}\, R(\pi^*_{\mathcal{E}_i}; E_s), \qquad \text{where } \mathcal{E}_{i+1} = \mathcal{E}_i + \Delta\mathcal{E}_i,\;\; \Delta\mathcal{E}_i = f\big(\mathcal{E}_i, \tau(\pi^*_{\mathcal{E}_i}; E_s)\big), \quad (2)$$

and U_Ω and U_P are uniform distributions over Ω and P respectively, and γ < 1 is the discount factor. This is the expected discounted sum of task rewards over multiple interactions from i = 0 to the adaptation horizon I, over random sampling of the simulated target environment and the initial sim parameter distribution.

Figure 3: Task-policy trajectories better reveal task-relevant information such as scooping dynamics under fast contact.

Sim parameter distribution space. We choose the space P of possible simulation parameter distributions to be Gaussian with mean bounded within Ω and a fixed variance. We also use a fixed step size δ for adapting each simulation parameter, ranging from 10% to 15% of the parameter range depending on the dimension of Ω — thus the set of possible ∆E along each dimension is {δ, −δ, 0}.

Algorithm 1 Meta-learning the adaptation policy in sim
Require: (Ω, T, P, R), SA-CB
Require: S_f = ∅, replay buffer
Require: S_E = ∅, set of all simulation parameter distributions (and their task policies) used
1: Initialize ε ← 1
2: for k ← 0 to K do
3:   Sample target E_s ∼ U_Ω and E_{i=0} ∼ U_P
4:   for i ← 0 to I do
5:     Train task policy π∗_{E_i} (Sec. A2.2)
6:     Collect τ(π∗_{E_i}; E_s) and R(π∗_{E_i}; E_s)
7:     Sample a random ∆E_i or infer ∆E_i = f(E_i, τ(·;·))
8:     Update E_{i+1} ← E_i + ∆E_i
9:     Add E_i, ∆E_i, τ(·;·), R(·;·) to S_f
10:    Add E_i (and π∗_{E_i}) to S_E
11:  end for
12:  Train f using Double Q-Learning and S_f
13:  Anneal ε towards 0
14: end for
15: return f, S_E

Task-policy trajectory as observation. We choose the task policy π∗_E to generate the trajectory observations used by the adaptation policy. Our intuition is that, compared to arbitrary policies or ones that generate the most "informative" trajectories in terms of dynamics [24], π∗_E better reveals the task-relevant information of the target environment. In the scooping task, the robot needs to attempt to scoop up the pieces so that it can learn about the environment's effect on the task (e.g., a piece with a flat bottom is generally harder to scoop). Simply pushing the pieces around does not exhibit the behavior of the pieces under fast contact (Fig. 3).

Training the adaptation policy using RL. The adaptation policy f is parameterized with a Branching Dueling Q-Network [25], which outputs the state-action value of choosing any of {δ, −δ, 0} along each action dimension. It takes as input (1) the mean vector of the current simulation parameter distribution and (2) the trajectory observation. We apply reinforcement learning (RL) to train f to maximize Eq. (2). In simulation, we collect K "adaptation trajectories"; each trajectory is a set {E_i, ∆E_i, R(π∗_{E_i}; E_s), τ(π∗_{E_i}; E_s)}_{i=0}^{I} and is saved in a replay buffer S_f.
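A minimal sketch of the adaptation policy's per-parameter Q heads and the greedy sim-parameter update is given below. The architecture is a simplified stand-in for the Branching Dueling Q-Network cited above (the dueling decomposition is omitted), and all dimensions, names, and the clipping to Ω are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' implementation) of the adaptation policy f:
# a branching Q-network with one head per simulation parameter, each head scoring the three
# discrete moves {+delta, -delta, 0} for that parameter.
import torch
import torch.nn as nn

MOVES = torch.tensor([1.0, -1.0, 0.0])   # scaled by the per-parameter step size delta

class AdaptationPolicy(nn.Module):
    def __init__(self, n_params: int, traj_dim: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_params + traj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One branch per simulation parameter; each outputs Q values for {+delta, -delta, 0}.
        self.branches = nn.ModuleList([nn.Linear(hidden, 3) for _ in range(n_params)])

    def forward(self, dist_mean: torch.Tensor, traj_feat: torch.Tensor) -> torch.Tensor:
        h = self.trunk(torch.cat([dist_mean, traj_feat], dim=-1))
        return torch.stack([branch(h) for branch in self.branches], dim=-2)   # (B, n_params, 3)

def greedy_update(policy: AdaptationPolicy, dist_mean, traj_feat, delta, omega_low, omega_high):
    """f(E, tau) = argmax_{dE} Q: apply the greedy +delta/-delta/0 move per parameter, kept within Omega."""
    with torch.no_grad():
        q = policy(dist_mean, traj_feat)             # (B, n_params, 3)
        idx = q.argmax(dim=-1)                       # greedy move index per parameter
        step = MOVES[idx] * delta                    # delta may be a per-parameter tensor
    new_mean = dist_mean + step
    return torch.minimum(torch.maximum(new_mean, omega_low), omega_high)
```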
Since each step involves trainingthe corresponding task policy π∗Ei, which can be expensive, we apply off-policy Double Q-Learning [ 26] forsample efficiency. Using this, the adaptation policy outputs the greedy action of a parameterized Q function,f(E,τ)=argmax∆EQ(E,τ;∆E). We use ε-greedy exploration with εinitialized at 1 and annealed to 0.This constitutes the first phase of AdaptSim. Algorithm 1 details the steps for collecting adaptationtrajectories in the inner loop (Line 4-15) and meta-learning the adaptation policy. We save all distributions(and their corresponding task policies, omitted in notations for convenience) in a set SE, which are usedagain in the second phase. Training the task policy for each Eis the most computationally heavy componentof Algorithm 1; in Sec. A2.2 we explain the heuristics applied to allow re-using task policies between Einorder to improve computational efficiency.4.2 Phase 2: iteratively adapt sim parameters with real dataAfter meta-training the adaptation policy to find good task policies for a diverse set of target environments insimulation, we can apply it for inference and perform adaptation for the real environment Er. Algorithm 2details the iterative process. We apply the same adaptation process as the inner loop of Algorithm 1 for Iriterations: train the task policy in simulation, evaluate it in the real environment, and infer the change ofsimulation parameters based on real trajectories. We always apply the greedy action from f(E,τ)(ε=0).4Figure 4: Setup of the dynamic pushing and dynamic scooping tasks in both simulation and reality.Algorithm 2 Iteratively adapt sim parameterswith real dataRequire: Er, real environmentRequire: (Ω,T,P,R), SA-CBRequire: f, adaptation policy trained in Phase 1Require: Sf, set of sim parameter distributions (andcorresponding task policies) from Phase 11: Sample S′ffromSf2:fori←0toIrdo3: forEi∈S′fdo4: Train or fine-tune the task policy π∗Eiinsim5: Collect τ(π∗Ei;Er)andR(π∗Ei;Er)inreal6: Update Ei+1←E i+fEi,τ(·;·)7: end for8:end for9:return π∗Eiwith the highest R(·;Er)Since we have sampled a large set SEof parameter dis-tributions and trained their task policies in Phase 1, wemay re-use them here. At the beginning of Phase 2, wesample S′E, a set of Ndistributions saved in SE, as theinitial distributions to be adapted independently. Usuallywe pick N=2considering the trade-off between num-ber of real trajectories needed and convergence of taskperformance (see Appendix A5 for analysis).5 TasksNext we detail the three robotic tasks for evaluatingAdaptSim and baselines. We choose these tasks anddesign the environments to highlight the irreducible gapbetween training and test domains.5.1 Swing-up of a linearized double pendulumThis is a classic control task where the goal is to swing up a simple double pendulum with two actuatedjoints at one end of the two links. We consider the dynamics linearized around the state with the pendulumat the top, and thus the optimal policy can be solved exactly using Linear Quadratic Regulator (LQR) [ 27]for a particular set of simulation parameters ( i.e.,a Dirac delta distribution). The task cost (reward) functionis defined with the standard quadratic state error and actuation penalty. The trajectory observation is evenlyspaced points along trajectory of the two joints.Simulation setup. The environment is parameterized with four parameters: m1andm2∈[1,2], point massof the two joints, and b1andb2∈[1,2], damping coefficients of the two joints. 
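For the pendulum task, the mapping from a sampled parameter vector (m1, m2, b1, b2) to a task policy can be illustrated with a standard continuous-time LQR solve. The linearized model below is a deliberately simplified, decoupled stand-in (not the paper's actual dynamics), used only to show the structure of the computation; the helper names are hypothetical.

```python
# Minimal sketch (not the paper's dynamics): solving an LQR task policy for one draw of the
# simulation parameters (m1, m2, b1, b2). The "linearization" is a simplified, decoupled
# stand-in for the true linearized double pendulum about the upright state.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A: np.ndarray, B: np.ndarray, Q: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Continuous-time LQR: u = -K x minimizes the quadratic state-error and actuation cost."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def simplified_linearized_pendulum(m1, m2, b1, b2, g=9.81, l=1.0):
    """Hypothetical decoupled linearization about the upright state; x = [q1, q2, dq1, dq2]."""
    A = np.block([
        [np.zeros((2, 2)), np.eye(2)],
        [np.diag([g / l, g / l]), np.diag([-b1 / m1, -b2 / m2])],
    ])
    B = np.vstack([np.zeros((2, 2)), np.diag([1.0 / m1, 1.0 / m2])])
    return A, B

# Task policy for one sampled environment E = (m1, m2, b1, b2):
A, B = simplified_linearized_pendulum(m1=1.5, m2=1.2, b1=1.0, b2=1.8)
K = lqr_gain(A, B, Q=np.eye(4), R=0.1 * np.eye(2))   # quadratic state error + actuation penalty
u = -K @ np.array([0.05, -0.02, 0.0, 0.0])           # stabilizing torques near the upright state
```

Because the gain K depends directly on the sampled parameters, each sim parameter distribution induces a different (and here exactly optimal) task policy, which is what the adaptation policy evaluates against the target environment.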
The dynamics is simulatedwith numerical integration without a dedicated physics simulator.5.2 Dynamic table-top pushing of a bottleThe robot needs to dynamically push a bottle to a particular target location on the table (Fig. 4). Since thetarget can be outside the workspace of the robot, the robot must push objects with high velocity — causingthem to slide after a short period of contact. The task policy is parameterized with a neural network thatmaps the desired target location to action including (1) planar pushing angle and (2) robot end-effectorspeed (see Appendix A4 for visualization) and the predicted reward. The network then acts as a state-value(Q) function and is trained off-policy while simulated trajectories are saved in a replay buffer. The taskcost (reward) is defined as the distance between the target location and the final location of the bottle. Thetrajectory observation is either (1) the final 2D position of the bottle only, or (2) evenly spaced points alongthe 2D trajectory — we consider both representations in the experiments.Notation Description Rangeμ table friction coefficient [0.05,0.2]e hydroelastic modulus [7] [1e4,1e6]μppatch friction coefficient [0.20,0.80]yp patch lateral location [−0.10,0.10]Table 1: Sim setup for the pushing task.Simulation setup. We employ the Drake physics sim-ulator [ 28] for its accurate contact mechanics. In thissimulated environment, a small patch of the table is sim-ulated with different physics properties, simulating a wetor sticky area on the work surface. Parameter settingsfor this task are shown in Table 1. The hydroelastic mod-ulus is a parameter of the hydroelastic contact model [ 7]implemented in Drake — it roughly simulates how “soft” the contact is between the objects, with lowervalues being softer.5Real setup. Two 3D-printed bottles (Fig. 4, Heavy and Light) with the same dimensions but differentmaterials and masses are used. With an idealized model, the sliding distance should only depend on thecontact surface but not the mass — which is the case in simulation — but in real experiments, we find thetwo bottles consistently travel different distances. Additionally, Heavy tends to rotate slightly despite beingpushed straight due to a slightly uneven bottom surface. This type of unmodeled effect exemplifies theirreducible sim-to-real gap. We also adhere a small piece of high-friction Neoprene rubber to the table,which decelerates the bottle and further complicate the task dynamics.5.3 Dynamic scooping of food pieces with a spatulaThe robot needs to use a cooking spatula to scoop up small food pieces on the table (Fig. 4). It is achallenging task that requires intricate planning of the scooping trajectory — we notice humans cannotcomplete the task consistently without a few trials to practice. The task policy is parameterized witha neural network that maps the initial positions of the food pieces to parameterization of the scoopingtrajectory: (1) initial distance of the spatula from the pieces, (2) initial pitch angle of the spatula from thetable, and (3) the timestep to lift up the spatula (see Appendix A4 for details), and the predicted reward.The task reward is defined as the ratio of food pieces on the spatula at the end of the action. The trajectoryobservation is evenly spaced points along 2D trajectories of the food pieces.Simulation setup. We again use the Drake simulator. 
The parameter settings are shown in Table 2.Notation Description Rangeμ friction coefficient [0.25,0.4]e hydroelastic modulus [1e4,5e5]g food piece geometry {ellipsoid ,cylinder }h food piece height [1.5cm,2.5cm]Table 2: Sim setup for the scooping task.Real setup. Six different kinds of food pieces are used(Fig. 4): (1) chocolate raisin, (2) (fake, rubber-like)sliced carrot, (3) (fake, rigid) sliced cucumber, (4) rawBrussels sprout, (5) raw sliced mushroom, and (6)Oreo cookie. They cover different shapes from beinground, ellipsoidal, to roughly cylindrical, and also havedifferent amounts of deformation and friction.6 ExperimentsThrough extensive experiments below, we demonstrate that AdaptSim improves asymptotic task perfor-mance compared to Sys-ID and other baselines when adapting to real and OOD simulated environments,while also improving data efficiency. For baselines, first we consider methods that directly optimizes thetask policy: (1) Uniform domain randomization (UDR): train a task policy to optimize the averagetask reward over environments from UΩ; (2)UDR+Target: fine-tune the task policy from UDR with realdata; (3) LearnInTarget: directly train a task policy with data in the target environment only by fitting asmall neural network that maps action to final reward. The policy then outputs the action with the highestpredicted reward. With enough real data, this baseline should act as the oracle or upper bound of taskperformance, but can be inefficient. Next, we consider two that perform SysID and iteratively train thetask policy like AdaptSim: (4) SysID-Bayes [ 6,29]:iteratively infer the sim parameter distribution basedon real trajectories to match dynamics in sim and reality, known as BayesSim; (5) SysID-Point: infer apoint estimate of the sim parameter instead of a distributional one (we hypothesize that in some settingsrandomizing sim parameters with a distribution can negatively impact task policy training).6.1 AdaptSim achieves better task performance through adaptationSim-to-Sim Adaptation. We perform experiments for all baselines adapting to different WD (Within-Domain) and OOD simulated environments. WD environments are generated by sampling all simulationparameters within Ωof each task, and OOD environments are generated by sampling some parametersoutside Ω(see Appendix A5 for details). Table 3 shows the adaptation results in the target environments inthe three tasks. While Sys-ID baselines achieve high reward in WD environments, AdaptSim outperformsSys-ID baselines in almost all OOD environments.Sim-to-Real Adaptation. Next we perform experiments for adapting to real environments. Fig. 5 showsthe average reward achieved at each adaptation iteration in the pushing and scooping tasks. Generally theperformance gap between AdaptSim and Sys-ID baselines is larger in reality, with AdaptSim achievingbetter performance. In the scooping task, for example, AdaptSim is able to train a task policy for slicedcucumbers with decent performance (60% success rate); the pieces are very thin and difficult to scoopunder (Fig. 8). 
Other baselines fail to scoop up the pieces.6Double Pendulum Swing-Up Bottle Pushing Food ScoopingMethod WD OOD-1 OOD-2 OOD-3 OOD-4 WD OOD-1 OOD-2 OOD-3 OOD-4 WD OOD-1 OOD-2 OOD-3 OOD-4AdaptSim 0.98 0.96 0.95 0.95 0.98 0.95 0.87 0.73 0.86 0.77 1.00 0.64 1.00 1.00 0.55SysID-Bayes [6] 0.85 0.76 0.79 0.23 0.96 0.98 0.80 0.65 0.81 0.79 0.90 0.66 0.81 1.00 0.36SysID-Point 0.95 0.60 0.73 0.39 0.76 0.94 0.84 0.68 0.85 0.78 0.94 0.63 0.90 1.00 0.42UDR - - - - - 0.68 0.65 0.61 0.67 0.58 0.65 0.22 0.43 0.55 0.12UDR+Target - - - - - 0.78 0.73 0.66 0.71 0.70 0.61 0.31 0.49 0.60 0.21LearnInTarget - - - - - 0.91 0.75 0.66 0.74 0.71 0.03 0.00 0.25 0.26 0.03Table 3: Sim-to-Sim Adaptation. Best average reward achieved over adaptation horizons at different WD and OODsimulated target environment in the three tasks. For the pendulum task, the values are normalized in [0,1]using thereward achieved by UDR (lower bound) and by using the best possible parameters within Ω(upper bound, estimatedwith exhaustive sampling). For the pushing task, the values are normalized with 20cmas the maximum error, which isthe range of possible goal locations in the forward direction.Figure 5: Sim-to-Real Adaptation. Reward achieved over adaptation iterations by all methods, in the task of pushing(left) and scooping up (right) different real objects (see Fig. 4 for images). Results are averaged over 10 trials in thepushing task and 5 in the scooping task.6.2 AdaptSim improves real data efficiencyReal data budgetMethod 0 4 8 16 24 32 40 48AdaptSim 0.30 0.69 0.80 0.83 0.84 0.84 0.82 0.83LearnInTarget 0.05 0.04 0.63 0.69 0.76 0.80 0.84 0.83UDR+Target 0.63 0.56 0.62 0.66 0.68 0.74 0.82 0.82BayesOpt - - - - 0.65 0.72 0.79 0.80Table 4: Adaptation Data Efficiency. Normalized rewardachieved using different amount of real data in the pushingtask with Heavy bottle.Pushing task. We compare AdaptSim with Learn-InTarget and UDR+Target with different numberof real data budget. With enough data, LearnInTar-get and UDR+Target should achieve high rewardin the target environment. We do not compare withSys-ID baselines here since Sec. 6.1 shows theytypically fail to achieve the same level of task per-formance in real environments. In the task of pushing Heavy bottle, Table 4 shows that AdaptSim achievesa similar level of task performance ( ∼0.83) using only 16 trials while LearnInTarget and UDR+Targetuses 40. Fine-tuning with real data in UDR+Target is ineffective until the real budget is sufficient andcan negatively impact the performance in the low-data regime ( e.g.,4 and 8). This also exemplifies usingsimulation to amortize data requirements for policy training. We also introduce a new baseline BayesOpthere based on [ 19] that directly optimizes Eq. (1)with Bayesian Optimization. However, with 24 rollouts(the minimum needed to initialize the optimization) it only achieves 0.65.Larger improvement in scooping task. While LearnInTarget and UDR+Target achieve reasonableperformance in the pushing task, LearnInTarget achieves low reward on all the food pieces in the scoopingtask, and UDR+Target does not improve upon the performance of UDR policies. 
The action space in thescooping task is more complex and requires significantly more data to search for or improve task policies.AdaptSim’s adaptation pre-training in simulation considerably amortizes the real data requirement.6.3 AdaptSim finds sim parameters that are different from ones from SysIDWe expect that AdaptSim finds simulation settings that achieve better task performance while not necessarilyminimizing the full dynamics discrepancies between sim and reality. Fig. 6 shows SysID-Bayes findsparameters that are closer to the target in the parameter space, but for the pendulum task, such parameterslead to inferior task reward compared to those found by AdaptSim. Moreover, we compute the dynamicsdiscrepancy, measured as the total variations between trajectories in the target environment and in the7Figure 7: Adaptation in Pushing Task. AdaptSim correctly learns to push the bottle swiftly and close to the target.The task-relevant sim parameters learned by AdaptSim noticeably differ from those by SysID-Bayes which tends tounderestimate table and patch friction, resulting in a less forceful push of the bottle and worse task performance.Figure 8: Adaptation in Scooping Task. With AdaptSim, the cucumber is successfully scooped up by lifting up thespatula off the table late; otherwise, the piece slips off the spatula. AdaptSim infers an ellipsoidal shape ( g=1, foodpiece geometry), while SysID-Bayes infers a cylindrical shape.environment with adapted parameters. The results are 17.6 vs. 12.1 for AdaptSim and SysID-Bayes inOOD-1 environment, 21.7 and 11.1 in OOD-2, 39.9 and 16.4 in OOD-3, 75.8 and 56.4 in OOD-4. Thusfor all four OOD target environments, SysID-Bayes finds sim parameters whose resulting dynamics arecloser to the target environment (lower discrepancies), but Table 3 shows the task performance is worse.Fig. 7 and Fig. 8 further show cases where SysID-Bayes under-performs AdaptSim and there are visibledifferences between sim parameter distributions found by the two approaches. In the pushing task, SysID-Bayes infers table and patch friction coefficients that are too low, and the trained task policy pushes thebottle with little speed. In the scooping task, interestingly, AdaptSim infers an ellipsoidal shape for thesliced cucumber despite it resembling a very thin cylinder, and the task policy achieves 60% success rate.Sys-ID infers a cylindrical shape but the task policy fails completely.7 DiscussionsFigure 6: Sim parameters found by AdaptSim vs. SysID-Bayes in the OOD-1 setting of the double pendulum task.The colors indicate the maximum possible reward at eachparameter. SysID-Bayes finds parameters closer to thetarget in the parameter space (dark red star), but the taskperformance is worse.Summary. We present AdaptSim , a framework forefficiently adapting simulation-trained task policiesto the real world. AdaptSim meta-learns how to adaptsimulation parameter distributions for better perfor-mance in diverse simulated target environments, andthen infers better distributions for training real-worldtask policies using a small amount of real data.Limitations and Future Work. In some settingsAdaptSim does not outperform baselines ( e.g.,OOD-4 in the pushing task and scooping up Brussels sproutin hardware, Fig. A8). 
First, AdaptSim’s task-drivenadaptation training requires the trained task policy being (nearly) optimal on the corresponding simulationparameter distribution — while it can be solved exactly in the double pendulum task, the task policy trainingin the two manipulation tasks can be noisy. Second, if the target environment is extremely OOD from thesimulation domain and the adaptation policy has not been trained with similar trajectories, AdaptSim maynot work as well. We believe the first issue can be mitigated by allowing more simulation budget for taskpolicy training and better design of task policy re-use. The second issue can be addressed by designing thesimulation parameter space Ωto better cover possible real-world behavior.8AcknowledgmentsThe authors were partially supported by the Toyota Research Institute (TRI), the NSF CAREER Award[#2044149], and the Office of Naval Research [N00014-23-1-2148, N00014-21-1-2803]. This article solelyreflects the opinions and conclusions of its authors and NSF, ONR, TRI or any other Toyota entity. Theauthors would like to thank the Dynamics and Simulation team and Dexterous Manipulation team at TRIRobotics for technical support and guidance. We also thank Yimin Lin for helpful discussions.References[1]I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert,G. Powell, R. Ribas, et al. Solving rubik’s cube with a robot hand. arXiv preprint arXiv:1910.07113 ,2019.[2]A. Handa, A. Allshire, V . Makoviychuk, A. Petrenko, R. Singh, J. Liu, D. Makoviichuk, K. V an Wyk,A. Zhurkevich, B. Sundaralingam, et al. DeXtreme: Transfer of Agile In-hand Manipulation fromSimulation to Reality. arXiv preprint arXiv:2210.13702 , 2022.[3]J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning quadrupedal locomotion overchallenging terrain. Science Robotics , 5(47):eabc5986, 2020.[4]A. Loquercio, E. Kaufmann, R. Ranftl, A. Dosovitskiy, V . Koltun, and D. Scaramuzza. Deep droneracing: From simulation to reality with domain randomization. IEEE Transactions on Robotics , 36(1):1–14, 2019.[5]V . Lim, H. Huang, L. Y . Chen, J. Wang, J. Ichnowski, D. Seita, M. Laskey, and K. Goldberg. Planarrobot casting with real2sim2real self-supervised learning. In Proceedings of the IEEE InternationalConference on Robotics and Automation (ICRA) , 2022.[6]F. Ramos, R. C. Possas, and D. Fox. Bayessim: adaptive domain randomization via probabilisticinference for robotics simulators. In Proceedings of Robotics: Science and Systems , 2019.[7]J. Masterjohn, D. Guoy, J. Shepherd, and A. Castro. V elocity Level Approximation of Pressure FieldContact Patches. IEEE Robotics and Automation Letters , 7(4):11593–11600, 2022.[8]K.-C. Hsu, A. Z. Ren, D. P . Nguyen, A. Majumdar, and J. F. Fisac. Sim-to-Lab-to-Real: Safereinforcement learning with shielding and generalization guarantees. Artificial Intelligence , 314:103811, 2023.[9]J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P . Abbeel. Domain randomization fortransferring deep neural networks from simulation to the real world. In Proceedings of the IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , 2017.[10] F. Sadeghi and S. Levine. Cad2rl: Real single-image flight without a single real image. In Proceedingsof Robotics: Science and Systems , 2017.[11] X. B. Peng, M. Andrychowicz, W. Zaremba, and P . Abbeel. Sim-to-real transfer of robotic controlwith dynamics randomization. 
In Proceedings of the IEEE International Conference on Robotics andAutomation (ICRA) , 2018.[12] G. B. Margolis and P . Agrawal. Walk These Ways: Tuning Robot Control for Generalization withMultiplicity of Behavior. In Proceedings of the Conference on Robot Learning (CoRL) , 2022.[13] F. Muratore, F. Ramos, G. Turk, W. Y u, M. Gienger, and J. Peters. Robot learning from randomizedsimulations: A review. Frontiers in Robotics and AI , 9, 2022.[14] M. Gautier and W. Khalil. On the identification of the inertial parameters of robots. In Proceedingsof the IEEE Conference on Decision and Control (CDC) , 1988.9[15] P . K. Khosla and T. Kanade. Parameter identification of robot dynamics. In Proceedings of the IEEEConference on Decision and Control (CDC) , 1985.[16] E. Heiden, C. E. Denniston, D. Millard, F. Ramos, and G. S. Sukhatme. Probabilistic Inference ofSimulation Parameters via Parallel Differentiable Simulation. In Proceedings of the IEEE InternationalConference on Robotics and Automation (ICRA) , 2022.[17] R. Antonova, J. Yang, P . Sundaresan, D. Fox, F. Ramos, and J. Bohg. A bayesian treatment of real-to-sim for deformable object manipulation. IEEE Robotics and Automation Letters , 7(3):5819–5826,2022.[18] C. Chi, B. Burchfiel, E. Cousineau, S. Feng, and S. Song. Iterative residual policy: for goal-conditioned dynamic manipulation of deformable objects. In Proceedings of Robotics: Science andSystems , 2022.[19] F. Muratore, C. Eilers, M. Gienger, and J. Peters. Data-efficient domain randomization with bayesianoptimization. IEEE Robotics and Automation Letters , 6(2):911–918, 2021.[20] Q. Vuong, S. Vikram, H. Su, S. Gao, and H. I. Christensen. How to pick the domain randomization pa-rameters for sim-to-real transfer of reinforcement learning policies? arXiv preprint arXiv:1903.11774 ,2019.[21] W. Y u, C. K. Liu, and G. Turk. Policy transfer with strategy optimization. In Proceedings of theInternational Conference on Learning Representations (ICLR) , 2018.[22] N. Ruiz, S. Schulter, and M. Chandraker. Learning to simulate. In Proceedings of the InternationalConference on Learning Representations (ICLR) , 2019.[23] A. Bensoussan. Stochastic control of partially observable systems . Cambridge University Press,1992.[24] J. Swevers, C. Ganseman, D. B. Tukel, J. De Schutter, and H. V an Brussel. Optimal robot excitationand identification. IEEE Transactions on Robotics and Automation , 13(5):730–740, 1997.[25] A. Tavakoli, F. Pardo, and P . Kormushev. Action Branching Architectures for Deep ReinforcementLearning. In Proceedings of the AAAI Conference on Artificial Intelligence , 2018.[26] H. V an Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning. InProceedings of the AAAI conference on artificial intelligence , 2016.[27] R. Tedrake. Underactuated Robotics . 2023. URL https://underactuated.csail.mit.edu .[28] R. Tedrake and the Drake Development Team. Drake: Model-based design and verification forrobotics, 2019. URL https://drake.mit.edu .[29] R. Antonova, F. Ramos, R. Possas, and D. Fox. BayesSimIG: Scalable Parameter Inference forAdaptive Domain Randomization with Isaac Gym. arXiv preprint arXiv:2107.04527 , 2021.[30] Y . Chebotar, A. Handa, V . Makoviychuk, M. Macklin, J. Issac, N. Ratliff, and D. Fox. Closing thesim-to-real loop: Adapting simulation randomization with real world experience. In Proceedings ofthe IEEE International Conference on Robotics and Automation (ICRA) , 2019.[31] A. Allevato, E. S. Short, M. Pryor, and A. Thomaz. 
Tunenet: One-shot residual tuning for systemidentification and sim-to-real robot task transfer. In Proceedings of the Conference on Robot Learning(CoRL) , 2020.[32] A. Ajay, M. Bauza, J. Wu, N. Fazeli, J. B. Tenenbaum, A. Rodriguez, and L. P . Kaelbling. Combiningphysical simulators and object-based networks for control. In Proceedings of the IEEE InternationalConference on Robotics and Automation (ICRA) , 2019.10[33] A. Zeng, S. Song, J. Lee, A. Rodriguez, and T. Funkhouser. Tossingbot: Learning to throw arbitraryobjects with residual physics. IEEE Transactions on Robotics , 36(4):1307–1319, 2020.[34] A. Kumar, Z. Fu, D. Pathak, and J. Malik. RMA: Rapid motor adaptation for legged robots. InProceedings of Robotics: Science and Systems , 2021.[35] W. Y u, J. Tan, C. K. Liu, and G. Turk. Preparing for the unknown: Learning a universal policy withonline system identification. In Proceedings of Robotics: Science and Systems , 2017.[36] B. Evans, A. Thankaraj, and L. Pinto. Context is everything: Implicit identification for dynamicsadaptation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) ,2022.[37] J. Liang, S. Saxena, and O. Kroemer. Learning active task-oriented exploration policies for bridgingthe sim-to-real gap. In Proceedings of Robotics: Science and Systems , 2020.[38] W. Jin and M. Posa. Task-Driven Hybrid Model Reduction for Dexterous Manipulation. arXivpreprint arXiv:2211.16657 , 2022.[39] A. Z. Ren and A. Majumdar. Distributionally robust policy learning via adversarial environmentgeneration. IEEE Robotics and Automation Letters , 7(2):1379–1386, 2022.[40] H. S. Gomes, B. L ́eger, and C. Gagn ́e. Meta learning black-box population-based optimizers. InProceedings of the International Conference on Learning Representations (ICLR) , 2023.[41] M. Wistuba, N. Schilling, and L. Schmidt-Thieme. Scalable gaussian process-based transfer surrogatesfor hyperparameter optimization. Machine Learning , 107(1):43–78, 2018.[42] Y . Chen, M. W. Hoffman, S. G. Colmenarejo, M. Denil, T. P . Lillicrap, M. Botvinick, and N. Freitas.Learning to learn without gradient descent by gradient descent. In Proceedings of the InternationalConference on Machine Learning (ICML) , 2017.[43] M. V olpp, L. P . Fr ̈ohlich, K. Fischer, A. Doerr, S. Falkner, F. Hutter, and C. Daniel. Meta-learningacquisition functions for transfer learning in bayesian optimization. In Proceedings of the InternationalConference on Learning Representations (ICLR) , 2020.[44] A. Z. Ren, B. Govil, T.-Y . Yang, K. Narasimhan, and A. Majumdar. Leveraging Language forAccelerated Learning of Tool Manipulation. In Proceedings of the Conference on Robot Learning(CoRL) , 2022.[45] L. Johannsmeier, M. Gerchow, and S. Haddadin. A framework for robot manipulation: Skillformalism, meta learning and adaptive control. In Proceedings of the IEEE International Conferenceon Robotics and Automation (ICRA) , 2019.[46] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-baseddeep reinforcement learning with model-free fine-tuning. In Proceedings of the IEEE InternationalConference on Robotics and Automation (ICRA) , 2018.[47] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adversarial inverse reinforcement learning.InProceedings of the International Conference on Learning Representations (ICLR) , 2018.[48] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep q-learning with model-basedacceleration. 
In Proceedings of the International Conference on Machine Learning (ICML) , 2016.11AppendixA1 Extended Related WorkSys-ID domain adaptation. Inspired by classical work in Sys-ID [ 14,15], there has been a popular lineof work identifying simulation parameters that match the robot and environment dynamics in the realenvironment before task policy training. BayesSim [ 6] and follow-up work [ 16,17] applies Bayesianinference to iteratively search for a posterior distribution of the simulation parameters based on simulationand real-world trajectories. The inference problem has also been formulated using RL to minimizetrajectory discrepancies [ 30]. A different approach [ 31,32,33] learns a residual model of dynamics (oftenparameterized with a neural network) to match simulation or an ideal physics model with reality. However,all these methods consider relatively well-modeled environment parameterizations such as object mass orfriction coefficient during planar contact; Sys-ID approaches have been shown to fail in cases where thesimulation does not closely approximate the real world [ 13,18]. There is also work that avoids inferringthe full dynamics but adapts with a low-dimensional latent representation online [ 34,35,36], but therepresentation is still trained with regression to match dynamics or simulation parameters. Importantly,the Sys-ID approaches highlighted above are all task-agnostic; this can lead to poor performance whentrained task policies are sensitive to mismatches in dynamics between simulation and reality. Chi et al.[18] address the issue by using simulation to predict changes to trajectories from changes in actions asan implicit policy, but it requires the environment to be resettable, while AdaptSim works with randomlyinitialized object states.Task-driven domain adaptation. AdaptSim better fits within a different line of work that aims to findsimulation parameters that maximize the task reward in target environments. Muratore et al. [ 19] applyBayesian Optimization (BO) to optimize parameters such as pendulum pole mass and joint dampingcoefficient in a real pendulum swing-up task. Other work focus on adapting to simulated domains only[20,21,22]. One major drawback of these methods is that they require a large number of rollouts in targetenvironments (e.g., 700 in [ 19]), which is very time-consuming for many tasks requiring human reset.AdaptSim meta-learns adaptation strategies in simulation and requires only a few real rollouts for inference(e.g.,20 in our pushing experiments). Liang et al. [ 37] apply the same task-driven objective to learn anexploration policy in manipulation tasks, but the task policy is synthesized using estimated simulationparameters via Sys-ID. Jin et al. [ 38] applies task-drived reduced-order model for dexterous manipulationtasks, but again the model is identified with Sys-ID and no vision-based control is involved. Ren et al.[39] search for adversarial environments ( e.g.,objects) given the current task performance to robustify thepolicy, but unlike AdaptSim, the adversarial metric is measured in simulated domain only without real data.Learn to search/optimize. Our work involves learning optimization strategies through meta-learningacross a distribution of relevant problems, allowing for customization to the specific setting and increasedsample efficiency [ 40,41]. Chen et al. [ 42] meta-learns an RNN optimizer for black-box optimization.V olpp et al. 
[43] meta-learns the acquisition function in BO with RL; it is able to learn new exploration strategies for black-box optimization and tuning controller gains in sim-to-real transfer. Meta RL trains the task policy directly to optimize performance in new environments [44, 45, 46] — AdaptSim applies meta RL to optimize simulation parameters instead.

A2 Additional details on approach

A2.1 Sparse adaptation reward

In practice, we are only concerned with the reward if it reaches some minimum threshold — a bad task policy is not useful. Thus we use a sparse-reward version of Eq. (2),

$$\mathbb{E}_{E_s \sim U_\Omega}\, \mathbb{E}_{E_0 \sim U_P} \sum_{i=0}^{I} \gamma^{i}\, \mathbf{1}\!\left[ R(\pi^{*}_{E_i}; E_s) \geq \bar{R} \right] R(\pi^{*}_{E_i}; E_s), \qquad \text{(A1)}$$

where 1[·] is the indicator function and R̄ is the sparse-reward threshold. Using a sparse reward also discourages the adaptation policy from being myopic and getting trapped at a sub-optimal solution, especially since we use a relatively small I (e.g., 5-10) in order to minimize the amount of real data, and use a small discount factor γ (=0.9).

A2.2 Task policy reuse across parameter distributions

Algorithm 1 requires training the task policy for each E, which can be expensive with the two manipulation tasks. Our intuition is that we can share the task policy between parameter distributions of close distance, with the following heuristics:
• Record the total budget (i.e., number of trajectories), and j, the number of simulation parameter distributions that a task policy has been trained with.
• Define a distance D(·,·) between two parameter distributions, such as the L2 distance between the means. If E_i is within a threshold D from a previously seen distribution, re-use the task policy. If the policy is already trained with M_max budget in total, do not train again; otherwise train with max(M_min, α^{j−1} M) budget, where α < 1 and M is the budget for training the policy for the first time.
• If the nearby parameter distribution re-uses a task policy, do not re-use the same policy again. This prevents the same task policy from being used for too many E.

Remark 1. Re-using task policies between parameter distributions makes the reward R depend on the adaptation history, as π*_E depends on the previous E that are used for training. We choose not to model this history dependency in f, as the reward should be largely dominated by the current E.

A3 Additional details of adaptation policies

Hyperparameters. Table A1 shows the hyperparameters used for the adaptation policy training in Phase 1, including those defining the heuristics for re-using task policies among simulation parameter distributions. We generally use a smaller adaptation step δ for a smaller dimensional Ω.

Parameter                          Pendulum   Pushing   Scooping
Total adaptation steps, K          1e4        1e4       1e4
Adaptation horizon, I              10         8         8
Adaptation step size, δ            0.10       0.15      0.15
Adaptation discount factor, γ      0.9        0.9       0.9
Sparse reward threshold, R̄         0.95       0.8       0.5
Task policy reuse threshold, D     -          0.16      0.16
Task policy max budget, M_max      -          3e4       4e3
Task policy budget discount, α     -          0.9       0.9
Task policy init budget, M         -          1e4       1.2e3
Table A1: Hyperparameters used in adaptation policy training for the three tasks.

Trajectory observations. We detail the trajectory observation (as input to the adaptation policy) used in the three tasks.
• Pendulum task: each trial is 2.5 seconds long, and we use 12 evenly spaced points along the trajectories of the two joints, and thus each trajectory is 24 dimensional. For AdaptSim-State, SysID-Bayes-State, and SysID-Bayes-Point, again 12 points are used but sampled from the last 0.5 second only.
One trajectory isused at each adaptation iteration — the trajectory input to the adaptation policy is 24 dimensional.•Pushing task: each trial is 1.3 seconds long, and we use 6 evenly spaced points along the X-Y trajectoryof the bottle, and thus each trajectory is also 12 dimensional. For AdaptSim-State, SysID-Bayes-State,and SysID-Bayes-Point, only the final X-Y position of the bottle is used. Two trajectories are used ateach adaptation iteration — the trajectory input to the adaptation policy is 24 dimensional.13•Scooping task: each trial is 1 second long, and we use X-Y position of the food piece at the time step[0,0.2,0.3,0.4,0.5,0.6,0.8,1.0]s (more sampling around the initial contact between the spatula and thepiece), and thus each trajectory is 16 dimensional. Two trajectories are used at each adaptation iteration— the trajectory input to the adaptation policy is 32 dimensional.In real experiments, we track the bottle position in the pushing task using 3D point cloud information froma Azure Kinect RGB-D camera, which we find accurate. In the scooping task, the food pieces are toosmall and thin to be reliably tracked with point cloud, and thus we resort to extracting the contours fromthe RGB image and then finding the corresponding depth values at the same pixels in the depth image.During fast contact there can be motion blur around the food piece, and thus we add Gaussian noise with0.2cm mean for X position and zero mean for Y position, and 0.2cm covariance for both, to the points inthe ground-truth trajectories in simulation. We use positive mean in X since the motion blur tends to occurin the forward direction.A4 Additional details of the task setup and task policiesTrajectory observation First, we remove the action sequence from the task-policy trajectory and keepthe state sequence only. Since the dynamics in real environments can be OOD, in order to achieve similarhigh-reward states as in simulated environments, the robot would need to use some actions not seen duringtraining (or not seen for the particular state), hindering the adaptation policy to generalize if action sequencewere included in the task policy trajectory. We assume that the task-relevant state sequence is covered byTif the task policy performs reasonably well in the real environment. This choice is also present in thestate-only inverse RL literature [ 47] that addresses train-test dynamics mismatch. See Fig. A4 and relateddiscussions in Sec. 6.3.A4.1 Dynamic pushing of a bottleTrajectory parameterization. Here we detail the trajectory of the end-effector pusher designed for thetask (Fig. A1). The trajectory is parameterized with two parameters: (1) planar pushing angle, which isthe yaw orientation of the pusher relative to the forward direction that controls the direction of the bottlebeing pushed, and (2) forward speed (of the end-effector), in the direction specified by the pushing angle.The pushing angle varies between −0.3radand0.3rad, and the forward speed varies between 0.4m/sand0.8m/s. We find 0.8m/sroughly the upper speed limit of the Franka Panda arm used. The pusher alsopitches upwards during the motion and the speed is fixed to 0.8rad/s. We design such trajectories tomaximize the pushing distance at the hardware limit.Initial and goal states. The bottle is placed at the fixed location ( x=0.56m,y=0, relative to the arm base)on the table before the trial starts. 
The goal location is sampled from a region where the X location isbetween 0.7 and 1.0m and Y location is at most 10 degrees off from the centerline (Fig. A1 top-right). Thepatch, a 10cm by 10cm square, is placed at x=0.75m with its center (lateral position is varied as one ofthe simulation parameter).Task policy parameterization. The task policy is parameterized using a Normalized Advantage Function(NAF) [ 48] that allows efficient Q Learning with continuous action output by restricting the Q value asa quadratic function of the action, and thus the action that maximizes the Q value can be found exactlywithout sampling. In this task, it maps the desired 2D goal location of the bottle to the two action parameters,planar pushing angle and forward speed. The policy is open-loop — the actions are determined before thetrial starts and there is no feedback using camera observations.Hardware setup. A 3D-printed, plate-like pusher is mounted at the end-effector instead of the paralle-jawgripper in both simulation and reality. We also wrap elastic rubber bands around the bottom of the pusherand contact regions of the bottle to induce more elastic collision, which we find increases the slidingdistance of the bottle.14Figure A1: Visualization of the pushing trajectory and goal locations in the Drake simulator. There are two actionparameters: (1) forward speed (of the end-effector) and (2) planar pushing angle ( i.e.,yaw orientation of the end-effector). The patch is not visualized.Figure A2: Visualization of the scooping trajectory and initial positions of the food piece in the Drake simulator. Thereare three action parameters: (1) initial distance (between the spatula and food piece), (2) initial pitch angle (of thespatula from the table), and (3) pitch rate (of the end-effector at time step t=0.25).A4.2 Dynamic scooping of food piecesTrajectory parameterization. Here we detail the trajectory of the end-effector with the spatula designedfor the task (Fig. A2). The end-effector velocity trajectory is generated using cubic spline with valuesclamped at five timesteps. The trajectory only varies in the X and pitch direction (in the world frame),while remaining zero in the other directions. The only value defining the trajectory that the task policylearns is the pitch rate, which is the pitch speed at the time t=0.25s and varies between −0.2rad/s and0.2rad/s. A positive pitch rate means the spatula lifting off the table late, while a negative one means liftingoff early (see the effects in Fig. 8). The other two values that the task policy outputs are the initial pitchangle of the spatula from the table (varying from 2to10degrees), and the initial distance between thespatula and the food piece (varying between 0.5cm to 2cm). Generally a higher initial pitch angle can helpscoop under food pieces with flat bottom, and a smaller angle helps scoop under ellipsoidal shapes. Wedesign such trajectories after extensive testing with food pieces of diverse geometric shapes and physicalproperties in both simulation and reality.Initial states. The food piece is randomly placed in a box area of 8x6cm in front of the spatula; the initialdistance is relative to the initial food piece location.Task policy parameterization. The task policy is parameterized using a NAF again. In this task, it mapsthe initial 2D position of the food piece to the three action parameters: pitch rate, initial pitch angle, andinitial distance.Hardware setup. 
We use the commercially available OXO Nylon Square Turner² as the spatula used for scooping. It has a relatively thin edge (about 1.2 mm) that helps scoop under thin pieces. A box-like, 3D-printed adapter with high-friction tape is mounted on the handle to help the parallel-jaw gripper grasp the spatula firmly. The exact 3D model of the spatula with the adapter is designed and used in the Drake simulator; the deformation effect as it bends against the table is not modeled in simulation.

²link: https://www.amazon.com/OXO-11107900LOW-Grips-Square-Turner/dp/B003L0OOSU

A5 Additional details of experiments

A5.1 Simulated adaptation

Table A2 shows the simulation parameters used in different simulated target environments for the three tasks (results shown in Table 3).

Task      Parameter   WD     OOD-1   OOD-2   OOD-3   OOD-4   Range
Pendulum  m1          1.8    1.8     0.5     1.2     0.4     [1,2]
          m2          1.2    0.3     1.8     1.8     2.6     [1,2]
          b1          1.5    1.5     1.5     10.0    1.0     [1,2]
          b2          1.5    1.5     1.5     10.0    2.0     [1,2]
Pushing   μ           0.1    0.25    0.05    0.15    0.30    [0.05,0.2]
          e           1e5    5e4     1e5     5e6     1e5     [1e4,1e6]
          μp          0.6    0.1     0.9     0.1     0.15    [0.2,0.8]
          yp          0.05   -0.1    0.05    -0.15   0.1     [-0.1,0.1]
Scooping  μ           0.30   0.45    0.20    0.30    0.40    [0.25,0.4]
          e           5e4    1e4     5e4     1e6     1e5     [1e4,5e5]
          g           1      0       1       0       2       {0,1}
          h           2.0    1.4     2.2     2.8     1.9     [1.5,2.5]
Table A2: Simulation parameters used in different simulated target environments for the three tasks. OOD parameters are those that fall outside the range used in adaptation policy training (Range column). For g in the scooping task, 0 stands for ellipsoid, 1 for cylinder, and 2 for box.

A5.2 Real adaptation

In Fig. A7 and Fig. A9 we demonstrate additional visualizations of the pushing and scooping results with AdaptSim.

A5.3 Additional studies

Choice of the simulation parameter space. To answer Q3, we perform a sensitivity analysis by fixing the target environment (OOD-1 in the double pendulum task) and varying the simulation parameter space. In OOD-1, the OOD parameter is m2=0.3 while the range in Ω is [1,2]. Fig. A3 shows the reward achieved after adaptation for AdaptSim and the two Sys-ID baselines, as the range shifts further away from m2=0.3 to [1.1,2.1], [1.2,2.2], and [1.3,2.3]. Sys-ID performance degrades rapidly, while AdaptSim is more robust.

Figure A3: Adaptation results for AdaptSim and Sys-ID baselines in the OOD-1 setting of the double pendulum task, with different m2 ranges in Ω while m2=0.3 in the target environment.

Pitfalls of Sys-ID approaches. Fig. A4 demonstrates the dynamics mismatch between simulation and reality, which illustrates the pitfall of Sys-ID approaches. We plot a set of bottle trajectories from randomly sampled simulation parameters from Ω with a fixed robot action. We also plot the trajectories of the Heavy bottle being pushed with the same action in reality. There are segments of the real trajectories that are not well matched by the simulated ones, and a slight mismatch can lead to diverging final states (and hence different task rewards).

Figure A4: Comparison of trajectories from the simulation domain (green, simulated with randomly sampled simulation parameter settings) and from the Heavy bottle in reality (red), with the same robot action applied. The real dynamics can be OOD from simulation (black boxes) while the final position of the bottle can be WD.

Trade-off between real data budget and task performance convergence. In Sec. 4.2 we introduce N, the number of initial simulation parameter distributions that are sampled at the beginning of Phase 2 and then adapted independently. There is a trade-off between the real data budget (linear in N) and convergence of task performance.
Adapting more simulation parameter distributions simultaneously can potentially help the task performance converge faster but also requires more real data. Fig. A5 shows the effect with the Light bottle in the pushing task. We vary N from 1 to 4 — each simulation parameter distribution takes 2 trajectories at each iteration. N=1 shows slow and also worse asymptotic convergence, which shows that the parameter distribution can be trapped in a low-reward regime. N=2 performs the best with the fastest convergence in terms of the number of real trajectories used. Using a higher N shows slower convergence. Note that the convergence also depends on the dimension of the simulation parameter space Ω — we expect N > 2 is needed for the best convergence rate once the dimension increases from the 4 used in the pushing task.

Figure A5: Task performance convergence with respect to the number of real trajectories used with varying N, the number of simulation parameter distributions adapting simultaneously in Phase 2, with the Light bottle in the pushing task.

Sensitivity analysis on adaptation step size. The adaptation step size δ can affect the task performance convergence too — δ being too low can cause slow convergence, while δ being too high can prevent convergence since the simulation parameter distribution can "overshoot" the optimal one by a large margin. Fig. A6 shows the effect of the adaptation step size ranging from 0.05 to 0.20 in the OOD-1 setting of the double pendulum task. δ=0.10 performs the best, while δ=0.05 shows slower convergence. δ=0.15 also achieves similar asymptotic performance, but the reward is less stable during adaptation, while with δ=0.20 the reward does not converge at all.

Comparison of simulation runtime. Compared to Sys-ID baselines, AdaptSim requires significantly longer simulation runtime for training the adaptation policy in Phase 1. For example, SysID-Bayes uses roughly 6 hours of simulation walltime to perform 10 iterations of adaptation in the scooping task, while AdaptSim would take 36 hours for Phase 1 and 30 minutes for Phase 2 (i.e., 3 minutes per iteration), using the same computation setup. However, we re-use the same adaptation policy for different food pieces in the scooping task, which amortizes the simulation cost.

Figure A6: Normalized reward at each adaptation iteration using different adaptation step sizes δ, in the OOD-1 setting of the pendulum task.
Figure A7: Adaptation results of the pushing task with two different target locations (yellow cross, top and bottom rows) over iterations. The right figure shows the inferred simulation parameter distribution (mean only).
Figure A8: AdaptSim fails to synthesize a task policy for scooping up a Brussels sprout. We consider such an environment extremely OOD from the simulation domain.
Figure A9: Adaptation results of scooping up (top) chocolate raisins, (middle) a mushroom slice, and (bottom) an Oreo cookie with AdaptSim.
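To make Eq. (A1) of Sec. A2.1 concrete, the discounted sparse adaptation return for one target environment can be computed as below. This is a minimal illustrative sketch with our own function and variable names, not the AdaptSim implementation; it assumes the per-iteration task rewards R(π*_{E_i}; E_s) have already been evaluated in the target environment, and the threshold and discount follow Table A1 for the pendulum task.

```python
from typing import Sequence


def sparse_adaptation_return(task_rewards: Sequence[float],
                             reward_threshold: float = 0.95,
                             gamma: float = 0.9) -> float:
    """Discounted sparse adaptation objective, cf. Eq. (A1).

    task_rewards[i] plays the role of R(pi*_{E_i}; E_s): the reward of the task
    policy trained on the i-th adapted parameter distribution, evaluated in the
    target environment. Rewards below the sparse threshold R_bar contribute
    nothing; the remaining ones are discounted by gamma**i.
    """
    total = 0.0
    for i, r in enumerate(task_rewards):
        indicator = 1.0 if r >= reward_threshold else 0.0
        total += (gamma ** i) * indicator * r
    return total


# Example: a six-iteration adaptation run whose task reward improves over time.
rewards_over_iterations = [0.2, 0.6, 0.9, 0.96, 0.97, 0.99]
print(sparse_adaptation_return(rewards_over_iterations))
```

In the full objective, this quantity is additionally averaged over sampled target environments E_s and initial parameter distributions E_0, as in Eq. (A1).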
QG_ERxtDAP- | Curiosity-Driven Learning ofJoint Locomotion and Manipulation TasksClemens Schwarke∗, Victor Klemm∗, Matthijs van der Boon,Marko Bjelonic, and Marco HutterRobotic Systems Lab, ETH Zurich, Switzerland{cschwarke, vklemm, mvander, markob, mahutter }@ethz.chAbstract: Learning complex locomotion and manipulation tasks presents signif-icant challenges, often requiring extensive engineering of, e.g., reward functionsor curricula to provide meaningful feedback to the Reinforcement Learning (RL)algorithm. This paper proposes an intrinsically motivated RL approach to reducetask-specific engineering. The desired task is encoded in a single sparse reward,i.e., a reward of “+1” is given if the task is achieved. Intrinsic motivation enableslearning by guiding exploration toward the sparse reward signal. Specifically, weadapt the idea of Random Network Distillation (RND) to the robotics domainto learn holistic motion control policies involving simultaneous locomotion andmanipulation. We investigate opening doors as an exemplary task for robotic ap-plications. A second task involving package manipulation from a table to a binhighlights the generalization capabilities of the presented approach. Finally, theresulting RL policies are executed in real-world experiments on a wheeled-leggedrobot in biped mode. We experienced no failure in our experiments, which con-sisted of opening push doors (over 15 times in a row) and manipulating packages(over 5 times in a row).Keywords: Curiosity, Reinforcement Learning, Wheeled-Legged Robots1 IntroductionFigure 1: A wheeled-legged robot picking upa package and opening a door while balanc-ing on two wheels. Motions can be seen athttps://youtu.be/Qob2k ldLuw.Recent advancements in Reinforcement Learn-ing (RL) have propelled legged and wheeled-legged robots beyond the confines of the lab,empowering them to fulfill a broad range ofpractical applications [1, 2, 3, 4, 5, 6]. Nev-ertheless, tasks in environments designed forhuman interaction remain challenging as theyfrequently require coordination between loco-motion and manipulation. Research often aimsto reduce task complexity by manually dividingthe problem into multiple stages [7]. However,this approach lacks the exploration of holisticsolutions that consider the interplay betweendifferent stages. Solving these problems end-to-end with minimal task-specific engineering,i.e., without including a variety of dense rewardterms that need to be tuned extensively, remainsan open challenge.∗Shared first authorship.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.This work proposes an RL approach that solves the task as a whole with little need for task-specificengineering. We adopt a sparse reward setting, rewarding “+1” when the desired task is achieved.Although simple in its formulation, the discovery of sparse rewards is in general challenging andrequires strategies for the exploration of the environment. Because the reward only exists in asmall fraction of the state space, random exploration induced by, e.g., epsilon-greedy policies isnot sufficient to pick up the reward signal in reasonable time [8]. In our work, intrinsic motivationis employed to guide the agent. For the first time, the concept of Random Network Distillation(RND) [9], which models intrinsic motivation, has been successfully applied and validated in areal-world robotic task, as shown in Fig. 1. Our main contributions can be summarized as follows:1. 
A curiosity-driven sparse reward RL approach for learning end-to-end locomotion and manipulation tasks without laborious, task-specific engineering (Section 3)
2. The notion of curiosity states as guiding mechanisms, allowing curiosity to be focused on non-directly observable states while maintaining simple deployment (Section 2.2)
3. An analysis of emerging learning behaviors (Section 4.1)
4. Successful real-world validation of RL policies on a wheeled-legged robot in biped mode [10], by repeatedly opening a door and manipulating a package (Section 4.2)

1.1 Mechanisms to Guide Exploration

We group existing approaches that employ guided exploration to improve learning performance over random exploration into three main categories. Expert Demonstrations can be an effective tool to teach a desired skill [11, 12, 10, 13], but require predominantly hand-crafted demonstrations. Curriculum Learning [14, 15, 16, 17, 18] involves gradually increasing task difficulty during training, but the generation and efficient scheduling of intermediate tasks are still considered unsolved and subject to ongoing research [17, 19, 20]. Intrinsic Motivation, more specifically curiosity, denotes the ability to learn without external rewards for the pure sake of knowledge gain. Integrating this mechanism into our learning algorithms holds great promise, particularly due to its task-agnostic nature, which distinguishes it from expert demonstrations or curriculum learning.

Past research has explored three main approaches to modeling intrinsic motivation. In an initial endeavor to incorporate curiosity into an RL algorithm, [21] introduced a reward based on the Euclidean distance between the prediction of a model learning the environmental forward dynamics and the observed transition, effectively rewarding surprise. The concept of measuring novelty using prediction error from a dynamics model raises a significant concern: it tends to favor stochastic transitions [22], as they are difficult or even impossible to predict accurately. To address this challenge, [23] adopts a different approach by predicting a state embedding instead of the complete world state. Another line of work focuses on estimating and rewarding learning progress within specific regions of the state space [24, 25, 26]. However, measuring learning progress in high dimensional continuous state spaces remains computationally infeasible [23]. Count-based exploration methods reward state novelty directly by keeping track of the number of visits for each state and prioritizing less visited states [27]. To tackle exploration in continuous state spaces, [28] proposes pseudo-counts, a generalization of count-based exploration methods. Burda et al. [9] propose RND, a method based on predicting information about the current state to measure state novelty. They train a prediction model alongside the RL agent in a supervised fashion, improving its accuracy for visited states during the learning process. The prediction error can then be utilized as an intrinsic reward signal, where familiar states yield more accurate predictions compared to less visited or unvisited states.
Given our focus on loco-manipulation tasks characterized by continuous state spaces and potentially complex task dynamics, we consider the third option as the most suitable.

1.2 Loco-Manipulation

In recent years, approaches to solving complex loco-manipulation tasks are dominated by RL, e.g., [6, 29, 30], with some works fully or partly relying on model predictive control (MPC) [31, 32]. A common design choice to break down the complexity of loco-manipulation is to divide the problem into a locomotion and a manipulation task and to control them individually [32, 33]. This introduces new challenges in engineering the communication between both controllers and leads to sub-optimal behaviors as synergies between different body parts can not be fully exploited [34]. Other works focus on manipulating objects through locomotion, e.g., pushing an object through walking forward [31, 35, 36]. More dynamic manipulation tasks are investigated by a different body of work that studies soccer skills [30, 37, 38, 6]. Many of these works combine multiple low-level skills into a more capable controller, mostly through a hierarchical control framework and a skill library [30, 38, 35, 37]. This could be a future research direction to combine the controllers proposed in this work. However, little work exists on how to leverage intrinsic motivation to learn complex loco-manipulation tasks in a lean, task-independent framework.

In this study, the primary task under examination is opening doors. A quadrupedal robot by Boston Dynamics equipped with an arm has demonstrated impressive performance in the task of opening doors [39]. However, limited information is available regarding the specific approach employed, only that it is model-based. Other works divide the door opening task into sub-tasks that can be solved in sequence [40, 7]. This requires task-specific modeling and tailoring of the control scheme, which can be identified as a common shortcoming in many of the mentioned works. In the following, we aim for a more holistic approach.

2 Curiosity Formulation

Structured exploration is a crucial factor for successful learning in sparse reward settings. RND [9] offers an intuitive and computationally efficient approach to intrinsic motivation, while the curiosity state proposed in this work focuses exploration.

2.1 Random Network Distillation

The RND module consists of two function approximators. A randomly initialized target network f encodes states s ∈ S into an unknown embedding f(s). The target stays fixed during the whole training process. A predictor network f̂ estimates the target's embedding, given the same input s as the target. The predictor network is trained alongside the RL agent on the visited states in a supervised fashion with a Mean Squared Error (MSE) loss. The prediction error, i.e., the difference between the outputs of both networks, serves as the intrinsic reward signal defined by

$$r_{\text{intrinsic}} = \left\lVert f(s) - \hat{f}(s) \right\rVert^{2}. \qquad \text{(1)}$$

Intuitively, familiar state regions yield small prediction errors as the predictor is already trained on similar states. Not yet visited state regions lead to large errors and therefore large intrinsic rewards. As the agent visits an unfamiliar region repeatedly, the prediction error decreases.
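To make the mechanism concrete, the following is a minimal PyTorch sketch of an RND module of this kind. The network sizes mirror the small MLPs described in Sec. 3.1 (one and two hidden layers of 5 units, one-dimensional output), while the activation, optimizer, and class interface are illustrative choices of ours rather than the exact implementation used in this work.

```python
import torch
import torch.nn as nn


class RND(nn.Module):
    """Minimal Random Network Distillation module.

    The target network is randomly initialized and frozen; the predictor is
    trained with an MSE loss on visited states. The per-state squared
    prediction error serves as the intrinsic reward, cf. Eq. (1).
    """

    def __init__(self, state_dim: int, hidden_dim: int = 5, embed_dim: int = 1):
        super().__init__()
        self.target = nn.Sequential(                      # 1 hidden layer
            nn.Linear(state_dim, hidden_dim), nn.ELU(),
            nn.Linear(hidden_dim, embed_dim),
        )
        self.predictor = nn.Sequential(                   # 2 hidden layers
            nn.Linear(state_dim, hidden_dim), nn.ELU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ELU(),
            nn.Linear(hidden_dim, embed_dim),
        )
        for p in self.target.parameters():                # the target stays fixed
            p.requires_grad_(False)
        self.optimizer = torch.optim.Adam(self.predictor.parameters(), lr=1e-3)

    @torch.no_grad()
    def intrinsic_reward(self, states: torch.Tensor) -> torch.Tensor:
        """Squared prediction error per state; large for unfamiliar states."""
        return (self.target(states) - self.predictor(states)).pow(2).sum(dim=-1)

    def update(self, states: torch.Tensor) -> float:
        """Supervised predictor update on a batch of visited states."""
        loss = (self.target(states) - self.predictor(states)).pow(2).mean()
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return loss.item()


# Example: reward and update on a batch of 5-dimensional (curiosity) states.
rnd = RND(state_dim=5)
batch = torch.randn(32, 5)
rewards = rnd.intrinsic_reward(batch)
rnd.update(batch)
```

Because only the predictor is updated, the squared error stays small for states similar to those already passed to update() and large elsewhere, which is what turns it into a novelty signal.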
2.2 Curiosity State

While [9] applies the RND module directly to the agent's observations o instead of the state s, we propose a more flexible formulation where a curiosity state s_c = φ(s) is passed to the RND module to stay independent of the observations. The mapping φ can be freely chosen, as long as the state s can be implicitly inferred from the environment's feedback. This way, curiosity can be focused on the desired quantities, even though they might not be directly observable during deployment of the motion policy. The flexibility introduced thereby allows leveraging the curiosity module in simulation while keeping deployment simple, thus showing a practical adaptation of the RND formulation to the robotics domain. While this formulation allows for arbitrary mappings φ, it suffices to consider a φ that selects a subset of the full state without further modification. A sketch of the implemented RND module is shown in Fig. 2.

Figure 2: Random Network Distillation (RND) module. A mapping φ encodes the full system state s into the curiosity state s_c. The target network with fixed weights (blue) embeds the curiosity state s_c into a latent representation. The predictor network (green) is updated during training and attempts to match the target's output. The difference between outputs serves as the intrinsic reward signal.

3 Task-Specific RL Formulation

We illustrate an effective sparse reward formulation on the exemplary task of door opening, detailing the chosen rewards, observations, and the introduced curiosity state. Subsequently, we adapt the formulation to the task of package manipulation to demonstrate the straightforward generalization of the proposed approach to different tasks.

3.1 Rewards

The chosen reward function consists of the three reward terms r = r_intrinsic + r_task + r_shaping. The first term r_intrinsic is defined by Eq. (1) and motivates the agent to explore the relevant part of the state space. In this work, we use Multi Layer Perceptrons (MLPs) with 1 and 2 hidden layers with 5 neurons and a one-dimensional output for the target and predictor network, respectively. The second term r_task is the only task-specific reward. For the task of door opening, it is defined intuitively by repeatedly giving a reward of +1 while the task is achieved, i.e., when the door hinge angle q_hinge is in a desired interval, and no reward otherwise:

$$r_{\text{task}} = \begin{cases} 1, & \text{if } 1.5 < q_{\text{hinge}} < 2.1 \\ 0, & \text{otherwise.} \end{cases} \qquad \text{(2)}$$

The inclusion of the third term r_shaping serves as an incentive for the robot to maintain a standing posture and adopt more refined, less forceful strategies. This emphasis on smoother and less aggressive policies is crucial for achieving effective sim-to-real deployment within the realm of RL control [15, 41]. For a comprehensive breakdown of the rewards and their respective weights, please refer to Appendix A.1.
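Putting these terms together, the per-step reward for the door task can be sketched as below. The way the curiosity weight enters (the c_w of Fig. 4), as well as all names and signatures, are our illustrative reading rather than the exact reward code; r_shaping stands for the standing and smoothness terms listed in Appendix A.1.

```python
def door_task_reward(q_hinge: float) -> float:
    """Sparse task reward of Eq. (2): +1 while the door hinge angle (rad)
    lies in the desired interval, 0 otherwise."""
    return 1.0 if 1.5 < q_hinge < 2.1 else 0.0


def total_reward(q_hinge: float,
                 r_intrinsic: float,
                 r_shaping: float,
                 curiosity_weight: float = 100.0) -> float:
    """Per-step reward r combining intrinsic, task, and shaping terms.

    r_intrinsic comes from the RND module (Eq. (1)), r_shaping collects the
    standing/smoothness terms of Appendix A.1, and the weighting of the
    intrinsic term follows the c_w notation of Fig. 4.
    """
    return curiosity_weight * r_intrinsic + door_task_reward(q_hinge) + r_shaping


# Example: door hinge inside the rewarded interval, small residual curiosity.
print(total_reward(q_hinge=1.7, r_intrinsic=0.02, r_shaping=-0.1))
```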
3.2 Observations

In terms of observations, our approach focuses on minimizing task dependency and avoids the use of complex exteroceptive information. The only observation that pertains specifically to the door is the relative position CrCH of the door handle origin in the robot's camera frame. We freeze the initial door handle position and provide it as an additional input CrCH,init to the policy to determine the degree to which the door is open. We provide a list of standard observations that are not task-specific and have been covered in previous works [10, 4] in Appendix A.1. We note that the observations are subject to empirical normalization for proper scaling of input variables.

3.3 Curiosity Implementation

An advantage of the proposed curiosity state s_c is its independence of the robot's observations. Since the RND module is only needed for training, it is possible to include quantities that are easy to attain in simulation but hard to estimate in reality and thus unfit as observations. To focus the agent's curiosity on the door, we include the door hinge and handle angles q_door as well as their angular velocities q̇_door in the curiosity state. Adding the distance between the robot and the door handle d_CH induces faster interaction with the door. To avoid an intrinsic reward signal for moving too far away from the door, the distance is clipped according to d_CH = min(‖r_CH‖₂, 2), and the curiosity state is defined as s_c = [q_door⊤ q̇_door⊤ d_CH]⊤.

3.4 Generalization to Package Manipulation

We show the task independence and generalization capability of the proposed approach by subjecting it to a second task requiring different locomotion and manipulation skills. We choose the exemplary task of package manipulation, i.e., grabbing, moving, and dropping a package, as it involves a freely moving object, as opposed to the fixed-base articulated door. To encode the task, we define a similar sparse task reward r_task, given by

$$r_{\text{task}} = \begin{cases} 1, & \text{if } {}_{I}r_{\text{package}} \in \mathcal{S}_{\text{bin}} \\ 0, & \text{otherwise,} \end{cases} \qquad \text{(3)}$$

where I r_package is the position of the package in the inertial frame I and S_bin includes the space in and above the bin. We include the space above the bin to reduce reward sparsity, as the agent still learns to drop the package. It is, however, not necessary to include this space for successful learning. Instead of observing the door handle position, the agent now observes the relative package, table, and bin positions CrCP, CrCT, and CrCB, respectively. As before, relative positions are observed in the camera frame. The last necessary change concerns the curiosity state. We define it as s_c = [I ṙ_package⊤ d_PB d_CP]⊤, where I ṙ_package is the linear package velocity in the inertial frame. The distances between package and bin d_PB and camera and package d_CP are again clipped.

4 Experimental Validation and Discussion

To validate our approach, we conduct experiments in simulation and the real world. For implementation details please refer to the Appendix. All experiments are conducted with the wheeled-legged quadrupedal robot in Fig. 1, a dynamic robot that can perform hybrid motions between walking and driving. Recently, [10] discovered a bipedal locomotion mode through RL, further increasing the robot's versatility. In bipedal mode, the front legs can serve as arms to manipulate objects.

4.1 Simulation Results

Qualitative examples of the door opening and package-grabbing motions can be seen in the supplementary video.² In Fig. 4, we present quantitative results for all three tasks with different reward settings.
In general, we noticed that there is a divide between runs in which the agent learns to complete a given task and others where the sparse reward is solely triggered through randomness. In rare cases, the sparse reward is discovered but remains small due to shaping rewards that hinder skill discovery. We consider training runs with a final success rate greater than 25% successful to exclude the aforementioned cases. Successful runs show success rates significantly above this threshold, as can be seen in Appendix A.3, where we also provide a more thorough analysis for the case of (out-of-distribution) environment disturbances of various magnitudes. The best-performing policies achieve success rates of 99%, 92%, and 99% for the push door, pull door, and package manipulation tasks, respectively. In the following, we discuss the relevant findings of learning with curiosity.

Figure 3: Snapshot sequences (left to right) of the push door and package manipulation experiments. Opening the door takes 2.5 seconds, grasping and dropping the package 1.5 seconds.
Figure 4: Number of training runs (out of 10 for each task) in which the agent successfully learned to accomplish the given task for different reward formulations and different intrinsic reward weights (denoted as c_w). Compared settings: Sparse Reward (c_w=0), Original RND, Dense Rewards, and Ours with c_w=100, 200, and 300, on the push door, pull door, and package tasks. Every experiment was conducted for random seeds 1-10.

²To enable pull door experiments, we equip the robot with basic hooks attached to the front wheels.

Training Evaluation: The need for guided exploration becomes clear when training with extrinsic reward signals only, as reported in Fig. 4. Indeed, the investigated skills are not learned. Instead, the learning process ends in a local optimum and the agent learns to stand without moving. The added intrinsic reward signal guides the agent toward states that involve manipulating the door or package. An exemplary learning process is shown in Fig. 5 for a push door. The learning process for package manipulation evolves similarly. First, the robot discovers the package and moves it on the table to increase the intrinsic reward signal. After learning to grab the package with both wheels, the robot starts to wiggle the package and moves it closer to the bin. Once the sparse task reward is found, the intrinsic reward decreases as the agent optimizes its behavior to achieve the desired task.

RND Evaluation: Experiments show that the network architecture of the RND module can be kept minimal. While a predictor of the same size as the target should be able to fully approximate the target, choosing a larger predictor leads to more consistent intrinsic reward curves over different training runs. The weight of the intrinsic reward does not require extensive tuning since learning is successful for a large interval, as can be seen in Fig. 4. Normalizing the reward empirically as in [9] does not improve training and prevents the reward from converging to values close to zero. Again, convergence to small values is necessary to model the loss of interest as the agent gets more familiar with the environment. Normalization scales the reward with its standard deviation and thus leads to larger rewards at the end of training. In contrast, normalization of the curiosity state is crucial for a proper decay of the intrinsic reward over the course of training.
The curiosity state includes quantities that can reach high magnitudes like velocities and is therefore subjected to empirical normalization, i.e., it is normalized with a running estimate of its mean and variance. While normalization improves the convergence properties of the intrinsic reward, it violates the theoretical foundation of state visitation-based curiosity. During training, the same state keeps changing its normalized representation due to varying normalization parameters. Different states could have almost identical representations for different points in time and thus also almost identical target embeddings. Even though this distorts the measure for state familiarity, curiosity state normalization does not seem to have a negative influence on exploration and is highly recommended to improve convergence to a small reward signal.

Figure 5: Task-specific reward r_task and intrinsic reward r_intrinsic over the course of learning to open a push door. The mean and standard deviation include 4 successful out of 5 training runs for a curiosity weight of 100 and random seeds 1-5. After learning how to stand (A), the robot starts to play with the door handle and the intrinsic reward increases (B). Next, the robot opens the door slightly while still manipulating the door handle (C). The agent then discovers the sparse door opening reward and starts to optimize toward it (D).

Curiosity State Evaluation: To assess the relevance of the curiosity state notion, we probe the original RND formulation of [9], which applies the curiosity module to the entirety of the observation space. We note that skill discovery is not feasible for the investigated tasks due to two reasons. First, the large state space might cause an unreasonable amount of exploration to properly converge in reasonable time. Second, parts of the state space that would be crucial to explore might not be directly observable, e.g., the door handle angle. Using a curiosity state instead of the observation space allows the use of privileged information which might not be directly observable in the real world, but is available in simulation.

Emerging Behaviors: A key finding of this paper is the sensitivity of the learning process to small changes in the training environment. Changing the seed used to initialize networks or set randomized quantities is enough to alter the resulting policy significantly. RL approaches usually rely on dense task rewards that heuristically steer the agent toward a feasible trajectory. This might bias the agent toward suboptimal behavior, preventing discovery of the sparse reward. We evaluate a naive implementation of a dense reward setup of comparable complexity to our method. For simple tasks like opening push doors, the dense reward setup is able to discover the skill, but for the other, more complex tasks it is not. In contrast to a dense task reward setting, a sparse task reward setting does not bias the agent toward any trajectory. Randomness in the exploration can thus lead to the discovery of different minima and trajectories, especially in contact-rich scenarios. As a result, different behaviors emerge, as shown in Fig. 6, where the robot is holding the door open in a variety of poses. For the task of package manipulation, differences are more subtle.
During task execution,policies mainly differ in the leg configuration and the stepping pattern of the robot.4.2 Real-World ResultsWe demonstrate the effectiveness of the proposed approach by opening a push door over 15 times ina row without a single failure in a lab environment. Learned policies are able to let the robot standand navigate toward the door. The robot reaches for the door handle with its right wheel and attemptsto press it while pushing against the door. As soon as the door is unlocked, the robot swings the dooropen and holds it open while standing still, as shown in the top row of Fig. 3. Sim-to-real transferbenefits from modeling the robot’s Field of View (FOV) in training, explained in Appendix A.2.Since the real robot lacks the added hooks for pulling doors, we leave pull-door experiments tofuture work. For the task of package manipulation (shown in the bottom row of Fig. 3), the robotrobustly grabs the package and drops it into the bin over 5 times in a row. In one instance out ofall tests, the robot lost its grip on the package. However, the robot quickly regrabbed the packageand successfully delivered it. Policies exhibit highly dynamic behavior. If a more gentle motion isdesired, a task reward that does not favor quick task completion would be more appropriate.74.3 LimitationsFigure 6: A wheeled-legged robot holding thedoor open in a variety of poses.Depending on the desired task, the intrinsic mo-tivation approach might come with some trade-offs. On one hand, the sparsity of rewards en-courages the exploration of diverse behaviors(see Fig. 6), which can lead to new solutions,often neglected by more dense approaches. Onthe other hand, it complicates the process ofenforcing a specific behavior. Although denseshaping rewards can be added to enforce thedesired behavior, they limit exploration and re-duce training robustness. This can be seen inFig. 4, where the investigated skills are not dis-covered in every training run.Secondly, since the intrinsic reward signal is not vanishing completely, the agent stays curious aboutits environment even after finding the task reward. In our case, this did not cause unwanted behav-iors. If it does for other applications of RND, the weight of the reward could be scheduled.Lastly, as the predictor network is trained in a supervised fashion, overfitting to specific regions ofthe state space could occur. Although not observed in this work, this might be exploited by the RLagent in a back-and-forth fashion. Subsequent switching to different state space regions would gainintrinsic reward repeatedly. Common methods to avoid overfitting such as regularization or dropoutscould be employed in that case.5 Conclusions and Future WorkWe show that intrinsic motivation for exploration proves successful in simulation, yielding motionpolicies for complex tasks that involve locomotion and manipulation. Our RL method proves togeneralize over multiple tasks requiring different sets of skills. We note that different behaviorsemerge for small changes in the training environment. This phenomenon is explained by the absenceof dense task rewards that bias the agent toward specific trajectories and is inherent to the sparsereward setting.The introduced notion of a curiosity state guides exploration toward the reward in an efficient mannerand allows learning of various tasks with basic task-dependent observations. 
We note that curiositystate normalization is crucial for proper reward convergence during the course of training, eventhough it distorts the measure of state familiarity.To validate the proposed method, trained motion policies are executed on a wheeled-legged robot inbiped mode. Experiments show that the robot is able to successfully and robustly open a push doorin a lab environment, over 15 times in a row without failure, as well as manipulate a package througha grabbing, moving, and dropping motion over 5 times in a row without failure. We conclude thatthe investigated approach proves valuable for the robotics control domain as it enables the learningof highly complex skills with a minimal amount of task-specific engineering.Future research could involve further investigation into the chosen curiosity formulation, the no-tion of penalty-based surprise [42] could allow for gentle policies without the need for task-specificshaping rewards. Other potential continuations include investigating controllers that achieve multi-ple tasks by combining the control policies proposed in this work.AcknowledgmentsThis project has received support from the European Union’s Horizon Europe and H202 FrameworkProgramme under grants agreement No. 101070596 and No. 852044 as well as from the SwissNational Science Foundation (SNSF) as part of project No. 166232.8References[1] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V . Tsounis, V . Koltun, and M. Hutter. Learn-ing agile and dynamic motor skills for legged robots. Science Robotics , 4(26):5872, 2019.[2] J. Lee, J. Hwangbo, and M. Hutter. Robust recovery controller for a quadrupedal robot usingdeep reinforcement learning. arXiv e-prints , 2019.[3] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning robust per-ceptive locomotion for quadrupedal robots in the wild. Science Robotics , 7(62):2822, 2022.[4] J. Lee, M. Bjelonic, and M. Hutter. Control of wheeled-legged quadrupeds using deep rein-forcement learning. In Robotics in Natural Settings: CLAWAR 2022 , pages 119–127. Springer,2022.[5] N. Rudin, D. Hoeller, M. Bjelonic, and M. Hutter. Advanced skills by learning locomotion andlocal navigation end-to-end. In 2022 IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , pages 2497–2503. IEEE, 2022.[6] T. Haarnoja, B. Moran, G. Lever, S. H. Huang, D. Tirumala, M. Wulfmeier, J. Humplik, S. Tun-yasuvunakool, N. Y . Siegel, R. Hafner, et al. Learning agile soccer skills for a bipedal robotwith deep reinforcement learning. arXiv e-prints , 2023.[7] H. Ito, K. Yamamoto, H. Mori, and T. Ogata. Efficient multitask learning with an embodiedpredictive model for door opening and entry with whole-body control. Science Robotics , 7(65):8177, 2022.[8] J. Achiam and S. Sastry. Surprise-based intrinsic motivation for deep reinforcement learning.arXiv e-prints , pages arXiv–1703, 2017.[9] Y . Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation.InSeventh International Conference on Learning Representations , pages 1–17, 2019.[10] E. V ollenweider, M. Bjelonic, V . Klemm, N. Rudin, J. Lee, and M. Hutter. Advanced skillsthrough multiple adversarial motion priors in reinforcement learning. In 2023 IEEE Interna-tional Conference on Robotics and Automation (ICRA) , pages 5120–5126. IEEE, 2023.[11] M. Kalakrishnan, L. Righetti, P. Pastor, and S. Schaal. Learning force control policies forcompliant manipulation. In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. 
, pages 4639–4644.IEEE, 2011.[12] J. Ram ́ırez, W. Yu, and A. Perrusqu ́ıa. Model-free reinforcement learning from expert demon-strations: a survey. Artificial Intelligence Review , pages 1–29, 2022.[13] X. B. Peng, Z. Ma, P. Abbeel, S. Levine, and A. Kanazawa. Amp: Adversarial motion priorsfor stylized physics-based character control. ACM Transactions on Graphics (TOG) , 40(4):1–20, 2021.[14] Y . Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings ofthe 26th annual international conference on machine learning , pages 41–48, 2009.[15] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massivelyparallel deep reinforcement learning. In Conference on Robot Learning , pages 91–100. PMLR,2022.[16] M. Riedmiller, R. Hafner, T. Lampe, M. Neunert, J. Degrave, T. Wiele, V . Mnih, N. Heess,and J. T. Springenberg. Learning by playing solving sparse reward tasks from scratch. InInternational conference on machine learning , pages 4344–4353. PMLR, 2018.[17] C. Florensa, D. Held, M. Wulfmeier, M. Zhang, and P. Abbeel. Reverse curriculum generationfor reinforcement learning. In Conference on robot learning , pages 482–495. PMLR, 2017.9[18] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. To-bin, P. Abbeel, and W. Zaremba. Hindsight experience replay. In Proceedings of the 31stInternational Conference on Neural Information Processing Systems , pages 5055–5065, 2017.[19] C. Florensa, D. Held, X. Geng, and P. Abbeel. Automatic goal generation for reinforcementlearning agents. In International conference on machine learning , pages 1515–1528. PMLR,2018.[20] S. Forestier, R. Portelas, Y . Mollard, and P.-Y . Oudeyer. Intrinsically motivated goal explo-ration processes with automatic curriculum learning. The Journal of Machine Learning Re-search , 23(1):6818–6858, 2022.[21] J. Schmidhuber. A possibility for implementing curiosity and boredom in model-buildingneural controllers. In Proc. of the international conference on simulation of adaptive behavior:From animals to animats , pages 222–227, 1991.[22] J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990–2010). IEEETrans. Auton. Mental Develop. , 2(3):230–247, 2010.[23] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In International conference on machine learning , pages 2778–2787.PMLR, 2017.[24] P.-Y . Oudeyer, F. Kaplan, and V . V . Hafner. Intrinsic motivation systems for autonomousmental development. IEEE Trans. Evol. Comput. , 11(2):265–286, 2007.[25] A. Baranes and P.-Y . Oudeyer. Robust intrinsically motivated exploration and active learning.In2009 IEEE 8th International Conference on Development and Learning , pages 1–6. IEEE,2009.[26] M. Frank, J. Leitner, M. Stollenga, A. F ̈orster, and J. Schmidhuber. Curiosity driven reinforce-ment learning for motion planning on humanoids. Frontiers in neurorobotics , 7:25, 2014.[27] G. Ostrovski, M. G. Bellemare, A. Oord, and R. Munos. Count-based exploration with neuraldensity models. In International conference on machine learning , pages 2721–2730. PMLR,2017.[28] M. G. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos. Unifyingcount-based exploration and intrinsic motivation. In Proceedings of the 30th InternationalConference on Neural Information Processing Systems , pages 1479–1487, 2016.[29] J. Merel, S. Tunyasuvunakool, A. Ahuja, Y . Tassa, L. Hasenclever, V . Pham, T. Erez, G. Wayne,and N. 
Heess. Catch & carry: reusable neural controllers for vision-guided whole-body tasks.ACM Transactions on Graphics (TOG) , 39(4):39, 2020.[30] Y . Ji, G. B. Margolis, and P. Agrawal. Dribblebot: Dynamic legged manipulation in the wild.arXiv preprint arXiv:2304.01159 , 2023.[31] A. Rigo, Y . Chen, S. K. Gupta, and Q. Nguyen. Contact optimization for non-prehensile loco-manipulation via hierarchical model predictive control. In 2023 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 9945–9951. IEEE, 2023.[32] Y . Ma, F. Farshidian, T. Miki, J. Lee, and M. Hutter. Combining learning-based locomotionpolicy with model-based manipulation for legged mobile manipulators. IEEE Robot. Autom.Lett., 7(2):2377–2384, 2022.[33] X. Cheng, A. Kumar, and D. Pathak. Legs as manipulator: Pushing quadrupedal agility beyondlocomotion. arXiv preprint arXiv:2303.11330 , 2023.10[34] Z. Fu, X. Cheng, and D. Pathak. Deep whole-body control: learning a unified policy formanipulation and locomotion. In Conference on Robot Learning , pages 138–149. PMLR,2023.[35] K. N. Kumar, I. Essa, and S. Ha. Cascaded compositional residual learning for complex inter-active behaviors. IEEE Robotics and Automation Letters , 2023.[36] O. Nachum, M. Ahn, H. Ponte, S. S. Gu, and V . Kumar. Multi-agent manipulation via locomo-tion using hierarchical sim2real. In Conference on Robot Learning , pages 110–121. PMLR,2020.[37] Y . Ji, Z. Li, Y . Sun, X. B. Peng, S. Levine, G. Berseth, and K. Sreenath. Hierarchical reinforce-ment learning for precise soccer shooting skills using a quadrupedal robot. In 2022 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 1479–1486. IEEE,2022.[38] X. Huang, Z. Li, Y . Xiang, Y . Ni, Y . Chi, Y . Li, L. Yang, X. B. Peng, and K. Sreenath. Creatinga dynamic quadrupedal robotic goalkeeper with reinforcement learning. arXiv e-prints , pagesarXiv–2210, 2022.[39] Boston Dynamics. Testing robustness, 2018. URL https://youtu.be/aFuA50H9uek .[40] M. Stuede, K. Nuelle, S. Tappe, and T. Ortmaier. Door opening and traversal with an industrialcartesian impedance controlled mobile robot. In Proc. IEEE Int. Conf. Robot. Autom. , pages966–972. IEEE, 2019.[41] J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning quadrupedal locomo-tion over challenging terrain. Science robotics , 5(47):5986, 2020.[42] S. H. Huang, M. Zambelli, J. Kay, M. F. Martins, Y . Tassa, P. M. Pilarski, and R. Hadsell.Learning gentle object manipulation with curiosity-driven deep reinforcement learning. arXive-prints , pages arXiv–1903, 2019.[43] V . Makoviychuk, L. Wawrzyniak, Y . Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin,A. Allshire, A. Handa, et al. Isaac gym: High performance gpu based physics simulation forrobot learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasetsand Benchmarks Track (Round 2) , 2021.[44] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv e-prints , 2017.[45] E. Olson. Apriltag: A robust and flexible visual fiducial system. In Proc. IEEE Int. Conf.Robot. Autom. , pages 3400–3407. IEEE, 2011.11AppendixIn the following, we provide implementation details of the simulation and real-world experiments,as well as further quantitative evaluations of the investigated approach.A.1 Simulation SetupWe train with NVIDIA’s Isaac Gym [43] and employ Proximal Policy Optimization (PPO) [44]. Adetailed description of the used training pipeline can be found in [15]. 
A full training run comprises 2000 policy updates to ensure reward convergence for all investigated tasks. It takes one hour to train a policy on a single NVIDIA RTX 2080 Ti graphics card. Subsequently, we give a detailed description of the training environment.

Reward Formulation: The definitions and weights of the reward terms used for the door and the package task are detailed in Table 1. We decided to add two task-related shaping rewards for the task of package manipulation to improve the behavior for real-world tests. Namely, the agent receives penalties for generating high package velocities and exerting large contact forces onto the table. Notice that this choice does not violate the idea of the proposed approach. Firstly, the added penalties are unrelated to the main task, which is still defined by a single sparse reward. Secondly, our approach first generates unbiased behaviors and can then be augmented for more pleasing results. In contrast, other formulations bias the agent as a byproduct of defining the desired task in a dense fashion. Penalizing table contacts and the package velocity, which is part of the chosen curiosity state, clearly increases the difficulty of discovering the desired skill. To compensate for this, we employ a simple reward scaling scheme. The first 1000 training iterations serve as a discovery phase, as most runs discover the sparse reward in that time. Shaping and standing rewards are active but scaled by a factor of 0.1. The second half of training acts as a shaping phase, where the scaling factor is gradually increased to 1 over the course of 500 iterations.

Observations: The corresponding observation definitions can be found in Table 2. All observations are subject to noise to account for uncertainties and sensor noise in reality. For more detail in that regard, please refer to [15].

Randomization: To improve generalization to different environments, as well as robustness against mismatches between simulation and reality, masses and friction coefficients are randomized as detailed in Table 3. Additionally, the robot spawns in a randomized pose, i.e., initial position, orientation, and joint configuration vary. All randomized properties are sampled from a uniform distribution in the interval $[\mu - \tfrac{\varepsilon}{2},\, \mu + \tfrac{\varepsilon}{2}]$ for every training environment.

Termination Conditions: Episodes terminate after 8 seconds, resetting the environments to their initial state. An episode terminates early if either the robot is in collision, or if the robot's center is too low, i.e., if the robot does not manage to stand and falls. The second condition accelerates training but is not necessary for successful learning. We also terminate an episode if the package is not in contact with either the table or the front wheels, to prevent the agent from directly throwing the package. This termination condition is disabled in close proximity to the bin to allow the dropping of the package into the bin.

Door Model: The considered doors feature standard lever door handles that need to be pressed to a certain degree to unlock the door. In simulation, the handle needs to be pressed once to keep the door unlocked for the rest of the episode. Dynamics of the hinge and handle are modeled as spring-damper systems with a constant torque offset $\tau_{\text{const}}$. This is achieved by applying the torque

$$\tau_{\text{door}} = \tau_{\text{const}} + \operatorname{diag}(k)\, q_{\text{door}} + \operatorname{diag}(d)\, \dot{q}_{\text{door}}, \tag{4}$$

to the door joints. The constants $\tau_{\text{const}}$, $k$, and $d$ are randomized by sampling from a uniform distribution. Measurements on the lab door provide reference values for realistic door dynamics.
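As a concrete illustration of this door model, the sketch below implements Eq. (4) together with the per-environment uniform randomization of its constants, using the means and ranges listed in Table 3. It is a minimal sketch that assumes the two door joints are ordered [hinge, handle]; the function and variable names are ours, not taken from the paper's code.

```python
import numpy as np

def sample_door_params(rng: np.random.Generator):
    """Per-environment randomization of Eq. (4) constants, uniform in [mu - eps/2, mu + eps/2]."""
    def uniform(mu, eps):
        mu, eps = np.asarray(mu, dtype=float), np.asarray(eps, dtype=float)
        return rng.uniform(mu - eps / 2.0, mu + eps / 2.0)

    tau_const = uniform([10.0, 0.0], [10.0, 0.0])  # N*m, assumed order [hinge, handle]
    k = uniform([0.0, 5.0], [0.0, 5.0])            # N*m/rad
    d = uniform([25.0, 1.0], [25.0, 1.0])          # N*m*s/rad
    return tau_const, k, d

def door_torque(q_door, qdot_door, tau_const, k, d):
    """Eq. (4): tau_door = tau_const + diag(k) q_door + diag(d) qdot_door."""
    q = np.asarray(q_door, dtype=float)
    qd = np.asarray(qdot_door, dtype=float)
    return tau_const + np.diag(k) @ q + np.diag(d) @ qd
```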
Further details are provided in Table 3.

Table 1: Rewards
Name | Formula | Weight
Intrinsic reward
RND prediction error | $\|f(s_c) - \hat f(s_c)\|^2$ | 100
Task rewards
Door opened | 1 if $1.5 < q_{\text{hinge}} < 2.1$, 0 otherwise | 1.0
Package delivered | 1 if $^{I}r_{\text{package}} \in S_{\text{bin}}$, 0 otherwise | 1.0
Standing rewards
Height | $^{I}z_{\text{base}}$ | 0.5
Upright base | $\big(\pi/2 - \arccos(^{I}e_{Bx} \cdot {}^{I}e_{Iz})\big) / (\pi/2)$ | 0.5
Straight shoulder joints | $-\|q_{\text{shoulders}}\|^2$ | 0.5
Straight knee joints | $\exp(-\|q_{\text{knees}}\|^2)$ | 0.25
Shaping rewards
Joint torque | $-\|\tau\|^2$ | $1.5\cdot10^{-5}$
Joint acceleration | $-\|\ddot q\|^2$ | $2.5\cdot10^{-7}$
Joint velocity | $-\|\dot q\|^2$ | $2.5\cdot10^{-4}$
Action difference | $-\|a - a_{\text{prev}}\|^2$ | $1.0\cdot10^{-2}$
Table contact force | $-\|F_{c,\text{table}}\|^2$ | $1.0\cdot10^{-5}$
Package velocity | $-\|^{I}\dot r_{\text{package}}\|^2$ | $1.0\cdot10^{-2}$

Table 2: Observations
Robot-related observations
$^{B}\dot r_{\text{base}} \in \mathbb{R}^3$ | Linear base velocity
$^{B}\omega_{\text{base}} \in \mathbb{R}^3$ | Angular base velocity
$^{B}g \in \mathbb{R}^3$ | Projected gravity vector
$q_{\text{legs}} \in \mathbb{R}^{12}$ | Joint configuration without wheels
$o_{\text{hooks}} \in \mathbb{R}^4$ | Hook directions (for pull doors)
$\dot q \in \mathbb{R}^{16}$ | Joint velocity
$a_{\text{prev}} \in \mathbb{R}^{16}$ | Previous actions
Door-related observations
$^{C}r_{CH} \in \mathbb{R}^3$ | Relative door handle position
$^{C}r_{CH,\text{init}} \in \mathbb{R}^3$ | Relative initial door handle position
Package-related observations
$^{C}r_{CP} \in \mathbb{R}^3$ | Relative package position
$^{C}r_{CT} \in \mathbb{R}^3$ | Relative table position
$^{C}r_{CB} \in \mathbb{R}^3$ | Relative bin position

Table 3: Randomization Parameters
Uniformly randomized property | Mean $\mu$ | Range $\varepsilon$ | Unit
Global friction coefficient | 0.75 | 0.75 | -
Robot position (x, y) | 0 | 0.6 | m
Initial robot yaw angle | 0 | 1 | rad
Initial joint angle deviation | 0 | 1 | rad
Added robot mass | 0 | 10 | kg
Package mass | 1.375 | 1.0 | kg
Door torque offset $\tau_{\text{const}}$ | $[10,\,0]^\top$ | $[10,\,0]^\top$ | N m
Door spring coefficient $k$ | $[0,\,5]^\top$ | $[0,\,5]^\top$ | N m/rad
Door damping coefficient $d$ | $[25,\,1]^\top$ | $[25,\,1]^\top$ | N m s/rad

Field of View Simulation: To mimic the perception system of the real robot, we simulate the FOV for egocentric vision, as introduced in simulation experiments in [29], resulting in behaviors that actively direct the robot's gaze. A visual marker, further explained in Section A.2, specifies the position of the door handle. Consequently, the observation $^{C}r_{CH}$ is only available if the marker is detected by a camera. Always passing the door handle observation in simulation would therefore not capture the real system behavior. Instead, the observation is set to 0 if the visual marker leaves the camera's FOV. This way, the agent learns to approximately partition the observation space and reason about when it is necessary to observe the visual marker. The agent can develop behaviors to mitigate a lost observation and to actively keep the marker in the FOV. An illustration of the approach is provided in Fig. 7. Note that the second door-related observation $^{C}r_{CH,\text{init}}$ is not set to 0, because the initial door handle position is static with respect to the inertial frame. The observation can thus be bootstrapped with the onboard localization of the robot even if the visual marker leaves the FOV.

A.2 Real-World Setup

We utilize AprilTags [45] to obtain task-related observations in the real world. The AprilTag system features a vision-based algorithm that determines the relative position and orientation of detected tags. Two visual markers attached to the door provide the relative door handle position observation $^{C}r_{CH}$. If the robot does not detect the tags, the observation is set to 0 to achieve the same behavior as in simulation. The initial door handle position observation $^{C}r_{CH,\text{init}}$ is determined by two markers attached to the door frame. We make use of the robot's onboard localization to obtain an observation even if the tags leave the FOV of the camera. AprilTags also provide the relative positions of the package, bin, and table.
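The FOV handling described above, in simulation and with AprilTags on the real robot, reduces to a simple visibility check on the door-handle observation. The sketch below is an assumed, minimal implementation of that check for an idealized conical FOV; the half-angle and all names are placeholders rather than values from the paper.

```python
import numpy as np

def masked_handle_obs(r_CH: np.ndarray, cam_axis: np.ndarray,
                      half_fov: float = np.deg2rad(45.0)) -> np.ndarray:
    """Return the door-handle observation, zeroed if the marker lies outside the FOV cone.

    r_CH: marker (door handle) position expressed in the camera frame.
    cam_axis: camera optical axis in the same frame (e.g., [0, 0, 1] for a pinhole model).
    """
    direction = r_CH / (np.linalg.norm(r_CH) + 1e-9)
    axis = cam_axis / (np.linalg.norm(cam_axis) + 1e-9)
    visible = float(direction @ axis) > np.cos(half_fov)
    return r_CH if visible else np.zeros(3)
```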
We do not make use of the proposed FOV simulation for the package manipulation task for two reasons. Firstly, it increases the difficulty of learning the desired behavior, because the robot tries to keep the package in the FOV by leaning over the bin and falling. Secondly, the package is kept in the FOV naturally until it is dropped, rendering the additional FOV constraint unnecessary for this task.

Furthermore, we note a few limitations of the current experimental setup. To achieve a 100% success rate in the series of real-world experiments, it was vital to get reliable door observations through the camera system. Especially for fast rotations, the visual fiducial system suffered from image blur and low frame rates. Observations might also degrade over longer periods of time if the fiducials leave the FOV, as the robot then relies purely on its localization. With the remedies mentioned above, we were able to resolve these issues, but longer-horizon tasks might need more careful consideration.

Figure 7: Door setup and FOV simulation. Components of the curiosity state $s_c$ are marked in red, while observations are marked in blue. The green cone represents the camera's FOV. A visual marker, attached to the door, is used to calculate the door handle observation $^{C}r_{CH}$. If the vector from the camera to the visual marker (orange) leaves the FOV cone, the door handle observation is set to 0.

A.3 Quantitative Results

A.3.1 Comparison to a Naive Dense Reward Setting

Figure 8: Success rate mean and standard deviation of successful training runs for different reward formulations, intrinsic reward weights (denoted as cw), and random seeds 1-10.

To highlight the benefits of the curiosity-driven approach, we draw a comparison to a basic dense reward approach that involves comparable engineering effort.³ For the door opening tasks, we define three dense rewards as guidance toward the sparse task reward. These rewards are defined to minimize the distance from the wheel to the door handle, the door handle angle, and the door hinge angle. The hinge angle reward is clipped to ensure that the robot does not open the door too far, since we consider the task fulfilled only if the door is opened within a specified angular window. We were able to tune the reward weights to deliver a performance similar to our approach for push doors, yielding an average success rate of 91% in simulation, as seen in Fig. 8. We could not find weights that would result in successfully learning the pull-door task. The package manipulation task is augmented with three dense rewards as well. These increase with decreasing distance between the right wheel and the right side of the package, the distance between the left wheel and the left side of the package, and the distance between the package and the bin. Again, we were unable to train a policy that achieves the desired task, as shown in Fig. 4.

A.3.2 Intrinsic Reward Scale Sensitivity

The training process shows a high level of robustness with respect to the scale of the intrinsic reward, as can be seen in Fig. 4. This can be explained by the reward's dynamic magnitude. At the beginning of training, the RND prediction error is large enough to overcome the local minimum imposed by shaping rewards. During training, the intrinsic reward shrinks and allows for optimization toward the task and shaping rewards.
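The RND prediction error discussed here is the intrinsic reward term from Table 1, computed from a fixed random target network and a trained predictor over the (normalized) curiosity state. The following is a minimal PyTorch sketch of that mechanism; the network sizes and the normalization scheme are our own assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class RNDReward(nn.Module):
    """Random Network Distillation: intrinsic reward = squared prediction error."""

    def __init__(self, curiosity_dim: int, feat_dim: int = 64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(curiosity_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
        self.target = mlp()       # fixed, randomly initialized network f
        self.predictor = mlp()    # trained network f_hat that imitates the target
        for p in self.target.parameters():
            p.requires_grad_(False)

    def forward(self, s_c: torch.Tensor):
        # s_c: batch of (normalized) curiosity states.
        err = (self.predictor(s_c) - self.target(s_c)).pow(2).sum(dim=-1)
        intrinsic_reward = err.detach()   # used as the reward term in Table 1
        predictor_loss = err.mean()       # supervised loss that makes the reward decay
        return intrinsic_reward, predictor_loss
```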
If the weight is chosen too large, the intrinsic reward might not decay enough, so that the sparse reward, although discovered, might get overlooked in the optimization.

A.3.3 Robustness Against Environment Variation

We investigate success rates in multiple simulation experiments to analyze the robustness of learned policies against the variation of different environment parameters, including parameter values that are out of distribution with respect to the learning tasks. We report the results in Fig. 9. The success rates are determined by observing 1000 differently randomized environments for one episode. First, the robot's initial position and orientation are randomized over a larger interval than during training. Second, the door handle height is randomized uniformly. Even though the handle height was not randomized during training, policies are able to adapt to different heights. Last, the package position and the bin position are randomized uniformly. Again, neither was randomized during training. The package position is randomized over a range of up to 0.4 m, as this covers the entire table surface. Trained policies exhibit a high level of robustness against the investigated disturbances.

³ We also investigated curriculum learning as an alternative. The (naive) task curricula consisted of spawning the robot in favorable positions (e.g., very near to the door, with one wheel touching the door handle) and then gradually increasing the task difficulty by increasing the distance to the door. In our experiments, the robot was not able to discover the desired behavior. We are convinced that the tasks could be solved with more intricate curricula, but the engineering effort these would require is out of proportion compared to the proposed curiosity-driven approach.

Figure 9: Simulation success rates for different out-of-distribution experiments. In (a), the initial robot position and yaw angle randomization range used during training is multiplied. In (b), the door handle height is randomized, and in (c), the package and bin positions are randomized uniformly.
7Pkzm2FgUmq | SLAP: Spatial-Language Attention PoliciesPriyam Parashar1Vidhi Jain2Xiaohan Zhang1,3Jay Vakil1Sam Powers1,2Yonatan Bisk1,2Chris Paxton11Meta2Carnegie Mellon3SUNY BinghamtonAbstract: Despite great strides in language-guided manipulation, existing workhas been constrained to table-top settings. Table-tops allow for perfect and consis-tent camera angles, properties are that do not hold in mobile manipulation. Taskplans that involve moving around the environment must be robust to egocentricviews and changes in the plane and angle of grasp. A further challenge is ensur-ing this is all true while still being able to learn skills efficiently from limited data.We propose Spatial-Language Attention Policies (SLAP) as a solution. SLAP usesthree-dimensional tokens as the input representation to train a single multi-task,language-conditioned action prediction policy. Our method shows an 80% successrate in the real world across eight tasks with a single model, and a 47.5% successrate when unseen clutter and unseen object configurations are introduced, evenwith only a handful of examples per task. This represents an improvement of 30%over prior work (20% given unseen distractors and configurations). We see a 4ximprovement over baseline in mobile manipulation setting. In addition, we showhow SLAPs robustness allows us to execute Task Plans from open-vocabulary in-structions using a large language model for multi-step mobile manipulation. Forvideos, see the website: https://robotslap.github.io1 IntroductionTransformers have demonstrated impressive results on natural language processing tasks by be-ing able to contextualize large numbers of tokens over long sequences, and even show substantialpromise for robotics in a variety of manipulation tasks [1, 2, 3]. However, when it comes to usingtransformers for mobile robots performing long-horizon tasks, we face the challenge of representingspatial information in a useful way. In other words, we need a fundamental unit of representation -an equivalent of a “word” or “token” - that can handle spatial awareness in a way that is independentof the robot’s exact embodiment. We argue this is essential for enabling robots to perform manipula-tion tasks in diverse human environments, where they need to be able to generalize to new positions,handle changes in the visual appearance of objects and be robust to irrelevant clutter. In this work,we propose Spatial-Language Attention Policies (SLAP), that use a point-cloud based tokenizationwhich can scale to a number of viewpoints, and has a number of advantages over prior work.SLAP tokenizes the world into a varying-length stream of multiresolution spatial embeddings, whichcapture a local context based on PointNet++ [4] features. Unlike ViT-style [1], object-centric [5, 3],or static 3D grid features [2], our PointNet++-based [4] tokens capture free-form relations betweenobserved points in space. This means that we can combine multiple camera views from a movingcamera when making decisions and still process arbitrary-length sequences.Our approach leverages a powerful skill representation we refer to as “attention-driven robot poli-cies” [6, 7, 8, 2, 9] operating on an input-space combining language with spatial information. Unlikeother methods that directly predict robot motor controls [10, 1], these techniques predict goal posesin Cartesian space and integrate them with a motion planner [6, 8, 2] or conditional low-level pol-icy [9] to execute goal-driven motion. 
This approach requires less data but still has limitations, such as making assumptions about the size and position of the camera in the input scene and long training times [7, 6, 2]. However, these methods fall into a different trap: they make strong assumptions about how big the input scene is [2], where the camera is [7, 6], and generally take a very long time to train [7, 6], meaning that they could not be used to quickly teach policies in a new environment.

Figure 1: We propose SLAP, which allows us to learn skills for mobile manipulators to accomplish multi-step tasks given natural language goals. Our system works by training a language-conditioned interaction prediction module, which will determine which areas of a scene should be interacted with, in addition to an action policy which operates on predicted interaction points. This allows us to scale to more complex scenes, while predicting continuous actions.

SLAP uses a hybrid policy architecture. The interaction prediction module determines which parts of the tokenized environment the robot focuses on, and a relative action module predicts parameters of continuous motion with respect to the interaction features in the world. SLAP generalizes better to unseen positions and orientations, as well as distractors, while being unrestricted by workspace size and camera placement assumptions, using fewer demonstrations and training in roughly a day.

We evaluate SLAP on two robot platforms. First, on a Franka Panda we can perform a direct skills comparison to the current state of the art, PerAct [2], where we demonstrate better performance with an 80% success rate on 8 static real-world tasks on held-out scene configurations and a 47.5% success rate tested with out-of-distribution objects. Second, unlike prior work, we move beyond the stationary camera views and robot arms of a table-top setting, and demonstrate SLAP on the Hello Robot Stretch RE-2 mobile manipulator with an ego-centric camera and a 6-DoF end-effector configuration. In this setting, we also include task planning to successfully execute natural language task instructions with ten demonstrations over five learned skills and three heuristic skills (Fig. 1). When SLAP is compared to the PerAct baseline for four skills in a controlled setting, we see a 4x improvement in the task success rate (Table 3).

2 Related Work

Attention-Based Policies. Attention-based policies have been widely studied in prior research and have been found to have superior data efficiency, generalization, and the ability to solve previously unsolvable problems [11, 9, 6, 12, 2, 13]. However, these approaches often rely on strong assumptions about the robot's workspace, such as modeling the entire workspace as a 2D image [12, 6, 7, 8] or a 3D voxel cube with predetermined scene bounds [2, 9]. This restricts their applicability and may lead to issues related to camera positioning, workspace location, and discretization size. Additionally, these works can be seen, at least partly, as applications of object detection systems like Detic [14] or 3DETR [15], but they lack the manipulation component.

Figure 2: Spatial Language Attention Policies (SLAP) learn language-conditioned skills from few demonstrations in a wide variety of cluttered scenes. SLAP has two components: an "interaction prediction" module which localizes relevant features in a scene, and an "action prediction" module which uses local context to predict an executable action.

Compared to previous works, some recent studies focus on unstructured point clouds [11, 16]. These approaches demonstrate improved data efficiency and performance compared to traditional behavior cloning. For instance, Where2Act [11] and VAT-Mart [16] predict interaction trajectories, while UMPNet [17] supports closed-loop 6DoF trajectories. They share a common framework: a generalizable method to predict the interaction location and then predict local motion for the robot.

Training Quickly with Attention-Based Policies. CLIPort [7] and PerAct [2] are attention-based policies similar to Transporter Nets [6]. While fitting our definition of attention-based policies, they confine their workspace, use a rigid grid-like structure, and treat action prediction as a discrete classification. While still assuming a limited workspace, SPOT [12] demonstrated the usefulness of 2D attention-based policies for fast RL training, including sim-to-real transfer, and Zeng et al. [6] have shown these policies are valuable for certain real-world tabletop tasks like kitting.

Manipulation of Unknown Objects. Manipulation of unknown objects includes segmentation [18, 19], grasping [20, 21], placement [22], and multi-object rearrangement either from a goal image [23, 24] or from language instructions [25, 26]. These approaches generally rely on first segmenting relevant objects out, and then predicting how to grasp them and where to move them using separate purpose-built models, including for complex task and motion planning [27].

Language and Robotics. Language is a natural and powerful way to specify goals for multi-task robot systems. Several recent works [10, 1, 28] use a large language model for task planning to combine sequential low-level skills and assume the low-level skills are learned with IL or RL. To realistically handle language task diversity, we need to learn these skills quickly; SLAP is more sample-efficient than prior IL or RL approaches. In PaLM-E [3], textual and multi-modal tokens are interleaved as inputs to the Transformer for handling language and multimodal data to generate high-level plans for robotics tasks. Our approach is a spatial extension of this strategy.

Language for Low-Level Skills. A number of works have shown how to learn low-level language-conditioned skills, e.g., [7, 2, 1, 29]. Like our work, Mees et al. [29] predict 6DoF end-effector goal positions end-to-end and sequence them with large language models. They predict a 2D affordance heatmap and depth from RGB; we do not predict depth, but specifically look at robustness and generalization, whereas theirs is trained from play data in mostly-fixed scenes. Shridhar et al. [2] predict a 3D voxelized grid and show strong real-world performance with relatively few examples, but do not look at out-of-domain generalization and are limited to a coarse voxelization of the world.

3 Approach

Most manipulation tasks necessarily involve interacting with environment objects [11]. We define an 'atomic skill' as a task that can be specified by an interaction point and a sequence of relative offsets from this interaction point.
For example, pick('mug') is an atomic skill, as it can be defined in terms of an interaction point on the 'mug' and subsequent relative waypoints for approach, grasp, and lift actions. Similarly, open('drawer') is an atomic skill for which the interaction point is on the drawer handle, and relative waypoints from it can be defined for approach, grasp, and pull. While these examples illustrate the concept, SLAP can handle prediction of a variable number of waypoints per skill.

We train a two-phase language-conditioned policy $\pi(x, l)$, which takes a visual observation $x$ and a language command $l$ as inputs and predicts an interaction point $p_I$, as well as a set of relative motions, which are offsets from this point instead of absolute coordinates. However, any realistic task given to a home robot by a user typically involves more than one atomic skill. Our system breaks down a high-level natural language task description ($\mathcal{T}$) into a sequence of atomic skill descriptions $\{l_j\}$ and uses them to condition the atomic skill motion policies. Our full paradigm is as follows:

$$\mathcal{T} \rightarrow \{l_0, \ldots, l_n\} \rightarrow \{\pi_j(x_j, l_j)\}_{j=0}^{n}, \qquad \pi_j := (\pi_I, \pi_R), \text{ where:}$$
$$\pi_I(x_j, l_j) \rightarrow p_I \quad \text{(3D interaction point)}$$
$$\pi_R(x_j, l_j, p_I) \rightarrow \{a\}_m \quad \text{(sequence of actions)}$$

The interaction point $p_I$ is predicted by an Interaction Prediction Module $\pi_I$, and the continuous component of the action by a Relative Action Module $\pi_R$. The Interaction Prediction Module $\pi_I$ predicts where the robot should attend to; it is a specific location in the world where the robot will be interacting with the object as part of its skill, as shown in Fig. 2. $\pi_R$ predicts a relative action sequence with respect to this contact point in Cartesian space. These actions are then provided as input to a low-level controller to execute the trajectory. These models are trained using labeled expert demonstrations; a complete overview of the training process is shown in Fig. 4. Overall, the system outputs a sequence of end-effector actions $a$.

3.1 Scene Representation

The input observation $x$ is a structured point cloud (PCD) in the robot's base frame, constructed by combining the inputs from a sequence of pre-defined scanning actions. This point cloud is then preprocessed by voxelizing at a 1 mm resolution to remove duplicate points from overlapping camera views. The point cloud is then used as input into both $\pi_I$ and $\pi_R$.

For $\pi_I$, we perform a second voxelization, this time at 5 mm resolution. This creates the downsampled set of points $P$, such that the interaction point $\hat p_I \in P$. This means $\pi_I$ has a consistently high-dimensional input and action space: for a robot looking at its environment with a set of $N$ aggregated observations, this can be 5000-8000 input "tokens" representing the scene.

While SLAP discretizes the world similarly to prior work [30, 31, 2], our approach ensures fine resolution even in larger scenes. We couple this with a set-based learning formulation which allows us to attend to fine details in a data-efficient manner.

3.2 Interaction Prediction Module

We use our insight about tasks being shaped around an interaction point to make learning more robust and more efficient: instead of predicting the agent's motion directly, we formulate our $\pi_I$ to solely focus on predicting a specific point $p_I \in P$, representing a single 5 mm voxel that is referred to as the "interaction point". This formulation is akin to learning object affordance and can be thought of as similar to prior work like Transporter Nets in 2D [6]. We hypothesize that predicting attention directly on visual features, even for manipulation actions, will make SLAP more general.

Figure 3: An overview of the architecture of the interaction prediction module. The point cloud is downsampled to remove duplicates and encoded using two modified set-abstraction layers. The SA layers generate a local spatial embedding which is concatenated with proprioceptive features - in our case, the current gripper state. Both spatial and language features are concatenated and input into a PerceiverIO transformer backbone. We then predict an interaction score per spatial feature, and the argmax is chosen as the interaction site for command $l$.

We use a PerceiverIO [32] backbone to process the data, based on prior work on language-conditioned real-world policies [2]. We first pass our input point cloud through two modified set abstraction layers [4], which result in a sub-sampled point cloud with each point's feature capturing the local spatial structure around it at two different resolutions. This encourages the classifier to pay attention to local structures rather than a specific point that may not be visible in real-world settings. We concatenate the CLIP [33] tokenized natural language command with the encoded point cloud to create an input sequence. Each point $i \in P$ in the point cloud is assigned a score with respect to task $\tau_j$, which results in the interaction point for that task, $p_I^j := \operatorname{argmax}_{x,y,z} S(i = p_I^j \mid l, x, P, D_j)$, where $D_j$ is the set of expert demonstrations provided for task $\tau_j$. The IPM architecture overview is provided in Fig. 3. Note that we also use binary semantic features from Detic in the Stretch experiment for training SLAP, as an additional feature channel apart from the color channels.

Modified Set Abstraction Layer. The default SA layer, as introduced by Qi et al. [4], uses farthest point sampling (FPS) to determine the locations at which feature vectors are created. FPS ensures that the subsampled point cloud is a good representation of a given scene, without any guarantees about the granularity. However, it is very sensitive to the number of points selected: in most PointNet++-based policies, a fixed number of points are chosen using FPS [4]. SLAP, in contrast, must adapt to scenes of varying sizes, possibly with multiple views, and still not miss small details.

We propose an alternative PointNet++ set abstraction layer, which computes embeddings based on the original and an evenly downsampled version of the point cloud, $P$. This results in a denser spatial embedding by considering a subset of all points within a certain radius of each point in the downsampled point cloud. This downsampled set of points guarantees we can attend to even small features, and allows us to predict an interaction point $p_I$ from the PointNet++ aggregated features.

3.3 Relative Action Module

The relative action module relies on the interaction point predicted by the classifier and operates on a cropped point cloud, $x_R$, around this point to predict the actions associated with this sequence. As in the interaction prediction module, the model uses a cascade of modified set abstraction layers as the backbone to compute a multi-resolution encoding feature over the cropped point cloud.
We train three multi-head regressors (described further below) over these features to predict the actions for the overall task. Specifically, $\pi_R$ has three heads, one for each component of the relative action space: gripper activation $g$, position offset $\delta p$, and orientation $q$. Our LSTM-based architecture (details in Appendix B.2) can predict skills with a variable number of actions (3 or 4 in our experiments).

Figure 4: Illustration of the complete process for training SLAP. Demonstrations are collected and used to train the Interaction Prediction module and the Action Prediction module separately.

Also note that the cropped input point cloud is not perfectly centered at the ground-truth interaction point $\hat p_I$, but rather has some noise added: $\hat p_I' = \hat p_I + \mathcal{N}(0, \sigma)$. This is done to force the action predictor to be robust to sub-optimal interaction point predictions by the interaction prediction module during real-world roll-outs. Thus, for each part of the action sequence, the keyframe position is calculated as $p = p_I + \delta p$. When acting, the robot will move to $(p, q)$ via a motion planner, and then will send a command to the gripper to set its state to $g$.

3.4 Training SLAP

To collect data, an expert operator guides the robot through a trajectory, pressing a button to record keyframes representing crucial parts of a task. At each keyframe, we record the associated expert action $\hat a = (\hat{\delta p}, \hat q, \hat g)$. We assume that low-level controllers exist; in our case, we use Polymetis [34] for the Franka arm and Hello Robot's controllers¹ for Stretch. Example tasks are shown in Fig. 5.

Interaction Prediction Module. We train $\pi_I$ with a cross-entropy loss, predicting the interaction point $p_I$ from the downsampled set of coarse voxels $P$. We additionally apply what we call a locality loss ($\mathcal{L}_{loc}$), as per prior work [35]. Conceptually, we want to penalize points the further they are from the contact point, both to encourage learning relevant features and to aid in ignoring distractors. To achieve this, we define the locality loss as

$$\mathcal{L}_{loc} = \sum_{k \in P} \operatorname{softmax}(f_k)\, \|\hat p_I - k\|^2,$$

where $f_k$ is the output of the transformer for point $k \in P$. The softmax turns $f_k$ into attention over the points, meaning that $\mathcal{L}_{loc}$ can be interpreted as a weighted average of the squared distances. Points further from $\hat p_I$ are therefore encouraged to have lower classification scores. Combining our two losses, we obtain

$$\mathcal{L}_I = \operatorname{CE}(P, \hat p_I) + \frac{w}{|P|}\, \mathcal{L}_{loc},$$

where $w$ is a scaling constant that implicitly defines how much spread to allow in our points.

Relative Action Module. To train $\pi_R$, we use the weighted sum of three different losses. We train $a = (p, q, g) = \pi_R(x_R)$ with an L2 loss over $\delta p$, a quaternion distance metric for the loss on $q$ based on prior work [36], and a binary cross-entropy loss for gripper action classification (Appendix A.3).

3.5 Task Planning

Consider a natural language instruction from a user such as "put away the bottle on a table". We decompose it into a sequence of atomic skills as: goto('bottle'), pick_up('bottle'), goto('table'), and place_on('table'). We procedurally generate natural language and code templates for 16 task families (see Appendix C).
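To make the decomposition format concrete, the snippet below shows the kind of (skill, argument) sequence the planner is expected to produce for the example instruction above. The skill names mirror the ones used in the text; the function, its return type, and the handling of other task families are a hedged illustration, not the paper's actual template code.

```python
from typing import List, Tuple

def example_plan(instruction: str) -> List[Tuple[str, str]]:
    """Target (skill, argument) sequence the planner should emit for one instruction."""
    if instruction == "put away the bottle on a table":
        return [("goto", "bottle"), ("pick_up", "bottle"),
                ("goto", "table"), ("place_on", "table")]
    raise NotImplementedError("remaining task families come from procedural templates")

# Each (skill, argument) pair is rendered into a short language command l_j
# (e.g., "pick up the bottle") that conditions the corresponding SLAP skill policy.
```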
We use LLaMA [37] models for in-context learning [38, 39] and adapter fine-tuning [40] to learn the mapping between natural language task instructions and the corresponding sequence of atomic skills.

¹ https://github.com/hello-robot/stretch_ros

Figure 5: Examples of tasks executed on a Franka arm through our trained model in a clean setting. We trained numerous tasks (left) and tested on both seen and unseen objects (right).

4 Experiments

We report the success rate of our model for 8 real-world manipulation tasks in Table 2, and compare it against prior baselines trained using the same labeling scheme. Overall, we see an improvement of 1.6x over our best comparative baseline, PerAct [2]. Our robotics code was implemented using the HomeRobot framework [41]². We test each model under two different conditions. First, under the seen setting assumption, i.e., with seen distractor objects and objects placed roughly in the same range of positions and orientations as in the training data, in any relative arrangement (including unseen arrangements). Second, we test under the unseen setting assumption, i.e., with unseen distractor objects and the implicated object placed significantly out of the range of positions and orientations already seen. We run 5 tests per scene setting per skill per model and report the percentage success numbers in Table 2. We compare our model against Perceiver-Actor (PerAct) [2]. We train each model for the same number of training steps and choose the SLAP model based on the best validation loss. For PerAct, we use the last checkpoint, per their testing practices [2].

We also run a per-skill evaluation of SLAP and PerAct on Stretch under the unseen setting assumption (see Table 3). This was accomplished by adding unseen distractor objects to the scene and moving the robot base position within reachable distance of the object. We needed to increase the workspace bounds of PerAct (to a 1.5 m cube from 1 m) to capture the larger observation area for the mobile manipulator, to keep it consistent with SLAP. Note that demonstrations were collected on a different robot than the one policies were deployed on.

4.1 Longitudinal Task Execution on Stretch

Table 1: End-to-end performance. Learned skills outperform heuristics except when Detic fails.
Skill type | GT | Inferred
Heuristic | 66.0 | 48.5
Learned | 80.0 | 53.2
Total | 68.5 | 58.5

We trained a multi-task model for the Stretch robot for five skills using 10 demonstrations each. This model was deployed in an end-to-end system which operates over the code list generated by a task planner (as in Sec. 3.5). We ran five prompts end-to-end with four to eight skills each, using ground-truth plans; we verify the viability of generating these task plans in §4.2. These experiments are done in the unseen setting, with the robot starting anywhere with respect to the objects.

For fair evaluation in the low-data regime, we add some structure by specifying an orthogonal viewing direction for objects. Once the agent finds the object of interest, it fires an initial prediction using SLAP to find the most promising interaction point. This prediction happens under any dynamic viewing angle of the object (we assume the robot can see the object). This dynamic prediction and the pre-programmed viewing angle are used to approach the object at an orthogonal viewing angle, where the model fires an actionable prediction for the full skill execution (see §C.3 for details). We observe the learned skills suffering significantly when inferred plans are used to create language prompts for SLAP.
This is due to discrepancies between object descriptions that the LLM generates and Detic's object detection capabilities. SLAP does not get the necessary semantic masks, and thus its predictions suffer. On the other hand, when semantic features from Detic are available, IPM performance improves significantly even with unseen distractors (80% against 47.5% in Table 2). We still see failures when the relative position is significantly perturbed. Please see §E.1 for an ablation analysis of our design against PerAct.

² https://github.com/facebookresearch/home-robot

Table 2: SLAP and PerAct [2] performance on real-world Franka manipulation tasks. We evaluate both seen scenes (seen object positions and distractors), but in different arrangements, and unseen scenes with previously-unseen object positions and distractors. SLAP is notably better overall in both conditions.
Skill Name | PerAct (Seen) | SLAP (Seen) | PerAct (Unseen) | SLAP (Unseen)
Open bottom drawer | 0% | 80% | 0% | 60%
Open top drawer | 60% | 80% | 40% | 40%
Close drawer | 100% | 100% | 40% | 40%
Pick lemon from basket | 60% | 80% | 10% | 40%
Pick bottle | 60% | 60% | 60% | 40%
Place into the drawer | 60% | 80% | 40% | 60%
Place into the basket | 40% | 100% | 10% | 60%
Place into the bowl | 40% | 60% | 0% | 40%
Average Success Rate | 50% | 80% | 27.5% | 47.5%
Improvement (SLAP over PerAct) | | 1.6x | | 1.7x

Table 3: SLAP on a mobile manipulator using a multi-task model across 4 skills, over 5 tries. With semantic predictions added to our feature space, we see the model perform better on unseen scenes with new distractors and an unseen relative position of the robot with respect to the scene.
Skill Name | PerAct | SLAP
Open Drawer | 0% | 60%
Close Drawer | 40% | 100%
Take bottle | 0% | 80%
Pour into bowl | 40% | 80%

4.2 Task planning with in-context learning and fine-tuning LLaMA

Table 4: Fine-tuning (FT) outperforms in-context learning (IC) for the same latency.
Method | LLaMA | Verb | Noun | Acc. | Task Corr. | Lat. (sec.)
IC | 7B | 83 | 81 | 76 | 61 | 16.4
IC | 30B | 81 | 81 | 76 | 62 | 27.6
FT | 7B | 100 | 98 | 99 | 91 | 19.5

Previous work has shown the strength of language models as zero-shot planners [42], a result strengthened by improved techniques for "in-context learning" or prompting [43]. To verify that models can produce task plans with the skills we defined, we experiment with both in-context learning (IC) [44] of LLaMA [37] and adapter fine-tuning (FT) [40]. Table 4 presents the models' verb, noun, and combined accuracies. Task Correctness is a binary score indicating whether the entire prediction was correct, and finally, latency is measured in seconds on a single A6000 with 16 GB RAM.³ High Task Correctness from a small model verifies the compatibility of our skills with LLM task planning.

5 Limitations

SLAP has high variance in out-of-distribution situations, resulting in complete failure if $\pi_I$ fails to correctly identify the context. For $\pi_R$, multimodal or noisy data still poses issues; replacing $\pi_R$ with a policy that can better handle such data, e.g., Diffusion Policies [46], could help. The overall system has multiple points of failure due to heuristic policies and unaligned language and vision models; end-to-end trainable architectures and cross-modal alignment could help.

6 Conclusion

We propose a new method for learning visual-language policies for decision making in complex environments. SLAP is a novel architecture which combines the structure of a point-cloud-based input with semantics from language and accompanying demonstrations to predict continuous end-effector actions for manipulation tasks. We demonstrate SLAP on two hardware platforms, including an end-to-end evaluation on a mobile manipulator, something not present in prior work.
SLAP alsooutperforms previous state-of-the-art, PerAct, on both mounted and mobile robot setups.3Adaptor fine-tuning increases the model size by ∼6%, which accounts for the additional latency comparedto IC. We use standard inference libraries so results are comparable, but not optimized for runtime [45].87 AcknowledgementsWe thank Sriram Yenamandra for his help working with the Stretch codebase and running real-robotnavigation and grasping. More generally, we thank the HomeRobot team for software support [41].References[1] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXivpreprint arXiv:2212.06817 , 2022.[2] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. arXiv preprint arXiv:2209.05451 , 2022.[3] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson,Q. Vuong, T. Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprintarXiv:2303.03378 , 2023.[4] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning onpoint sets in a metric space. Advances in neural information processing systems , 30, 2017.[5] W. Yuan, C. Paxton, K. Desingh, and D. Fox. Sornet: Spatial object-centric representations forsequential manipulation. In Conference on Robot Learning , pages 148–157. PMLR, 2022.[6] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, et al. Transporter networks: Rearranging the visual world for roboticmanipulation. In Conference on Robot Learning , pages 726–747. PMLRG, 2021.[7] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipu-lation. In Conference on Robot Learning , pages 894–906. PMLR, 2022.[8] C. Huang, O. Mees, A. Zeng, and W. Burgard. Visual language maps for robot navigation.arXiv preprint arXiv:2210.05714 , 2022.[9] S. James and A. J. Davison. Q-attention: Enabling efficient learning for vision-based roboticmanipulation. IEEE Robotics and Automation Letters , 7(2):1612–1619, 2022.[10] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in roboticaffordances. arXiv preprint arXiv:2204.01691 , 2022.[11] K. Mo, L. J. Guibas, M. Mukadam, A. Gupta, and S. Tulsiani. Where2act: From pixels toactions for articulated 3d objects. In Proceedings of the IEEE/CVF International Conferenceon Computer Vision , pages 6813–6823, 2021.[12] A. Hundt, B. Killeen, N. Greene, H. Wu, H. Kwon, C. Paxton, and G. D. Hager. ““Goodrobot!”: Efficient reinforcement learning for multi-step visual tasks with sim to real transfer.IEEE Robotics and Automation Letters , 5(4):6724–6731, 2020.[13] P.-L. Guhur, S. Chen, R. Garcia, M. Tapaswi, I. Laptev, and C. Schmid. Instruction-drivenhistory-aware policies for robotic manipulations. arXiv preprint arXiv:2209.04899 , 2022.[14] X. Zhou, R. Girdhar, A. Joulin, P. Kr ̈ahenb ̈uhl, and I. Misra. Detecting twenty-thousand classesusing image-level supervision. In ECCV , 2022.[15] I. Misra, R. Girdhar, and A. Joulin. An end-to-end transformer model for 3d object detection.InProceedings of the IEEE/CVF International Conference on Computer Vision , pages 2906–2917, 2021.9[16] R. Wu, Y . Zhao, K. Mo, Z. Guo, Y . Wang, T. Wu, Q. Fan, X. Chen, L. Guibas, and H. 
Dong.Vat-mart: Learning visual action trajectory proposals for manipulating 3d articulated objects.arXiv preprint arXiv:2106.14440 , 2021.[17] Z. Xu, Z. He, and S. Song. Umpnet: Universal manipulation policy network for articulatedobjects. arXiv preprint arXiv:2109.05668 , 2021.[18] C. Xie, Y . Xiang, A. Mousavian, and D. Fox. Unseen object instance segmentation for roboticenvironments. IEEE Transactions on Robotics , 37(5):1343–1359, 2021.[19] Y . Xiang, C. Xie, A. Mousavian, and D. Fox. Learning rgb-d feature embeddings for unseenobject instance segmentation. In Conference on Robot Learning , pages 461–470. PMLR, 2021.[20] A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox. 6-dof grasping for target-drivenobject manipulation in clutter. In 2020 IEEE International Conference on Robotics and Au-tomation (ICRA) , pages 6232–6238. IEEE, 2020.[21] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-graspnet: Efficient 6-dofgrasp generation in cluttered scenes. In 2021 IEEE International Conference on Robotics andAutomation (ICRA) , pages 13438–13444. IEEE, 2021.[22] C. Paxton, C. Xie, T. Hermans, and D. Fox. Predicting stable configurations for semanticplacement of novel objects. In Conference on Robot Learning , pages 806–815. PMLR, 2022.[23] A. H. Qureshi, A. Mousavian, C. Paxton, M. C. Yip, and D. Fox. Nerp: Neural rearrangementplanning for unknown objects. arXiv preprint arXiv:2106.01352 , 2021.[24] A. Goyal, A. Mousavian, C. Paxton, Y .-W. Chao, B. Okorn, J. Deng, and D. Fox. Ifor: Iter-ative flow minimization for robotic object rearrangement. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 14787–14797, 2022.[25] W. Liu, C. Paxton, T. Hermans, and D. Fox. Structformer: Learning spatial structure forlanguage-guided semantic rearrangement of novel objects. In 2022 International Conferenceon Robotics and Automation (ICRA) , pages 6322–6329. IEEE, 2022.[26] W. Liu, T. Hermans, S. Chernova, and C. Paxton. Structdiffusion: Object-centric diffusion forsemantic rearrangement of novel objects. arXiv preprint arXiv:2211.04604 , 2022.[27] A. Curtis, X. Fang, L. P. Kaelbling, T. Lozano-P ́erez, and C. R. Garrett. Long-horizon manip-ulation of unknown objects via task and motion planning with estimated affordances. In 2022International Conference on Robotics and Automation (ICRA) , pages 1940–1946. IEEE, 2022.[28] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural languageinstructions to feasible plans. arXiv preprint arXiv:2303.12153 , 2023.[29] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding language with visual affordances overunstructured data. arXiv preprint arXiv:2210.01911 , 2022.[30] V . Blukis, C. Paxton, D. Fox, A. Garg, and Y . Artzi. A persistent spatial semantic representationfor high-level natural language instruction execution. In Conference on Robot Learning , pages706–717. PMLR, 2022.[31] S. Y . Min, D. S. Chaplot, P. Ravikumar, Y . Bisk, and R. Salakhutdinov. Film: Followinginstructions in language with modular methods. arXiv preprint arXiv:2110.07342 , 2021.[32] A. Jaegle, S. Borgeaud, J.-B. Alayrac, C. Doersch, C. Ionescu, D. Ding, S. Koppula, D. Zoran,A. Brock, E. Shelhamer, et al. Perceiver io: A general architecture for structured inputs &outputs. arXiv preprint arXiv:2107.14795 , 2021.10[33] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. 
In International Conference on Machine Learning , pages 8748–8763. PMLR, 2021.[34] Y . Lin, A. S. Wang, G. Sutanto, A. Rai, and F. Meier. Polymetis. https://facebookresearch.github.io/fairo/polymetis/ , 2021.[35] S. Powers, A. Gupta, and C. Paxton. Evaluating continual learning on a home robot, 2023.[36] C. Paxton, Y . Bisk, J. Thomason, A. Byravan, and D. Foxl. Prospection: Interpretable plansfrom language by predicting the future. In 2019 International Conference on Robotics andAutomation (ICRA) , pages 6942–6948. IEEE, 2019.[37] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozi `ere,N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. Llama:Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023.[38] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan,P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-V oss, G. Krueger, T. Henighan,R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler,M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever,and D. Amodei. Language models are few-shot learners. ArXiv , 2020.[39] E. Aky ̈urek, D. Schuurmans, J. Andreas, T. Ma, and D. Zhou. What learning algorithm isin-context learning? investigations with linear models, 2023.[40] E. Hu, Y . Shen, P. Wallis, Z. Allen-Zhu, Y . Li, L. Wang, and W. Chen. Lora: Low-rankadaptation of large language models, 2021.[41] S. Yenamandra, A. Ramachandran, K. Yadav, A. Wang, M. Khanna, T. Gervet, T.-Y . Yang,V . Jain, A. W. Clegg, J. Turner, et al. Homerobot: Open-vocabulary mobile manipulation.arXiv preprint arXiv:2306.11565 , 2023.[42] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Ex-tracting actionable knowledge for embodied agents. arXiv preprint arXiv:2201.07207 , 2022.[43] O. Ram, Y . Levine, I. Dalmedigos, D. Muhlgay, A. Shashua, K. Leyton-Brown, and Y . Shoham.In-Context Retrieval-Augmented Language Models, Jan. 2023. URL http://arxiv.org/abs/2302.00083 . arXiv:2302.00083 [cs].[44] J. Wei, J. Wei, Y . Tay, D. Tran, A. Webson, Y . Lu, X. Chen, H. Liu, D. Huang, D. Zhou, andT. Ma. Larger language models do in-context learning differently, Mar. 2023. URL http://arxiv.org/abs/2303.03846 . arXiv:2303.03846 [cs].[45] J. Fernandez, J. Kahn, C. Na, Y . Bisk, and E. Strubell. The Framework Tax: DisparitiesBetween Inference Efficiency in Research and Deployment. ArXiv , 2023. URL https://arxiv.org/abs/2302.06117 .[46] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy:Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137 , 2023.[47] C. C. Kemp, A. Edsinger, H. M. Clever, and B. Matulevich. The design of stretch: A com-pact, lightweight mobile manipulator for indoor human environments. In 2022 InternationalConference on Robotics and Automation (ICRA) , pages 3150–3157. IEEE, 2022.[48] B. Akgun, M. Cakmak, J. W. Yoo, and A. L. Thomaz. Trajectories and keyframes for kines-thetic teaching: A human-robot interaction perspective. In Proceedings of the seventh annualACM/IEEE international conference on Human-Robot Interaction , pages 391–398, 2012.11[49] A. Handa, A. Allshire, V . Makoviychuk, A. Petrenko, R. Singh, J. Liu, D. Makoviichuk,K. Van Wyk, A. Zhurkevich, B. Sundaralingam, et al. Dextreme: Transfer of agile in-handmanipulation from simulation to reality. arXiv preprint arXiv:2210.13702 , 2022.[50] J. Mahler, J. Liang, S. 
Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312, 2017.
[51] N. M. M. Shafiullah, C. Paxton, L. Pinto, S. Chintala, and A. Szlam. CLIP-Fields: Weakly supervised semantic fields for robotic memory. arXiv preprint arXiv:2210.05663, 2022.
[52] B. Bolte, A. Wang, J. Yang, M. Mukadam, M. Kalakrishnan, and C. Paxton. USA-Net: Unified semantic and affordance representations for robot memory. arXiv preprint arXiv:2304.12164, 2023.

Appendix

Table of Contents
A Training
  A.1 Data Collection and Annotation
  A.2 Data Processing
  A.3 Action Prediction Losses
  A.4 Skill Weighting
  A.5 Training for PerAct and SLAP
B Relative Action Module
  B.1 MLP Implementation
  B.2 LSTM-based Implementation
C High-level Task Planning and Execution
  C.1 Dataset
  C.2 Models
  C.3 System Architecture and Plan Execution
D Experimental Setup for Skills
  D.1 In vs. Out of Distribution
  D.2 Skill Definitions and Success Conditions
  D.3 Language Annotations
  D.4 Out-of-Distribution Results from SLAP
  D.5 Motion Planning Failures
E Additional Analysis
  E.1 Ablations
  E.2 Visualizing the Learned Attention
  E.3 Language Generalization
F Additional Related Work

A Training
Below is expanded information on our training, algorithm, and data processing to improve reproducibility.

A.1 Data Collection and Annotation
When collecting an episode with the Franka arm, we first scan the scene with a pre-defined list of scanning positions to collect an aggregated x. In our case, we make no assumption about what or how many these scanning positions are, or how large the resulting input point cloud x is. With the Hello Robot Stretch [47], we collect data based on exactly where the robot is looking.
Then, we collect demonstration data using kinesthetic teaching for the Franka arm (the demonstrator physically moves the robot) and via controller teleoperation for the Stretch robot. The demonstrator moves the arm through the trajectory associated with each task, explicitly recording the keyframes [48] associated with action execution. These represent the salient moments within a task – the bottlenecks in the task's state space, which can be connected by our low-level controller.

A.2 Data Processing
We execute each individual skill open-loop based on an initial observation.
We use data augmentation to make sure that even with relatively few examples, we still see good generalization performance.
Data Augmentation. Prior work in RGB-D perception for robotic manipulation (e.g. [18, 49]) has extensively used a variety of data augmentation tricks to improve real-world performance. In this work, we use the following data augmentation techniques to randomize the input scene x used to train p_I = π_I(x, l):
• Elliptical dropout: Random ellipses are dropped out from the depth channel to emulate occlusions and random noise, as per prior work [50, 18]. The number of ellipses is sampled from a Poisson distribution with a mean of 10.
• Multiplicative noise: Again as per prior work [50, 18, 22], we add multiplicative noise from a gamma process to the depth channel.
• Additive noise: Gaussian process noise is added to the points in the point cloud. Parameters for the Gaussian distribution are sampled uniformly from given ranges. This emulates the natural frame-to-frame point-cloud noise that occurs in the real world.
• Rotational randomization: Similar to prior work [2, 22, 25], we rotate the entire scene around the z-axis within a range of ±45 degrees to help force the model to learn rotational invariance.
• Random cropping: with p = 0.75, we randomly crop to a radius around p̂_I + δ, where δ is a random translation sampled from a Gaussian distribution. The crop radius is randomly sampled in (1, 2) meters.
Data Augmentation for π_R. We crop the relational input x_R ⊂ x around the ground-truth p_I, using a fixed radius r = 0.1 m. We implement an additional augmentation for learning our action model. Since p_I is chosen from the discretized set of downsampled points P, we might in principle be limited to this granularity of response. Instead, we randomly shift both p_I and the positional action δp by a uniformly-sampled offset δr ∈ R^3, with up to 0.025 m of noise. This lets π_R adapt to interaction prediction errors of up to several centimeters.

A.3 Action Prediction Losses
Following [36] for the orientation, we can compute the angle θ between two quaternions as:
θ = cos⁻¹(2⟨q̂1, q̂2⟩² − 1)    (1)
We can remove the cosine component and use it as a squared distance metric between 0 and 1. We then compute the position and orientation loss as:
L_R = λ_p ∥δp − δ̂p∥₂² + λ_q (1 − ⟨q̂, q⟩²)    (2)
where λ_p and λ_q are weights on the positional and orientation components of the loss, set to 1 and 1e−2 respectively.
Predicting the gripper action is a classification problem trained with a cross-entropy loss. As input we use the task's language description embedding and proprioceptive information about the robot, i.e. s = (l, g_act, g_w, t_s), where g_act is 1 if the gripper is closed and 0 otherwise, g_w is the distance between the fingers of the gripper, and t_s is the time step. The gripper action loss is then:
L_g = λ_g CE(g, ĝ)    (3)
where λ_g is the weight on the cross-entropy loss, set to 0.0001. The batch size is set to 1 for this implementation.
We train π_I and π_R separately for n = 85 epochs. At each epoch, we compare validation performance to the current best; if validation did not improve, we reset to the last best model.
[Figure 6 diagram: set-abstraction (SA) layer features combined with language and proprioception embeddings feed MLP heads for δp, q, g, and σ; 6-DoF keyframes are used for training.]
Figure 6: Regression model architecture with separate heads for each output. The point cloud is cropped around the interaction point with some perturbation and passed to a cascade of set abstraction layers. Encoded spatial features are then concatenated with language and proprioception embeddings to predict the position offset of the action from the interaction point, the absolute orientation, and the gripper action as a boolean.
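For concreteness, the combined objective in Eqs. (1)–(3) can be written as a short NumPy sketch. This is illustrative only, not the paper's implementation; the function names and the explicit sigmoid/binary cross-entropy form of the gripper term are assumptions.

```python
import numpy as np

def quaternion_angle(q1, q2):
    # Angle between two unit quaternions (Eq. 1): theta = arccos(2<q1,q2>^2 - 1)
    dot = float(np.dot(q1, q2))
    return np.arccos(np.clip(2.0 * dot ** 2 - 1.0, -1.0, 1.0))

def action_loss(dp_pred, dp_true, q_pred, q_true, g_logit, g_true,
                lambda_p=1.0, lambda_q=1e-2, lambda_g=1e-4):
    """Position + orientation + gripper loss, following Eqs. (2)-(3)."""
    # Squared L2 error on the positional offset from the interaction point.
    position = lambda_p * float(np.sum((np.asarray(dp_pred) - np.asarray(dp_true)) ** 2))
    # Cosine-free squared quaternion distance in [0, 1].
    orientation = lambda_q * (1.0 - float(np.dot(q_pred, q_true)) ** 2)
    # Binary cross-entropy on the gripper open/close prediction.
    p = 1.0 / (1.0 + np.exp(-g_logit))
    gripper = lambda_g * -(g_true * np.log(p + 1e-8) + (1.0 - g_true) * np.log(1.0 - p + 1e-8))
    return position + orientation + gripper
```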
A.4 Skill Weighting
In the Stretch experiments, we used a wide range of skills with different error tolerances and corresponding variances. As a result, we needed two different sets of weights for learning position, orientation, and gripper targets: a more forgiving weight set for noisier tasks like pouring and handover, and a tighter weight set for tasks with more consistent trajectories like opening and closing the drawer. These weights were empirically determined but can be further optimized via hyper-parameter tuning methods.

A.5 Training for PerAct and SLAP
In all our experiments we ensure PerAct and SLAP are trained on the same data volume. Data volume is defined as the total number of augmented samples per collected sample. Note that this results in a different number of training steps per model, because the LSTM-based model updates once per trajectory while PerAct updates once per sample in a trajectory.

B Relative Action Module
In our work, the Relative Action Module π_R is assumed to be some local policy which predicts end-effector poses. In our case, we implement two different versions of this policy, one used on the static Franka manipulator and one implemented on the Stretch. In both cases:
• The policy predicts an end-effector pose relative to the predicted interaction point from π_I.
• The policy is conditioned on a local crop around this interaction point.

B.1 MLP Implementation
Fig. 6 gives an overview of the MLP version of the regression model. The model takes in the cropped point cloud (augmented during training as discussed in Sec. A.2). We saw that injecting random noise into the interaction point during training allowed the policy to, at test time, recover from failures (because it predicted an interaction point near the correct area, instead of at the exact correct position).

B.2 LSTM-based Implementation
We observed the MLP architecture suffer when the positional distribution of actions varied widely with respect to the interaction point position across tasks, due to the multi-modality this introduced. Thus we modified the above model by adding an LSTM to condition each task and action with a hidden state. This model exhibited better performance in learning wider action distributions, based on our initial experiments with the outlined Stretch tasks.
[Figure 7 diagram: PointNet++ SA features concatenated with a CLIP language embedding, a time one-hot embedding, and a proprioception embedding, feeding an MLP + LSTM + MLP regression head.]
Figure 7: LSTM-based regression model architecture based on the regression head and PointNet++ embeddings introduced in Fig. 6. The LSTM-based architecture shows higher stability in learning wider action distributions due to the conditioning effect.
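As an illustration of the architecture in Fig. 7, here is a minimal PyTorch-style sketch. Layer sizes, module names, and the way features are fused are placeholders standing in for the actual implementation, not the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMRegressionHead(nn.Module):
    """Predicts (position offset, quaternion, gripper logit) per keyframe, conditioned on
    fused point-cloud, language, time, and proprioception features (see Fig. 7)."""
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.pre = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.post = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.pos_head = nn.Linear(hidden_dim, 3)    # offset from the interaction point
        self.rot_head = nn.Linear(hidden_dim, 4)    # quaternion, normalized below
        self.grip_head = nn.Linear(hidden_dim, 1)   # gripper open/close logit

    def forward(self, fused_feats):
        # fused_feats: (B, T, feat_dim) = concat of crop features, CLIP language
        # embedding, time one-hot, and proprioception for each keyframe step.
        h = self.pre(fused_feats)
        h, _ = self.lstm(h)          # hidden state conditions each action on the sequence so far
        h = self.post(h)
        quat = F.normalize(self.rot_head(h), dim=-1)
        return self.pos_head(h), quat, self.grip_head(h)
```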
C High-level Task Planning and Execution

C.1 Dataset
We procedurally generated a dataset consisting of more than 500k tuples of language instructions and the corresponding sequence of atomic skills. The data is created for 16 task families and can be extended further in the future. For each task family, 10% of samples are held out for evaluation. The distribution of samples across task families is shown in Table 5.
For each task family, we define a corresponding template containing the sequence of atomic skills. This means that the sequence of atomic skill "verbs" is the same among the samples of a task family. Each sample within a task family differs in terms of language instructions and object(s) of interaction.
To populate these templates and generate the data, we create a list of more than 150 movable kitchen objects, surfaces like table and kitchen counter, and articulated objects like drawer and cabinets. For the pour skill, we create a list of "spillable" items such as cup of coffee or bowl of jelly beans. Similarly, for the wipe skill, we have a list of items to wipe with, such as sponge or brush.

C.2 Models
Table 4 shows that in-context learning with 5 examples from the same task family achieves close to 76% accuracy. There is no training involved with in-context learning, so it cannot overfit. For finetuning, our evaluation consists of a held-out dataset with unseen variations in the phrasing of the language instruction and/or object(s) of interaction. The goal of the fine-tuned model is to demonstrate that it is possible to achieve improved task planning performance with lower latency than in-context learning with larger models. We do not evaluate the generalization of the fine-tuned model to unseen task families in this work. The remaining results in this work use the fine-tuned model for task planning.

Table 5: Data distribution of procedurally generated samples across task families.
Task Family | Total number of samples
Bring X From Y Surface To Pour In Z Then Place On W Surface | 35640
Bring X From Y Articulated To Wipe Z | 612
Move X From Y Surface To Z Surface | 16092
Move X From Y Surface To Z Articulated | 301400
Take X From Human To Z Articulated | 19300
Bring X From Y Surface To Human | 5933
Bring X From Y Surface To Wipe Z | 3168
Take X From Human To Pour In Z | 2184
Take X From Human To Z Surface | 8050
Take X From Human To Wipe Z | 1458
Move X From Y Articulated To Z Articulated | 97923
Bring X From Y Articulated To Human | 13617
Bring X From Y Surface To Pour In Z | 3960
Bring X From Y Articulated To Pour In Z | 8712
Move X From Y Articulated To Z Surface | 50060
Take X From Human To Pour In Z And Place On Y Surface | 19656
Total | 587765
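To make the procedural generation described in C.1 concrete, the sketch below shows one way a single task-family template could be instantiated into (instruction, skill-sequence) pairs. The template string, skill verbs, and object lists are hypothetical examples, not the actual generation code.

```python
import random

# Hypothetical template for one task family; the real dataset uses 16 families
# and a much longer list of movable objects, surfaces, and articulated objects.
TEMPLATE = ("bring the {X} from the {Y} and pour it into the {Z}",
            ["pick {X} from {Y}", "pour {X} into {Z}"])
SPILLABLE = ["cup of coffee", "bowl of jelly beans"]
SURFACES = ["table", "kitchen counter"]
TARGETS = ["bowl", "mug"]

def sample_example(rng=random):
    x, y, z = rng.choice(SPILLABLE), rng.choice(SURFACES), rng.choice(TARGETS)
    instruction, skills = TEMPLATE
    return (instruction.format(X=x, Y=y, Z=z),
            [s.format(X=x, Y=y, Z=z) for s in skills])

# Example output:
# ("bring the cup of coffee from the table and pour it into the bowl",
#  ["pick cup of coffee from table", "pour cup of coffee into bowl"])
```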
C.3 System Architecture and Plan Execution
We use SLAP with three heuristic policies: SEARCH FOR OBJECT (π_search), PICK UP OBJECT (π_pick) and PLACE ON (π_place). π_search uses Detic, a frontier-based exploration policy, and a language query to explore the map until the object described by the language query is found in the view of the robot. This is also one of the primary points of failure when the LLM is integrated into the pipeline: the LLM expects Detic to be able to handle any freeform query of the object (for example, "detect open drawer" is a typical output from the LLM but always fails, as Detic has no notion of an open drawer).
π_pick is a heuristic picking policy which always grabs a given object from the top, given its mask from Detic and depth from the camera. This is a generally robust policy but fails when contextual, task-oriented grasps are required in a task. For example, consider a bottle placed within a cabinet: the only pick policy which will solve this scene is one where the gripper grasps the bottle laterally and not from the top. π_place places whatever object is in the robot's gripper on the surface previously detected by Detic. This is another point of failure, since up close Detic is not able to detect objects like table and counter surfaces.
SLAP is used after the object is successfully detected by Detic, given the language query, and is in the robot's field of view. Note that the robot at this point can be at any unconstrained orientation and position with respect to the object; the only requirement is that the object is within sight. We use SLAP's interaction prediction module to estimate the affordance over this object, with Detic's mask as an additional feature, and predict an interaction point on the object. The robot then uses hand-engineered standoff orientations to move to a head-on position with respect to the object, where we use the full SLAP system to predict the action trajectory.
The standoff orientations are used so that SLAP can be tested fairly within reasonable bounds of the state distribution it was trained for. Training data for all our skills is recorded from a very narrow range of robot positioning with respect to the objects (see Appendix D.1). This means the action prediction module's sense of action orientation is not robust to large rotational variations of the object's position around the robot's egocentric frame. The interaction prediction module, on the other hand, is very robust, as it does not need to consider directionality, just the local structure of the object and the related affordance. See Fig. 8 for details on how regions are assigned to specific objects, and the related standoff orientations provided to the robot so that it is always facing the object head-on after estimating the interaction point. Given the predicted interaction point p_i, a pre-determined standoff orientation vector vec_standoff, and a pre-determined 3D standoff distance vector dist_standoff, the robot moves to orientation vec_standoff and position:
pos_next = p_i + dist_standoff ∗ (−1 ∗ vec_standoff)
The orientation vector is of the form (1, 0, 0) or (0, 1, 0) in this evaluation.
[Figure 8 diagram (map labels: counter, table, table, human for handover, bowl and bottle, drawer, chair).]
Figure 8: Diagram showing where immovable objects were placed in the environment (note that the chair can move anywhere within the blue region). The three colored regions signify the placement assignment for different artifacts involved in the tasks. The circular symbol signifies the robot's pre-determined orientation, where the beak represents where the robot will be facing. Position for the robot's placement is determined based on the predicted p_i.

D Experimental Setup for Skills
Here we refer to atomic skills learned by SLAP as simple tasks or "tasks". This allows us to discuss corresponding "actions" that are defined in terms of the relative offset from the interaction point.

D.1 In vs. Out of Distribution
We used a number of objects for our manipulation experiments, which included both in- and out-of-distribution objects (see Fig. 9 and Fig. 10). One goal of SLAP is to show that our methods generalize much better than others to different types of scenes and different levels of clutter. We also randomized the objects with seen and unseen clutter around them, as well as placed them in unseen environments. Fig. 12 and Fig. 13 show the extent of variation captured in test scenes against training scenes for the Stretch experiments. Fig. 11 shows a range of test scenes from the Franka evaluations.

D.2 Skill Definitions and Success Conditions
Every real-world task scene had a sub-sample of all within-distribution objects.
Figure 9: Within-distribution objects used at training time and out-of-distribution objects introduced during testing in our experiments.
Figure 10: Seen objects and unseen distractors used in longitudinal experiments with Stretch.
Figure 11 (panels: close drawer, close drawer, open bottom drawer, pick lemon from basket): Snapshot of test scenes from Franka evaluations to show the range of variation at test time.
The following describes the skills and their success conditions for the Franka experiments:
1. Open the top drawer
• Task: Grab the small loop and pull the drawer open. The drawer configuration within training data is face-first with slight orientation changes.
• Action labeling: Approach the loop, grab the loop, pull the drawer out.
• Success metric: When the drawer is open by 50% or more.
2. Open the bottom drawer
• Task: Grab the cylindrical handle and pull the drawer open. The drawer configuration within training data is face-first with slight orientation changes. Note that a significantly different grasp is required than for the top drawer.
• Action labeling: Approach the handle, grab it, pull the drawer out.
• Success metric: When the drawer is open by 50% or more.
3. Close the drawer
• Task: This task is unqualified, i.e. the instructor does not say whether to close the top or bottom drawer; instead the agent must determine which drawer needs closing from its state and close it. Align the gripper with the front of whichever drawer is open and push it closed. The training set always has only one of the drawers open, in a front-facing configuration with small orientation changes.
• Action labeling: Approach the drawer from the front, make contact, push until closed.
• Success metric: When the drawer is closed to within 10% of its limit, or when the arm is maximally stretched out to its limit (when the drawer is kept far back).
4. Place inside the drawer
• Task: Approach an empty spot inside the drawer and place whatever is in hand inside it.
• Action labeling: Top-down approach pose on top of the drawer, move to make contact with the surface and release the object, move up for retreat.
• Success metric: The object should be inside the drawer.
5. Pick lemon from the basket
• Task: Reach into the basket where the lemon is placed and pick up the lemon.
• Action labeling:
• Success metric: The lemon should be in the robot's gripper.
• Considerations: Since the roll-out is open-loop and a lemon is spherical in nature, a trial was assigned success if the lemon rolled out of hand upon contact after the 2nd action. This was done consistently for both PerAct and SLAP.
6. Place in the bowl
• Task: Place whatever is in the robot's hand into the bowl receptacle.
• Action labeling: Approach action on top of the bowl, interaction action inside the bowl with gripper open, retreat action on top of the bowl.
• Success metric: The object in hand should now be inside the bowl.
7. Place in the basket
• Task: Place the object in the robot's hand into the basket.
• Action labeling: Approach action on top of the free space in the basket, interaction action inside the basket with gripper open, retreat action on top of the basket.
• Success metric: The object is inside the basket.
8. Pick up the bottle
• Task: Pick up the bottle from the table.
• Action labeling: Approach pose in front of the robot with open gripper, grasp pose with the gripper enclosing the bottle and closed, retreat action at some height from the previous action with the gripper closed.
• Success metric: The bottle should be in the robot's gripper, off the table.
Notably, success for opening drawers is counted if the drawer is 50% open after execution; this is because sometimes the drawer is too close to the robot's base for it to open fully with a fixed-base Franka arm.
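As an illustration of how success conditions like the drawer metrics above can be checked programmatically, here is a hedged sketch. The joint-state inputs are assumed to come from the robot or a scene estimate; this is not code from the paper.

```python
def drawer_open_success(joint_value, joint_min, joint_max, threshold=0.5):
    """True if the drawer's prismatic joint is open by at least `threshold`
    (50%) of its range, matching the open-drawer success metric above."""
    opening = (joint_value - joint_min) / max(joint_max - joint_min, 1e-6)
    return opening >= threshold

def drawer_closed_success(joint_value, joint_min, joint_max, tolerance=0.1):
    """True if the drawer is within `tolerance` (10%) of its fully-closed limit."""
    opening = (joint_value - joint_min) / max(joint_max - joint_min, 1e-6)
    return opening <= tolerance
```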
Figure 12: Pour into bowl task, showing the variability and out-of-domain distribution covered by test against training samples. Top row: training scenes. Note that the bowl was always placed somewhere on this particular region (right of the sink) on the counter. Bottom row: test scenes. Note the bowl is surrounded by unseen clutter, placed in a novel unseen environment, and at relatively different positioning with respect to the camera and robot. A green boundary signifies a successful episode, red a failed episode.
Figure 13: Close drawer task, showing the variability and out-of-domain distribution covered by test against training samples. Top row: training scenes. Note the absence of any clutter and the narrow range of relative positioning of the drawer with respect to the camera and robot. Bottom row: test scenes. Note the presence of objects used in other tests in the same frame. A green boundary signifies a successful episode, red a failed episode.
The following describes the skills undertaken in the Stretch experiments and their success conditions:
1. Open the drawer
• Task: Grab the handle of the drawer and pull the drawer open.
• Action labeling: Approach the handle, grab the handle, pull the drawer out and open the grasp.
• Success metric: When the drawer is open by 100%.
2. Close the drawer
• Task: Align with the surface of the drawer's face and push the drawer closed.
• Success metric: When the drawer is closed within 10% of the fully-closed configuration.
3. Pour into bowl
• Task: The skill starts with a cup filled with candies already in the robot's gripper. Align the cup with the bowl and turn the cup in a pouring motion.
• Success metric: When ≥50% of the candies are in the bowl.
4. Take bottle
• Task: Approach the bottle with the gripper orientation in the right configuration, grasp the bottle, lift the bottle up and retract while keeping the grasp.
• Success metric: The bottle is in the robot's gripper in a stable configuration at the end of execution.
5. Handover to person
• Task: Approach the hand of the person with the object in hand, align with the hand's surface and release the object, finally retracting the gripper back.
• Success metric: The object is in the human's hand at the end of execution.
Note that we count success for pouring as ≥50% of the candies because we are comparing this task to pouring a liquid. Liquid would pour out completely in the intended final configuration due to different dynamics.
Figure 14 (examples: place in drawer, pick up bottle, open top drawer): Examples of out-of-distribution predictions made by π_I. We show that it is able to handle heavy clutter around the implicated object to predict interaction points. Note that the prediction for bottle picking is sub-optimal in this example.

D.3 Language Annotations
In the following, we include the list of language annotations used in our experiments. Table 6 shows the language that was used to train the model; we are able to show some robustness to different language expressions. We performed a set of experiments on held-out, out-of-distribution language despite this not being the focus of our work; this test language is shown in Table 7.

D.4 Out-of-Distribution Results from SLAP
We show more results for the attention point predicted by π_I in Fig. 14. For the placement task, the agent has never seen a heavily cluttered drawer interior before, but it is able to find flat space that indicates placement affordance. For the bottle picking task, this sample has a lemon right next to the bottle, which changes the shape of the point cloud around the bottle. We see that π_I is able to find an interaction point, albeit with placement different from the expert and lower down on the bottle. Similarly, the open-top-drawer sample has more heavy clutter on and around the drawer to test robustness. Fig. 15 shows the prediction and generated trajectory for picking up a previously unseen bottle. Note that while the models are able to detect the out-of-distribution bottle, the trajectory actually fails because the bottle is much wider and requires more accuracy in grasping.
For the mobile manipulator domain, we observe SLAP performed better than vanilla PerAct on every count. Our hypothesis is that SLAP's better performance is due to the addition of semantic features, more efficient training, and higher resolution due to a non-grid point-cloud representation.
Figure 15 (frames shown over time): A generalization example of success for our model. The new bottle has the same shape as the within-distribution bottle but is much taller, different in color, and wider in girth. The model is able to predict the interaction site and a feasible trajectory around it. We note, though, that the execution of this trajectory was a failure; due to the wider girth of the bottle, the predicted grasp was not accurate enough to enclose the object.

D.5 Motion Planning Failures
Our evaluation system has a simple motion planner which is not collision-aware; as a result, we saw a number of task failures for both models. However, we note that the frequency of task failures due to motion planning problems was higher for PerAct. We think this is because PerAct predicts each action of the same task as an entirely separate prediction trial, while SLAP enforces continuity on the relative motions for the same task by centering them around the interaction point (see Fig. 15). That said, we also note that with a collision-aware motion planner, PerAct may not run into such issues as seen during our evaluations. However, the planner setting and conditions were the same across both models in these evaluations. The authors note in their own paper their heavy reliance on good motion planning solutions [2].

E Additional Analysis

E.1 Ablations
Hybrid vs Monolithic Architecture (Table-top). We train SLAP and PerAct such that they observe the same amount of data. SLAP outperforms PerAct on six of eight tasks when tested in in-distribution settings and five of eight tasks in out-of-distribution settings on Franka. PerAct performs equally well as our model for two of eight tasks on our in-distribution scenes. Similarly, for our "hard" generalization scenes, PerAct performed equally well in two cases, and actually outperformed SLAP when picking up a bottle. Under similar experimentation conditions, SLAP outperforms PerAct in all four tasks in cluttered scenes for the mobile manipulator environment on Stretch. In failure cases, π_R predicted the correct trajectory, but not with respect to the right part of the object.
Unseen Scene Generalization. We see a drop in the success rate for both PerAct and SLAP when tested on out-of-distribution settings. PerAct would often predict the correct approach actions, but then it would fail to grasp accurately. With SLAP, however, we saw that p_I was predicted fairly accurately, but the regressor would fail for out-of-distribution object placements, specifically because of bad orientation prediction. When π_I failed, it was because the position and orientation of the target object were dramatically different, and unseen distractors confused it. We see better results for SLAP in the Stretch setting due to the addition of semantic features from Detic.

E.2 Visualizing the Learned Attention
Since we use scores to choose the final interaction point, our classifier model is naturally interpretable, being able to highlight points of interest in a scene. We visualize this attention by selecting the points with the highest 5% of interaction scores given a language command l.
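A minimal sketch of this selection step is shown below; the array names are illustrative and the code is not taken from the paper.

```python
import numpy as np

def top_attention_points(points, scores, fraction=0.05):
    """Return the points with the highest `fraction` of interaction scores
    (for a given language command) for visualizing pi_I's attention.
    points: (N, 3) array; scores: (N,) array."""
    k = max(1, int(len(points) * fraction))
    idx = np.argsort(scores)[-k:]       # indices of the top-5% scoring points
    return points[idx], scores[idx]
```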
Figure 16 (top: place in drawer; bottom: place in bowl; frames shown over time): Examples of failure cases for our baseline, PerAct, for the "place in drawer" and "place in bowl" tasks. In the top example, the gripper is moved from the drawer's side towards the inside, instead of from the top as demonstrated by the expert. The gripper ends up pushing the drawer off to the side, as our motion planner is not collision-aware. Note that SLAP does not exhibit such behaviors, as π_R implicitly learns the collision constraints present in the demonstrated data. In the bottom example, each action prediction is disjointed from the previous one and semantically wrong.
Figure 17: An example out-of-distribution SLAP failure where an extreme sideways configuration of the drawer is paired with unseen distractors for the "open top drawer" skill. Note that the attention mask ranks other distractors in its top 5% and fails to choose an optimal interaction point.

E.3 Language Generalization
By using pretrained CLIP language embeddings to learn our spatial attention module π_I, our model can generalize to unseen language to some extent. We tested this by running an experiment where we evaluate performance on in-distribution scene settings, prompted by a held-out list of language expressions. We choose three representative tasks for this experiment and run 10 tests with 2 different language phrasings.

F Additional Related Work
We note some other related work in the larger language-conditioned, mobile manipulation domain that SLAP is situated in, but not as directly relevant.
Vision-Language Navigation. Similar representations are often used to predict subgoals for exploration in vision-language navigation [30, 31, 8, 51, 52]. HLSM builds a voxel map [30], whereas FiLM builds a 2D representation and learns to predict where to go next [31]. VLMaps proposes an object-centric solution, creating a set of candidate objects to move to [8], while CLIP-Fields learns an implicit representation which can be used to make predictions about point attentions in response to language queries [51], but does not look at manipulation. Similarly, USA-Net [52] generates a 3D representation with many semantic features, including affordances like collision. Such a representation can naturally be incorporated for collision-aware action plans at prediction time.

Task Name | Training Annotations
pick up the bottle | "pick up a bottle from the table"; "pick up a bottle"; "grab my water bottle"
pick up a lemon | "pick the lemon from inside the white basket"; "grab a lemon from the basket on the table"; "hand me a lemon from that white basket"
place lemon in bowl | "place the lemon from your gripper into the bowl"; "add the lemon to a bowl on the table"; "put the lemon in the bowl"
place in the basket | "place the object in your hand into the basket"; "put the object into the white basket"; "place the thing into the basket on the table"
open bottom drawer | "open the bottom drawer of the shelf on the table"; "pull the second drawer out"; "open the lowest drawer"
close the drawer | "close the drawers"; "push in the drawer"; "close the drawer with your gripper"
open top drawer | "open the top drawer of the shelf on the table"; "pull the first drawer out"; "open the highest drawer"
place in the drawer | "put it into the drawer"; "place the object into the open drawer"; "add the object to the drawer"
Table 6: Examples of language used to train the model.

Task Name | Held-Out Test Annotations
Pick up the bottle | "Grab the bottle from the table"; "Pick up the water bottle"
Open the top drawer | "Pull top drawer out"; "Open the first drawer"
Place into the drawer | "Add to the drawer"; "Put inside the drawer"
Table 7: Examples of out-of-distribution language annotations used for evaluation.
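To illustrate how the pretrained CLIP language embeddings discussed in E.3 can relate held-out phrasings (Table 7) to training annotations (Table 6), here is a small sketch. It assumes text embeddings have already been computed with a CLIP text encoder and is not part of the SLAP pipeline.

```python
import numpy as np

def nearest_task(query_embedding, train_embeddings, task_names):
    """Match a held-out phrasing to its closest training annotation by cosine
    similarity of precomputed text embeddings.
    query_embedding: (D,); train_embeddings: (N, D); task_names: list of length N."""
    q = query_embedding / np.linalg.norm(query_embedding)
    t = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    return task_names[int(np.argmax(t @ q))]
```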
3uwj8QZROL | Scaling Up and Distilling Down:Language-Guided Robot Skill AcquisitionHuy Ha1, Pete Florence2, and Shuran Song11Columbia University2Google DeepMindAbstract: We present a framework for robot skill acquisition, which 1) efficiently scaleup data generation of language-labelled robot data and 2) effectively distills this data downinto a robust multi-task language-conditioned visuo-motor policy. For (1), we use a largelanguage model (LLM) to guide high-level planning, and sampling-based robot planners(e.g. motion or grasp samplers) for generating diverse and rich manipulation trajectories.To robustify this data-collection process, the LLM also infers a code-snippet for thesuccess condition of each task, simultaneously enabling the data-collection process todetect failure and retry as well as the automatic labeling of trajectories with success/failure.For (2), we extend the diffusion policy single-task behavior-cloning approach to multi-tasksettings with language conditioning. Finally, we propose a new multi-task benchmarkwith 18 tasks across five domains to test long-horizon behavior, common-sense reasoning,tool-use, and intuitive physics. We find that our distilled policy successfully learned therobust retrying behavior in its data collection procedure, while improving absolute successrates by 33:2%on average across five domains. All code, data, and qualitative policyresults are available at our project website.Figure 1: Language-guided Skill Acquisition enables scalable robot learning. In the data generation stage, a LLMtakes as input task descriptions (a) and uses sampling-based robotic planners and privileged simulation information (b) toperform task-directed exploration. This enables the scaling up of language and task-success labeled dataset generation (c).In the second stage, the dataset is filtered for success and distilled down into a closed-loop language-conditionedvisuomotor policy for real world deployment (d).1 IntroductionHow can we scalably acquire robust, reusable, real-world manipulation skills? This question has been the driv-ing force behind extensive research in robot learning. Attempts in the field have focused on two primary aspects:First, how to scale up the data collection for a diverse range of manipulation skills, which involves effortssuch as improving the hardware [ 1,2] and software [ 3,4] which support demonstration collection, utilizationof non-robotics datasets [ 5,6], or trial-and-error explorations [ 7]. The second aspect of this question concernseffective learning from the collected data, which delves into exploring effective action representations [ 8–10]and policy formulations [ 11,12] that can robustly model the training data and generalize to novel scenarios.This paper proposes a new framework that provides a comprehensive solution for both aspects byleveraging language guidance, while using no expert demonstrations or reward specification/engineering.We contribute two key components with our framework:•Scaling Up Language-Guided Data Generation: Our data-collection policy is a large language model(LLM) which has access to a suite of 6DoF exploration primitives ( i.e., sampling-based robot planners andutilities). Given an input task description, this policy first simplifies the task by recursively decomposingit into subtasks, resulting in a hierarchical plan ( i.e., task tree). 
Next, this plan is grounded into a sequence of 6DoF exploration primitives, which generates diverse robot trajectories for the task. Finally, the data collection policy verifies the trajectories' success with an inferred success function and retries the task until it succeeds. This verify & retry step not only improves the data-collection policy's success, but also adds robot experience on how to recover from failure, an important trait for downstream policy distillation. This data generation approach is scalable, enabling significantly more efficient autonomous task-directed exploration than unguided alternatives (i.e., reinforcement learning) while not being limited by the lack of low-level understanding of the LLM-only solution.
•Distilling Down to Language-Conditioned Visuomotor Policy: We distill these robot experiences into a visuo-linguo-motor policy that infers control sequences from visual observations and a natural language task description. To enable effective learning of high entropy, diverse robot trajectories, we extend the diffusion policy [12] to handle language-based conditioning for multi-task learning. This allows the learned policy to be reused and recomposed through language-based planners. We found that our distilled policy successfully learned the robust retrying behavior from its data collection policy, while improving upon its absolute success rate across five domains by 33.2%. Further, we demonstrate that our policy directly transfers to the real world without fine-tuning using domain randomization.
Our framework combines these two components to get the best of both worlds – leverage LLM's common-sense reasoning abilities for efficient exploration while learning robust and re-usable 6DoF skills for real-world deployment. In summary, the key contribution of this paper is a new framework for visuo-linguo-motor policy learning that is enabled by three novel components:
•A new language-guided data collection framework that combines a language-based task planner with 6DoF robot utilities (e.g. motion planning, grasp sampling).
•A new formulation of diffusion-based policy that effectively learns multi-task language-conditioned closed-loop control policies.
•In addition to our algorithmic contributions, we also contribute a new multi-task benchmark that includes 18 tasks across five domains, requiring long-horizon (800 control cycles), common sense, tool-use, and intuitive physics understanding – capabilities lacking in existing manipulation benchmarks.
2 Related Works
Scaling visuo-linguo-motor data. In learning vision-and-language-conditioned motor policies for real-world deployment [9, 10, 13–18], one of the most important questions is how to scale up "robot-complete data" – data that has robot sensory inputs (e.g. vision), action labels (e.g. target end-effector & gripper commands), and task labels (e.g. language description, success). The most prevalent paradigm is to use humans to annotate both actions (e.g. teleoperation) and language [9, 10, 13–18]. When providing action labels, humans can either provide task-specific [9, 10, 15, 18], or task-agnostic ("play") data [13, 14, 16, 19]. A primary limitation, however, is that data scalability is human-limited.
Other prior works have proposed strategies to enable more-autonomously-scalable data. To scale language annotation, prior works study using visual-language models [20, 21], or procedurally post-hoc provided in simulation [19].
To scale action labels, methods study how to use autonomous sub-optimal policies, from random [7] to learned [22] policies. Human egocentric videos [6, 23, 24] have also been shown to be relevant to robot learning [5, 25], but are not robot-complete (they lack action labels) and require cross-embodiment transfer. Towards unsupervised exploration, prior works have also investigated evolving environments [26, 27] and embodiments [28], automatic task generation [29], leveraging language guidance [30, 31] and world-model error [32], but have not been demonstrated to scale to 6 DoF robotic skill learning. While these approaches reduce human effort, they are still limited in optimality, generality, and/or completeness of robot data labels.
Another option for the autonomous data collection policy is to use a model-based policy, e.g. task and motion planning (TAMP) [33]. Our approach extends such methods in terms of flexibility and task generality by leveraging LLMs' common-sense knowledge. However, in contrast to recent works which use LLMs as the final policy [34–40], we use the LLM-based planner as a suboptimal data-collection policy. We then distill only successful trajectories into an observable-information [41–43] policy, allowing the distilled policy to improve upon its LLM data collection policy's performance.
Policy Representations and Multi-task Policy Distillation. One primary question in visuo-motor learning [44] has been how to represent the policy for effective learning, i.e. to enable high-precision, multi-modal robot behavior [2, 11, 12, 45, 46]. Another related question has been how to best train multi-task policies [47, 48], including those conditioned on language [9, 10, 13, 15, 16, 18]. Our work presents the novel formulation of bringing diffusion-based [49, 50] policies [12] into the language-conditioned [51, 52] visuomotor domain. Additionally, prior works in multi-task language-conditioning typically focus on cloning policies from experts, whereas we study distilling data from a success-filtered suboptimal policy. Success-filtering [11, 53] can be viewed as the simplest form of offline RL [54].
Figure 2: Benchmark. We validate our approach on a new multi-task benchmark addressing challenging long-horizon tasks (i.e., 800 control cycles) requiring language understanding (e.g., put [object] to [top] drawer), common sense knowledge (e.g., send a package for return requires raising the mailbox flag), tool-use (e.g., catapult), and intuitive physics (e.g., balance the bus). The tasks are best viewed on our project website.
3 Approach
We propose a new framework for robot learning that performs automatic data collection and policy learning from only a task description. Our design is grounded on four key observations:
•We recognize the importance of random exploration in reinforcement learning, but aim to not be constrained by its inefficiency for long-horizon, sparse reward tasks.
•We acknowledge the usefulness of LLMs' common-sense and zero-shot capabilities, but believe language is not by itself the ideal representation for robust, rich, and precise robotic manipulation.
•We are inspired by the effectiveness of robotic planning methods, e.g.
TAMP, but wish to be flexible to novel tasks and domains and non-reliant on ground truth state during policy inference.
•We aim to achieve the simplicity and effectiveness of behavior cloning in distilling collected robot experience into a policy for real-world deployment, while side-stepping the requirement for costly human demonstrations or play data collection.
Using no human demonstration or manually specified reward, our framework combines the strengths of these four areas into a unified framework for both efficient task-directed exploration and multi-task visuo-linguo-motor policy learning.
Method Overview. In the data generation phase, we use an LLM to recursively decompose (§3.1) tasks into a hierarchical plan (i.e., task tree) for exploration and ground the plan into sampling-based robot utilities and motion primitives (§3.2). Next, the LLM infers success-detection functions for each task in the plan (§3.3), providing success-labeling. This autonomous data generation process outputs a replay buffer of task-directed exploration experience, labeled with language descriptions and success labels. In the training phase (§3.4), we filter this data for success according to the LLM-inferred success condition and distill it into a multi-task vision-and-language-conditioned diffusion policy [12].
3.1 Simplify: Task Planning and Decomposition
Given a task description, the first step is to generate a high-level task plan. To improve the flexibility to work with any tasks and 3D assets, we opted for an LLM-based planner to leverage their common-sense and zero-shot reasoning skills. Unlike classical TAMP planners, our framework does not require domain-specific engineering and transition function design to work with new tasks.
Concretely, our recursive LLM planner takes as input the task description and the simulation state, and outputs a plan in the form of a task tree (Fig. 3a). To do so, the LLM first checks whether the task description involves the robot interacting with multiple objects or only one object. For instance, "move the package into the mailbox" involves opening the mailbox before picking up the package and putting the package in, and should be considered a multi-object task. Meanwhile, "with the mailbox opened, move the package into the mailbox" should be a single-object task. For the base case of single-object tasks, we prompt the LLM for which object part name to interact with. For the case of multi-object tasks, we prompt the LLM to decompose the task into subtasks, and recurse down each subtask.
Figure 3: Language-Driven Robot Data Generation takes as input the task description and simulation state, and outputs a replay buffer, labelled with language descriptions and success. It starts by using an LLM to simplify tasks recursively (a) until the task involves only one object, resulting in a hierarchical exploration plan. Next, the plan is grounded (b) into a sequence of 6 DOF exploration primitives (e.g. grasp samplers, motion planners, etc.) and rolled out in simulation to give an unlabelled robot trajectory. Finally, an LLM infers a success-function code snippet, and uses it to verify (c) and label it with succeeded or failed. If the trajectory failed, the LLM retries the exploration plan with a different random seed (e.g. a different grasp pose from the grasp sampler). If the robot succeeds or runs out of time, the labeled trajectory is returned.
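Before turning to grounding, here is a minimal sketch of the recursive decomposition in §3.1. The `llm_*` helpers are crude stand-ins for prompted LLM calls, not the paper's actual interface.

```python
# Placeholder "LLM" calls: in practice these are prompted language-model queries
# that also condition on the simulation state.
def llm_is_single_object(task, sim_state):          # does the task touch only one object?
    return " and " not in task and " then " not in task   # crude stand-in
def llm_decompose(task, sim_state):                 # split a multi-object task into subtasks
    return [t.strip() for t in task.replace(" and ", " then ").split(" then ")]
def llm_pick_object_part(task, sim_state):          # which object part to interact with
    return task.split()[-1]                          # crude stand-in, e.g. "mailbox"

def build_task_tree(task, sim_state=None):
    """Recursively decompose a task description into a tree whose leaves are
    single-object subtasks annotated with an object part to act on."""
    if llm_is_single_object(task, sim_state):
        return {"task": task, "part": llm_pick_object_part(task, sim_state), "children": []}
    return {"task": task, "part": None,
            "children": [build_task_tree(t, sim_state) for t in llm_decompose(task, sim_state)]}
```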
3.2 Ground: Compiling a Plan into Robot Utilities
With the generated task tree (§3.1), the next step is to ground the high-level plan into physical actions. Here, the choice of the low-level robot API critically defines the system's capability and, therefore, becomes a key differentiating factor between different systems. In principle, there are three desired properties we want to see in the action space design:
•Flexibility. Planar actions [10, 37] aren't flexible enough to manipulate prismatic and revolute joints.
•Scalable. Namely, actions should not require human demonstrations to acquire [9, 10, 13–16, 35].
•Language-friendly. While joint sequences can encode any action, they are not language-friendly.
We propose to ground the LLM's plan with API calls into a set of robot utility functions, which include a sampling-based motion planner, a geometry-based grasp and placement sampler, and motion primitives for articulated manipulation. We refer to these utilities as 6 DOF Exploration Primitives (Fig. 3b) because, by virtue of being pseudo-random, the sampling-based utilities generate diverse robot trajectories, enabling effective exploration for rich 6 DoF manipulation settings. For instance, our grasp and placement samplers sample uniformly amongst all points in the object part's point cloud to find good grasp and placement poses, respectively, which are used as input into a rapidly-exploring random trees [55] motion planner that samples uniformly in joint space. This results in diverse grasps, placements, and motion trajectories connecting grasps and placements.
For each leaf node in the inferred task tree (§3.1), the grounding process takes as input the node's task description (e.g. "open the mailbox"), its associated object part name (e.g. "mailbox lid"), and the simulation state, and outputs a sequence of 6 DoF Exploration Primitive API calls. Using the object part name, we can parse the object's kinematic structure from the simulation state and handle articulated and non-articulated (i.e., rigid, deformable) objects separately. For non-articulated objects, the LLM is prompted to choose the pick & place object names, used to sample grasp and placement pose candidates. For articulated objects (with either revolute or prismatic joints), the leaf node's associated object part name is used to sample a grasp candidate followed by a rotation or translation primitive conditioned on its joint parameters (i.e., joint type, axis, and origin).
Exploration Plan Rollout. Each node in the exploration plan is grounded only when it is being executed, where the order of execution follows a pre-order tree traversal. By keeping track of the subtask's state, sub-segments of the robot trajectory can be labelled with the subtask's description, thereby providing dense and automatic text labels for the trajectory. For instance, all actions taken during the inferred subtask "open the mailbox" can be labeled with both the subtask's description "open the mailbox" and the root task description "move the package into the mailbox".
Since grounding happens only when a task node is visited, each node's grounding process is independent of the other leaf nodes, depending only on the simulation state when it is evaluated. While this simplifies planning significantly, it also means that failed execution can occur. For instance, a grasp candidate may render all placement candidates infeasible.
3.3 Verify & Retry: Robustifying the Data Collection Policy
Recall, the planning and grounding step can fail, especially when we consider long-horizon tasks. To address this, we propose a verify & retry (Fig. 3c) scheme, which uses environment feedback to detect failed execution.
Verify. For each task, the LLM infers a success function code snippet given the task description, the simulation state, and API functions for querying the simulation state (e.g., checking contact or joint values, etc.). This amounts to prompting the LLM to complete a task success function definition that outputs a boolean value, indicating task success. For instance, given the task "raise the mailbox flag", the LLM's inferred code snippet should check whether the mailbox's flag hinge is raised (Fig. 3c, highlighted green).
Retry. When a trajectory is labeled failed, the robot retries the same sequence of robot utilities with a different random seed (i.e., for the sampling-based robotic utilities) without resetting the simulation state until the task succeeds. For instance, in the bus balance task (Fig. 2, top left), the robot would repeatedly try different grasp and place candidates until the bus is balanced. In the tree traversal process (§3.2), nodes only yield execution to their parent task when the node's inferred success condition returns true. This design not only leads to higher success rates in data generation but also provides useful demonstrations on how to recover from failure. In the output replay buffer, the only failed trajectories are ones which timed out or led to invalid states (e.g. object dropped on the floor).
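For illustration, an inferred success function for "raise the mailbox flag" might look like the sketch below. The `SimState` class is a hypothetical stand-in for the privileged simulation-state API exposed to the LLM, not the benchmark's actual interface.

```python
class SimState:
    """Minimal stand-in for the privileged simulation-state API (hypothetical)."""
    def __init__(self, joints):                 # joints: name -> (value, lo, hi)
        self.joints = joints
    def get_joint_value(self, name): return self.joints[name][0]
    def get_joint_range(self, name): return self.joints[name][1:]

def raise_mailbox_flag_success(sim: SimState) -> bool:
    """Success if the mailbox's flag hinge is near its raised joint limit."""
    angle = sim.get_joint_value("mailbox_flag_hinge")
    lo, hi = sim.get_joint_range("mailbox_flag_hinge")
    return (angle - lo) / (hi - lo) > 0.9
```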
3.4 Language-conditioned Policy Distillation
Figure 4: Language-Conditioned Policy Distillation. The policy takes as input a task description, two RGB camera views, and gripper proprioception data, and outputs a sequence of gripper poses and closing commands.
We extend diffusion policy [12], a state-of-the-art approach for single-task behavior cloning, to the multi-task domain by adding language conditioning. This policy takes as input a task description CLIP [56] feature, proprioception history, and visual observations, and outputs a sequence of end-effector control commands. Following Robomimic [4]'s findings, we use a wrist-mounted view in addition to a global (workspace) view to help with tasks requiring precise manipulation. We use their ResNet18-based [57] vision encoders, one for each view. We found that using only the latest visual observation along with the full observation horizon of proprioception maintains the policy's high performance while reducing training time. When used in conjunction with the DDIM [58] noise scheduler, we found that we could use a 10× shorter diffusion process at inference (5 timesteps at inference, 50 timesteps at training) while retaining comparable performance. Quantitatively, when using a 10-dimensional action space (3 for position, 6 for rotation using the upper rows of the rotation matrix, and a gripper close command), our policy can be run at 35 Hz on an NVIDIA RTX 3080.
4 Evaluation
Table 1: Benchmark Suite.
Domain | Complex geometry | Articulation | Common sense | Tool use | Multi-task | Long horizon
Balance | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
Catapult | ✗ | ✓ | ✓ | ✓ | ✓ | ✗
Transport | ✓ | ✗ | ✗ | ✗ | ✗ | ✗
Mailbox | ✗ | ✓ | ✓ | ✗ | ✗ | ✓
Drawer | ✓ | ✓ | ✗ | ✗ | ✓ | ✓
Our experiments try to validate two questions: 1) Can our data generation approach efficiently perform task-directed exploration? 2) Can our policy learning approach effectively distill a multi-modal, multi-task dataset into a generalizable and robust visuo-linguo-motor policy?
Our Benchmark contains 18 tasks across 5 domains (Fig. 2, Tab. 1), with the following properties:
•6DoF & articulated manipulation, for dealing with complex object geometry and articulation.
•Geometry Generalization. In our bin transport domain, the robot must generalize its bin transport skill to unseen object instances, with novel shapes, sizes, and colors.
•Intuitive physics. Robots should understand the physical properties of the world and use this knowledge to perform tasks. In the bus balance domain, the robot needs to learn the precise grasping and placement to balance a large bus toy on a small block. In the catapult domain, where the block is placed along the catapult arm determines how far the block will be launched, and, thus, which bin (if any) the block will land in.
•Common-sense reasoning & Tool-use. A natural language task description is user-friendly but often under-specifies the task. Common sense can help to fill in the gaps. In the mailbox domain, given the task "send the package for return", the robot should understand that it not only needs to put the package inside, but also raise the mailbox flag to indicate that the package is ready for pickup. In the catapult domain, the robot needs to understand that pressing the catapult's button will activate the catapult, and that the block needs to be placed on the catapult arm to be launched.
•Multi-task conditioning. Given the same visual observations but a different task description, the robot should perform different and task-relevant actions. The catapult domain has 3 tasks for three target bins, and the drawer domain has 12 tasks.
•Long horizon behaviour. Our longest horizon domain, mailbox, takes at least 4 subtasks to complete (open the mailbox, put the package in the mailbox while it is opened, close the mailbox, then raise the mailbox flag), which can require up to 800 control cycles. In the drawer domain, the robot needs to open the drawer, move the object into the drawer, then close it, which takes about 300 control cycles.
Figure 5: High Entropy yet Precise Language-Guided Action Sequences. Running the pseudorandom language-conditioned diffusion process with different seeds on the same observations yields language-consistent (a-c, different colors for different task descriptions), high-entropy actions when possible (a-f, object grasping, transports, & placements) and precise actions when necessary (d, narrow mailbox with large package). Further, domain randomization enables a simulation-trained policy (e) to generalize to the real world (f).
The benchmark is built on top of the MuJoCo [3] simulator, using assets from the Google Scanned dataset [59, 60]. We use a table-top manipulation set-up with a 6DoF robot arm. Task success in evaluation is a manually designed function, instead of the LLM-generated function used for data collection.
Metrics. We report the success rates (%) averaged over 200 episodes in Table 2, a task completion efficiency plot in Fig. 6, and qualitative results in Fig. 5. If a domain has multiple tasks then we report the average performance of all tasks. We also compare different LLMs in Table 4 (10 samples per task) and investigate the sources of error in our system for the mailbox domain in Table 3 (200 trials per execution).
Data Generation Baselines. Code-as-Policy [37] is a state-of-the-art approach for using an LLM directly as a robot policy by making state (e.g. query present objects) and action primitive API calls to a robot. Given an LLM-inferred code string, they execute the snippet in an open-loop fashion. Crucially, in their table-top manipulation setting, they assume access to planar action primitives.
Thus, we introduce the following baselines, which build on top of Code-as-Policy and each other as follows:
•LLM-as-Policy (2D): Similar to Code-as-Policy using planar pick-and-place, but we use ground truth object segmentation instead of their off-the-shelf object detectors [61, 62].
•(+) 6 DOF robot utils: Builds on top of the previous baseline by adding access to 6 DOF robot utilities for grasping, placement, motion planning, and articulated manipulation.
•(+) Verify & Retry: Adding to the previous baselines, this baseline uses the LLM's predicted success condition to label trajectories and retry failed ones. Since the robot utilities involve pseudo-random samplers (e.g. RRT, grasp sampling), retrying the task means running these samplers again using the pseudo-random state and environment state from where the failed trajectory left off. Since we use this approach as our data generation policy, it also serves as an ablation of our approach.
Policy Distillation Ablations. We compare against BC-Z [15]'s single-task policies, which do not use FiLM conditioning (used in their bin emptying and door opening tasks). To understand the effects of our policy learning design decisions in the single-task regime, we fix training time and dataset size (2 days using at least 500 successful trajectories), and provide the following ablations:
•Action Generation: Instead of using diffusion processes conditioned on the policy input embedding to decode actions, it is typical to use multi-layer perceptrons. Following Jang et al. [15], we use one MLP with two hidden layers and ReLU activations for end-effector position, one for the orientation, and another for the gripper command. This standard policy architecture is deterministic, and is trained with a mean-squared-error loss for pose and a binary cross-entropy loss for the gripper command.
•Action Space: Besides our absolute end-effector pose action space, delta-action and velocity control spaces are another popular action space choice [4, 15, 63–65]. We also ablate BC-Z's execution action horizon (Exec) while keeping their original prediction horizon (Pred).
•Observation Encoder: All approaches encode images using a ResNet18 [57] architecture. Although the original architecture was designed with an average pooling layer, it is typical for robotic policies to use a spatial softmax pooling [44] layer instead.
•Data usage: No-Retry trains on successful trajectories generated from the data generation approach without Verify & Retry, so it does not observe any recovery behavior.
4.1 Data Collection Policy Evaluation
Table 2: Success Rates (%) for data generation (top; planar and 6DoF variants) and distillation approaches (bottom) over 200 trials.
Approach | Balance | Catapult | Transport | Mailbox | Drawer | Average
LLM-as-Policy (2D) | 28.0 | 33.3 | 21.5 | 0.0 | 0.0 | 27.6
(+) 6DoF Robot Utils | 5.5 | 2.5 | 35.0 | 0.0 | 1.3 | 8.8
(+) Verify & Retry | 45.0 | 7.3 | 82.0 | 3.0 | 31.8 | 33.8
Distill No Retry | 67.5 | 38.5 | 32.5 | 0.0 | 22.7 | 32.2
Distill Ours | 79.0 | 58.3 | 80.0 | 62.0 | 55.8 | 67.0
6DoF exploration is critical. First, we verify different approaches' ability to perform and explore in 6DoF, which is crucial for general manipulation. When 6DoF exploration is introduced, we first observe a drop in the average success rate for simple tasks that could be accomplished with planar actions (Balance, Transport, Tab. 2). However, this ability is critical for exploring complex tasks, providing data to improve upon in the later distilling stage. In particular, we observed that 6DoF actions are important for grasping diverse objects with complex geometry (Transport, Tab. 2), and manipulating articulated objects (Drawer, Mailbox, Tab. 2).
Table 3: Sources & Propagation of Error. Accuracy (%) of planning and verification, and execution success rate (%), for each mailbox subtask.
Subtask                   Planning  Verify  Execution
Open mailbox                 100      100     43.5
Put package in mailbox       100      100     28.5
Raise mailbox flag           100      100     62.0
Close mailbox                100      100     94.2
Moreover, 6DoF exploration also helps in diversifying the data collection strategy, which the later distillation stage can then improve upon. For example, in the catapult domain, LLM-as-Policy (2D) is only able to solve one of three possible goals (the closest bin) using a deterministic strategy. However, it provides no useful data for learning the other two goals, making it a poor data-collection policy. In contrast, incorporating 6 DOF robot utilities achieves lower but non-zero average success rates in all bins (16.3%, 3.3%, and 2.2%, full table in appendix), which provides much better exploration data for distillation.
Verify & Retry always helps. In the verify & retry step, the LLM retries all tasks until they are successful. This simple addition improves performance in all domains, with 2×, 3×, 8×, and 13× improvements in the transport, catapult, balance, and drawer domains, respectively. Without this crucial step, we observe a 0.0% success rate in the mailbox domain, underscoring the difficulty of flawlessly executing long sequences of 6 DOF actions, and the importance of recovery after failure.
Table 4: LLM Evaluation.
Model    Size   Planning  Success
LLAMA2    7B      42.0      10.0
LLAMA2   13B      62.0      48.3
GPT3     175B     82.0      91.1
Language Model Scaling. In addition to the final task success, we provide a more detailed analysis of planning and success-condition inference accuracy in Tab. 4. We evaluate the proprietary GPT3 [66] (175B, text-davinci-003) and the open LLAMA2 [67] (7B and 13B). We found that the Llama models struggle in complex planning domains because they do not follow the instructions provided in the prompts. For instance, in the drawer domain, both models fail to account for drawer opening and closing. However, we observe an upwards trend with respect to Llama model size, with the 13B model outperforming the 7B model by +20.0% and +38.3% in planning and success verification accuracy, respectively.
4.2 Distilled Policy Evaluation
Robustness In, Robustness Out. By filtering trajectories with the LLM’s inferred success condition, distilled policies inherit the robustness of their data collection policies while improving upon their success rates (+23.4% and +33.2% for no-retry and ours, Tab. 2). Since our distilled policy learned from a robust data collection policy, it also recovers from failures (e.g., failed grasps or placements) and continuously retries a task until it succeeds. Meanwhile, since the no-retry distilled policy learned from a data collection policy which did not retry upon failure, it is sensitive and brittle, leading to a 34.8% lower average success rate across all domains compared to ours (Tab. 2).
High Performance From Diverse Retry Attempts. Plotting how long policies take to solve the balance task (Fig. 6), we observed that our policy and its data collection policy continuously try a diverse set of grasps and placements after each failed attempt until they succeed.
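As a schematic of the verify-and-retry scheme discussed above — a sketch of the general idea, not the authors' implementation — the loop below retries each subtask with re-sampled plans from wherever the previous attempt left the environment, and keeps only episodes whose (LLM-inferred) success condition holds; `sample_and_execute` and `success_condition` are hypothetical placeholders.

```python
import random

def sample_and_execute(env, subtask, rng):
    """Hypothetical: run pseudo-random samplers (grasp sampling, RRT) and execute one attempt.
    Returns the list of (observation, action) steps taken during the attempt."""
    return [("obs", f"{subtask}-action-{rng.random():.3f}")]

def success_condition(env, subtask):
    """Hypothetical stand-in for the LLM-inferred success predicate (e.g., contacts, joints, poses)."""
    return random.random() < 0.5

def collect_with_verify_and_retry(env, subtasks, max_attempts=5, seed=0):
    rng = random.Random(seed)
    trajectory, all_succeeded = [], True
    for subtask in subtasks:
        for _ in range(max_attempts):
            # Re-running the samplers from the current (possibly post-failure) environment state
            # naturally yields diverse retry behavior in the logged data.
            trajectory += sample_and_execute(env, subtask, rng)
            if success_condition(env, subtask):
                break
        else:
            all_succeeded = False  # exhausted retries on this subtask
    # Only successful episodes are kept for distillation.
    return trajectory if all_succeeded else None
```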
This retrying behavior results in higher success rates as the policies are given more time, and is reflected in their monotonically increasing success-over-time curves.
Figure 6: Distilled Robustness. Success by time t (%) versus time t (seconds) for Ours (79.0%), Distill No Retry (67.5%), LLM-Policy 2D (28.0%), LLM-Policy 6DoF (5.5%), and LLM-Policy 6DoF+Retry (45.0%). Our policy inherits robust recovery-from-failure behavior from its data collection policy, while improving upon its success rate.
In contrast, baselines plateau after their first grasp/placement attempts. This highlights the synergy of two design decisions. First, the verify & retry step (§3.3) is crucial for demonstrating retrying behavior, but is by itself insufficient if each retried action is identical to the previous one. Instead, opting for a diffusion policy (§3.4) for learning from and generating high-entropy, diverse retry attempts (Fig. 5) is also essential for high performance.
Policy Learning Baselines. We investigate policy learning design decisions on the single-task balance domain, and remove language conditioning. While BC-Z found spatial softmax hurt their performance and opted for mean pooling, we observed that using spatial softmax improved performance by +5.0%. Further, we found that switching from delta to absolute action spaces improved success rates by +6.5% and +9.5% when using the MLP action decoder and our diffusion action decoder, respectively, confirming Chi et al. [12]’s findings. Lastly, we find that using our pseudo-random diffusion-based action decoder consistently outperforms deterministic MLP action mappings, regardless of other design decisions.
Table 5: Policy Learning Ablations. Action generation using diffusion models [50] robustly outperforms feed-forward models across other policy design decisions.
Method  Output Generation  Output Rep.  Exec  Pred  Pool     Proprio  Success (%)
BC-Z    FeedForward        Delta          1    10   Avg        ✗         0.0
        FeedForward        Delta          4    10   Avg        ✗        15.0
        FeedForward        Delta          8    10   Avg        ✗        18.5
Ours    FeedForward        Delta          8    16   Spatial    ✓        29.0
        FeedForward        Abs            8    16   Spatial    ✓        35.5
        Diffusion          Delta          8    16   Spatial    ✓        69.5
        Diffusion          Abs            8    16   Avg        ✓        76.5
        Diffusion          Abs            8    16   Spatial    ✓        79.0
Sim2Real Transfer. We evaluated a policy trained on domain-randomized synthetic data in a real-world transport task with five novel objects (Fig. 5e). Averaging across ten episodes per object, our policy achieved a 76% success rate, demonstrating the effectiveness of our approach in Sim2Real transfer.
4.3 Limitations
By using privileged simulation state information, the LLM can infer success conditions that use ground-truth contact, joint information, and object poses. This means our implementation of the data generation phase is limited to simulation environments, and our policy requires sim2real transfer. Further, our data generation method relies on existing 3D assets and environments, which presents a further opportunity for scaling up with assets from 3D generative models or procedural generation. Finally, while our approach’s dataset contains text labels and success labels for all subtasks, we have only evaluated its effectiveness in learning the root task. Learning from all subtasks and growing a robot’s set of learned, reusable sub-skills over time to enable compositional generalization is left for future work.
5 Conclusion
We proposed “Scaling Up and Distilling Down”, a framework that combines the strengths of LLMs, sampling-based planners, and policy learning into a single system that automatically generates, labels, and distills diverse robot-complete exploration experience into a multi-task visuo-linguo-motor policy.
The distilled policyinherits long-horizon behaviour, rich low-level manipulation skills, and robustness from its data collectionpolicy while improving upon performance beyond its training distribution. We believe that this integratedapproach is a step towards putting robotics on the same scaling trend as that of LLM development while notcompromising on the rich low-level control.AcknowledgmentsWe would like to thank Cheng Chi, Zeyi Liu, Samir Yitzhak Gadre, Mengda Xu, Zhenjia Xu, Mandi Zhaoand Dominik Bauer for their helpful feedback and fruitful discussions. This work was supported in part byGoogle Research Award, NSF Award #2143601, and #2132519. We would like to thank Google for theUR5 robot hardware. The views and conclusions contained herein are those of the authors and should not beinterpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.8References[1]S. Song, A. Zeng, J. Lee, and T. Funkhouser. Grasping in the wild: Learning 6dof closed-loop graspingfrom low-cost demonstrations. IEEE Robotics and Automation Letters , 5(3):4978–4985, 2020.[2] T. Z. Zhao, V . Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation withlow-cost hardware. arXiv preprint arXiv:2304.13705 , 2023.[3] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In 2012IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026–5033. IEEE, 2012.doi:10.1109/IROS.2012.6386109.[4]A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese, Y . Zhu, andR. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrations for robot manipulation.InarXiv preprint arXiv:2108.03298 , 2021.[5]S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual representation forrobot manipulation. arXiv preprint arXiv:2203.12601 , 2022.[6]K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang, M. Liu,X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 18995–19012, 2022.[7]J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4rl: Datasets for deep data-driven reinforcementlearning. arXiv preprint arXiv:2004.07219 , 2020.[8]J. Wu, X. Sun, A. Zeng, S. Song, J. Lee, S. Rusinkiewicz, and T. Funkhouser. Spatial action maps formobile manipulation. arXiv preprint arXiv:2004.09141 , 2020.[9]M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for robotic manipulation.InProceedings of the 6th Conference on Robot Learning (CoRL) , 2022.[10] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. InProceedings of the 5th Conference on Robot Learning (CoRL) , 2021.[11] P . Florence, C. Lynch, A. Zeng, O. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, andJ. Tompson. Implicit behavioral cloning. Conference on Robot Learning (CoRL) , November 2021.[12] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy: Visuomotorpolicy learning via action diffusion. In Proceedings of Robotics: Science and Systems (RSS) , 2023.[13] C. Lynch and P . Sermanet. Language conditioned imitation learning over unstructured data. arXivpreprint arXiv:2005.07648 , 2020.[14] S. Stepputtis, J. Campbell, M. Phielipp, S. Lee, C. Baral, and H. Ben Amor. Language-conditionedimitation learning for robot manipulation tasks. 
Advances in Neural Information Processing Systems ,33:13139–13150, 2020.[15] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z: Zero-shottask generalization with robotic imitation learning. In A. Faust, D. Hsu, and G. Neumann, editors,Proceedings of the 5th Conference on Robot Learning , volume 164 of Proceedings of Machine LearningResearch , pages 991–1002. PMLR, 08–11 Nov 2022. URL https://proceedings.mlr.press/v164/jang22a.html.[16] C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P . Florence.Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407 , 2022.[17] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitation learningover unstructured data. IEEE Robotics and Automation Letters , 7(4):11205–11212, 2022.[18] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman,A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprintarXiv:2212.06817 , 2022.[19] O. Mees, L. Hermann, E. Rosete-Beas, and W. Burgard. Calvin: A benchmark for language-conditionedpolicy learning for long-horizon robot manipulation tasks. IEEE Robotics and Automation Letters , 7(3):7327–7334, 2022.9[20] T. Xiao, H. Chan, P . Sermanet, A. Wahid, A. Brohan, K. Hausman, S. Levine, and J. Tompson.Robotic skill acquisition via instruction augmentation with vision-language models. arXiv preprintarXiv:2211.11736 , 2022.[21] J. Zhang, K. Pertsch, J. Zhang, and J. J. Lim. Sprint: Scalable policy pre-training via language instructionrelabeling. arXiv preprint arXiv:2306.11886 , 2023.[22] S. Nair, E. Mitchell, K. Chen, S. Savarese, C. Finn, et al. Learning language-conditioned robot behaviorfrom offline data and crowd-sourced annotation. In Conference on Robot Learning , pages 1303–1315.PMLR, 2022.[23] R. Goyal, S. Ebrahimi Kahou, V . Michalski, J. Materzynska, S. Westphal, H. Kim, V . Haenel, I. Fruend,P . Yianilos, M. Mueller-Freitag, et al. The” something something” video database for learning andevaluating visual common sense. In Proceedings of the IEEE international conference on computervision , pages 5842–5850, 2017.[24] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro,T. Perrett, W. Price, et al. Scaling egocentric vision: The epic-kitchens dataset. In Proceedings of theEuropean Conference on Computer Vision (ECCV) , pages 720–736, 2018.[25] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from” in-the-wild”human videos. arXiv preprint arXiv:2103.16817 , 2021.[26] R. Wang, J. Lehman, J. Clune, and K. O. Stanley. Paired open-ended trailblazer (poet): Endlesslygenerating increasingly complex and diverse learning environments and their solutions. arXiv preprintarXiv:1901.01753 , 2019.[27] M. Jiang, M. Dennis, J. Parker-Holder, J. Foerster, E. Grefenstette, and T. Rockt ̈aschel. Replay-guidedadversarial environment design. Advances in Neural Information Processing Systems , 34:1884–1897,2021.[28] J.-B. Mouret and J. Clune. Illuminating search spaces by mapping elites. arXiv preprintarXiv:1504.04909 , 2015.[29] K. Fang, T. Migimatsu, A. Mandlekar, L. Fei-Fei, and J. Bohg. Active task randomization: Learningvisuomotor skills for sequential manipulation by proposing feasible and novel tasks. arXiv preprintarXiv:2211.06134 , 2022.[30] Y . Du, O. Watkins, Z. Wang, C. Colas, T. Darrell, P . Abbeel, A. Gupta, and J. Andreas. 
Guidingpretraining in reinforcement learning with large language models. arXiv preprint arXiv:2302.06692 ,2023.[31] S. Mirchandani, S. Karamcheti, and D. Sadigh. Ella: Exploration through learned language abstraction.Advances in Neural Information Processing Systems , 34:29529–29540, 2021.[32] R. Mendonca, S. Bahl, and D. Pathak. Alan: Autonomously exploring robotic agents in the real world.arXiv preprint arXiv:2302.06604 , 2023.[33] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P . Kaelbling, and T. Lozano-P ́erez. Integratedtask and motion planning. Annual review of control, robotics, and autonomous systems , 4:265–293,2021.[34] W. Huang, P . Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extractingactionable knowledge for embodied agents. In International Conference on Machine Learning , pages9118–9147. PMLR, 2022.[35] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances.arXiv preprint arXiv:2204.01691 , 2022.[36] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P . Florence, A. Zeng, J. Tompson, I. Mordatch, Y . Chebotar,et al. Inner monologue: Embodied reasoning through planning with language models. In 6th AnnualConference on Robot Learning .[37] J. Liang, W. Huang, F. Xia, P . Xu, K. Hausman, B. Ichter, P . Florence, and A. Zeng. Code as policies:Language model programs for embodied control. In arXiv preprint arXiv:2209.07753 , 2022.10[38] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong,T. Y u, et al. Palm-e: An embodied multimodal language model. ICML , 2023.[39] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural language instructionsto feasible plans. arXiv preprint arXiv:2303.12153 , 2023.[40] I. Singh, V . Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg.Progprompt: Generating situated robot task plans using large language models. In 2023 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 11523–11530. IEEE, 2023.[41] A. Agarwal, A. Kumar, J. Malik, and D. Pathak. Legged locomotion in challenging terrains usingegocentric vision, 2022.[42] D. Seita, A. Ganapathi, R. Hoque, M. Hwang, E. Cen, A. K. Tanwani, A. Balakrishna, B. Thananjeyan,J. Ichnowski, N. Jamali, K. Y amane, S. Iba, J. F. Canny, and K. Goldberg. Deep imitation learning ofsequential fabric smoothing policies. CoRR , abs/1910.04854, 2019. URL http://arxiv.org/abs/1910.04854.[43] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning robust perceptivelocomotion for quadrupedal robots in the wild. Science Robotics , 7(62):eabk2822, 2022.[44] S. Levine, C. Finn, T. Darrell, and P . Abbeel. End-to-end training of deep visuomotor policies. TheJournal of Machine Learning Research , 17(1):1334–1373, 2016.[45] K. Hausman, Y . Chebotar, S. Schaal, G. Sukhatme, and J. J. Lim. Multi-modal imitation learningfrom unstructured demonstrations using generative adversarial nets. Advances in neural informationprocessing systems , 30, 2017.[46] N. M. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodeswith one stone, 2022.[47] T. Y u, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: A benchmarkand evaluation for multi-task and meta reinforcement learning. In Conference on robot learning , pages1094–1100. PMLR, 2020.[48] D. 
Kalashnikov, J. V arley, Y . Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine, andK. Hausman. Mt-opt: Continuous multi-task robotic reinforcement learning at scale. arXiv preprintarXiv:2104.08212 , 2021.[49] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning usingnonequilibrium thermodynamics. In International Conference on Machine Learning , pages 2256–2265.PMLR, 2015.[50] J. Ho, A. Jain, and P . Abbeel. Denoising diffusion probabilistic models. Advances in Neural InformationProcessing Systems , 33:6840–6851, 2020.[51] C. Saharia, W. Chan, S. Saxena, L. Li, J. Whang, E. L. Denton, K. Ghasemipour, R. Gontijo Lopes,B. Karagol Ayan, T. Salimans, et al. Photorealistic text-to-image diffusion models with deep languageunderstanding. Advances in Neural Information Processing Systems , 35:36479–36494, 2022.[52] R. Rombach, A. Blattmann, D. Lorenz, P . Esser, and B. Ommer. High-resolution image synthesis withlatent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 10684–10695, 2022.[53] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P . Abbeel, A. Srinivas, and I. Mordatch.Decision transformer: Reinforcement learning via sequence modeling. Advances in neural informationprocessing systems , 34:15084–15097, 2021.[54] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review, andperspectives on open problems. arXiv preprint arXiv:2005.01643 , 2020.[55] S. M. LaV alle et al. Rapidly-exploring random trees: A new tool for path planning.[56] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P . Mishkin,J. Clark, et al. Learning transferable visual models from natural language supervision. In InternationalConference on Machine Learning , pages 8748–8763. PMLR, 2021.11[57] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings ofthe IEEE conference on computer vision and pattern recognition , pages 770–778, 2016.[58] J. Song, C. Meng, and S. Ermon. Denoising diffusion implicit models. In International Conference onLearning Representations .[59] L. Downs, A. Francis, N. Koenig, B. Kinman, R. Hickman, K. Reymann, T. B. McHugh, andV . V anhoucke. Google scanned objects: A high-quality dataset of 3d scanned household items, 2022.URL https://arxiv.org/abs/2204.11918.[60] K. Zakka. Scanned Objects MuJoCo Models, 7 2022. URL https://github.com/kevinzakka/mujocoscanned objects.[61] A. Kamath, M. Singh, Y . LeCun, G. Synnaeve, I. Misra, and N. Carion. Mdetr-modulated detection forend-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference onComputer Vision , pages 1780–1790, 2021.[62] X. Gu, T.-Y . Lin, W. Kuo, and Y . Cui. Open-vocabulary object detection via vision and languageknowledge distillation. arXiv preprint arXiv:2104.13921 , 2021.[63] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P . Abbeel. Deep imitation learningfor complex manipulation tasks from virtual reality teleoperation. In 2018 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 5628–5635. IEEE, 2018.[64] P . Florence, L. Manuelli, and R. Tedrake. Self-supervised correspondence in visuomotor policy learning.IEEE Robotics and Automation Letters , 5(2):492–499, 2019.[65] A. Mandlekar, F. Ramos, B. Boots, S. Savarese, L. Fei-Fei, A. Garg, and D. Fox. 
Iris: Implicitreinforcement without interaction at scale for learning control from offline robot manipulation data. In2020 IEEE International Conference on Robotics and Automation (ICRA) , pages 4414–4420. IEEE,2020.[66] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P . Dhariwal, A. Neelakantan, P . Shyam,G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural informationprocessing systems , 33:1877–1901, 2020.[67] H. Touvron, L. Martin, K. Stone, P . Albert, A. Almahairi, Y . Babaei, N. Bashlykov, S. Batra, P . Bhar-gava, S. Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprintarXiv:2307.09288 , 2023.12 |
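To complement the policy-learning ablations above (Table 5), here is a generic sketch of DDPM-style action decoding [50]: an action chunk is sampled by iteratively denoising Gaussian noise, conditioned on the policy's input embedding. The noise schedule, horizon, 10-dimensional action parameterization, and the stub noise-prediction network are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def sample_action_chunk(eps_model, cond, horizon=8, action_dim=10, T=50, seed=0):
    """Generic DDPM-style reverse process for decoding an action chunk from noise,
    conditioned on a policy input embedding `cond`. Illustrative only."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, T)                 # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal((horizon, action_dim))     # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = eps_model(x, t, cond)                    # predicted noise
        x = (x - (betas[t] / np.sqrt(1.0 - alpha_bars[t])) * eps) / np.sqrt(alphas[t])
        if t > 0:                                      # add noise at every step except the last
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x                                           # e.g., 3 pos + 6 rot + 1 gripper per step

# Dummy noise predictor so the sketch runs end-to-end; a real policy would use a trained network.
dummy_eps = lambda x, t, cond: np.zeros_like(x)
actions = sample_action_chunk(dummy_eps, cond=np.zeros(128))
```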
kSXh83gWWy | Context-Aware Deep Reinforcement Learning forAutonomous Robotic Navigation in Unknown AreaJingsong Liang∗Zhichen Wang∗Yuhong Cao†∗Jimmy ChiunMengqi Zhang Guillaume SartorettiNational University of Singapore{jingsongliang, zhichenwang, caoyuhong, jimmy.chiun }@u.nus.edu{mpezmq, guillaume.sartoretti }@nus.edu.sgAbstract:Mapless navigation refers to a challenging task where a mobile robot must rapidlynavigate to a predefined destination using its partial knowledge of the environ-ment, which is updated online along the way, instead of a prior map of the environ-ment. Inspired by the recent developments in deep reinforcement learning (DRL),we propose a learning-based framework for mapless navigation, which employsa context-aware policy network to achieve efficient decision-making (i.e., maxi-mize the likelihood of finding the shortest route towards the target destination),especially in complex and large-scale environments. Specifically, our robot learnsto form a context of its belief over the entire known area, which it uses to reasonabout long-term efficiency and sequence show-term movements. Additionally, wepropose a graph rarefaction algorithm to enable more efficient decision-making inlarge-scale applications. We empirically demonstrate that our approach reducesaverage travel time by up to 61.4% and average planning time by up to 88.2%compared to benchmark planners (D*lite and BIT) on hundreds of test scenarios.We also validate our approach both in high-fidelity Gazebo simulations as wellas on hardware, highlighting its promising applicability in the real world withoutfurther training/tuning.Keywords: deep reinforcement learning, mapless navigation, context-awaredecision-making1 IntroductionStartEndFigure 1: Illustration of navigationthrough an unknown environment.The ground vehicle generates a feasibletrajectory toward the destination relyingon the sensory inputs and destination.Autonomous navigation is an essential capability for mo-bile robots, and can be broadly divided into local andglobal planning. Local planning typically focuses onshort-term collision avoidance, which provides the robotwith reactive kinematic commands to navigate through itsnearby surroundings [1, 2]. Global planning, on the otherhand, requires the robot to consider the broader environ-mental information and provide movement decisions ata higher level to determine a long-term route towards thetarget destination [3, 4]. Although many works have stud-iedmap-based navigation , where the robot relies on priorinformation about the environment, in this work, we fo-cus on global planning for mapless navigation , where arobot starts navigation to the destination in a completelyunknown/unmapped environment. Throughout the task,*These authors contributed equally to this work.†Corresponding authors.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.the robot incrementally constructs/updates its partial belief/map of the environment using onboardsensors, to guide its global path planning. The robot is tasked with reaching the target destination asquickly as possible (in other words, exploring the least amount of the environment).Although map-based navigation has been relatively well-studied [5, 6, 7], mapless navigation re-mains an open challenge: it requires the robot to explicitly or implicitly predict the potential pathto the target destination based on partial knowledge of the environment. 
For example, conventionalapproaches often assume the shortest path lies behind the nearest frontier (i.e., the boundary betweentraversable and unknown areas) to the destination. Such predictions naturally have a significant im-pact on navigation efficiency; however, improving the accuracy of these predictions is non-trivial,especially in complex scenarios with plenty of dead-ends and dense obstacles. We believe existingapproaches lack context-awareness when predicting potential paths. Being context-aware meansthat the robot is capable of adaptively reason about potential global paths according to its belief overthe current environment, instead of following a fixed rule for all scenarios that may be significantlydifferent. It also allows the robot to make non-myopic decisions that benefit long-term efficiencyand maximize the likelihood of reaching the destination as fast as possible.In this paper, we investigate and propose a context-aware DRL framework for mapless navigation.Specifically, our robot reasons about the context of its belief over the entire known area using anattention-based neural network. During the navigation task, our robot builds and updates its belief(represented as a graph), which serves as input of our neural network. The policy network modelsthe inter-dependencies between distinct areas in the agent’s belief/graph, to finally output the nextwaypoint for navigation. We further propose a graph rarefaction algorithm to filter out redundantnodes and corresponding edges in this graph for more efficient policy learning in large-scale envi-ronments. Compared to existing learning-based approaches [8, 9, 10], our model not only outputs anear-optimal policy to sequence movement decisions towards the target destination, but also endowsthe agent with time-efficient re-planning abilities even in complex, large-scale environments. To as-sess the performance and generalizability of our approach, we compare our model to representativestate-of-the-art baselines in hundreds of simulated maps at three complexity levels, where we high-light improvements up to 61% for average travel time and 88% for average planning time over thesebaselines. We also validate the effectiveness of our approach in large-scale Gazebo simulations, anddeploy our planner on hardware in a real-world scenario. To the best of our knowledge, we are thefirst work that proposes a DRL-based approach to mapless navigation [11]. Our full code and trainedmodel is available at https://github.com/marmotlab/Context_Aware_Navigation .2 Related WorksSearch-based approaches: These approaches are primarily based on discrete grids or a graph withnode priority evaluation, such as Dijkstra [12] and A* [5]. Given a prior map, they are able to searchfor near-optimal paths [6]. Dynamic variants of search-based approaches, e.g., D* [13], LPA* [3],D* lite [4], were proposed to approach dynamic planning in unknown environments. In most cases,these algorithms replan at low frequency, by reusing the planned route from the previous search.However, a robot may have to replan a new feasible route upon realizing that the current one isimpractical (e.g., reaching a dead-end) [14]. In this case, such incremental search comes at a highcomputational cost, especially in complex environments. 
Furthermore, these algorithms are highlydependent on heuristic function, which results in them being neither cognitive nor robust [15].Sampling-based approaches: More recently, the rapidly-exploring Random Tree (RRT) [16] fam-ily, which includes original RRT, RRT* [17], and RRT-Connect [18], has been leveraged for pathplanning due to its low computation costs. Owing to the drawback of random sampling, the gener-ated paths are typically sub-optimal and unstable [19], and are prone to being stuck in local minima,especially in complex environments. To achieve effective sampling, several improvements havebeen proposed [20, 21, 22, 23]. These approaches perform well in map-based navigation [7], buttypically suffer from long planning time in the absence of prior information [24], mainly due toextensive sampling in both known and unknown areas.2Viewpoint Graph Refined Viewpoint Graph EncoderActionPolicyDecoderMaskMaskCurrent Encoded EmbeddingsEncoded Embeddings Neighboring Encoded Embeddings Node Features Target NodeFigure 2: Framework of our context-aware DRL policy network. The initial and refined view-point graphs are built from the agent’s partial belief and graph rarefaction, respectively. The encoderfirst incorporates global information from the current partial map and the destination node throughself-attention. This embedded information is then used by the decoder to reason about the depen-dencies between the current node and its neighbors, to finally generate the action policy.Learning-based approaches: Learning-based mapless navigation approaches have aroused interestand demonstrated effectiveness in recent years [2, 9]. They can be broadly subdivided into localand global planning. Most of the previous learning-based works have focused on local planningthrough end-to-end models [11], including vision-based, i.e., successor-feature-based [8], LiDAR-based [9, 10, 25], or imitation-learning-based approaches [26, 27]. Both [9] and [28] note that aglobal path planner should usually be used for trust-worthy navigation in unknown environments.Furthermore, these local planners are typically trained and validated in simple environments, raisingconcerns about their generalizability to more complex cases. To the best of our knowledge, we arethe first work that proposes a learning-based global planner for mapless navigation [11].3 ApproachIn this section, we consider mapless navigation as a sequential decision-making RL problem and de-tail our context-aware policy network, as well as our graph rarefaction algorithm to further improvelonger-horizon and larger-scale planning.3.1 Mapless navigation as a RL ProblemWe formulate the mapless navigation task as a partially observable Markov decision process(POMDP), expressed as a tuple (S,A,T,R,Ω,O, γ)with the state space S, the action space A, thestate transition function T, the reward function R, the observation space Ω, the observation functiongiven the true state s′of the environment O(ot∈Ω|s′, a)and the action state a, and the discount fac-torγ. To promote efficient navigation towards the target destination, the RL objective is aimed to findan optimal policy π∗that maximizes the expected discounted reward Eat∼π(·|ot)hPTt=1γt−1rti.The policy πcan be considered as a mapping function from otto the next action at.Observation Ω:At each decision step t, our observation is represented as ot= (Gt, St), whichconsists of the viewpoint graph Gtand the planning attributes St. 
The robot gets the updated obser-vation in the limited sensor range ds(80 in practice training). The agent first obtains the viewpointsetVt, which is generated uniformly in the known area D. To construct a traversable graph, everynode in Vtconstructs up to knearest edges with each other, where the edges in Etcan only connectbetween collision-free nodes, i.e., nodes that are line of sight with each other. The construction ofGt= (Vt, Et)eliminates the concerns of collision with obstacles. Moreover, the planning attributesStprovide additional information about the observed environment and the target for each node in Vt.Inspired by [29], every node has a direction vector ⃗ vwhich acts as a signpost for the target, whichconsists of a unit vector ˆvindicating the direction towards the target and the Euclidean distance3|⃗ v|from the node to the target. Stfurther include an indicator δiwhich records whether the nodehas been visited before. With the knowledge of the former trajectory, the agent can produce a moreinformed policy. Stalso includes the utility uiwhich is referred to as the number of observable fron-tiers. These frontiers represent the areas with the potential to the target, which are generated at theboundary of the observed and unknown areas. Stof each node in Vtare formulated as {⃗ vi, δi, ui}.It is worth noting that in large-scale environments, Vtwould be populated densely. Meanwhile, theinformation contained within them is sparse. Therefore, we implement a graph rarefaction algorithm(pseudo code in Appendix A) to prune irrelevant nodes and extract key edges of the viewpoints.Specifically, graph rarefaction first clusters the non-zero utility node set Uinto multiple groupsaccording to the threshold radius dth(30 in practice). Then, the algorithm uses A* to search for theshortest path ζ, from the robot’s current position ptto each group. After that, the refined node setVr⊆Uis constructed by waypoints on ζ, which is either out of dthor out of line of sight . Thecomputation complexity is O(M+NKd), where Mis the number of non-zero utility nodes, Nthenumber of nodes chosen to compute the A* path, Kthe number of edges for each node, and dthenumber of nodes on the resulting path. After the graph rarefaction process, the refined viewpointgraph Gr= (Vr, Er)would represent the complete information in the current robot belief. Finally,theGralong with the corresponding planning states Srare concatenated as the input of the policynetwork, i.e., the refined observation os= (Gr, Sr).Action A:The collision-free graph extends incrementally along with the update of the agent’sobservations otfor every decision step. Our context-aware policy network outputs the stochasticpolicy πfor the agent to select the next waypoint among the neighboring nodes. Then, the agentupdates its partial map while moving to the next waypoint.Reward R:To promote efficient navigation, the agent receives a reward consisting of three parts.The first part rsis a constant time step penalty rs(−0.5in practice). The second part rb=d(st−1)−d(st)provides continuous feedback on the robot’s proximity to the target destination, where d(s)isthe distance between the agent current position ptand the destination location computed by A*. rbis used to encourage the agent to reach the destination as fast as possible. The last part rfis a fixedfinishing reward, set to 20while reaching the destination and 0otherwise. 
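Putting these three components together, a minimal code sketch of the per-step reward follows, using the constants quoted in this section (r_s = -0.5, r_f = 20, and the scaling c_b = 1/64); the A*-based distance d(·) is abstracted into the two distance arguments, and none of this is the authors' implementation.

```python
def navigation_reward(d_prev, d_curr, reached_goal,
                      r_s=-0.5, c_b=1.0 / 64, r_f=20.0):
    """Illustrative sketch of the per-step reward described in this section.
    d_prev, d_curr: A*-based distances d(s_{t-1}), d(s_t) from the robot to the target.
    reached_goal:   whether the destination was reached at this step."""
    r_b = d_prev - d_curr                  # progress toward the target (positive if closer)
    r_goal = r_f if reached_goal else 0.0  # fixed finishing reward
    return r_s + c_b * r_b + r_goal

# Example: the robot moved 8 m closer (by A* distance) without reaching the goal yet.
r = navigation_reward(d_prev=120.0, d_curr=112.0, reached_goal=False)  # -0.5 + 8/64 = -0.375
```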
Summing these terms, the overall reward is $r_t(o_t, a_t) = r_s + c_b \cdot r_b + r_f$, where $c_b$ is a scaling parameter ($c_b = 1/64$ in practice).
3.2 Policy Network
Inspired by [30, 31], we design an attention-based neural network (shown in Fig. 2) to sequence efficient movement decisions towards the destination. In our policy network, the encoder embeds the overall information of the current partial map, and the decoder utilizes the learned global features to reason about the dependencies between the current node and its neighbors and finally outputs an action policy (a probability distribution over neighboring nodes).
Encoder: The refined observation is first normalized and projected into a $d$-dimensional feature ($d = 128$ in practice), termed $h_n$. The node features $h_n$ are then passed into multiple attention layers (6 in practice) to aggregate a spatial representation of the current observation. The input of each attention layer consists of a query vector $h^q$ and a key-value vector $h^{k,v}$ of the same dimension. The attention layer updates the query vector with a weighted sum of the values, where the attention weight depends on the similarity between the query and key. In each attention layer, the query $q_i$, key $k_i$, and value $v_i$ are first calculated as $q_i = W^Q h^q_i$, $k_i = W^K h^{k,v}_i$, and $v_i = W^V h^{k,v}_i$, respectively, where $W^Q, W^K, W^V \in \mathbb{R}^{d \times d}$ are learnable matrices. Next, the similarity between the query $q_i$ and the key $k_j$ is computed with a scaled dot product as $u_{ij} = \frac{q_i^T k_j}{\sqrt{d}}$. The attention weights $a_{ij}$ are then obtained using a softmax function: $a_{ij} = \frac{e^{u_{ij}}}{\sum_{j=1}^{n} e^{u_{ij}}}$. Finally, the output embedding of an attention layer, denoted $h'_i$, is calculated as the weighted sum of the value vectors $v_j$: $h'_i = \sum_{j=1}^{n} a_{ij} v_j$. Additionally, an encoder edge mask $M$ is applied to prevent each node from accessing its neighboring features. The output of the encoder, which we term the encoded embeddings $\hat{h}_e$, provides condensed spatial information to the decoder.
Decoder: In the decoder, the encoded embeddings of the current node (termed the current encoded embeddings, $\hat{h}_c$) are first extracted from the encoder, as well as its neighboring encoded embeddings (termed the neighboring encoded embeddings, $\hat{h}_n$). Then the current encoded embeddings and the neighboring encoded embeddings are fed into an attention layer with $h^q = \hat{h}_c$, $h^{k,v} = \hat{h}_n$. This layer calculates output attention weights representing the relevance of each neighboring node to the current node. These attention weights are concatenated with $\hat{h}_c$ and projected back to a $d$-dimensional feature, termed the current decoded embeddings, $\tilde{h}_c$. The current decoded embeddings incorporate information from neighboring nodes into the representation of the current node. Finally, the current decoded embeddings and the encoded embeddings of the neighboring nodes are fed into a pointer layer [32], which is an attention layer that directly uses the normalized attention weights $\theta$ as its output.
3.3 Training Settings
Inspired by Chen et al. [33], we implement a random dungeon map dataset for training, with a total of 800 maps (each map is 1000×1000 pixels) like those in Figure 3. To build the updated collision-free graph, the agent treats points in the known free area as candidate viewpoints, drawn from 1600 points uniformly distributed to cover each dungeon map. Our model is trained using the Soft Actor-Critic (SAC) algorithm [34], where the maximum episode length is set to 128 steps and the size of the replay buffer is set to 10000. The target entropy is set to $0.01 \cdot \log k$, where $k$ represents the number of neighboring nodes.
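For readers who prefer code, the scaled-dot-product attention update used by the encoder and decoder layers of §3.2 can be sketched as follows; this is an illustrative NumPy re-implementation with randomly initialized weights, not the authors' code, and the optional mask argument is only a generic stand-in for the edge mask M.

```python
import numpy as np

def attention_layer(h_q, h_kv, W_Q, W_K, W_V, mask=None):
    """Single attention layer in the style of Sec. 3.2 (illustrative re-implementation).
    h_q: (m, d) query features, h_kv: (n, d) key/value features, W_*: (d, d) learnable matrices.
    mask: optional (m, n) boolean array; True entries are blocked from attention."""
    d = h_q.shape[-1]
    q, k, v = h_q @ W_Q, h_kv @ W_K, h_kv @ W_V
    u = (q @ k.T) / np.sqrt(d)                       # scaled dot-product similarities u_ij
    if mask is not None:
        u = np.where(mask, -1e9, u)                  # masked entries get ~zero attention weight
    a = np.exp(u - u.max(axis=-1, keepdims=True))    # softmax over j
    a /= a.sum(axis=-1, keepdims=True)
    return a @ v                                     # h'_i = sum_j a_ij v_j

# Toy usage with random weights; a trained policy would learn W_Q, W_K, W_V.
rng = np.random.default_rng(0)
d, n = 128, 5
h = rng.standard_normal((n, d))
W = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]
out = attention_layer(h, h, *W)    # self-attention over n graph nodes
```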
We use the Adam optimizer to optimize policy and critic networkswith learning rates of 1×10−5and2×10−5respectively. Our model is trained on a workstationwith one i9-10980XE CPU and one NVIDIA GeForce RTX 3090 GPU, and the training starts aftercollecting 2,000 steps in the replay buffer. We utilize Ray [35], a distributed framework for machinelearning to parallelize data collection. Training requires around 12 hours to converge.4 ExperimentsSimple Medium ComplexFigure 3: Examples scenarios with differentcomplexity, showing the occupied area (greycells), free space (white cells), start points (yel-low block), and target destination (red block).We set a timeout, i.e., maximum decision steps(128 in practice), to prevent infinite navigationscenarios during testing, and a test is considereda failure if it exceeds this limit. To obtain a com-plete picture, we report the main performancemetrics for mapless navigation, including the av-erage success rate S(p), average travel distanceD(m), and average travel time T(s)(includingfailed cases for both latter ones). In particular,T(s)is the sum of each algorithm’s step planningtimeTp(s)and the robot’s resulting motion execution time Te(s). We first compare our approachwith state-of-the-art conventional baselines in numerous dungeon environments (Fig. 3). Then, wecompare our model, some variants of our model, and the FAR planner (referred to as “FAR”) [24]in a large-scale Gazebo environment (Fig. 5). Lastly, we deploy our trained model on hardware in areal-world scenario (Fig. 6).4.1 Comparisons in dungeon environmentsTo ensure a fair comparison, we create a random set of testing environments using the random dun-geon map generator [33], which were never seen by our trained model. Fig. 3 shows the diversecomplexity of testing environments, which can be categorized into three types (50 scenarios each),noted as simple, medium, and complex. We define the quantitative criteria of the scenario complex-ity as follows: (i) The Euclidean distance between the start point and target destination in each sce-nario. (ii) The overall number of connecting corridors in all rooms. (iii) The number of intersectionsthat will be encountered along the visually-optimal path. We consider two search-based approaches,LPA* [3], D* lite [4], as well as two sampling-based approaches: RRT [16], and BIT [23].5Table 1: Comparison results with state-of-the-art baselines in simple, medium, and complexscenarios (50 scenarios for each test set, standard deviation in parentheses). Environments arerandomized 200×200m2dungeons, the LiDAR’s scanning range is 30m, the robot’s constantvelocity 1.0m/s, and the graph connectivity parameter k= 20 for our model.Criteria Ours D* lite LPA* RRT BITSD(m) 224(±72) 214(±83) 293(±116) 398(±208) 275(±166)T(s) 285(±90) 383(±152) 414(±206) 662(±381) 404(±237)Tp(s) 0.20(±0.08) 1.24(±0.56) 0.74(±0.47) 0.67(±0.54) 0.55(±0.46)S(p) 100% 100% 100% 94% 98%MD(m) 259(±83) 237(±33) 383(±130) 497(±222) 360(±152)T(s) 329(±105) 484(±87) 594(±298) 852(±565) 563(±285)Tp(s) 0.18(±0.12) 1.52(±0.73) 0.79(±0.55) 0.68(±0.48) 0.53(±0.44)S(p) 100% 98% 94% 92% 98%CD(m) 375(±119) 349(±94) 466(±163) 544(±222) 493(±215)T(s) 477(±151) 680(±221) 754(±265) 1063(±782) 806(±407)Tp(s) 0.22(±0.13) 1.66(±0.95) 0.83(±0.60) 0.62(±0.42) 0.55(±0.45)S(p) 98% 94% 88% 86% 90%Ours D* lite LPA* RRT (partial) BIT (partial)Figure 4: Trajectory visualization in a representative complex scenario. The blue line is therobot’s trajectory starting at the yellow dot and ending at the red dot. 
The green lines in RRT andBIT are sampling trees in known areas, which are not included in D(m).Evaluation results are reported in Table 1. For search-based approaches, our model outperformsD* lite in terms of S(p),T(s), and Tp(s). Despite D* lite showing a lower D(m), our model stillsurpasses it in terms of T(s)by 15-32%, primarily due to the significantly shorter Tp(s), which isreduced by 83-88%. D* lite is known for finding near-optimal paths, by searching and assessingnode priorities incrementally. However, our results illustrate that D* lite results in high computationtimes (the worst Tp(s)among all approaches), which prevents its use for real-time planning. More-over, our model outperforms LPA* in three criteria by a substantial margin of approximately 23%,45%, and 73%. For sampling-based approaches, our model surpasses RRT and BIT in all criteria.Our model finishes the task more than 30-41% faster than BIT, and more than 56-61% faster thanRRT. Additionally, our model demonstrates superior performance in terms of D(m), with improve-ments ranging from 18-28% to BIT and 31-49% to RRT. There, our results illustrate that both RRTand BIT are prone to generate inefficient trajectories, which consequently increases D(m)andT(s).Furthermore, we also conduct ablation experiments (reported in Appendix B) regarding the graphrarefaction algorithm and design of the encoder.Fig. 4 shows an example where our planner generates a more efficient trajectory, while other base-lines suffer from the misleading placement of the target destination. There, our model can be seento exhibit more interest in unknown areas, with a higher drive toward the destination. We believethat this strategy substantially reduces the likelihood of aimless exploration and consistently aids ingenerating superior navigation trajectories.4.2 Comparisons in Gazebo environmentWe evaluate our planner and some variants (see Table 2) in a highly convoluted environment charac-terized by narrow corridors and various obstacles. The robot is required to navigate through a series6Table 2: Evaluations of our model and baselines in large-scale, complex Gazebo environment(130m×100m).We conduct these experiments to evaluate the performance of our model andFAR [24] (An efficient planning framework capable of handling path planning in unknown envi-ronments). The robot’s constant velocity is 2.0 m/s, the LiDAR’s scanning range is 30m, and theD(m)andT(s)are the overall values after traveling to 7successive goals. Our models were trainedwithk= 20 , and used as is (no extra training) for the k= 5,10tests.Ours FAR Ours (no GR) Ours ( k= 10 )Ours ( k= 5)D(m) 1367.2 2060.1 1483.5 1849.2 2103.6T(s) 850.7 1359.4 1131.5 1112.1 1382.0Tp(s) 0.83(±0.37) 2.42(±0.35) 1.96(±0.29) 0.38(±0.11) 0.36(±0.12)0 (Start)1 234567(a) Our planner0 (Start)1 234567 (b) FARFigure 5: Trajectories comparison of two planners in large-scale indoor Gazebo environment.of predefined target points successfully, similar to the setting presented in FAR [24]. To ensure fair-ness and consistency in the evaluations, we reset the planners after reaching each target point, whichensures each navigation task begins in a fully unknown environment.As shown in Fig. 5(a) and 5(b), our model is capable of doing more informed exploration and gen-erating efficient trajectories towards the target destinations. Our experimental results (see Table 2)demonstrate that the significant reduction achieved by our model in terms of D(m)andT(s), by upto 692.9 mand 508.7 s, when compared to FAR. 
It provides evidence that our model excels in gener-ating more optimal paths than FAR in unknown environments. Additionally, our model outperformsFAR in terms of Tp(s), needing only 0.83 sper planning step.In addition, we conduct further comparisons among several variants of our model, where we vary thenumber of neighboring nodes k(20, 10, and 5 respectively). Our experimental results indicate thatk= 10 surpasses FAR in all criteria, without additional training. Furthermore, Tp(s)withk= 5is less than that with k= 10 by only 0.02s, while the D(m)ofk= 5 is larger than k= 10 upto254.4m, indicating much less efficient trajectories with k= 5. Thus, it is not wise to set kas asmaller value due to too sparse connectivity between nodes. To evaluate the significance of graphrarefaction, we test a variant of our model without sparsification, which results in more than 136%Tp(s)and 8% D(m)compared to k= 20 . Therefore, we believe that graph rarefaction significantlybenefits our model.4.3 Hardware Validation in a Real-World ScenarioOur ground vehicle utilizes a Leishen C16 LiDAR for localization and mapping (shown in Fig. 6).The real-world scenario is a 60×15m2laboratory with randomly placed obstacles (e.g., chairs,boxes, and camera tripod). The vehicle should travel through a series of predefined points by fol-lowing the waypoint published by our model. After receiving a waypoint, the local planner [36]generates real-time and feasible motion commands for the ground vehicle. The experimental trajec-76Ground Vehicle45330 (Start)12456(End)012Figure 6: Validations in real-world environment. The ground vehicle, provided with partiallyobserved point-cloud data, starts at point 0and subsequently traverses a series of consecutive points.tory depicts a high-quality solution, which validates our model’s effectiveness and shows promisingapplicability for real-world environments.5 LimitationsThe limitations of our method mainly revolve around adaptive sampling and smooth motion control:• We currently sample the observations uniformly, which may not precisely capture the spa-tial representation of the environment and leads to sub-optimal navigation strategies. Totackle this, future work will develop an online adaptive sampling strategy.• Our model currently plans the next waypoint under the assumption that the robot is omni-directonial (i.e., has no motion constraints), which may make reaching this waypoint diffi-cult in practice. The incorporation of a local motion planner (and its use to inform/train thepolicy) may facilitate navigation in unknown environments with dynamic obstacles.6 ConclusionIn this paper, we propose a context-aware DRL framework for mapless navigation that allows a robotto build a context of its entire partial belief over the environment, to infer the shortest route towardsa target destination. Our model achieves high-quality decision-making, especially in complex andlarge-scale environments, where it allows the robot to sequence short-term movement decisionsinformed by global information about known areas. We also propose a graph rarefaction algorithm tofilter out redundant nodes and corresponding edges in the graph input of our neural network, towardsdeployment in large-scale environments. We empirically demonstrate that our model outperformsstate-of-the-art baselines in terms of average travel time and average planning time, with powerfulgeneralizability to complex unknown environments never seen during training. 
Finally, we validateour approach in high-fidelity Gazebo simulations as well as on hardware, revealing promises forrobotic deployments in the real world without further tuning.Future work will first focus on the construction of a local planner with more considerations on kine-matic/dynamics constraints. Then, we will extend our framework to multi-agent mapless navigation,where robots need to reason about each other and plan efficient paths cooperatively, by leveragingsynergies and avoiding redundant work.AcknowledgmentsThis work was supported by Temasek Laboratories (TL@NUS) under grant TL/FS/2022/01.8References[1] N. Bucki, J. Lee, and M. W. Mueller. Rectangular pyramid partitioning using integrated depthsensors (rappids): A fast planner for multicopter navigation. IEEE Robotics and AutomationLetters , 5(3):4626–4633, 2020.[2] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y . Tassa, D. Silver, and D. Wierstra.Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 , 2015.[3] S. Koenig, M. Likhachev, and D. Furcy. Lifelong planning a*. Artificial Intelligence , 155(1-2):93–146, 2004.[4] S. Koenig and M. Likhachev. Dˆ* lite. Aaai/iaai , 15:476–483, 2002.[5] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination ofminimum cost paths. IEEE Transactions on Systems Science and Cybernetics , 4(2):100–107,1968. doi:10.1109/TSSC.1968.300136.[6] P. Raja and S. Pugazhenthi. Optimal path planning of mobile robots: A review. Internationaljournal of physical sciences , 7(9):1314–1320, 2012.[7] J. D. Gammell, T. D. Barfoot, and S. S. Srinivasa. Batch informed trees (bit*): Informedasymptotically optimal anytime search. The International Journal of Robotics Research , 39(5):543–567, 2020.[8] J. Zhang, J. T. Springenberg, J. Boedecker, and W. Burgard. Deep reinforcement learning withsuccessor features for navigation across similar environments. In 2017 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 2371–2378. IEEE, 2017.[9] M. Pfeiffer, M. Schaeuble, J. Nieto, R. Siegwart, and C. Cadena. From perception to decision:A data-driven approach to end-to-end motion planning for autonomous ground robots. In IEEEInternational Conference on Robotics and Automation (ICRA) , page 1527–1533. IEEE, 2017.[10] L. Tai, G. Paolo, and M. Liu. Virtual-to-real deep reinforcement learning: Continuous con-trol of mobile robots for mapless navigation. In 2017 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 31–36. IEEE, 2017.[11] X. Xiao, B. Liu, G. Warnell, and P. Stone. Motion planning and control for mobile robotnavigation using machine learning: a survey. Autonomous Robots , 46(5):569–597, 2022.[12] E. W. Dijkstra. A note on two problems in connexion with graphs. In Edsger Wybe Dijkstra:His Life, Work, and Legacy , pages 287–290. 2022.[13] M. Likhachev, D. I. Ferguson, G. J. Gordon, A. Stentz, and S. Thrun. Anytime dynamic a*:An anytime, replanning algorithm. In ICAPS , volume 5, pages 262–271, 2005.[14] A. T. Le, M. Q. Bui, T. D. Le, and N. Peter. D lite with reset: Improved version of d lite forcomplex environment. In 2017 First IEEE International Conference on Robotic Computing(IRC) , pages 160–163. IEEE, 2017.[15] D. Ferguson, M. Likhachev, and A. Stentz. A guide to heuristic-based path planning. 
In Pro-ceedings of the international workshop on planning under uncertainty for autonomous systems,international conference on automated planning and scheduling (ICAPS) , pages 9–18, 2005.[16] S. M. LaValle et al. Rapidly-exploring random trees: A new tool for path planning. 1998.[17] S. Karaman and E. Frazzoli. Sampling-based algorithms for optimal motion planning. Theinternational journal of robotics research , 30(7):846–894, 2011.9[18] J. J. Kuffner and S. M. LaValle. Rrt-connect: An efficient approach to single-query pathplanning. In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conferenceon Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065) , volume 2, pages995–1001. IEEE, 2000.[19] A. Perez, R. Platt, G. Konidaris, L. Kaelbling, and T. Lozano-Perez. Lqr-rrt*: Optimalsampling-based motion planning with automatically derived extension heuristics. In 2012IEEE International Conference on Robotics and Automation , pages 2537–2542. IEEE, 2012.[20] A. H. Qureshi and Y . Ayaz. Potential functions based sampling heuristic for optimal pathplanning. Autonomous Robots , 40:1079–1093, 2016.[21] J. Wang, W. Chi, M. Shao, and M. Q.-H. Meng. Finding a high-quality initial solution for therrts algorithms in 2d environments. Robotica , 37(10):1677–1694, 2019.[22] J. D. Gammell, S. S. Srinivasa, and T. D. Barfoot. Informed rrt: Optimal sampling-based pathplanning focused via direct sampling of an admissible ellipsoidal heuristic. In 2014 IEEE/RSJInternational Conference on Intelligent Robots and Systems , pages 2997–3004. IEEE, 2014.[23] J. D. Gammell, S. S. Srinivasa, and T. D. Barfoot. Batch informed trees (bit*): Sampling-basedoptimal planning via the heuristically guided search of implicit random geometric graphs. In2015 IEEE International Conference on Robotics and Automation (ICRA) , pages 3067–3074,2015. doi:10.1109/ICRA.2015.7139620.[24] F. Yang, C. Cao, H. Zhu, J. Oh, and J. Zhang. Far planner: Fast, attemptable route plannerusing dynamic visibility update. In 2022 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 9–16, 2022. doi:10.1109/IROS47612.2022.9981574.[25] J. Jin, N. M. Nguyen, N. Sakib, D. Graves, H. Yao, and M. Jagersand. Mapless navigationamong dynamics with social-safety-awareness: a reinforcement learning approach from 2dlaser scans. In 2020 IEEE international conference on robotics and automation (ICRA) , pages6979–6985. IEEE, 2020.[26] Y . F. Chen, M. Everett, M. Liu, and J. P. How. Socially aware motion planning with deepreinforcement learning. In 2017 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 1343–1350. IEEE, 2017.[27] M. Everett, Y . F. Chen, and J. P. How. Motion planning among dynamic, decision-makingagents with deep reinforcement learning. In 2018 IEEE/RSJ International Conference on In-telligent Robots and Systems (IROS) , pages 3052–3059. IEEE, 2018.[28] J. Gao, W. Ye, J. Guo, and Z. Li. Deep reinforcement learning for indoor mobile robot pathplanning. Sensors , 20(19):5493, 2020.[29] G. Sartoretti, J. Kerr, Y . Shi, G. Wagner, T. S. Kumar, S. Koenig, and H. Choset. Primal:Pathfinding via reinforcement and imitation multi-agent learning. IEEE Robotics and Automa-tion Letters , 4(3):2378–2385, 2019.[30] Y . Cao, Y . Wang, A. Vashisth, H. Fan, and G. A. Sartoretti. Catnipp: Context-aware attention-based network for informative path planning. In Proceedings of The 6th Conference on RobotLearning , volume 205, pages 1928–1937, 2023.[31] Y . Cao, T. Hou, Y . Wang, X. 
Yi, and G. Sartoretti. Ariadne: A reinforcement learning approachusing attention-based deep networks for exploration. In 2023 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 10219–10225, 2023. doi:10.1109/ICRA48891.2023.10160565.[32] O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks. Advances in neural informationprocessing systems , 28, 2015.10[33] F. Chen, S. Bai, T. Shan, and B. Englot. Self-learning exploration and mapping for mobilerobots via deep reinforcement learning. In AIAA SciTech Forum , page 0396, 2019.[34] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropydeep reinforcement learning with a stochastic actor. In International conference on machinelearning , pages 1861–1870. PMLR, 2018.[35] P. Moritz, R. Nishihara, S. Wang, A. Tumanov, R. Liaw, E. Liang, M. Elibol, Z. Yang, W. Paul,M. I. Jordan, et al. Ray: A distributed framework for emerging ai applications. In 13th USENIXSymposium on Operating Systems Design and Implementation (OSDI 18) , pages 561–577,2018.[36] J. Zhang, C. Hu, R. G. Chadha, and S. Singh. Falco: Fast likelihood-based collision avoid-ance with extension to human-guided navigation. Journal of Field Robotics , 37(8):1300–1313,2020.11AppendixA Graph RarefactionAlgorithm 1: Graph Rarefaction AlgorithmInput: non-zero utility node set U, mapM, robot position pt, threshold radius dthInitialize refined node set Vr←U, covered node set U← ∅;forv∈Udoifv∈Uthen continue;Find nearby node set Nindth;forv′∈Ndoifline(v, v′)is collision free then ̄U←v′;endFind path ζfrom pttov, set ref node vref=v;fori∈ |ζ|doifline(vref, ζi)is not collision free or L(vref, ζi)≥dththenvref=ζi−1, Vr←vrefendendOutput: collision free edge set Erbased on VrandM.B Ablation experimentsWe carried out ablation experiments on our model to evaluate the impact of its key parameters. Wefirst introduced the various ablation cases, which are our model with the original setting (termedOurs ), our model with original viewpoint graph (i.e., without graph rarefaction, termed Ours (noGR)), and our model with two encoder layers (termed Ours (Encoder-2) ). All the cases are evalu-ated in 2D environments (same test set as in the paper) and reported in Table 3.Table 3: Ablation experiments on our model in simple (S), medium (M), and complex (C)scenarios (50 maps per test set). The comparison metrics are average travel distance D(m), motionexecution time T(s), step planning time Tp(s), and test success rate per scenario S(p), respectively.The standard deviation of the comparative metrics is indicated by the parentheses.Criteria Ours Ours (no GR) Ours (Encoder-2)SD(m) 224(±72) 245(±55) 382(±198)T(s) 285(±90) 313(±101) 489(±253)Tp(s) 0.20(±0.08) 0.34(±0.06) 0.18(±0.10)S(p) 100% 100% 98%MD(m) 259(±83) 276(±104) 443(±233)T(s) 329(±105) 452(±126) 561(±294)Tp(s) 0.18(±0.12) 0.31(±0.09) 0.13(±0.03)S(p) 100% 100% 88%CD(m) 375(±119) 410(±127) 611(±240)T(s) 477(±151) 573(±235) 774(±305)Tp(s) 0.22(±0.13) 0.38(±0.15) 0.16(±0.04)S(p) 98% 96% 84%As shown in Table 3, the model Ours (no GR) degrades slightly in terms of D(m),T(s), and S(p)compared to our original model. However, we note that our original model achieves a computationalefficiency that is twice as fast as the step planning time Tp(s), which indicates the significance ofthe graph rarefaction algorithm in our approach. Moreover, we find that the model Ours (Encoder-2)exhibits a large drop in performance, particularly in the medium and complex scenarios. 
This result indicates that an encoder with more layers assists the robot in avoiding myopic decisions, particularly on long-term mapless navigation tasks.
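To make Algorithm 1 concrete, the following is a minimal Python sketch of the graph rarefaction procedure. It follows the pseudocode literally; the helpers nearby_nodes, line_is_collision_free, shortest_path, and distance are hypothetical placeholders for the map and graph utilities the algorithm assumes, and the exact traversal conventions may differ from the released implementation.

# Minimal sketch of Algorithm 1 (graph rarefaction). The callables passed in
# are hypothetical placeholders for map/graph utilities, not the released code.
def rarefy_graph(utility_nodes, occupancy_map, robot_pos, d_th,
                 nearby_nodes, line_is_collision_free, shortest_path, distance):
    V_r = set(utility_nodes)   # refined node set Vr, initialized to U
    covered = set()            # covered node set (the bar-U set in Algorithm 1)

    for v in utility_nodes:
        if v in covered:
            continue
        # Neighbors of v within radius d_th that v can see are marked as covered.
        for v_prime in nearby_nodes(v, d_th):
            if line_is_collision_free(v, v_prime, occupancy_map):
                covered.add(v_prime)
        # Walk the path from the robot to v and retain sparse reference nodes.
        zeta = shortest_path(robot_pos, v, occupancy_map)
        v_ref = v
        for i in range(1, len(zeta)):
            blocked = not line_is_collision_free(v_ref, zeta[i], occupancy_map)
            if blocked or distance(v_ref, zeta[i]) >= d_th:
                v_ref = zeta[i - 1]
                V_r.add(v_ref)

    # Collision-free edges among the retained nodes form the rarefied graph Er.
    E_r = {(a, b) for a in V_r for b in V_r
           if a != b and line_is_collision_free(a, b, occupancy_map)}
    return V_r, E_r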
veGdf4L4Xz | KITE: Keypoint-Conditioned Policiesfor Semantic ManipulationPriya Sundaresan , Suneel Belkhale, Dorsa Sadigh, Jeannette BohgStanford UniversityAbstract: While natural language offers a convenient shared interface for humansand robots, enabling robots to interpret and follow language commands remains alongstanding challenge in manipulation. A crucial step to realizing a performantinstruction-following robot is achieving semantic manipulation — where a robotinterprets language at different specificities, from high-level instructions like ‘Pickup the stuffed animal’ to more detailed inputs like ‘Grab the left ear of the elephant.’To tackle this, we propose KITE: Keypoints + Instructions to Execution, a two-step framework for semantic manipulation which attends to both scene semantics(distinguishing between different objects in a visual scene) and object semantics(precisely localizing different parts within an object instance). KITE first groundsan input instruction in a visual scene through 2D image keypoints, providing ahighly accurate object-centric bias for downstream action inference. Provided anRGB-D scene observation, KITE then executes a learned keypoint-conditioned skillto carry out the instruction. The combined precision of keypoints and parameterizedskills enables fine-grained manipulation with generalization to scene and objectvariations. Empirically, we demonstrate KITE in 3 real-world environments: long-horizon 6-DoF tabletop manipulation, semantic grasping, and a high-precisioncoffee-making task. In these settings, KITE achieves a 75%,70%, and 71% overallsuccess rate for instruction-following, respectively. KITE outperforms frameworksthat opt for pre-trained visual language models over keypoint-based grounding, oromit skills in favor of end-to-end visuomotor control, all while being trained fromfewer or comparable amounts of demonstrations. Supplementary material, datasets,code, and videos can be found on our website.1Keywords: Semantic Manipulation, Language Grounding, Keypoint Perception1 IntroductionLanguage has the potential to serve as a powerful communication channel between humans and robotsin homes, workplaces, and industrial settings. However, two primary challenges prevent today’srobots from handling free-form language inputs. The first is enabling a robot to reason over what tomanipulate. Instruction-following requires not only recognizing task-relevant objects from a visualscene, but possibly refining visual search to specific features on a particular object. For instance,telling a robot to “Open the top shelf” vs. “Yank open the bottom shelf” of a cabinet requires not onlyparsing and resolving any liberties taken with phrasing and localizing the cabinet in the scene ( scenesemantics), but also identifying the exact object feature that matters for the task — in this case thetop or bottom handle ( object semantics). In this work, we refer to instruction-following with sceneand object awareness as semantic manipulation . Similarly, pick-and-place is a standard manipulationbenchmark [ 1,2,3,4], knowing how to pick up a stuffed animal by the ear versus leg, or a soap bottleby the dispenser versus side requires careful discernment. After identifying what to manipulate, thesecond challenge is determining how the robot can accomplish the desired behavior, i.e., low-levelsensorimotor control. 
In many cases, low-level action execution requires planning in SE(3) with 6degrees-of-freedom (DoF), such as reorienting the gripper sideways to grasp and pull open a drawer.{priyasun, belkhale }@stanford.edu, dorsa@cs.stanford.edu, bohg@stanford.edu1https://tinyurl.com/kite-site7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Real-World Semantic Manipulation Environments: We visualize our semantic manipulationframework KITE on three real-world environments: long-horizon instruction following, semantic grasping,and coffee-making. Using keypoint-based grounding, KITE contextualizes scene-level semantics (‘Pick up thegreen/red/blue/brown coffee pod’) as well as object-level semantics (‘Pick up the unicorn by the leg/ear/tail’,‘Open the cabinet by the top/middle/bottom shelf’) and precisely executes keypoint-conditioned skills.Going beyond grasping, we want robots to assist us in many daily real-world tasks which may requireeven finer-grained precision. Making coffee, for example, is a simple daily task for humans, but for arobot it involves complex steps like reorienting a mug from sideways to upright or carefully insertinga coffee pod into an espresso machine. Thus to achieve semantic manipulation, robots must extractscene and object semantics from input instructions and plan precise low-level actions accordingly.Leveraging advances in open-vocabulary object detection [ 5,6,7,8], prior works in language-basedmanipulation determinine what to manipulate via bounding boxes or keypoints obtained from pre-trained [ 9] or fine-tuned vision language models (VLMs) [ 10]. So far, these works operate at thelevel of scene semantics (distinguishing amongst objects) rather than object semantics (identifyingwithin-object features). In addition, these works do not apply VLMs to any complex manipulationbeyond simple pick-and-place. To address these shortcomings, follow-up works couple the whatandhow subproblems together and learn end-to-end 6-DoF language-conditioned policies fromdemonstrations [ 11,12,13]. However, learning high dimensional action spaces from raw sensor inputssuch as images or voxelized scene representations can require excessive amounts of data [ 12,13] andcan be difficult in high-precision tasks especially when using discretized actions [11].Between approaches that leverage pre-trained visual representations from off-the-shelf VLMs, andthose that plan directly from pixels or voxels, we lack an intermediate object-centric representationthat can link natural language to scene and object semantics. Prior work has demonstrated thatopen-vocabulary VLMs can address scene semantics to some extent by locating different objects withcoarse bounding boxes [ 9], but these representations are still too granular to precisely locate partson objects. A suitable visual representation would be one that can represent both across-object orwithin-object features, and is interpretable enough to inform downstream 6-DoF action planning. Weargue that keypoints provide this happy medium by offering a way to precisely pinpoint objects atthe scene-level or even features within an object ( what ) and a way to condition downstream 6-DoFmanipulation ( how) on a region of interest.In this work, we present KITE: Keypoints + Instructions ToExecution, a flexible framework forsemantic manipulation. 
KITE is decoupled into a grounding policy which maps input images andlanguage commands to task-relevant keypoints, and an acting module which employs keypoint-conditioned skills to carry out low-level 6-DoF actions. We show that KITE can be trained from justa few hundred annotated examples for the grounding model, and less than 50 demos per skill for theacting module, while outperforming and generalizing better than methods that do not make use ofeither keypoints or skills. We experimentally evaluate KITE on semantic manipulation across threechallenging real-world scenarios with varying tiers of difficulty: 6-DoF tabletop manipulation, se-mantic grasping, and coffee-making (Figure 1). Results indicate that KITE demonstrates fine-grainedinstruction following, while exhibiting a capacity for long-horizon reasoning and generalization.22 Related WorkLanguage-Based Manipulation : Many recent works ground language to manipulation skills as ameans for long-horizon instruction following. Several methods learn language conditioned policiesend-to-end using imitation learning; however, end-to-end learning can require many demonstrationsand can be brittle to novel scenes or objects [ 12,11,14,15,16,17,18,19,10,20]. To improve sampleefficiency, Shridhar et al. [11] predicts robot waypoints rather than low-level actions, conditionedon language and point cloud inputs. Waypoints alone do not specify how to go from one waypointto another, and thus fail to capture dynamic tasks such as peg insertion or pouring motions. Otherworks take a hierarchical approach that first learn or define a library of language-conditioned skills,and then plan over skills with large language models (LLM) [ 21,22,23,24,25,26]. However, eachskill often requires hundreds of demonstrations. In addition, the LLM planner can only reason aboutthe scene at a high level, lacking visual grounding . Alternatively, Liang et al. [27] query the LLM togenerate code using an API for low-level skills, but predicting continuous parameters of these skillsis challenging for an ungrounded LLM limiting this approach to tasks that do not require precision ordexterity. Vision-Language Models (VLM) are often proposed to ground LLM planners. For instance,Stone et al. [28] leverage a pretrained VLM to identify task-relevant objects from language. However,this approach has also been limited to pick and place tasks suggesting that today’s pretrained VLMsstruggle with more precise language instructions. In contrast, KITE maps language and vision directlyto desired keypoints, enabling precise and semantic manipulation over long horizons.Skill-Based Manipulation : Many prior works study multi-task manipulation by defining skills torepresent sub-tasks, and then composing these skills over long horizons. These skills can either belearned to output each action or parameterized by expert-defined features. In reinforcement learning(RL), hierarchy can be imposed on the policy to learn both skills and composition end-to-end, butthese methods can be sample inefficient and rarely integrate well with natural language [ 29,30,31].Other RL works parameterize skills to reduce the action space size for sample efficiency, but theseskills are usually rigid and cannot generalize to new settings [ 32,33,34]. Imitation learning (IL) aimsto learn skills from demonstrations in a more sample efficient manner than RL, but these skills stillfail to generalize to scene perturbations [ 35,36]. 
Furthermore, in both IL and RL, connecting learnedskills to precise language in a generalizable fashion is an open challenge. KITE avoids learning skillsfrom scratch and instead defines a library of keypoint-conditioned skills, where the exact parametersof each skill are learned from demonstration. We show that keypoint-conditioned skills are sampleefficient to learn and generalizable to new objects, while also easily integrated with precise language.Keypoints for Manipulation : Keypoints have emerged in the literature as a more robust skillrepresentation for manipulation [ 37,10]. Keypoints are 2D points on images that serve as a naturalintermediary between images and low-level behaviors. Several methods use keypoints to force themodel to attend to the most important features in the input images [ 37]. Others predict keypointsand then translate keypoints into 3D points, or directly predict 3D points, to parameterize low levelbehaviors in a general and visually-grounded fashion [ 38,10,39,40]. For example, Shridhar et al.[10] parameterize pick and place tasks using keypoints learned with image supervision, showing thatthis keypoint abstraction generalizes better to new objects. Keypoint action spaces have also helpedin deformable object manipulation, for example in the domains of cloth folding, rope untangling,and food manipulation [ 41,42,43,44,45]. For many of these prior works, keypoints have yet to beintegrated with language, and the methods that are linked to language are limited to a small library ofprimitives, usually focusing only on pick and place scenarios. Our approach defines a much broaderlibrary of keypoint-conditioned skills, and integrates keypoints with complex language instructions.3 KITE: Keypoints + Instructions To ExecutionIn this work, our goal is to train an instruction-following agent capable of performing semanticmanipulation at both scene and object-level granularity. We accomplish this with KITE, a sample-efficient and generalizable framework operating in two stages: grounding language into keypointsandacting on those keypoints. In this section, we first formalize the semantic manipulation problem(Section 3.1), discuss data collection (Section 3.2), and then discuss the training procedures for thegrounding (Section 3.3) and acting modules (Section 3.4).3Figure 2: KITE System Overview: KITE receives an image observation Italong with user instruction itandgrounds these inputs to a 2D semantic keypoint in the image. After inferring which skill type ltis appropriatefrom a set of skill labels, KITE takes an RGB-D point cloud observation Pt, annotated with the deprojectedkeypoint M t, and infers the appropriate waypoint policy πfor execution. After executing this action, KITEreplans based on a new observation (It+1, it+1)and repeats the whole process.3.1 Semantic Manipulation Problem FormulationWe aim to tackle instruction-following with scene and object-level semantic awareness using a libraryofskills . We assume each skill can be parameterized by 6-DoF waypoints, and we decouple eachskill into a waypoint policy πandcontroller ρto move between waypoints in a task-specific manner.Additionally, we assume each skill can be represented by a skill label l, e.g., pick ,open , etc. Weconstruct a library LofMspecialized skills where L={l1: (π1, ρ1), . . . , lM: (πM, ρM)}mapsfrom a skill label to an underlying policy πand controller ρ.We assume access to multiple calibrated cameras that provide RGB and depth. 
We assume that atleast one “grounding” camera can partially see all relevant objects. An observation ot= (It,Pt)where It∈RW×H×3is the image from the grounding camera, and Pt∈RD×6is a multi-viewpoint cloud. We denote low-level robot actions at= (x, y, z, ψ, θ, φ )at time twhich consist of theend-effector position, yaw, pitch, and roll. We denote waypoints as κwhich is also a 6-DoF pose, butrepresents a high-level pose (e.g., grasp pose for pick ) rather than a low-level action.At time t, given an instruction it, we want to know which skill to execute by associating itto acorresponding skill label lt∈ {l1, . . . , lM}. Next, we aim to infer a 2D keypoint [u, v]in the currentvisual observation otwhich grounds itto an associated object or object part. Finally, the chosenskill (π, ρ) =L[lt]is executed (Figure 2). For each skill, we want to find a waypoint policy πthat takes as input the visual observation otand 2D keypoint [ut, vt]and outputs Kwaypoints:π: (ot,[ut, vt])→ {κ1, . . . , κK}. Then, the associated controller ρcan output a low-level trajectorybetween waypoints ρ:{κ1, . . . , κK} → τ={(ot, at), . . . , (ot+T−1, at+T−1)}(i.e. via linearinterpolation, motion planning, etc.) for the robot to execute. For multi-step manipulation tasks, werestart the above process at each step with the new observation ot+Tand paired language input it+T.We consider instructions which refer to scene semantics, such as specifying desired spatial rearrange-ments of objects (e.g. “Pick up the lemon”), and object semantics, which reference desired objectparts to be manipulated (e.g. “Grab the kangaroo stuffed animal by the tail”). As we do not assumeaccess to the interaction history, our space of feasible language inputs excludes post-hoc feedback(“Pick up the other marker”) or online-corrections (“No, to the left!”), which we leave to future work.Next we outline how KITE learns to predict keypoints (grounding) and learns each skill π(acting).3.2 Demonstration CollectionTo learn both the grounding and acting modules, we collect a dataset Dπconsisting of Nexpertdemonstrations per skill. Each demonstration has an initial observation, a list of Kwaypoints,and a language instruction: Dπ=(on,{κ1n. . . κKn}, in) :n∈ {1, . . . , N }. For instance, for apick skill, we record the initial image and point cloud, provide an instruction (e.g., ‘Pick up thelemon’), and then kinesthetically move the robot and record each end-effector waypoint. We use thecalibrated robot-to-camera transformation to automatically project each robot end-effector pose κjnto2D coordinates [un, vn]in the image plane of the camera used for grounding. For each skill, we trainthe acting module from Dπ. Aggregating across all skills yields a dataset of paired images, keypointannotations, and language instructions with which to train the grounding module.43.3 Grounding ModuleThe grounding module learns to identify 2D keypoints from RGB images that correspond to objectfeatures mentioned in an input instruction. We draw inspiration from recent works which use explicitlysupervised [ 10,38] or self-supervised keypoint attention networks [ 37,46] to implement a groundingmodel Qground . Specifically, we learn a grounding function Qground (u, v, I t, it)representing thelikelihood of keypoint [u, v]given image Itand paired language instruction it. In this work, weattempt to learn a single-step look-ahead grounding function that takes a language input (e.g. “Putthe lemon into the cabinet”) and outputs the most immediately relevant keypoint (e.g. 
the pixel forthe lemon if not already grasped, otherwise the pixel for the cabinet drawer to be placed) (see Fig. 2).Given this grounding function Qground , we infer the 2D pixel in the image with highest likelihood:[ut, vt] = arg maxu,vQground (u, v, I t, it) (1)In practice, we implement Qground using the two-stream architecture from [ 10] which fuses pre-trained CLIP [ 47] embeddings of visual and textual features in a fully-convolutional network tooutput a heatmap of Qground . The grounding function is trained with a binary cross-entropy lossbetween the predicted heatmaps and 2D Gaussian heatmaps centered at the ground-truth pixel.3.4 Acting ModuleAlthough keypoints can pinpoint both scene and object semantics, they critically lack the 3D geometriccontext necessary to recover precise 6-DoF actions for a given task. For instance, the command“Pick up the bowl” may result in a predicted keypoint located at the bottom of a bowl, where thereis no feasible grasp The exact 6-DoF actions are also dependent not just on the keypoint, but alsolanguage: “pick the lemon” and “cut the lemon” have similar keypoints but require completelydifferent actions. We need a way to refine a predicted keypoint into candidate 6-DoF actions based ona desired language command, which we discuss next.Skill Selection: Given a free-form language instruction, KITE first leverages the knowledge of LLMsto determine the appropriate skill label (e.g. it=“Put the lemon in the cabinet” should result inthe LLM outputting ˆlt=‘pick place’ ), following prior work [ 48,21]. The procedure entailsprompting the LLM, in our case OpenAI’s text-davinci-003 [49], with in-context examplesof instructions and the appropriate skill type (see Appendix B.3 for examples of our promptingstrategy). At test-time, we concatenate the example prompt with instruction itand generate skill labelˆlt∈ {l1, . . . , lM}using the LLM. Then, we obtain the skill, consisting of the waypoint policy πandcontroller ρvia lookup in the library: (π, ρ) =L[ˆlt].Learning Waypoint Policies: Given the keypoint [ˆut,ˆvt]predicted by the grounding module andskill label ˆlt, we need to learn a waypoint policy πto perform the skill. KITE learns πfor each skillfrom demonstrations of keypoints {κ1, . . . , κK}. The waypoint policy πtakes a point cloud Ptandkeypoint [ˆut,ˆvt]as input, and aims to output Kwaypoints {κ1, . . . , κK}to execute the chosen skill.In KITE we align both 3D point cloud and a 2D keypoint representations by “annotating” Ptwith thekeypoint. We do this by first taking the depth image Dtfrom the same view as It, and deprojectingall nearby pixels within a radius R:KR={[u, v]∈It,∥[u, v]−[ˆut,ˆvt]∥< R}to their associated3D points PR={(x, y, z ) =deproject (u, v),∀(u, v)∈ KR}. This yields a set of “candidate”points to consider for interaction. In the bowl grasping example, PRwould be points on the bottomof the bowl. Next, we augment the point cloud Ptwith a 1-channel mask Mt∈RN×1(Fig. 2).For any point (x, y, z )∈ Pt, the mask channel label is 1 if (x, y, z )∈ PR(i.e., the point is in closeproximity to the deprojected keypoint) and 0 otherwise.Given the pointcloud and keypoint mask, KITE predicts all Kwaypoints relative to individual pointsin the point cloud. For each point, we classify which of the Kwaypoints it is nearest to, alongwith the offset to the desired 7-DoF end-effector pose (position and quaternion) for each of the Kwaypoints. To do so, we adapt the PointNet++ [ 50] architecture, and define Qπ: (Pt,Mt)→ P πwhere Pπ∈RN×dandd=K×(1 + 3 + 4) . 
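As a rough illustration of this grounding-to-acting handoff (not the released KITE code), the sketch below selects the keypoint of Eq. (1) from a grounding heatmap and builds the keypoint mask Mt by deprojecting pixels near the keypoint. The pinhole deprojection helper, the pixel radius, the distance threshold, and the array shapes are assumptions made only for this example.

import numpy as np

def select_keypoint(heatmap):
    """Eq. (1): choose the pixel with the highest grounding likelihood."""
    v, u = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(u), int(v)

def deproject(u, v, depth, K):
    """Pinhole deprojection of pixel (u, v) given a depth image and intrinsics K."""
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z])

def annotate_point_cloud(points, depth, K, keypoint, radius_px=10, dist_thresh=0.01):
    """Append the 1-channel keypoint mask M_t to the point cloud P_t.

    points:   (N, 6) multi-view point cloud (xyz + rgb)
    depth:    depth image aligned with the grounding camera
    K:        3x3 camera intrinsics
    keypoint: (u, v) pixel predicted by the grounding module
    """
    u0, v0 = keypoint
    h, w = depth.shape
    # Deproject all valid pixels within radius_px of the keypoint (the set P_R).
    candidates = []
    for v in range(max(0, v0 - radius_px), min(h, v0 + radius_px + 1)):
        for u in range(max(0, u0 - radius_px), min(w, u0 + radius_px + 1)):
            if (u - u0) ** 2 + (v - v0) ** 2 <= radius_px ** 2 and depth[v, u] > 0:
                candidates.append(deproject(u, v, depth, K))
    candidates = np.stack(candidates)

    # Mask channel: 1 for points of P_t that lie near any deprojected candidate.
    dists = np.linalg.norm(points[:, None, :3] - candidates[None, :, :3], axis=-1)
    mask = (dists.min(axis=1) < dist_thresh).astype(np.float32)[:, None]
    return np.concatenate([points, mask], axis=1)  # (N, 7)

The resulting (N, 7) array is what the PointNet++-based waypoint policy would consume, producing the per-point K × (1 + 3 + 4) outputs described above.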
Continuing the example of grasping a bowl, thepredicted pose offsets for each point on the bottom of the bowl (ungraspable) should lead to thebowl rim (graspable). See Appendix A for more details about the actor. We supervise Qπusing the5following per-point loss:Lskill=λclsCE(ˆk, k) +λori(1− ⟨ˆqˆk, qk⟩) +λposL1([ˆxˆk,ˆyˆk,ˆzˆk],[xk, yk, zk]) (2)The first term corresponds to the 1-hot cross-entropy classification loss between the predictedwaypoint index ˆkand the true nearest waypoint index k. The remaining terms supervise the predictedgripper orientation and position using waypoint κkfor only the points that have classification label k,so as to only penalize points that matter (are in close proximity) to the κk.Action Module Inference: At test time, given a point cloud Ptwith associated keypoint maskMt, we use Qπto obtain ˆPπ. By taking the highest likelihood point for each of the Kindices(representing the points nearest each of the Kwaypoints). Then, we index the predicted end-effectorposes in ˆPπby these Kindices, resulting in Kwaypoints {ˆκ1, . . . , ˆκK}. Finally, we obtain the finaltrajectory to carry out the skill with using the skill-specific controller: τ=ρ({ˆκ1, . . . , ˆκK}).In summary, KITE’s full pipeline first grounds a language command itin an observation otviaQground to infer keypoints (Section 3.3), infers the skill label lt(Section 3.4), maps the label to askill and controller (π, ρ) =L[lt], then and executes πandρ, and finally replans.4 ExperimentsIn this section, we aim to answer the following questions: (Q1) How well does KITE handle scenesemantic instructions? (Q2) How well does KITE handle precise object-semantic instructions? (Q3)Does KITE’s scene and object semantic awareness generalize to unseen object instances? and (Q4)Can KITE’s primitives capture precise and dexterous motions beyond pick and place? We first outlineKITE’s key implementation details and the baseline methods we benchmark against. Finally, weanalyze KITE’s comparative and overall performance across three real-world environments whichstress test scene and object-aware semantic manipulation (Section 4.1).Implementation Details: Across all evaluation environments, we specify a library of skills andcollect 50 kinesthetic demonstrations per-skill. In order to improve precision of the grounding module,and because keypoint supervision is easy to obtain compared to kinesthetic teaching, we supplementthe grounding dataset obtained from kinesthetic data collection by manually labeling a small amountof images with paired language instructions (0.75:1 supplemental to original samples ratio). Weimplement the grounding module according to the architecture from [ 10] and each waypoint policyin the acting module as a PointNet++ backbone [ 50] with a point cloud resolution of 20K points. SeeAppendix A for more details and visualizations of grounding model predictions.Baselines: We benchmark KITE’s performance against two state-of-the-art instruction-followingframeworks. The first is PerAct [ 11], which trains a PerceiverIO [ 51] transformer backbone topredict waypoint poses end-to-end conditioned on a voxelized scene representation and language. Incomparing against PerAct, we hope to understand whether KITE’s use of keypoint-parameterizedskills can offer better precision over end-to-end actions. 
To understand the value of keypoint-basedgrounding over frozen representations obtained from VLMs, we compare to RobotMoo [ 9], whichextends a library of language-conditioned skills to additionally condition on segmentation masksfrom an open-vocabulary object detector. Since exact models and data were not released, we use astate-of-the-art VLM and our set of learn skills for RobotMoo. See Appendix A.3 for more details.4.1 Real-World EvaluationWe explore three real-world manipulation environments that provide a rich testbed to explore KITE’ssensitivity to scene and object semantics. Task variations are detailed in Appendix B.1. Across allexperimental trials, we use a Franka Emika 7DoF robot and 3 Realsense D435 RGB-D cameras.Tabletop Instruction-Following: We train a library of four skills: {pick ,place ,open ,close}to reorganize a tabletop environment with 15 different household objects and an articulated storageorganizer with three pull-out drawers (see Figure 1, Appendix B.1). While we do not test oncompletely unseen objects, we randomly vary the positions of objects on the table and the degree ofclutter by adding distractor objects to the scene.Table 1 compares KITE against PerAct and RobotMoo in this setting. We evaluate all approacheswith 12 trials of instruction-following across three tiers of difficulty, ranging from a few objects on6the table and fairly straightforward language instructions (Tier 1), a visually cluttered table (Tier 2),and a cluttered table with more ambiguous instructions (Tier 3). See Appendix B.1 for examples ofthe objects considered and variations across tiers.Figure 3: Semantic Grasping Experimental Setup:We evaluate KITE on semantic grasping across rigidtools, deformable objects, and articulated items. Weshow 17 of the 20 objects tested along with ground-truth semantic labels for different features. The top rowincludes objects seen during grounding module training,and the bottom consists of unseen object instances.We first evaluate individual actions ( open ,close ,pick ), finding KITE to be the most ro-bust and repeatable. KITE’s use of precise key-point grounding enables scene semantic aware-ness (Q1) over different objects ( pick, place )and object semantic understanding (Q2) by dis-tinguishing amongst different drawer handleswith the open andclose skills. PerAct’s disad-vantage is its discrete visual space, where anyslight 1-off voxel predictions can make it diffi-cult to grasp objects or cabinet handles. Due tothe weak classification objectives it is trainedon, its most common failure mode is misclassi-fied gripper opening/closing actions. These fail-ures are alleviated by the parameterized skillsused in KITE and RobotMoo. Unsurprisingly,RobotMoo does well at grasping different ob-jects referenced in language, as VLMs trainedon internet-scale data have strong object priors.Still, RobotMoo struggles with object semantics like the distinction amongst top, middle, or bottomdrawers when opening or closing. We find that KITE is also the most competitive framework forlong-horizon sequential reasoning (last two columns in Table 1), and the most common failures stillinclude grasping the wrong object or with slightly misaligned gripper poses. 
RobotMoo's inability to reason over multiple cabinet handles for opening and closing impedes its long-horizon performance, whereas PerAct's compounding precision errors render long-horizon tasks especially difficult.

Tier    Method     open   close   pick   pick → place   open → pick → place → close
Tier 1  KITE       1.00   0.92    0.83   0.75           0.75
        RobotMoo   0.33   0.41    0.75   0.41           0.08
        PerAct     0.08   0.50    0.33   0.08           N/A
Tier 2  KITE       1.00   0.83    0.76   0.66           0.42
        RobotMoo   0.36   0.36    0.55   0.36           0.09
Tier 3  KITE       0.75   0.83    0.58   0.66           0.58
        RobotMoo   0.33   0.42    0.50   0.42           0.00

Table 1: Tabletop Instruction Following Results: Across 12 trials per method per tier, KITE outperforms both RobotMoo and PerAct for individual actions (open, close, pick) and for chaining together up to four actions in sequence. We test all approaches on Tier 1 (fewer objects, straightforward language), Tier 2 (more objects, straightforward language), and Tier 3 (more objects, more free-form language). KITE's use of parameterized skills gives it an edge in precision over PerAct, which is highly susceptible to one-off voxel predictions. This makes skills like opening and picking especially hard, and renders the approach virtually ineffective for the higher-complexity tiers (2 and 3). RobotMoo is the most competitive approach to KITE, but its main pitfall is a lack of object semantic awareness, such as distinguishing amongst different-level cabinet handles.

Semantic Grasping: Aside from recognizing and manipulating different objects, we explore in greater detail whether KITE can perform object-semantic manipulation (Q2). We evaluate KITE on the task of semantic grasping, with instructions of the form "Pick up the X by the Y" (e.g., 'stuffed bear' and 'ear'; 'marker' and 'cap'; 'shoe' and 'laces') (examples in Fig. 1). For these trials, we train Qground on a subset of rigid tools, deformable items, and articulated items (Fig. 3) and retain the keypoint-conditioned pick skill from Section 4.1. We summarize the findings in Table 2 with 26 trials per category of items, noting that KITE can achieve precise semantic grasping with generalization to unseen object instances (Q2, Q3). We omit a comparison to PerAct as its difficulties with pick-and-place in the tabletop environment are only exacerbated in the semantic grasping setting, where specific intra-object features matter. In the trials summarized in Table 2, KITE outperforms RobotMoo, suggesting the utility of keypoints for pinpointing specific object parts compared to the coarse segmentation masks or bounding boxes output by VLMs. We also observe that the majority of KITE's failures in this setting are due to misinterpretations of symmetry (e.g., grasping the left instead of the right handle of the pliers), rather than a completely erroneous keypoint as is common in RobotMoo. We posit that this could be alleviated with more diverse data of object semantic variations.

                          Rigid Tools   Deformable Objects   Articulated Items   Failures (A / B / C)
Seen Instances   KITE     0.77          0.77                 0.70                5 / 3 / 3
Unseen Instances KITE     0.70          0.54                 0.70                4 / 5 / 4
All              RobotMoo 0.23          0.35                 0.19                2 / 36 / 7

Table 2: Semantic Grasping Results: Across 20 total objects, 3 diverse object categories, and 26 trials per method per category, KITE achieves the highest rate of pick success for various object semantic features (Fig. 3), and with the least severity of failures.
We categorize failure modes as follows, with (A) denoting a symmetry error (picking the left instead of the right handle), (B) representing a grounding error with an erroneous keypoint prediction, and (C) indicating a manipulation failure (wrong inferred orientation or slip during grasping).

         reorient mug   pour cup   refill keurig   load pod
KITE     8/12           9/12       8/12            9/12

Table 3: Coffee-Making Results: KITE handles fine-grained manipulation across 4 skills requiring highly precise manipulation.

Coffee-Making: Finally, we seek to answer whether KITE can execute fine-grained behaviors from instructions (Q4) by studying a coffee-making scenario with four skills: {reorient mug, pour cup, refill keurig, load pod} (examples in Fig. 1). We evaluate on the same object instances seen in training, but subject to spatial variations and language variations (e.g., 'Place the blue/red/green/brown pod in the machine', 'Pour the red/grey pitcher into the mug/Keurig refill area', 'Place the cup/mug that's sideways right-side-up'). Even for these very fine-grained motions, KITE is able to follow instructions with 67–75% success (Table 3). The main failures reside with low-level control errors rather than grounding, such as partial coffee pod insertion, misaligned mugs and pitchers during pouring, or slippage during mug reorientation. This suggests that the individual skills could benefit from scaling up demonstration collection, while retaining the existing grounding modules.

5 Discussion
Summary In this work, we present KITE, a framework for semantic manipulation that identifies 2D task-relevant keypoints and extrapolates 6-DoF actions accordingly. By leveraging 2D keypoints to precisely localize semantic concepts, KITE is adept at recognizing semantic labels both across different object instances and on different regions within the same object. KITE performs action planning by drawing from a library of parameterized skills. Empirically, we find that KITE surpasses existing language-based manipulation frameworks along the axes of scene semantic awareness and object semantic awareness. We also find that KITE can be trained from orders of magnitude less data and with large precision gains over end-to-end approaches, while exhibiting an ability to generalize and operate over extended horizons. Finally, we show that KITE offers a flexible interface for instruction-following, including tabletop rearrangement, fine-grained grasping, and dexterous manipulation.

Limitations and Future Work One limiting factor of KITE is its reliance on building a library of skills. However, we show that a relatively small library of keypoint-parameterized skills is expressive enough to accomplish many standard manipulation tasks with object variations over an extended horizon. Additionally, KITE requires less than 50 demonstrations per new skill, meaning that adding new skills is fairly straightforward. We also note that KITE's grounding module is trained from scratch. As VLMs continually improve and in the future may be able to pinpoint keypoints in images, it would be interesting to replace or enhance KITE's grounding module with these models. Additionally, we acknowledge that KITE currently executes skills in an open-loop manner as parameterized by waypoints. In the future, we are excited to extend KITE's skills with closed-loop feedback and extend the complexity of these skills to even more dexterous settings.

Acknowledgments
This work is in part supported by funds from NSF Awards 2132847, 2006388, 2218760, as well as the Office of Naval Research, and FANUC.
Toyota Research Institute provided funds to support this work. We also thankSidd Karamcheti and Yuchen Cui for their helpful feedback and suggestions. Priya Sundaresan is supported byan NSF GRFP.References[1]Y .-W. Chao, W. Yang, Y . Xiang, P. Molchanov, A. Handa, J. Tremblay, Y . S. Narang, K. Van Wyk, U. Iqbal,S. Birchfield, et al. Dexycb: A benchmark for capturing hand grasping of objects. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 9044–9053, 2021.[2]M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-graspnet: Efficient 6-dof grasp generationin cluttered scenes. In 2021 IEEE International Conference on Robotics and Automation (ICRA) , pages13438–13444. IEEE, 2021.[3]J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0:Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprintarXiv:1703.09312 , 2017.[4]N. Correll, K. E. Bekris, D. Berenson, O. Brock, A. Causo, K. Hauser, K. Okada, A. Rodriguez, J. M.Romano, and P. R. Wurman. Analysis and observations from the first amazon picking challenge. IEEETransactions on Automation Science and Engineering , 15(1):172–188, 2016.[5]X. Gu, T.-Y . Lin, W. Kuo, and Y . Cui. Open-vocabulary object detection via vision and language knowledgedistillation. arXiv preprint arXiv:2104.13921 , 2021.[6]M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Mahendran,A. Arnab, M. Dehghani, Z. Shen, et al. Simple open-vocabulary object detection. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part X , pages728–755. Springer, 2022.[7]X. Zhou, R. Girdhar, A. Joulin, P. Kr ̈ahenb ̈uhl, and I. Misra. Detecting twenty-thousand classes usingimage-level supervision. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel,October 23–27, 2022, Proceedings, Part IX , pages 350–368. Springer, 2022.[8]S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, C. Li, J. Yang, H. Su, J. Zhu, et al. Grounding dino:Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499 ,2023.[9]A. Stone, T. Xiao, Y . Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia,C. Finn, et al. Open-world object manipulation using pre-trained vision-language models. arXiv preprintarXiv:2303.00905 , 2023.[10] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. InConference on Robot Learning , pages 894–906. PMLR, 2022.[11] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for robotic manipulation.InConference on Robot Learning , pages 785–799. PMLR, 2023.[12] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z: Zero-shottask generalization with robotic imitation learning. In Conference on Robot Learning , pages 991–1002.PMLR, 2022.[13] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman,A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprintarXiv:2212.06817 , 2022.[14] S. Stepputtis, J. Campbell, M. Phielipp, S. Lee, C. Baral, and H. Ben Amor. Language-conditionedimitation learning for robot manipulation tasks. Advances in Neural Information Processing Systems , 33:13139–13150, 2020.[15] O. Mees, L. Hermann, and W. Burgard. 
What matters in language conditioned robotic imitation learning.arXiv preprint arXiv:2204.06252 , 2022.[16] F. Codevilla, M. M ̈uller, A. L ́opez, V . Koltun, and A. Dosovitskiy. End-to-end driving via conditionalimitation learning. In 2018 IEEE International Conference on Robotics and Automation (ICRA) , pages4693–4700, 2018. doi:10.1109/ICRA.2018.8460487.[17] L. Shao, T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg. Concept2robot: Learning manipulation conceptsfrom instructions and human demonstrations. The International Journal of Robotics Research , 40(12-14):1419–1434, 2021.9[18] A. Akakzia, C. Colas, P.-Y . Oudeyer, M. Chetouani, and O. Sigaud. Grounding language to autonomously-acquired skills via goal generation. In ICLR 2021-Ninth International Conference on Learning Representa-tion, 2021.[19] P. Goyal, R. J. Mooney, and S. Niekum. Zero-shot task adaptation using natural language. CoRR ,abs/2106.02972, 2021. URL https://arxiv.org/abs/2106.02972 .[20] H. Hu, D. Yarats, Q. Gong, Y . Tian, and M. Lewis. Hierarchical decision making by generating andfollowing natural language instructions. Advances in neural information processing systems , 32, 2019.[21] A. Brohan, Y . Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian, et al.Do as i can, not as i say: Grounding language in robotic affordances. In Conference on Robot Learning ,pages 287–318. PMLR, 2023.[22] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y . Chebotar,P. Sermanet, T. Jackson, N. Brown, L. Luu, S. Levine, K. Hausman, and brian ichter. Inner monologue:Embodied reasoning through planning with language models. In 6th Annual Conference on Robot Learning ,2022. URL https://openreview.net/forum?id=3R3Pz5i0tye .[23] I. Singh, V . Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg.Progprompt: Generating situated robot task plans using large language models. arXiv preprintarXiv:2209.11302 , 2022.[24] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extractingactionable knowledge for embodied agents. In International Conference on Machine Learning , pages9118–9147. PMLR, 2022.[25] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural language instructionsto feasible plans. arXiv preprint arXiv:2303.12153 , 2023.[26] Y . Jiang, A. Gupta, Z. Zhang, G. Wang, Y . Dou, Y . Chen, L. Fei-Fei, A. Anandkumar, Y . Zhu, and L. Fan.Vima: General robot manipulation with multimodal prompts. arXiv preprint arXiv:2210.03094 , 2022.[27] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies:Language model programs for embodied control. In arXiv preprint arXiv:2209.07753 , 2022.[28] A. Stone, T. Xiao, Y . Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia,C. Finn, and K. Hausman. Open-world object manipulation using pre-trained vision-language model. InarXiv preprint , 2023.[29] P. Dayan and G. E. Hinton. Feudal reinforcement learning. Advances in neural information processingsystems , 5, 1992.[30] L. X. Shi, J. J. Lim, and Y . Lee. Skill-based model-based reinforcement learning. In 6th Annual Conferenceon Robot Learning , 2022. URL https://openreview.net/forum?id=iVxy2eO601U .[31] S. Mirchandani, S. Karamcheti, and D. Sadigh. Ella: Exploration through learned language ab-straction. In M. Ranzato, A. Beygelzimer, Y . Dauphin, P. Liang, and J. W. 
Vaughan, editors, Ad-vances in Neural Information Processing Systems , volume 34, pages 29529–29540. Curran Asso-ciates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/f6f154417c4665861583f9b9c4afafa2-Paper.pdf .[32] C. Agia, T. Migimatsu, J. Wu, and J. Bohg. Stap: Sequencing task-agnostic policies. arXiv preprintarXiv:2210.12250 , 2022.[33] S. Nasiriany, H. Liu, and Y . Zhu. Augmenting reinforcement learning with behavior primitives for diversemanipulation tasks. In IEEE International Conference on Robotics and Automation (ICRA) , 2022.[34] M. Dalal, D. Pathak, and R. Salakhutdinov. Accelerating robotic reinforcement learning via parameterizedaction primitives. In NeurIPS , 2021.[35] C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and P. Sermanet. Learning latent plansfrom play. CoRR , abs/1903.01973, 2019. URL http://arxiv.org/abs/1903.01973 .[36] S. Belkhale and D. Sadigh. Plato: Predicting latent affordances through object-centric play. In Conferenceon Robot Learning , pages 1424–1434. PMLR, 2023.[37] S. James and A. J. Davison. Q-attention: Enabling efficient learning for vision-based robotic manipulation.IEEE Robotics and Automation Letters , 7(2):1612–1619, 2022.10[38] L. Manuelli, W. Gao, P. Florence, and R. Tedrake. kpam: Keypoint affordances for category-level roboticmanipulation. In Robotics Research: The 19th International Symposium ISRR , pages 132–157. Springer,2022.[39] C. Wang, M. Chai, M. He, D. Chen, and J. Liao. Clip-nerf: Text-and-image driven manipulation of neuralradiance fields. CoRR , abs/2112.05139, 2021. URL https://arxiv.org/abs/2112.05139 .[40] J. Kerr, C. M. Kim, K. Goldberg, A. Kanazawa, and M. Tancik. Lerf: Language embedded radiance fields,2023.[41] X. Ma, D. Hsu, and W. S. Lee. Learning latent graph dynamics for deformable object manipulation. CoRR ,abs/2104.12149, 2021. URL https://arxiv.org/abs/2104.12149 .[42] H. Ha and S. Song. Flingbot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding.InConference on Robotic Learning (CoRL) , 2021.[43] V . Viswanath, J. Grannen, P. Sundaresan, B. Thananjeyan, A. Balakrishna, E. Novoseller, J. Ichnowski,M. Laskey, J. E. Gonzalez, and K. Goldberg. Disentangling dense multi-cable knots. In 2021 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 3731–3738, 2021. doi:10.1109/IROS51168.2021.9636397.[44] P. Sundaresan, J. Grannen, B. Thananjeyan, A. Balakrishna, J. Ichnowski, E. R. Novoseller, M. Hwang,M. Laskey, J. E. Gonzalez, and K. Goldberg. Untangling dense non-planar knots by learning manipulationfeatures and recovery policies. CoRR , abs/2107.08942, 2021. URL https://arxiv.org/abs/2107.08942 .[45] P. Sundaresan, S. Belkhale, and D. Sadigh. Learning visuo-haptic skewering strategies for robot-assistedfeeding. In 6th Annual Conference on Robot Learning , 2022. URL https://openreview.net/forum?id=lLq09gVoaTE .[46] C. Wang, R. Wang, A. Mandlekar, L. Fei-Fei, S. Savarese, and D. Xu. Generalization through hand-eyecoordination: An action space for learning spatially-invariant visuomotor control. In 2021 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 8913–8920. IEEE, 2021.[47] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin,J. Clark, et al. Learning transferable visual models from natural language supervision. In Internationalconference on machine learning , pages 8748–8763. PMLR, 2021.[48] J. Wu, R. Antonova, A. Kan, M. Lepert, A. Zeng, S. 
Song, J. Bohg, S. Rusinkiewicz, and T. Funkhouser.Tidybot: Personalized robot assistance with large language models. arXiv preprint arXiv:2305.05658 ,2023.[49] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry,A. Askell, et al. Language models are few-shot learners. Advances in neural information processingsystems , 33:1877–1901, 2020.[50] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in ametric space. Advances in neural information processing systems , 30, 2017.[51] A. Jaegle, S. Borgeaud, J.-B. Alayrac, C. Doersch, C. Ionescu, D. Ding, S. Koppula, D. Zoran, A. Brock,E. Shelhamer, et al. Perceiver io: A general architecture for structured inputs & outputs. arXiv preprintarXiv:2107.14795 , 2021.[52] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg,W.-Y . Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643 , 2023.11KITE: Keypoints + Instructions To ExecutionSupplementary MaterialIn this section, we outline additional details regarding the implementation of KITE, the real-worldenvironments studied, and qualitative results of all methods. Please refer to our website to bestunderstand the task diversity and qualitative performance through videos of KITE performing real-world semantic manipulation.A Implementation DetailsA.1 KITEAs discussed in Section 3.4, KITE trains a PointNet++ model to output all Krelative waypoints and aone-hot waypoint index for each point in the point cloud. The auxiliary one-hot classification outputlayer classifies which waypoint each point in the point cloud is most relevant for manipuation (nearestto). In practice, many skills like pick orplace can be parameterized by just K= 1 waypoint(where to grasp, where to place). In this case, the one-hot classification output is reduced to binaryclassification of which points are near the graspable object or target location, respectively. Formore general skills parameterized by Kwaypoints, we can supervise the K-th end-effector posepredictions per-point by taking the loss of the predicted and ground truth gripper pose for that pointcompared to ground truth. This loss is provided in the main text in Eq. (2).Figure 4: KITE Grounding Predictions: KITE’s grounding model is able to accurately predict keypointsfor both scene semantic instructions (e.g., “grab the lemon” and “put the green pod in”) and object semanticinstructions (e.g., “shut the top drawer” and “take a peek at the 2nd shelf”.To artificially scale the data KITE’s grounding module is trained on, we apply various randomcolorspace and affine transformations to the dataset collected with all skills to augment 8X beforetraining. We train the grounding module and each skill policy using the Adam optimizer with learningrate0.0001 , which takes 3hours and 1hour on an NVIDIA GeForce GTX 1070 GPU, respectively.A.2 PerActFor each evaluation environment, we consolidate each of the skill datasets used to train KITE intoone multi-task dataset with which to train PerAct. The input to PerAct is a 753voxel grid (although12the original PerAct implementation used a 1003voxel resolution, we adjust our workspace boundsaccordingly to retain the same voxel resolution). We represent waypoints as 1-hot encodings in thisvoxel grid, end-effector orientations as a discrete prediction over 5bins each for yaw, pitch, and roll,and the gripper open/closed state as a binary indicator variable as in [ 11]. 
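To make this action discretization concrete, the following is a hedged sketch (not the PerAct implementation) of mapping a continuous waypoint onto the 75^3 voxel grid, the 5 orientation bins per Euler angle, and the binary gripper flag described above; the workspace bounds below are illustrative assumptions.

import numpy as np

WORKSPACE_LO = np.array([0.2, -0.4, 0.0])   # assumed workspace bounds (m)
WORKSPACE_HI = np.array([0.8,  0.4, 0.6])
NUM_VOXELS = 75
NUM_ROT_BINS = 5

def discretize_waypoint(position, yaw_pitch_roll, gripper_open):
    """Map a continuous waypoint onto voxel, orientation-bin, and gripper targets."""
    # Position -> integer voxel indices in the 75^3 grid.
    rel = (np.asarray(position) - WORKSPACE_LO) / (WORKSPACE_HI - WORKSPACE_LO)
    voxel = np.clip((rel * NUM_VOXELS).astype(int), 0, NUM_VOXELS - 1)

    # Each Euler angle in [-pi, pi] -> one of NUM_ROT_BINS bins.
    angles = np.asarray(yaw_pitch_roll)
    rot_bins = np.clip(((angles + np.pi) / (2 * np.pi) * NUM_ROT_BINS).astype(int),
                       0, NUM_ROT_BINS - 1)

    # Gripper open/closed as a binary indicator.
    return voxel, rot_bins, int(bool(gripper_open))

# Example: a grasp waypoint above the table center, with the gripper closed.
voxel, rot_bins, grip = discretize_waypoint([0.5, 0.0, 0.15],
                                            [0.0, np.pi / 2, 0.0],
                                            gripper_open=False)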
For each environment, wetrain PerAct for 7200 iterations.Figure 5: PerAct Predictions: We visualize PerAct predictions on the task of opening a cabinet with multipledrawers. Although PerAct exhibits some reasonable predictions (last column), it struggles with localizing thecorrect handle (1st, 3rd columns). Even when localizing the correct handle (2nd column), the slight imprecisionof the predict vs. ground truth action can lead to downstream manipulation failure.A.3 RobotMooWe note that the original implementation of RobotMoo leveraged the RT-1 [ 13] skill learningframework. This set of skills were trained with months of data collection, amassing thousands oftrajectories for 16 object categories, and RobotMoo further extended these policies to 90 diverse objectcategories. As this is not reproducible in our setting, we implement RobotMoo by using KITE’slibrary of skills, but conditioning them on VLM predictions instead of our keypoints. Specifically,while the original RobotMoo implementation used OwLViT, we use the more recent state-of-the-artopen vocabulary object detectors Grounding DINO [ 8] and Segment Anything [ 52] jointly. Withthese models, we obtain segmentation masks for objects referenced in an input instruction. We takethe center pixel coordinate of these segmentation masks as input to our acting module rather theoutput of Qground .Figure 6: RobotMoo Predictions: We visualize the predictions for RobotMoo’s perception stack on tabletopinstruction following images. Although RobotMoo exhibits decent scene awareness and an ability to localizedifferent object instances, it struggles with object semantics. Specifically, RobotMoo struggles to localize all thedifferent drawers in the scene, let alone distinguish amongst the topvs.middle vs.bottom handles.B Real-World Experimental DetailsB.1 Task VariationsFor each real environment used in evaluation, we stress-test all methods across task variations rangingfrom diversity in the input language instructions to amount of clutter and distractor objects. Wesummarize each axis of variation for long-horizon tabletop instruction following (Table 4), semanticgrasping (Table 5), and coffee-making (Table 6) below.13Tier SkillVariationsLanguage Scene1open ‘Open the [top/middle/bottom] cabinet’ randomized position of cabinetclose ‘Close the [top/middle/bottom] drawer’ randomized position of cabinetpick ‘Pick up the [lemon/screwdriver/lego/bowl/expo marker] randomized object positionsplace ‘Put the [...] [away, in the [...] drawer, in the bowl]’ randomized object positions1open ‘Open the [top/middle/bottom] cabinet’ randomized position +distractor objects (clothes strewn)close ‘Close the [top/middle/bottom] drawer’ randomized position +distractor objects (clothes strewn)pick ‘Pick up the [lemon/screwdriver/lego/bowl/expo marker, randomized position + scene cluttereggplant, carrot, corn, lime, scissors,ketchup, coffee pod]place ‘Put the [...] [away, in the [...] 
drawer, in the bowl]’ randomized object positions+ scene clutter3open ‘Yank open the top drawer’ randomized position +‘Give the 2nd drawer a tug’ distractor objects (clothes strewn)‘Take a peek at the 3rd drawer pls’‘Can you check the top drawer?’close ‘Close it’ distractor objects (clothes strewn)‘Give it a push’‘Let’s shut the drawer’‘Go ahead and close the top drawer’‘Close any open drawer’‘The one 3rd from the bottom needs to be shut’pick ‘Grab me the [...]’ randomized object positions‘Do me a favor and get me the [...]’ + scene clutter‘Can you pass me the [...]?’‘Get the [...]’‘Locate the [...]’‘Could you hand me the [...] please’place ‘Grab the [...] and put it [...]’ randomized object positions‘Take the [...] and place it [...]’ + scene clutter‘Pick up the [...] and put it [...]’‘Fetch the [...] and drop it [...]’‘Plop the [...] into the bowl’Table 4: Tabletop Instruction Following Environment VariationsCategory Object Language VariationsRigid Toolshammer middle, end, tooltip, hammerhead, metal, wooden handle, center, tipT-tool left side, left T, right side, right T, bottom handle, top handleDeformable Objectsshoe heel, toe, back, front, shoelaces, laces, lace-up areastuffed animal head, nose, ear, tail, belly, foot, arm, leg, tummy, elephant trunkArticulated Itemspliers joint, left handle, right handle, top handle, bottom handleclamp joint, left handle, right handle, top handle, bottom handlescissors joint, left hole, right hole, smaller hold, bigger hold, larger holdmarker + twist-off cap cap, expo label, center, label, endTable 5: Semantic Grasping Environment VariationsB.2 Primitive InstantiationsIn this section, we describe the instantiation of our library of skills for each real-world environment:Tabletop Instruction Following: We parameterize each skill in the tabletop manipulation settingvia a single waypoint κ= (x, y, z, ψ, θ, φ )specifying the primary point of interaction.•open : With its gripper open and predicted orientation (ψ, θ, φ ), the robot approaches 5cm.away from a closed drawer handle at predicted position (x, y, z ). Next, the robot moves to(x, y, z )in the same orientation and closes the gripper to grasp the cabinet handle. Finally,it executes a linear pull by moving to the approach position, keeping the orientation fixed,before releasing the handle.•close : The robot approaches 5cm. away from an opened drawer handle at position (x, y, z )with orientation (ψ, θ, φ ), then executes a linear push towards (x, y, z )to close the drawer.•pick : The robot approaches the object located at (x, y, z ), closes its gripper, and lifts 5cm.•place : While holding an object grasped with the pick primitive, the robot moves 5cmabove the desired place location (x, y, z )with orientation (ψ, θ, φ )and opens the gripper toits maximum width, releasing the object.14Coffee-Making In the coffee-masking tasks, we implement a library of 4 skills which test KITE’sability to handle precise or dynamic movements. Since pour cup,refill keurig , and load podall involve grasping, we finetune the pick skill from tabletop instruction-following with 50 demon-strations across pitchers and coffee pods, respectively. 
Then, we can parameterize each skill with asingle waypoint κ= (x, y, z, ψ, θ, φ )as follows:•reorient mug: The robot attempts to grasp a mug, initially oriented sideways, with pose κbefore resetting to a canonical upright (untilted) end-effector pose.•pour cup/refill keurig : After grasping a pitcher, the robot moves to position (x, y, z )denoting the position of the vessel to be poured into (cup or refill compartment of Keurig).Starting from an untilted end-effector pose, the robot gradually rotates at a constant velocityto(ψ, θ, φ ), denoting the final pour orientation.•load pod: After grasping a coffee pod with the pick primitive, the robot moves 2cm.above (x, y, z ), the sensed position of the K-cup slot with orientation (ψ, θ, φ ). Next, therobot releases its grasp to drop the pod into the compartment. As this task requires highprecision, it is often the case that after releasing the pod, it is not completely inserted orproperly aligned. Thus, the load podprimitive moves downward an additional 2cm inattempt to push the pod into place. We note that we do not evaluate this skill with realliquids for safety reasons, but measure success in terms of visual alignment between thepitcher and vessel.Skill Language Scenereorient mug‘Flip the mug right-side up’ randomized mug position, roll (−π/2, π/2)‘Put the mug upright’‘Grab the mug and put it it right-side-up’‘Can you place the mug right side up?’‘Get the mug that’s laying flatand flip it upright’pour cup‘Fill up the mug that’s right-side up’ randomized pitcher (red / gray)‘Pour me a glass’ + cups (compostable, Dixie, mug)‘Pour the red pitcher into the mug’‘Grab the silver-handle pitcher and fillup the brown cup’‘Refill the Dixie cup with the red pitcher’refill keurig‘Refill the espresso machine’ randomized pitcher (red / gray)‘Grab the red/silver pitcher and + randomized Keurig pose‘fill up the water compartment’load pod‘Load the blue K-cup’ randomized coffee pod (red/blue/green/brown)‘Can you put the red pod in?’ + randomized Keurig‘Insert the green pod’‘Start a brew with the brown pod’Table 6: Coffee-Making VariationsB.3 LLM PromptingSkill Label Inference: In this section, we briefly outline how KITE retrieves the skill label ltfor input instruction itvia LLMs. Below, we provide a sample prompt which we feed as input totext-davinci-003 to obtain ltin tabletop instruction following setting.Listing 1: LLM Prompting for Skill Label Inference1i_t =input (" Enter instruction :")2"""3Input : " Pick upthe lemon "4Output : [" pick "]56Input : "Put the screwdriver away "7Output : [" pick ", place "]89Input : "Pls grab methe screwdriver and put itaway "10Output : [" pick ", " place "]1112Input : " Grab the green bowl "13Output : [" grasp "]1415Input : "Put the lemon inthe bowl "1516Output : [" pick ", " place "]1718Input : " Open the top drawer "19Output : [" open "]2021Input : "Pls shut the drawer "22Output : [" close "]2324Input : "put the expo marker away "25Output : [" pick ", " place "]2627Input : "put the Blue lego inthe cabinet "28Output : [" pick ", " place "]2930Input : ’%s’31Output :32"""%i_t16 |
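A cleaned-up, runnable version of the prompting scheme in Listing 1 might look like the sketch below. The few-shot examples are taken from the listing above; query_llm is a hypothetical stand-in for the text-davinci-003 completion call, and get_observation, ground, and skill_library are hypothetical handles to the perception and skill components described in Section 3.

import ast

# Hedged sketch of skill-label inference and dispatch, assuming placeholder
# callables for the LLM, the grounding module, and the skill library.
FEW_SHOT_PROMPT = '''
Input: "Pick up the lemon"
Output: ["pick"]

Input: "Put the screwdriver away"
Output: ["pick", "place"]

Input: "Open the top drawer"
Output: ["open"]

Input: "Pls shut the drawer"
Output: ["close"]

Input: "{instruction}"
Output:'''

def infer_skill_labels(instruction, query_llm):
    """Return the ordered list of skill labels for a free-form instruction."""
    prompt = FEW_SHOT_PROMPT.format(instruction=instruction)
    completion = query_llm(prompt)            # e.g. ' ["pick", "place"]'
    return ast.literal_eval(completion.strip())

def run_instruction(instruction, get_observation, ground, skill_library, query_llm):
    """Execute one skill per predicted label, replanning from a fresh observation."""
    for label in infer_skill_labels(instruction, query_llm):
        obs = get_observation()                       # fresh (I_t, P_t)
        keypoint = ground(obs, instruction)           # grounding module keypoint
        waypoint_policy, controller = skill_library[label]
        waypoints = waypoint_policy(obs, keypoint)
        controller(waypoints)                         # move between waypoints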
h-geaPzuJu | DROID: Learning from Offline HeterogeneousDemonstrations via Reward-Policy DistillationSravan Jayanthi∗,1, Letian Chen∗,†,1, Nadya Balabanska2, Van Duong2, Erik Scarlatescu1,Ezra Ameperosa1, Zulfiqar Zaidi1, Daniel Martin1, Taylor Del Matto1, Masahiro Ono2,Matthew Gombolay11School of Interactive Computing, Georgia Institute of Technology2Jet Propulsion Laboratory, California Institute of Technology∗Equal Contribution,†Corresponding Author: letian.chen@gatech.eduAbstract: Offline Learning from Demonstrations (OLfD) is valuable in domainswhere trial-and-error learning is infeasible or specifying a cost function is diffi-cult, such as robotic surgery, autonomous driving, and path-finding for NASA’sMars rovers. However, two key problems remain challenging in OLfD: 1) hetero-geneity : demonstration data can be generated with diverse preferences and strate-gies, and 2) generalizability : the learned policy and reward must perform well inunseen test settings beyond the limited training regime. To overcome these chal-lenges, we propose Dual Reward and policy Offline Inverse Distillation (DROID)that leverages diversity to improve generalization performance by decomposingcommon-task and individual-specific strategies and distilling knowledge in boththe reward and policy spaces. We ground DROID in a novel and uniquely chal-lenging Mars rover path-planning problem for NASA’s Mars Curiosity Rover. Wecurate a novel dataset along 154 Sols (Martian days) and conduct a novel, em-pirical investigation to characterize heterogeneity in the dataset. We find DROIDoutperforms prior SOTA OLfD techniques, leading to a 21% improvement in mod-eling expert behaviors and 90% closer to the task objective of reaching the finaldestination. We also benchmark DROID on the OpenAI Gym Cartpole and LunarLander environments and find DROID achieves 23% (significantly) better perfor-mance modeling unseen holdout heterogeneous demonstrations.Keywords: Learning from Heterogeneous Demonstration, Network Distillation,Offline Imitation Learning1 IntroductionDeep Reinforcement Learning (Deep RL) has achieved success in generating high-performance con-tinuous control behaviors but requires a high-fidelity simulator or reward-annotated dataset [1, 2, 3,4, 5, 6, 7, 8]. A counterexample is the Mars Path-Planning (MPP) problem where one must constructa path through a series of waypoints for the Mars rover to traverse towards a destination without anexplicit notion of reward or simulator. Not only is this challenging due to the chaotic terrain but alsobecause of factors such as physical limitations of the rover’s capabilities and unobservable terraininformation [9, 10]. Expert human Rover Planners (RPs) at NASA design paths under time andsafety constraints based on their expertise crafted over years of experience – knowledge that has yetto be codified [9]. Efforts have been made to automate the process with symbolic and connectionist(e.g., Deep RL) approaches [11, 12, 13]. However, these methods do not match the human RPs’success because it is difficult to codify experts’ knowledge into a cost function [13, 14], and a largehomogeneous dataset is usually required to learn robust policies [9]. 
Such challenges not only ex-ist in the MPP problem but are prevalent in other robotic applications such as surgery, search andrescue, self-driving, and elderly care [15, 16, 17, 18, 19, 20, 21].7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: This figure shows how DROID infers (blue arrows) the underlying expert strategies giventhe MPP dataset with varying preferences (e.g., Aggressive vs. Risk-Averse) across heterogeneousstates (Smooth to Chaotic Terrains). DROID performs knowledge distillation to a common taskpolicy (black arrows) shared by all demonstrators, infers a reward that encodes each expert’s latentpreference (orange arrows), and identifies the shared reward across demonstrators (green arrows).Learning from Demonstration (LfD) is a promising paradigm to address this challenge: LfD meth-ods learn by having users demonstrate the desired behavior on the robot, removing the need for costfunction specification [22]. However, most LfD approaches are limited by the need for many envi-ronment interactions [23, 24, 25]. In robotic applications, the exploratory environment interactionscould be costly (e.g., damaged or lost rovers), unethical (e.g., in surgery), or unsafe. Appropriately,Offline LfD (OLfD) has been proposed as a framework that allows for training a robot policy solelyfrom pre-recorded demonstrations with no assumption about a viable simulator [26].OLfD relaxes the requirement for a reward function and a simulator, it however faces several al-gorithmic challenges that limit its full potential [26, 27]. First, a key challenge for OLfD is het-erogeneity within the demonstration set. Each expert has individual preferences stemming fromvarying cognitive biases [28, 29] or different latent goals [30, 31] for accomplishing a given task. Ifthe LfD algorithm assumes homogeneity about heterogeneous data, the robot may fail to infer theexpert intention [28, 32] and the learned policy may perform poorly [28, 33]. On the other hand,learning each heterogeneous behavior separately is data-inefficient and prone to overfitting. Thesecond critical challenge is learning from a limited dataset [34]. The limited data makes it difficultfor the learned policy and reward functions to capture the user’s latent intentions and to generalizebeyond the demonstrated setting [15, 35, 36, 37].In this paper, we propose a novel OLfD approach, DualReward and Policy Offline InverseDistillation (DROID), that simultaneously distills a common task policy andreward from diversedemonstrators, while modeling individual preferences along both strategy-specific policies andre-wards . This approach allows us to extract an unbiased task objective while understanding variousstyles of accomplishing the task (Figure 1). Our contributions are three-fold:1. We curate a novel dataset with RP-designed Mars Curiosity Rover paths for 154 Sols (Martiandays) covering various terrains on Mars. We conduct a novel, empirical investigation to character-ize heterogeneity in the dataset, motivating the need for a OLfD approach robust to heterogeneity.2. We propose DROID, a framework that simultaneously distills knowledge through the learnedpolicy and reward. We also introduce two improvements (Augmented Regularization & RewardMaximization) to the underlying IRL algorithm [15] to improve generalization performance.3. 
We show DROID achieves 29% and17% better modeling performance (measuring the distancebetween expert demonstration and the generated trajectory) than previous SOTA in Cartpole andLunar Lander, respectively. On the MPP problem, we also find DROID outperforms SOTA andgets90% closer to the goal point (an important objective in the Mars path planning domain).22 Related WorkIn this section, we discuss related works in offline LfD, reward and policy distillation, and theMars Path Planning problem. There has been extensive work for LfD in robotics and learning fromheterogeneous demonstrations [30, 29, 33, 38, 39, 40, 41, 42, 43] but our focus is offline learningfrom diverse, limited demonstrations, an unexplored problem setting.Offline LfD – Few previous OLfD approaches model demonstration heterogeneity [44, 45, 46, 47,48], which could cause the learned reward function and policy to fail at generalizing beyond theoriginal demonstrated setting [35, 36] and capturing the individual preference of the expert [37].Some work in OLfD attempts to tackle multimodal expert behaviors by increasing model capacitythrough Diffusion [49], BeT (Behavior Transformers) [50], and representation learning [51], buteach fails to overcome the fundamentally mode-seeking behavior of LfD. DROID’s explicit policyand reward decomposition is critical to success in modeling heterogeneity.Reward and Policy Distillation – Several frameworks consider commonalities among reward func-tions across heterogeneous demonstrations [29, 38, 41]. However, these methods rely on online in-teractions with the environment, which is infeasible in many robotic domains. Policy distillation hasbeen studied to improve policy transfer performance [52, 53]. However, DROID is the first to studysimultaneous reward and policy distillation, particularly in the challenging setting of OLfD.Mars Path Planning – For NASA, it requires much human effort to plan paths for the Mars Curios-ity Rover [9]. Current approaches such as Autonomous Navigation (AutoNav) [54] do not considerall hazards that humans deem dangerous to the rover [55]. Hedrick et al. [56] proposes efficientMartian path planning, and Rover-IRL [57] learns a cost function from demonstration, but both failto plan under missing/occluded terrain maps, which is a key obstacle in the Mars domain [9]. Our al-gorithm, DROID, instead directly learns from how previous RPs drive in the midst of occluded infoand has the ability to model an RP’s behavior when planning along unknown parts of the terrain.3 PreliminariesIn this section, we introduce preliminaries on Markov Decision Processes (MDP), Offline Learningfrom Demonstration (OLfD), and Multi-Strategy Reward Distillation (MSRD).Markov Decision Process – A MDP, M, is a 6-tuple, ⟨S,A, R, T, γ, ρ 0⟩.SandAcorrespond tothe state and action spaces, R(s, a)the reward, and T(s′|s, a)the transition probability for states′after performing action ain state s.γ∈(0,1)is the discount factor and ρ0denotes the initialstate distribution. The policy π(a|s)represents the probability of choosing action ain state s. TheQ-value is defined as QπR(s, a) =Eπ,T[P∞t=0γtR(st, at)|s0=s, a0=a], denoting the expecteddiscounted cumulative return following πin the MDP under reward R.Offline Learning from Demonstration – IRL considers an MDP sans reward function (MDP \R)and infers the reward function Rbased on a set of demonstration trajectories D={τ1, τ2,···, τN},where Nis the number of demonstrations. 
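As a concrete reading of the Q-value definition above, the short numpy sketch below computes the discounted return along one demonstrated trajectory; Q^π_R(s_0, a_0) is the expectation of this quantity over rollouts of π and T. Here reward_fn stands in for any R(s, a) and is an illustrative placeholder.

```python
# A minimal sketch of the discounted return underlying the Q-value definition
# above. A single demonstration tau = [(s_0, a_0), (s_1, a_1), ...] provides
# one sample of the expectation; `reward_fn` is a placeholder for R(s, a).
import numpy as np

def discounted_return(trajectory, reward_fn, gamma=0.99):
    rewards = np.array([reward_fn(s, a) for s, a in trajectory])
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))
```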
Our method leverages Approximate Variational RewardImitation Learning (A VRIL) [58] as the underlying IRL approach. A VRIL considers a distribu-tion over the reward function and approximates the posterior, p(R), with a variational distribution,qφ(R). A VRIL introduces a second variational approximation for QπER(s, a)withQθ(s, a)andensures the variational reward distribution, qφ(R), is consistent with the variational Q function,Qθ(s, a), by Bellman equation. We show the final objective function of A VRIL in Equation 1.LA VRIL =X(s,a,s′,a′)∈Dlog"exp(βQθ(s, a))Pb∈Aexp(βQθ(s, b))#−DKL(qφ(R)∥p(R)) +λlogqφ(Qθ(s, a)−γQθ(s′, a′)) (1)Multi-Strategy Reward Distillation – We propose a general reward distillation framework basedon a previous online IRL technique, MSRD [38]. MSRD decomposes the per-strategy reward, Ri,for strategy i, as a linear combination of a common task reward and a strategy-only reward withneural network parameters φTaskandφS−i:Ri=RφTask+RφS−i. MSRD leverages a regularizationloss to distill common knowledge into φTaskand retain personalized information in φS−i.3Figure 2: This figure shows the dataset curation process for the MPP problem. We unify the heightmaps created by onboard cameras into a single “gaming area” (middle figure) and then plan thedriving path based on features calculated on the gaming area height map.4 Mars Curiosity Rover Path Planning ProblemIn this section, we introduce the Mars Path Planning (MPP) problem and how we 1) curate thedataset, 2) construct an MDP for OLfD, 3) analyze heterogeneity present across RPs.Dataset Curation The raw data consists of height maps created by photos captured by Curiosityacross multiple Sols (Martian days). The multi-resolution height maps are processed into a sin-gle 64x64 “gaming area” by interpolation of overlapping height maps and scaling along each axis(Figure 2). The processed gaming area is then used to calculate nine features identified by RPs:(1) distance to the goal point, (2) unknown data percentage, (3) average roughness, (4) maximumroughness, (5) average pitch, (6) average roll, (7) maximum pitch, (8) maximum roll, and (9) turningangle. More details for the features are available in the Supplementary.MDP Problem Setup We create a novel formulation to convert the MPP problem into an MDP:a state contains the terrain information of the Sol and the current and target locations for the pathplanning. The action space, A, consists of all possible next waypoints in the gaming area. Thereward function is constructed as a function of features associated with the path specified by aonthe terrain s:R(s, a) =f(ψ(s, a))where ψ: (S×A)→R9is the path feature mapping.Figure 3: This figure shows heterogeneity in MPPdataset. Comparisons with p < . 05are repre-sented by connecting lines. RP IDs are labeledand marked as red.Analysis of Heterogeneity We perform a PER-MANOV A test with α= 0.05and the Holmmethod for the correction of multi-tests withthe 37 different RPs to answer whether differ-ent strategies exist among drivers in the fea-ture space. The test shows significant differ-ences along seven pairs of RPs, particularlyalong the paths designed by RP 26 with re-spect to 5 other RPs, as shown in Figure 3 withTSNE [59] to reduce features into two dimen-sions (further explanation provided in the sup-plementary). 
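A simplified sketch of this kind of pairwise test is given below: a distance-based permutation test in the spirit of PERMANOVA over the nine path features of two RPs, followed by Holm correction across all tested pairs. The pseudo-F statistic, feature preprocessing, and permutation count are illustrative choices and may differ from the analysis actually run.

```python
# A simplified sketch of the pairwise heterogeneity test described above:
# a distance-based permutation test (PERMANOVA-style pseudo-F) on the
# nine-dimensional path features of two rover planners, plus Holm correction
# across all RP pairs. Illustrative only; not the paper's analysis code.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pseudo_f(dist, labels):
    n = len(labels)
    groups = np.unique(labels)
    ss_total = (dist ** 2).sum() / (2 * n)            # dist is a full square matrix
    ss_within = 0.0
    for g in groups:
        idx = np.where(labels == g)[0]
        ss_within += (dist[np.ix_(idx, idx)] ** 2).sum() / (2 * len(idx))
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (n - a))

def permutation_pvalue(features_a, features_b, n_perm=9999, seed=0):
    rng = np.random.default_rng(seed)
    X = np.vstack([features_a, features_b])
    labels = np.array([0] * len(features_a) + [1] * len(features_b))
    dist = squareform(pdist(X))
    observed = pseudo_f(dist, labels)
    count = sum(pseudo_f(dist, rng.permutation(labels)) >= observed
                for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

def holm_correct(pvals, alpha=0.05):
    order = np.argsort(pvals)
    significant = np.zeros(len(pvals), dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (len(pvals) - rank):
            significant[idx] = True
        else:
            break                                     # step-down: stop at first failure
    return significant
```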
This result shows heterogeneity inthe RP-generated paths from differences in ter-rains of the Sols and the expert RP strategies,motivating the need for offline learning fromheterogeneous demonstration approaches.5 MethodsThe challenges of heterogeneity and limited data are prevalent for OLfD in many robotic applica-tions, particularly in the MPP problem as shown in Section 4. To overcome these challenges, wepropose our algorithm, DROID , for learning high-performing policies by distilling common infor-mation across heterogeneous demonstrations in both the policy space and the reward function space.5.1 Reward DistillationWe propose a general reward distillation approach for OLfD. We model the reward distributionsas mean-field Gaussian distributions partitioned on each state-action pair and let the reward neuralnetworks output the mean and standard deviation of the Gaussian distributions. The advantage ofGaussian-distribution models for task reward ( RTask(s, a)∼qφTask(R) =N(μTask(s, a), σ2Task(s, a)))4and strategy rewards ( RS−i(s, a)∼qφS−i(R) =N(μS−i(s, a), σ2S−i(s, a))) is that the summationof two Gaussian distributions is still Gaussian distribution, as shown in Equation 2, where Ridenotesthe random variable for the reward distribution of strategy i.Ri(s, a) =RTask(s, a) +RS−i(s, a)∼ N(μTask(s, a) +μS−i(s, a), σ2Task(s, a) +σ2S−i(s, a))(2)Assuming the number of strategies, M, and the strategy label, cτforτ, is known a priori, we canperform reward distillation on the strategy reward, as shown in Equation 3.LRD({φS−i}Mi=1;D) =E(τ,cτ)∈D[||μS−cτ(s, a)||] (3)Intuitively, LRDpushes the strategy reward to output 0and therefore encourages common knowledgeto flow to the shared task reward and each individual strategy reward only captures preferences.The MSRD reward distillation formulation is a special case where the reward distribution for eachstrategy collapses to a Dirac delta function.5.2 Policy DistillationWe propose DROID to leverage commonalities in both reward and policy spaces. As policies areimplicitly defined via the Qfunction in A VRIL by π(a|s) = arg max a∈AQ(s, a), we construct theQ function for each strategy as a combination of task Q function and strategy Q function: Qi=QθTask+QθS−i. As such, we propose to regularize the output of the strategy Q values, QS−i, as inEquation 4, to encourage common information to be distilled into the task Q-value, QTask.LPD({θS−i}Mi=1;D) =E(τ,cτ)∈D[||QS−cτ(s, a)||] (4)DROID’s explicit knowledge distillation across diverse policies aids in improving generalizationperformance as the shared policy benefits from modeling all demonstrations in the offline dataset.5.3 Enhancing DROID for Offline LfDWe present two enhancements to construct the inductive bias useful for learning more accuraterewards and better-performing policies in OLfD.Improvement 1: Augmenting Dataset for Regularization. We introduce an augmented dataset,D′={(s, b)|s∈ D, b∈A}, for regularizing qφ(R)to a prior p(R)compared with A VRIL’s KLdivergence regularization only within the demonstration (Equation 1). By extending the operationofDKLto be on the entire action space, we are encouraging a conservative estimate of the rewardfor any action that is not taken by demonstrators, following the pessimistic principle in OLfD [60].L+KL(φ;D) =Xs∈D,a∈ADKL(qφ(R(s, a))||p(R(s, a))) (5)Intuitively, Equation 5 could be viewed as a data augmentation technique to regularize reward learn-ing across the entire action space. 
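A minimal PyTorch sketch of the terms introduced so far (Equations 2 to 5) is given below. It assumes each reward head returns a mean and log-variance, each Q head returns scalar values, each batch comes from a single strategy c, and a squared penalty as one instantiation of the norm in Equations 3 and 4; none of this is the paper's exact implementation.

```python
# A minimal PyTorch sketch of Equations 2-5. Assumptions: reward heads return
# (mean, log-variance), Q heads return scalars, the batch (s, a) is drawn from
# one strategy c, and the squared penalty instantiates the norm in Eq. 3/4.
import torch

def composed_reward(task_reward, strategy_reward, s, a):
    # Eq. 2: the per-strategy reward is Gaussian with summed means and variances.
    mu_t, log_var_t = task_reward(s, a)
    mu_s, log_var_s = strategy_reward(s, a)
    return mu_t + mu_s, log_var_t.exp() + log_var_s.exp()

def distillation_losses(strategy_reward_c, strategy_q_c, s, a):
    # Eq. 3 / Eq. 4: push strategy-specific reward means and Q-values toward
    # zero so shared structure flows into the task reward / task Q-function.
    mu_s, _ = strategy_reward_c(s, a)
    q_s = strategy_q_c(s, a)
    return mu_s.pow(2).mean(), q_s.pow(2).mean()

def augmented_kl(reward_net, s, action_space):
    # Eq. 5: KL(q_phi(R(s, b)) || N(0, 1)) accumulated over *every* action b in A
    # (standard-normal prior assumed), not only the demonstrated actions.
    kl = torch.zeros(())
    for b in action_space:
        mu, log_var = reward_net(s, b.expand(s.shape[0], -1))
        kl = kl + 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0).sum()
    return kl / s.shape[0]
```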
In contrast, A VRIL’s variational lower-bound only regularizesLKL(φ) =P(s,a)∈DDKL(qφ(R(s, a)||p(R(s, a))), on(s, a)samples from D. We provide a lemmato describe the effect of this augmented regularization in the supplementary.Improvement 2: Reward Maximization. The second improvement we propose is to maximize thereward given to the demonstrated action, as shown in Equation 6.Lmax-action-reward (φ;D) =−X(s,a)∈Dr(s, a)where r(s, a)∼qφ(R(s, a)) (6)In A VRIL, the reward function learning relies on the two-stage process of 1) Q function learning(first-term of Equation 1) and then 2) reward learning by compatibility (third-term of Equation 1).Our proposed Lmax-action instead directly encourages high reward for demonstrated state-action pairs,allowing the reward learning to be faster without reliance on a successful Q function learning. Com-bining these two enhancements and distillations on reward and policy, we summarize the loss func-tion for DROID in Equation 7. We also provide more details regarding the two improvements and apseudocode for DROID in the supplementary.L=X(s,a)∈DlogexpβQθ(s, a)Pb∈Aexp(βQθ(s, b))−Xs∈D,a∈ADKL(qφ(R(s, a))||p(R(s, a))) + Lmax-action (φ;D)+X(s,a,s′,a′)∈Dλlogqφ(Qθ(s, a)−γQθ(s′, a′)) +LRD({φS−i}Mi=1;D) +LPD({θS−i}Mi=1;D)(7)56 ResultsIn this section, we show that DROID achieves strong performance in two OpenAI Gym environ-ments (CartPole and LunarLander) [61] and the more difficult Mars Path Planning problem com-pared to prior works (Section 6.1-6.2) and DROID’s own ablations (Section 6.3). We focus ouranalysis on three questions that address the 3 challenges in Offline LfD: heterogeneity, policy gen-eralizability to unseen task settings, and reward transferrability to downstream tasks.Q1: Diverse Demonstration Modeling – How well does DROID perform at modeling differentpreferences from heterogeneous demonstrations in Offline LfD?Q2: Policy Generalizability – How well can the learned policies perform in an unseen holdout testdataset (e.g., modeling unseen demonstrations in CartPole, planning unseen terrains in MPP)?Q3: Reward Generalizability – How successful are the learned rewards in encoding experts’ latentobjectives and inducing high-performing downstream policies on unseen test settings?For the benchmark experiments, we compare DROID against a collection of OLfD baselines: a)Behavior Cloning (BC) Batch: a single BC model across the dataset [44, 45], b) BC Single, whichtrains a BC model for each demonstrator, c) Diffusion model [49] which is a generative modelingtechnique that leverages denoising to model multimodal demonstrations, d) Behavior Transformers(BeT) [50], which models next actions conditioned on the sequence observations, e) A VRIL Batch,a single A VRIL model on all data, f) A VRIL Batch XL, which increases model capacity fourfoldto better model the heterogeneous dataset, g) A VRIL State Representation Learning (SRL) [51]which implicitly models multimodal data by inducing a representation space trained by predictingthe corresponding action, h) A VRIL Single, training separate models for each expert and i) MSRD-Offline: MSRD with reward distillation adapted from DROID.6.1 Cartpole and LunarLanderWe evaluate DROID against baselines on four metrics: Frechet Distance [62], KL Divergence [63],Undirected Hausdorff Distance [64], and Average Log Likelihood. We train each method on adataset of 60 trajectories from 20 distinct strategies generated by jointly optimizing an environmentreward and a diversity reward from DIAYN [65]. 
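Two of these evaluation metrics admit short generic implementations; the sketch below computes the discrete Frechet distance and the undirected Hausdorff distance between a policy rollout and an expert trajectory, each given as an array of states. This is a standard implementation for illustration, not the paper's evaluation code.

```python
# A generic sketch of two trajectory-similarity metrics used for evaluation:
# discrete Frechet distance and undirected Hausdorff distance between a
# rollout P and an expert trajectory Q, each a (T, d) array of states.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def discrete_frechet(P, Q):
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    n, m = d.shape
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

def undirected_hausdorff(P, Q):
    return max(directed_hausdorff(P, Q)[0], directed_hausdorff(Q, P)[0])
```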
Diverse CartPole strategies include swinging todifferent ends of the track and oscillating at varying periodicity while diverse Lunar Lander (LL)strategies include different landing attack angles and touchdown techniques. More experiment de-tails and videos of demonstrations and learned policies are provided in the supplementary.Q1: Diverse Demonstration Modeling. Table 1 summarizes the results of modeling and imitationon the training demonstrations. We find DROID performs significantly ( p < .05) better on the Undi-rected Hausdorff ( 19% in Cartpole, 20% in LL) and Frechet Distance ( 32% in Cartpole) comparedto the best baselines, showing that DROID models heterogeneous behaviors better and minimizesdeviation between the learned policy’s behavior and the expert demonstrations.Q2: Policy Generalizability. We study how well each method’s policy performs on unseen demon-strations. Results in Table 1 section “Policy Generalizability” show DROID significantly outper-forms ( p < . 05) baselines along Frechet and Undirected Hausdorff. Especially, DROID’s per-formance gain over MSRD-Offline being larger on generalization than imitation shows the policydistillation in DROID is essential to learn a generalizable policy.Q3: Reward Generalizability. As a further analysis of generalization performance, we study howsuccessful the learned reward functions are at inducing high-performing policies. We train offlineRL policies with CQL [66] and compare performances on the holdout test set. The results in Table1 section “Reward Generalizability” show DROID achieves significantly better ( p < . 05) perfor-mance on log-likelihood (underperforms on Frechet Distance and Undirected Hausdorff), showingDROID’s reward function can induce a similarly well-performing policy.6.2 Mars Path PlanningWith the success of DROID on Cartpole, we further test it against benchmarks on the more challeng-ing MPP problem, where RPs optimize for a complex objective considering goal locations, strate-gies, and safety constraints. Since there is no clear ground truth reward in the MPP problem, westudy the performance along four metrics: Distance from (expert) Waypoint, Final Distance (from6Table 1: This table shows performance comparisons and significance of DROID and baselines inCartpole (left) and Lunar Lander (right). Bold denotes the best-performing model for the metric. 
*denotes significance of p < .05against the second-best model.CartPoleBenchmark KL Frechet Undirected LogMethod Divergence Distance Hausdorff LikelihoodDiverse Demonstration Modeling ( n= 40 )BC Batch 10.046 1.192 0.969 -25.599BC Single 9.440 1.176 0.923 -24.729Diffusion 13.687 2.922 2.867 -165.517BeT 13.755 2.901 2.836 -138.629A VRIL Batch 7.608 0.933 0.729 -48.113A VRIL Batch XL 8.840 1.023 0.775 -45.991A VRIL SRL 11.335 1.395 1.069 -28.713A VRIL Single 10.051 1.294 0.895 -48.910MSRD-Offline 7.479 0.621 0.476 -40.453DROID (ours) 6.047 0.425∗0.261∗-37.948Policy Generalizability ( n= 20 )BC Batch 10.792 1.237 1.026 -33.079BC Single 9.330 1.111 0.881 -32.018Diffusion 13.825 3.050 2.911 -164.980BeT 13.864 2.959 2.853 -138.629A VRIL Batch 8.367 1.004 0.786 -52.843A VRIL Batch XL 7.878 0.950 0.738 -50.698A VRIL SRL 11.320 1.449 1.056 -34.710A VRIL Single 7.960 1.006 0.717 -54.023MSRD-Offline 7.582 0.584 0.458 -44.173DROID (ours) 5.271 0.412∗0.207∗-38.057Reward Generalizability ( n= 20 )A VRIL Batch 8.923 1.197 1.152 -180.305A VRIL Batch XL 8.809 1.099 0.994 -180.770A VRIL SRL 8.732 1.124 1.030 -181.487A VRIL Single 9.017 1.418 1.280 -178.544MSRD-Offline 8.368 1.403 1.274 -179.694DROID (ours) 8.048 1.441 1.336 -175.528∗Lunar LanderBenchmark Frechet Undirected Log GTMethod Distance Hausdorff Likelihood RewardDiverse Demonstration Modeling ( n= 40 )BC Batch 5.496 5.400 -122.774 -24.613BC Single 3.847 3.787 -124.510 -19.736Diffusion 4.213 4.174 -299.733 -50.409BeT 3.980 3.926 -277.258 -54.614A VRIL Batch 6.209 5.436 -275.345 -72.539A VRIL Batch XL 1.380 1.342 -154.998 -7.132A VRIL SRL 2.012 1.919 -57.810 -13.275A VRIL Single 1.488 1.459 -151.956 -1.329MSRD-Offline 3.643 3.353 -247.261 -30.652DROID 1.153 1.063∗-54.941 -6.637Policy Generalizability ( n= 20 )BC Batch 5.186 5.020 -198.297 -21.872BC Single 3.434 3.356 -197.667 -17.448Diffusion 4.165 4.146 -299.734 -47.071BeT 3.755 3.717 -277.258 -47.544A VRIL Batch 5.621 5.131 -275.345 -64.985A VRIL Batch XL 1.400 1.374 -185.511 -4.492A VRIL SRL 2.080 1.979 -873.198 -12.429A VRIL Single 1.544 1.470 -182.000 -5.313MSRD-Offline 3.476 3.244 -251.386 -28.495DROID (ours) 1.158∗1.061∗-139.133 -2.020Reward Generalizability ( n= 20 )A VRIL Batch 4.042 3.835 -326.569 -28.436A VRIL Batch XL 4.079 4.049 -378.684 -66.989A VRIL SRL 3.474 3.332 -327.652 -27.960A VRIL Single 4.305 3.159 -328.189 -26.141MSRD-Offline 4.870 4.662 -327.508 -33.656DROID (ours) 3.438 3.326 -328.853 -25.271desired goal point), Undirected Hausdorff Distance (how closely the generated path and expert pathalign), and Average Log Likelihood. We train each technique on a dataset of 117 distinct trainingsols (each with three waypoints: start point, midpoint, and goal point) from 37 RPs and hold out onesol per RP (37 sols) as a test dataset to evaluate generalization performance. Note data is extremelylimited as each RP is associated with only 2-5 Sols, and we treat each RP as a unique expert strategy.We provide more experiment details in the supplementary.Q1: Diverse Demonstration Modeling. We show in Table 2 that DROID is more successful at theimitation objective with respect to baseline approaches. DROID outperforms ( p < . 05) baselinesby95% on reaching the goal point (Final Distance) along with 20% better modeling on the strategicpreference (Log Likelihood). Policy distillation ensures DROID accomplishes the task goal whileallowing it to model expert waypoint preferences well.Q2: Policy Generalizability. On a holdout set of unseen Sols, Table 2 shows that DROID achievesbetter ( p < . 
05) performance on the Undirected Hausdorff ( 21%) along with Final distance ( 90%)compared to best baselines. Despite limited and heterogeneous data, DROID’s learned policy gener-alizes well to achieve high performance and model expert preferences closely compared to baselines.Q3: Reward Generalizability. Similar to the OpenAI Gym experiments, we evaluate how success-ful the learned reward functions are at inducing high-performing policies by training CQL policies.The results in Table 2 section “Reward Generalizability” show DROID’s learned reward successfullyinduces policies with significantly better ( p < . 05) performance along the Undirected Hausdorffmetric ( 12%) and the Final Distance metric ( 36%).6.2.1 Qualitative AnalysisWe visualize the task reward and strategy reward learned by DROID for a randomly selected Sol insupplementary Figure 6 to understand RPs’ preferences. We observe that the task reward encodesthe common goal of converging at the goal point by giving high rewards to the goal area. In contrast,the strategy reward correctly identifies the midpoint preference. The illustration shows DROID cansuccessfully decompose the shared goal along with modeling latent strategies on unseen domains.7Table 2: This table shows performance comparisons of DROID and baselines in MPP. Bold denotesthe best-performing model. *denotes p < .05significance against the second-best method.Benchmark Undirected Distance from Final LogMethods Hausdorff Waypoint Distance LikelihoodDiverse Demonstration Modeling ( n= 114 )BC Batch 8.924 9.408 5.020 -43.192BC Single 14.824 9.245 13.200 -192.530Diffusion 11.397 8.421 9.591 -43.894A VRIL Batch 11.825 10.517 1.428 -9.129A VRIL Batch XL 10.066 8.939 1.643 -65.306A VRIL SRL 9.096 8.401 6.313 -26.425A VRIL Single 10.099 8.445 7.356 -15.662MSRD-Offline 8.162 7.580 4.476 -27.667DROID (ours) 6.780∗4.592 0.070∗-7.261∗Policy Generalizability ( n= 49 )BC Batch 7.791 9.562 3.197 -39.923BC Single 14.570 12.591 2.755 -186.310Diffusion 10.886 8.775 3.506 -43.175A VRIL Batch 8.387 10.020 3.878 -22.623A VRIL Batch XL 11.032 8.589 4.205 -61.561A VRIL SRL 8.885 6.082 3.935 -23.756A VRIL Single 8.336 8.888 5.951 -17.357MSRD-Offline 9.732 8.886 7.462 -148.964DROID (ours) 6.144∗6.407 0.277∗-18.483Reward Generalizability ( n= 49 )A VRIL Batch 9.385 9.502 1.823 -15.501A VRIL Batch XL 9.396 9.502 1.925 -15.653A VRIL SRL 10.341 9.753 0.928 -16.475A VRIL Single 10.456 10.407 0.676 -18.039MSRD-Offline 8.799 8.867 1.268 -14.515DROID (ours) 8.240 * 8.764 0.433 * -15.0506.3 AblationIn this section, we perform ablation studies to evaluate the utility of different components of DROID.Ablation 1-6 corresponds to the following: 1) DROID without A VRIL improvements 1 and 2; 2)DROID without A VRIL Improvement 1; 3) DROID without A VRIL Improvement 2; 4) DROIDwithout distillation; 5) DROID without policy distallation; 6) DROID without reward distillation.Supplementary Table 7 shows the results of our ablation study in Cartpole and MPP domains.DROID outperforms Ablation 4 on all metrics, showing the importance of distillation across dif-ferent strategies on the Diverse Demonstration Modeling task. Ablation 5 overfits to the demonstra-tions and achieves poorer generalization performance compared to DROID, which demonstrates thebenefit of policy distillation to improve policy generalizability. Ablation 6 achieves worse Log Like-lihood, suggesting that reward distillation’s ability to share knowledge among demonstrations helpsDROID understand expert behaviors. 
Ablation 1-3 perform worse than DROID in all tasks exceptLog likelihood in MPP, demonstrating that our two A VRIL enhancements are effective in improvingDROID’s ability to learn demonstrator’s latent intention. Overall, our ablation study highlights theimportance of both reward and policy distillation, as well as the two enhancements, in achievingstate-of-the-art performance. More ablation study details are included in the supplementary.7 Conclusion, Limitations, and Future WorkIn this paper, we introduce an OLfD technique, DROID, that expands the applicability of OLfD withheterogeneous and limited data by a novel decomposition of the policy and reward models. Ourresults on both simulated and real-world data demonstrate that DROID outperforms SOTA methods,particularly in capturing difficult-to-articulate knowledge from rover path planners at NASA.There are several limitations with DROID. Firstly, DROID assumes experts have stationary pref-erences across demonstrations. Secondly, DROID assumes access to the number of strategies andthe strategy label for each demonstration, which may be non-trivial to obtain. Thirdly, in the MPPdomain, we extract nine features, which may not encompass all features an RP considers. In futurework, we plan to explore modeling nonstationary strategies for demonstrators, test DROID’s utilitywith RP’s workflow, explore automatic feature extraction for Mars terrain, and relax the assumptionabout known strategy labels with behavior clustering [30] or online inference [29].8AcknowledgmentsWe wish to thank our reviewers for their valuable feedback in revising our manuscript. This workwas supported by the National Institutes of Health (NIH) under Grant 1R01HL157457, by a NASAEarly Career Fellowship under Grant 80NSSC20K0069, and by the National Science Foundation(NSF) under Grant #2219755.References[1] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V . Kumar, H. Zhu, A. Gupta,P. Abbeel, and S. Levine. Soft actor-critic algorithms and applications. CoRR , abs/1812.05905,2018.[2] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[3] R. Paleja, Y . Niu, A. Silva, C. Ritchie, S. Choi, and M. Gombolay. Learning interpretable,high-performing policies for continuous control problems. arXiv preprint arXiv:2202.02352 ,2022.[4] E. Seraj, Z. Wang, R. Paleja, D. Martin, M. Sklar, A. Patel, and M. Gombolay. Learningefficient diverse communication for cooperative heterogeneous teaming. In Proceedings of the21st International Conference on Autonomous Agents and Multiagent Systems , pages 1173–1182, 2022.[5] A. Silva, N. Moorman, W. Silva, Z. Zaidi, N. Gopalan, and M. Gombolay. Lancon-learn:Learning with language to enable generalization in multi-task manipulation. IEEE Roboticsand Automation Letters , 7(2):1635–1642, 2022. doi:10.1109/LRA.2021.3139667.[6] E. Seraj, L. Chen, and M. C. Gombolay. A hierarchical coordination framework for jointperception-action tasks in composite robot teams. IEEE Transactions on Robotics , 38(1):139–158, 2021.[7] S. Konan, E. Seraj, and M. Gombolay. Iterated reasoning with mutual information in coopera-tive and byzantine decentralized teaming. arXiv preprint arXiv:2201.08484 , 2022.[8] S. G. Konan, E. Seraj, and M. Gombolay. Contrastive decision transformers. In 6th AnnualConference on Robot Learning , 2022.[9] D. M. Gaines, R. C. Anderson, G. B. Doran, W. Huffman, H. Justice, R. M. Mackey, G. R.Rabideau, A. R. Vasavada, V . Verma, T. A. 
Estlin, L. M. Fesq, M. D. Ingham, M. W. Maimone,and I. A. D. Nesnas. Productivity challenges for mars rover operations, 2016.[10] A. R. Vasavada. Mission Overview and Scientific Contributions from the Mars Science Lab-oratory Curiosity Rover After Eight Years of Surface Operations. Space Sci Rev , 218(3):14,2022.[11] D. I. Koutras, A. C. Kapoutsis, A. A. Amanatiadis, and E. B. Kosmatopoulos. Marsexplorer:Exploration of unknown terrains via deep reinforcement learning and procedurally generatedenvironments. Electronics , 10(22), 2021. ISSN 2079-9292. doi:10.3390/electronics10222751.URL https://www.mdpi.com/2079-9292/10/22/2751 .[12] R. Hu and Y . Zhang. Fast path planning for long-range planetary roving based on a hierarchicalframework and deep reinforcement learning. Aerospace , 9(2), 2022. ISSN 2226-4310. doi:10.3390/aerospace9020101. URL https://www.mdpi.com/2226-4310/9/2/101 .[13] J. Zhang, Y . Xia, and G. Shen. A novel deep neural network architecture for mars visualnavigation. CoRR , abs/1808.08395, 2018. URL http://arxiv.org/abs/1808.08395 .9[14] T.-H. Cheng, C.-P. Wei, and V . S. Tseng. Feature selection for medical data mining: Compar-isons of expert judgment and automatic approaches. In 19th IEEE symposium on computer-based medical systems (CBMS’06) , pages 165–170. IEEE, 2006.[15] A. J. Chan and M. van der Schaar. Scalable bayesian inverse reinforcement learning, 2021.[16] F. Jarboui and V . Perchet. Offline inverse reinforcement learning. CoRR , abs/2106.05068,2021. URL https://arxiv.org/abs/2106.05068 .[17] S. A. Murphy, M. J. van der Laan, and J. M. Robins. Marginal Mean Models for DynamicRegimes. J Am Stat Assoc , 96(456):1410–1423, Dec 2001.[18] A. Kumar, A. Singh, S. Tian, C. Finn, and S. Levine. A workflow for offline model-freerobotic reinforcement learning. CoRR , abs/2109.10813, 2021. URL https://arxiv.org/abs/2109.10813 .[19] X. Fang, Q. Zhang, Y . Gao, and D. Zhao. Offline reinforcement learning for autonomousdriving with real world driving data. In 2022 IEEE 25th International Conference on Intelli-gent Transportation Systems (ITSC) , pages 3417–3422, 2022. doi:10.1109/ITSC55140.2022.9922100.[20] R. F. Prudencio, M. R. O. A. Maximo, and E. L. Colombini. A survey on offline reinforcementlearning: Taxonomy, review, and open problems. IEEE Transactions on Neural Networks andLearning Systems , pages 1–0, 2023. doi:10.1109/tnnls.2023.3250269. URL https://doi.org/10.1109%2Ftnnls.2023.3250269 .[21] M. Fatemi, M. Wu, J. Petch, W. Nelson, S. J. Connolly, A. Benz, A. Carnicelli, and M. Ghas-semi. Semi-markov offline reinforcement learning for healthcare, 2022.[22] S. Schaal. Learning from demonstration. In M. C. Mozer, M. Jordan, andT. Petsche, editors, Advances in Neural Information Processing Systems , volume 9.MIT Press, 1997. URL https://proceedings.neurips.cc/paper/1996/file/68d13cf26c4b4f4f932e3eff990093ba-Paper.pdf .[23] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adverserial inverse reinforcementlearning. In Proceedings of the International Conference on Learning Representations (ICLR) ,2018.[24] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learningmethods. ACM Computing Surveys (CSUR) , 50(2):1–35, 2017.[25] I. Kostrikov, K. K. Agrawal, D. Dwibedi, S. Levine, and J. Tompson. Discriminator-actor-critic: Addressing sample inefficiency and reward bias in adversarial imitation learning. arXivpreprint arXiv:1809.02925 , 2018.[26] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . 
Zhu, and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrations forrobot manipulation. CoRR , abs/2108.03298, 2021. URL https://arxiv.org/abs/2108.03298 .[27] G. Tucker. Tackling open challenges in offline reinforcement learning, Aug 2020. URL https://ai.googleblog.com/2020/08/tackling-open-challenges-in-offline.html .[28] E. F. Morales and C. Sammut. Learning to fly by combining reinforcement learning withbehavioural cloning. In Proceedings of the International Conference on Machine Learning(ICML) , page 76, 2004.[29] L. Chen, S. Jayanthi, R. Paleja, D. Martin, V . Zakharov, and M. Gombolay. Fast lifelongadaptive inverse reinforcement learning from demonstrations, 2023.10[30] S. Nikolaidis, R. Ramakrishnan, K. Gu, and J. Shah. Efficient model learning from joint-actiondemonstrations for human-robot collaborative tasks. In 2015 10th ACM/IEEE InternationalConference on Human-Robot Interaction (HRI) , pages 189–196. IEEE, 2015.[31] L. Chen, R. R. Paleja, M. Ghuy, and M. C. Gombolay. Joint goal and strategy inference acrossheterogeneous demonstrators via reward network distillation. CoRR , abs/2001.00503, 2020.URL http://arxiv.org/abs/2001.00503 .[32] S. Amershi, M. Cakmak, W. B. Knox, and T. Kulesza. Power to the people: The roleof humans in interactive machine learning. AI Magazine , 35(4):105–120, Dec. 2014.doi:10.1609/aimag.v35i4.2513. URL https://ojs.aaai.org/index.php/aimagazine/article/view/2513 .[33] R. Paleja, A. Silva, L. Chen, and M. Gombolay. Interpretable and personalized apprentice-ship scheduling: Learning interpretable scheduling policies from heterogeneous user demon-strations. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Ad-vances in Neural Information Processing Systems , volume 33, pages 6417–6428. CurranAssociates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/477bdb55b231264bb53a7942fd84254d-Paper.pdf .[34] X. Chen, A. Ghadirzadeh, T. Yu, J. Wang, A. Y . Gao, W. Li, L. Bin, C. Finn, and C. Zhang.Lapo: Latent-variable advantage-weighted policy optimization for offline reinforcement learn-ing. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Ad-vances in Neural Information Processing Systems , volume 35, pages 36902–36913. CurranAssociates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/efb2072a358cefb75886a315a6fcf880-Paper-Conference.pdf .[35] A. Szot, A. Zhang, D. Batra, Z. Kira, and F. Meier. BC-IRL: Learning generalizable rewardfunctions from demonstrations. In The Eleventh International Conference on Learning Repre-sentations , 2023. URL https://openreview.net/forum?id=Ovnwe_sDQW .[36] S. Yue, G. Wang, W. Shao, Z. Zhang, S. Lin, J. Ren, and J. Zhang. Clare: Conservativemodel-based reward learning for offline inverse reinforcement learning, 2023.[37] J. Maghakian, P. Mineiro, K. Panaganti, M. Rucker, A. Saran, and C. Tan. Personalized rewardlearning with interaction-grounded learning (igl), 2023.[38] L. Chen, R. R. Paleja, M. Ghuy, and M. C. Gombolay. Joint goal and strategy inferenceacross heterogeneous demonstrators via reward network distillation. In Proceedings of theInternational Conference on Human-Robot Interaction (HRI) , 2020.[39] H. Ravichandar, A. S. Polydoros, S. Chernova, and A. Billard. Recent advances in robotlearning from demonstration. Annual Review of Control, Robotics, and Autonomous Systems ,3, 2020.[40] A. Correia and L. A. Alexandre. A survey of demonstration learning, 2023.[41] Y . Li, J. Song, and S. Ermon. 
Infogail: Interpretable imitation learning from visual demonstra-tions. Advances in Neural Information Processing Systems , 30, 2017.[42] R. R. Paleja, A. Silva, L. Chen, and M. Gombolay. Interpretable and personalized ap-prenticeship scheduling: Learning interpretable scheduling policies from heterogeneous userdemonstrations. In Proceedings of the Conference on Neural Information Processing Systems(NeurIPS) , 2020.[43] L. Chen, R. Paleja, and M. Gombolay. Learning from suboptimal demonstration via self-supervised reward regression. In Proceedings of Conference on Robot Learning (CoRL) , 2020.11[44] S. Ross, G. J. Gordon, and J. A. Bagnell. No-regret reductions for imitation learning andstructured prediction. CoRR , abs/1011.0686, 2010. URL http://arxiv.org/abs/1011.0686 .[45] M. Bain and C. Sammut. A framework for behavioural cloning. Machine Intelligence 15,Intelligent Agents , 15, 03 2000.[46] I. Kostrikov, O. Nachum, and J. Tompson. Imitation learning via off-policy distribution match-ing. CoRR , abs/1912.05032, 2019. URL http://arxiv.org/abs/1912.05032 .[47] I. Kostrikov, K. K. Agrawal, S. Levine, and J. Tompson. Addressing sample inefficiency andreward bias in inverse reinforcement learning. CoRR , abs/1809.02925, 2018. URL http://arxiv.org/abs/1809.02925 .[48] D. Jarrett, I. Bica, and M. van der Schaar. Strictly batch imitation learning by energy-baseddistribution matching, 2021.[49] T. Pearce, T. Rashid, A. Kanervisto, D. Bignell, M. Sun, R. Georgescu, S. V . Macua, S. Z.Tan, I. Momennejad, K. Hofmann, and S. Devlin. Imitating human behaviour with diffusionmodels, 2023.[50] N. M. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodes with one stone, 2022.[51] H. Zang, X. Li, J. Yu, C. Liu, R. Islam, R. T. D. Combes, and R. Laroche. Behavior priorrepresentation learning for offline reinforcement learning, 2023.[52] A. A. Rusu, S. G. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu,V . Mnih, K. Kavukcuoglu, and R. Hadsell. Policy distillation, 2016.[53] J. Xing, T. Nagata, X. Zou, E. O. Neftci, and J. L. Krichmar. Policy distillation with selectiveinput gradient regularization for efficient interpretability. ArXiv , abs/2205.08685, 2022.[54] S. Daftry, N. Abcouwer, T. Del Sesto, S. Venkatraman, J. Song, L. Igel, A. Byon, U. Rosolia,Y . Yue, and M. Ono. Mlnav: Learning to safely navigate on martian terrains. IEEE Roboticsand Automation Letters , 7(2):5461–5468, 2022.[55] E. Hilgemann. How to drive a mars rover, Dec 2020. URL https://medium.com/predict/how-to-drive-a-mars-rover-6f0870b0c8e1 .[56] G. Hedrick, N. Ohi, and Y . Gu. Terrain-aware path planning and map update for mars samplereturn mission. IEEE Robotics and Automation Letters , 5(4):5181–5188, 2020. doi:10.1109/LRA.2020.3005123.[57] M. Pflueger, A. Agha, and G. S. Sukhatme. Rover-irl: Inverse reinforcement learning with softvalue iteration networks for planetary rover path planning. IEEE Robotics and AutomationLetters , 4(2):1387–1394, 2019. doi:10.1109/LRA.2019.2895892.[58] A. J. Chan and M. van der Schaar. Scalable bayesian inverse reinforcement learning. arXivpreprint arXiv:2102.06483 , 2021.[59] L. Van der Maaten and G. Hinton. Visualizing data using t-sne. Journal of machine learningresearch , 9(11), 2008.[60] L. Shi, G. Li, Y . Wei, Y . Chen, and Y . Chi. Pessimistic q-learning for offline reinforcementlearning: Towards optimal sample complexity, 2022.[61] G. Brockman, V . Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba.Openai gym. 
arXiv preprint arXiv:1606.01540 , 2016.12[62] K. Toohey and M. Duckham. Trajectory similarity measures. SIGSPATIAL Special , 7:43–50,05 2015. doi:10.1145/2782759.2782767.[63] S. Kullback and R. A. Leibler. On Information and Sufficiency. The Annals of MathematicalStatistics , 22(1):79 – 86, 1951.[64] F. Hausdorff. Grundz ̈uge der Mengenlehre . Chelsea, 1914.https://en.wikipedia.org/wiki/Grundz[65] B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is all you need: Learning skillswithout a reward function. In International Conference on Learning Representations , 2019.URL https://openreview.net/forum?id=SJx63jRqFm .[66] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative q-learning for offline reinforce-ment learning. CoRR , abs/2006.04779, 2020. URL https://arxiv.org/abs/2006.04779 .[67] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcementlearning. In Proceedings of the National Conference on Artificial intelligence (AAAI) , pages1433–1438, 2008.[68] M. Hessel, J. Modayil, H. van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot,M. G. Azar, and D. Silver. Rainbow: Combining improvements in deep reinforcement learning.CoRR , abs/1710.02298, 2017. URL http://arxiv.org/abs/1710.02298 .[69] L. F. Kozachenko and N. N. Leonenko. Sample estimate of the entropy of a random vector.Probl. Inf. Transm. , 23(1-2):95–101, 1987.[70] W. Brown. Nasa to begin attempts to free sand-trapped mars rover, Nov 2009. URL https://www.nasa.gov/mission_pages/mer/news/mer20091112.html .[71] L. S. Shapley. A value for n-person games. In H. W. Kuhn and A. W. Tucker, editors, Con-tributions to the Theory of Games II , pages 307–317. Princeton University Press, Princeton,1953.[72] W. Gong, X. Xie, and Y .-J. Liu. Human experience–inspired path planning for robots.International Journal of Advanced Robotic Systems , 15:172988141875704, 02 2018. doi:10.1177/1729881418757046.[73] R. Ram ́on-Vigo, N. P ́erez-Higueras, F. Caballero, and L. Merino. Transferring human naviga-tion behaviors into a robot local planner. In The 23rd IEEE International Symposium on Robotand Human Interactive Communication , pages 774–779, 2014. doi:10.1109/ROMAN.2014.6926347.[74] S. Dergachev, K. Muravyev, and K. Yakovlev. 2.5d mapping, pathfinding and path followingfor navigation of a differential drive robot in uneven terrain, 2022.[75] B. Hao, H. Du, J. Zhao, J. Zhang, and Q. Wang. A Path-Planning approach based on potentialand dynamic Q-Learning for mobile robots in unknown environment. Comput Intell Neurosci ,2022:2540546, June 2022.[76] E. Murphy. Planning and exploring under uncertainty, Jan 2010. URL https://ora.ox.ac.uk/objects/uuid3Abb3d85f6-117b-4f5e-92ab-b6acc87aef79 .[77] T. Yu, B. Deng, J. Gui, X. Zhu, and W. Yao. Efficient informative path planning via normalizedutility in unknown environments exploration. Sensors (Basel, Switzerland) , 22, 2022.[78] L. Cuevas, M. Ram ́ırez, I. Shames, and C. Manzie. Path planning under risk and uncertaintyof the environment. 2021 American Control Conference (ACC) , pages 4231–4236, 2021.13[79] Y . Yin, Z. Chen, G. Liu, and J. Guo. A mapless local path planning approach using deepreinforcement learning framework. Sensors , 23:2036, 02 2023. doi:10.3390/s23042036.[80] K. Weerakoon, A. J. Sathyamoorthy, U. Patel, and D. Manocha. Terp: Reliable planning inuneven outdoor environments using deep reinforcement learning, 2021.[81] F. Meng, L. Chen, H. Ma, J. Wang, and M. Q.-H. Meng. 
Nr-rrt: Neural risk-aware near-optimalpath planning in uncertain nonconvex environments. arXiv preprint arXiv: 2205.06951 , 2022.14A Offline LfD Enhancements DetailA VRIL considers a distribution over the reward function and approximates the posterior, p(R|D),with a variational distribution, qφ(R). It is trained by maximizing the Evidence Lower BOund(ELBO), shown in Equation 8, where p(R)is the prior distribution for the reward function and πEis the expert policy. The second equation follows by the assumption of Boltzmann rationality of thedemonstrator [67].ELBO (φ) =Eqφ[logp(D|R)]−DKL(qφ(R)||p(R)])=EqφX(s,a)∈Dlogexp (βQπER(s, a))Pb∈Aexp (βQπER(s, b))−DKL(qφ(R)||p(R)])(8)Directly optimizing ELBO is not feasible as the gradient of QπER(s, a)with respect to φis intractable.Therefore, A VRIL introduces a second variational approximation for QπER(s, a)withQθ(s, a)andensures the variational reward distribution, qφ(R), is consistent with the variational Q function,Qθ(s, a), by Bellman equation, i.e., R(s, a) =Es′,a′∼π[QπR(s, a)−γQπR(s′, a′)].Here, we present a lemma to show how A VRIL Enhancement 1 (i.e., extending KL-divergenceregularization on all actions) impacts reward learning. This enhancement could be viewed as a dataaugmentation technique encouraging a small distance to the prior distribution for any action. Weformalize the intuition in Lemma 1.Lemma 1. Assume the prior reward distribution, p(R(s, a)), is a Gaussian distribution partitionedon each state and action pair, minimizing LKLresults in qφ(R(s, a)) =p(R(s, a))for each operated(s, a).Following Lemma 1 and our extended operation over b∈A, we have the following observation.Corollary 1.1. Assume we choose the prior reward distribution p(R(s, a))to be Standard Gaussiandistribution, N(0,1). fors∈ D, b∈As.t.(s, b)/∈ D, optimizing LAVRIL leads to μφ(s, b) = 0 andσ2φ(s, b) = 1 . The proof follows immediately by observing qφ(R(s, b))only gets gradient from LKLand the optimal solution of LKLis that μφ(s, a) = 0 andσ2φ(s, a) = 1 .B MPP Heterogeneity Analysis DetailsIn our analysis, we seek to compare the variance of path features within each RP to the variance ofpath features across RPs, as this would help quantify how diverse expert demonstrations are. Thedemonstration for each RP is multivariate and not normal (we tested for normality and homoscedas-ticity), necessitating the use of the PERMANOV A test, which is non-parametric and can comparemultivariate data. More specifically, it tests the null hypothesis that the centroid and dispersion forthe two groups are equivalent. To apply this test to the RP data, we tested each possible pair of RPs tosee which ones have statistically significant differences in their distributions. The Bonferroni-Holmmethod was used to account for the fact that many hypothesis tests are performed.C Experimental DetailsFor fair comparisons with all baseline techniques, we share the same network architecture for eachpolicy and reward with two hidden layers of 64 units, along with GELU activation functions. Thetraining is with Adam Optimizer for 1000 iterations. For downstream policies, we train offline Con-servative Q Learning [66] for 1000 iterations. Conservative Q-learning is an offline RL algorithmthat guards against overestimation while avoiding explicitly constructing a separate behavior model.We leverage several improvements, including Dueling Double Q Networks and Distributional RLfrom Rainbow [68] to improve the CQL training. 
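As a concrete rendering of this setup, a minimal PyTorch sketch of the shared two-hidden-layer, 64-unit GELU architecture with an Adam optimizer follows. The dimensions use the CartPole values from Table 3, and the reward head returning a (mean, log-variance) pair for the variational reward distribution is an assumed detail rather than the released implementation.

```python
# A minimal PyTorch sketch of the shared architecture described above: two
# hidden layers of 64 units with GELU activations, trained with Adam.
# Dimensions follow the CartPole entries of Table 3; the (mean, log-variance)
# reward head is an assumption for illustration.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.GELU(),
        nn.Linear(hidden, hidden), nn.GELU(),
        nn.Linear(hidden, out_dim),
    )

state_dim, action_dim = 4, 2                  # CartPole (Table 3)
q_net = mlp(state_dim, action_dim)            # Q_theta(s, .) over discrete actions
reward_net = mlp(state_dim + action_dim, 2)   # (mean, log-variance) of q_phi(R(s, a))
optimizer = torch.optim.Adam(
    list(q_net.parameters()) + list(reward_net.parameters()), lr=1e-4)
```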
We list hyperparameters used in all algorithms inTable 3.15Hyperparameters ValuesTraining Iterations 1000Learning Rate 0.0001State Only Reward FalseState Dim 4, 9Action Dim 2Gamma 0.99Lambda 1.0Train Test Split 0.8Min Number of Test Sols 1Linear Reward FalseOffline CQL Training Itrs 1000Strategy Reward Regularization Coeffficient (MSRD and DROID) 0.01Strategy Q Function Regularization Coeffficient (DROID) 0.001Table 3: This table shows the hyperparameters we use for DROID and all benchmark algorithms.All values separated with commas are for CartPole (LL) and MPP, respectively.To showcase the statistical significance of our results on the Cartpole, LL, and MPP domains, weperform tests for normality and homoscedasticity and find that our metrics do not satisfy the as-sumptions of the parametric ANOV A test. Thus, we instead perform a non-parametric Friedman testfollowed by a posthoc Nemenyi–Damico–Wolfe (Nemenyi) test. We show significance by aligningdemonstration strategies between treatments (benchmark techniques).C.1 CartPole / Lunar LanderC.1.1 Video DemonstrationsWe include demonstrations of heterogeneous behaviors along with each technique’s learned policiesin CartPole and Lunar Lander in the link: https://tinyurl.com/droidcartpolevideos .C.1.2 MetricsHere, we describe the motivation behind each of the metrics, evaluated from rollouts of the policieswith respect to expert demonstrations.1. Frechet Distance [62]: Compare the spatial and temporal differences of the trajectory fromthe agent’s policy with the expert trajectory to quantify how well the agent captures themotion pattern of the expert.2. KL Divergence [63] (CartPole): By estimating the state distribution within a trajectory bythe kernel density estimator [69], KL divergence quantifies how well the learned policiesstate visitation matches the expert’s.3. Undirected Hausdorff Distance [64]: This measures the maxima between the two DirectedHausdorff distances: one mapping our learned policy’s trajectory to the expert trajectory,and the other mapping the expert trajectory to our learn policy’s trajectory. This metricstudies how far the agent’s trajectory is from the expert’s trajectory.4. Average Log-Likelihood: This measures the likelihood of expert demonstration under thelearned policy.5. Ground Truth Rewards (Lunar Lander): The environment reward for Lunar Lander (allbenchmarks in CartPole achieve near-maximal reward).C.1.3 AnalysisWe show Friedman Chi-square and Post-hoc Nemenyi statistical test metrics in CartPole for thethree experiments (Diverse Demonstration Modeling, Policy Generalizability, and Reward General-16Table 4: This table shows the APA-style statistical test results for Friedman ( α= 0.05, d.o.f.=3)and Posthoc Nemenyi ( α= 0.05) of DROID with respect to baselines in Cartpole. 
All reported teststatistics are significant other than the italicized metrics (if the Friedman test results are insignificant,no posthoc analysis is performed).CartPoleBenchmark KL Frechet Undirected LogMethod Divergence Distance Hausdorff LikelihoodDiverse Demonstration Modeling ( n= 40 )Friedman 174.82 222.92 240.52 339.68DROID vs BC Batch 5.20 4.80 6.06 8.71DROID vs BC Batch Large 5.13 4.73 6.13 9.23DROID vs Diffusion 4.62 11.04 11.82 3.14DROID vs BeT 9.42 10.71 11.52 1.66DROID vs A VRIL Batch 2.84 3.36 4.21 2.66DROID vs A VRIL Batch XL 3.32 3.58 4.28 4.06DROID vs A VRIL SRL 6.87 5.83 6.06 7.87DROID vs A VRIL Single 5.13 4.99 5.24 1.85DROID vs MSRD-Offline 1.77 3.05 3.29 5.50Policy Transferability ( n= 20 )Friedman 82.53 115.51 122.72 148.93DROID vs BC Batch 3.97 4.23 3.92 5.12DROID vs BC Batch Large 2.77 3.86 4.28 5.69DROID vs Diffusion 6.37 8.20 8.25 2.77DROID vs BeT 6.74 7.99 8.04 1.62DROID vs A VRIL Batch 2.30 3.39 3.86 1.36DROID vs A VRIL Batch XL 2.25 3.45 3.34 2.56DROID vs A VRIL SRL 4.96 4.70 4.07 4.49DROID vs A VRIL Single 2.45 2.66 2.98 0.78DROID vs MSRD-Offline 1.62 2.72 2.94 3.71Reward ( n= 20 )Friedman 2.22 0.54 0.3 13.38DROID vs BC Batch N/A N/A N/A N/ADROID vs BC Batch Large N/A N/A N/A N/ADROID vs Diffusion N/A N/A N/A N/ADROID vs BeT N/A N/A N/A N/ADROID vs A VRIL Batch N/A N/A N/A 2.82DROID vs A VRIL Batch XL N/A N/A N/A 2.87DROID vs A VRIL SRL N/A N/A N/A 3.01DROID vs A VRIL Single N/A N/A N/A 2.08DROID vs MSRD-Offline N/A N/A N/A 3.43izability, c.f. main paper Results Section Q1-Q3) in Table 4. In the training task, DROID generatesrollouts that align closer with expert behaviors, evidenced by stronger Undirected Hausdorff perfor-mance. Likewise, DROID does significantly better on Frechet and Undirected Hausdorff distancecompared to the best baselines in both the Demonstration modeling and Policy Transferability tasks.Common reward-policy distillation helps guide DROID’s policies and rewards to better model ex-pert preferences and, thus, better capture diversity in expert behaviors.Likewise, we show the statistical test results for Lunar Lander in Table 5. DROID demonstratesclear superiority over several baseline methods. For instance, when compared to BC Batch andBC Batch Large, DROID achieves significantly better Frechet Distance values. This indicates thatDROID’s trajectory predictions align more closely with ground truth demonstrations than those ofthe standard BC baseline methods. Similarly, in the ”Policy Transferability” benchmark, DROIDconsistently outperforms its counterparts. Compared to BC Batch, DROID attains substantiallylower Frechet Distance and Undirected Hausdorff Distance than the best baselines in A VRIL SRLand A VRIL Batch XL. This suggests that DROID’s explicit strategy decomposition and knowledgesharing improve overall performance.17Table 5: This table shows the APA-style statistical test results for Friedman ( α= 0.05, d.o.f.=3) andPosthoc Nemenyi ( α= 0.05) of DROID with respect to baselines in Lunar Lander. 
All reported teststatistics are significant other than the italicized metrics (if the Friedman test results are insignificant,no posthoc analysis is performed).Lunar LanderBenchmark Frechet Undirected Log GroundMethod Distance Hausdorff Likelihood TruthDiverse Demonstration Modeling ( n= 40 )Friedman 171.61 170.31 322.40 207.27DROID vs BC Batch 6.94 7.98 3.84 4.91DROID vs BC Batch Large 6.24 6.13 4.10 3.91DROID vs Diffusion 8.97 9.16 12.56 8.23DROID vs BeT 8.9 9.08 11.08 8.75DROID vs A VRIL Batch 3.40 4.14 5.28 0.37DROID vs A VRIL Batch XL 1.92 2.88 6.68 0.18DROID vs A VRIL SRL 4.73 5.39 0.15 3.32DROID vs A VRIL Single 3.43 2.73 5.80 0.44DROID vs MSRD-Offline 8.27 7.90 9.60 6.06Policy Transferability ( n= 20 )Friedman 71.93 76.90 122.76 87.25DROID vs BC Batch 4.49 2.66 1.51 3.03DROID vs BC Batch Large 3.86 3.92 1.62 3.08DROID vs Diffusion 6.27 6.74 6.68 5.69DROID vs BeT 5.74 5.48 5.43 6.16DROID vs A VRIL Batch 2.14 2.04 0.89 0.78DROID vs A VRIL Batch XL 2.67 1.98 1.36 0.11DROID vs A VRIL SRL 3.08 2.66 7.15 2.87DROID vs A VRIL Single 2.40 2.30 1.15 1.62DROID vs MSRD-Offline 5.33 5.64 3.97 4.44Reward ( n= 20 )Friedman 25.46 32.02 29.15 60.94DROID vs BC Batch N/A N/A N/A N/ADROID vs BC Batch Large N/A N/A N/A N/ADROID vs Diffusion N/A N/A N/A N/ADROID vs BeT N/A N/A N/A N/ADROID vs A VRIL Batch N/A N/A N/A N/ADROID vs A VRIL Batch XL N/A N/A N/A N/ADROID vs A VRIL SRL N/A N/A N/A N/ADROID vs A VRIL Single N/A N/A N/A N/ADROID vs MSRD-Offline N/A N/A N/A N/AC.2 Mars Path PlanningC.2.1 Domain IntroductionExploring Mars has been a fascinating and challenging endeavor for space agencies worldwide.The Curiosity Rover is the longest active autonomous vehicle NASA has sent to Mars to study theclimate, geology, and potential habitability of the planet [10]. The Rover has been in operation forthe past ten years, and its path planning has been done by manual labor of Rover Planners (RPs) onEarth.There are several factors RPs consider when designing paths, including the change in elevation,distance to the desired destination, uncertainty about missing data on the terrain, etc. We study adataset of Curiosity Rovers curated paths from 154 sols (a sol being a Martian day, approximately24.6 hours). We demonstrate in Section 4 of the main paper that there is significant heterogeneitybetween RP’s paths. Each RP has a specific priority among safety, efficiency, risk, and missionconstraints that inform their path design. This motivates us to design an autonomous path-planningapproach that learns from these heterogeneous experts.C.2.2 Dataset CurationThe data consists of features that were created through a series of interviews with RPs, scientists,and engineers that ideally capture the decision-making process for rover path planning. The featureswere engineered to codify the reasoning behind the RP’s decisions. For example, RPs would visually18analyze the terrain and map waypoints to avoid “rough” terrains without quantifiable measures ofwhat is considered rough. We identify the following features to encode the mental models andstrategies of RPs with considerations of risks, efficiency, safety, and mission requirements.Distance Feature – The distance feature measures the percent added distance the rover must take byadding intermediate waypoints in relation to the direct distance between the start and end waypoints.The aim of this feature is to drive the rover’s necessary additional distance. 
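A minimal sketch of this computation, treating consecutive waypoints as straight segments per the assumption stated next, is given below; the 2-D waypoint representation is an illustrative simplification.

```python
# A minimal sketch of the distance feature described above: the percentage of
# extra driving distance introduced by intermediate waypoints, relative to the
# straight-line distance between start and goal. Waypoint-to-waypoint segments
# are treated as straight lines, per the assumption stated in the text.
import numpy as np

def percent_added_distance(waypoints):
    """waypoints: (K, 2) array ordered from start to goal."""
    segments = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
    direct = np.linalg.norm(waypoints[-1] - waypoints[0])
    return 100.0 * (segments.sum() - direct) / direct
```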
It is assumed that the path between waypoints is driven straight, as RPs rarely drive curved paths and instead set additional intermediate waypoints if the rover needs to avoid hazards between waypoints.
Unknown Data Feature – When constructing the height maps, data can be missing where the cameras cannot see terrain beyond a hill or obstructions such as a large rock or the rover itself. By traversing terrain with missing data, the RP places the rover at a higher risk of damage. The Unknown Data Feature is designed to minimize the distance the rover traverses over terrain without data. We compute it as the percentage of data missing in the height map along proposed trajectories.
Roughness Feature – Rover Planners ideally drive on relatively smooth surfaces, avoiding rough terrain that could damage the rover's hardware. Rover Planners also avoid terrain that is too soft, to prevent a fate similar to Spirit getting stuck in sand [70]. Here, roughness is computed as the difference between consecutive surface angles as the rover traverses to the goal point. The maximum and average roughness over proposed trajectories are measured to avoid large holes or rocks and to minimize travel on rough terrain.
Pitch and Roll Feature – The pitch and roll of the rover add another level of safety checks, ensuring that the rover will not face terrain that risks it rolling over.
Turning Trajectory – We include the turning trajectory as a feature to track. This feature calculates the angle the rover must turn at intermediate waypoints. With this feature, the cost of taking sharp turns accounts for the rover's hardware and long-term health.
Waypoint Grid Construction – The 64x64 waypoint grid is constructed by scaling the terrain height map along each axis and sampling the terrain height at each (x, y) coordinate in the scaled grid. We do so according to an inverse distance weighting over the four nearest points with height map data. We perform this scaling to limit the size of the action space of possible waypoints.

H(x, y) = \frac{\frac{h_1}{d_1} + \frac{h_2}{d_2} + \frac{h_3}{d_3} + \frac{h_4}{d_4}}{\frac{1}{d_1} + \frac{1}{d_2} + \frac{1}{d_3} + \frac{1}{d_4}}

H(x, y) is the height evaluated at each point (x, y) of the waypoint grid, and h_i, d_i are the height of and distance to the i-th nearest point with height map data, respectively.
C.2.3 Description of Policy
The action space is the 64 by 64 discrete grid of 4096 possible successor waypoints, which was chosen to be large enough to give high precision when selecting waypoints. The average distance between grid points ranges from 0.01 m to 0.7 m across Sols. We define our learned policy in Equation 9 from our learned Q-function Q_\theta:

\pi_\theta(s) = \arg\max_{a \in \mathcal{A}} Q_\theta(s, a) \quad (9)

As mentioned in the main paper, we consider the three-waypoint planning problem; therefore, an action a (i.e., the intermediate waypoint) determines the trajectory from the current point to the intermediate waypoint and then from the intermediate waypoint to the ending waypoint. We calculate the features of the action (i.e., the next waypoint) for each of the two segments of the trajectory (current point to next waypoint, and next waypoint to goal point).
Table 6: This table shows the APA-style statistical test results for Friedman (α = 0.05, d.o.f. = 3) and Posthoc Nemenyi (α = 0.05) of DROID with respect to baselines in MPP.
All reported test statisticsare significant other than the italicized metrics (if the Friedman test results are insignificant, noposthoc analysis is performed).Mars Path PlanningBenchmark Undirected Distance from Final LogMethods Hausdorff Waypoint Distance LikelihoodDiverse Demonstration Modeling ( n= 114 )Friedman 112.98 44.91 408.32 786.47DROID vs BC Batch 2.19 0.15 3.93 4.33DROID vs BC Batch Large 2.29 0.67 14.45 4.23DROID vs Diffusion 4.22 1.11 4.77 4.04DROID vs A VRIL Batch 6.29 4.00 4.84 12.07DROID vs A VRIL Batch XL 4.09 0.07 5.07 3.17DROID vs A VRIL SRL 5.18 0.83 6.43 7.57DROID vs A VRIL Single 6.19 3.86 12.74 12.16DROID vs MSRD-Offline 2.30 0.24 10.08 6.51Policy Transferability ( n= 49 )Friedman 74.09 15.08 153.77 345.76DROID vs BC Batch 2.14 N/A 3.83 1.49DROID vs BC Batch Large 2.89 N/A 3.41 1.42DROID vs Diffusion 2.55 N/A 3.73 3.91DROID vs A VRIL Batch 4.15 N/A 7.43 8.89DROID vs A VRIL Batch XL 3.11 N/A 2.81 3.28DROID vs A VRIL SRL 3.73 N/A 2.99 6.27DROID vs A VRIL Single 4.22 N/A 7.14 8.63DROID vs MSRD-Offline 3.55 N/A 2.72 3.84Reward ( n= 49 )Friedman 65.42 172.63 173.50 129.79DROID vs BC Batch N/A N/A N/A N/ADROID vs BC Batch Large N/A N/A N/A N/ADROID vs Diffusion N/A N/A N/A N/ADROID vs A VRIL Batch 2.73 6.16 6.51 0.21DROID vs A VRIL Batch XL 6.10 7.96 8.10 0.34DROID vs A VRIL SRL 5.97 7.69 7.94 2.88DROID vs A VRIL Single 3.16 6.05 6.56 4.59DROID vs MSRD-Offline 2.18 0.32 3.59 2.48C.2.4 MetricsHere, we include further description of the metrics we study in the MPP problem:1. Average Distance from Midpoint: The average distances from our policies’ predicted way-points to the demonstrated waypoints.2. Distance from Endpoint: The average distance from the final waypoint selected by the pathgenerated by each technique to the goal point.3. Undirected Hausdorff Distance [64]: This metric measures the maxima between the Di-rected Hausdorff distances mapping both our learned policy’s set of waypoints to the expertwaypoints and vice-versa.4. Average Log Likelihood: This metric measures the likelihood of expert demonstration un-der the learned policy.C.2.5 AnalysisWe showcase the specific metrics that DROID outperforms baseline techniques on for MPP in Ta-ble 6. Rather than assuming homogeneity across demonstrations or discarding data to design apersonalized policy for each RP, DROID takes advantage of per-RP modeling and knowledge shar-ing to significantly outperform three out of four metrics in the Diverse Demonstration modelingbenchmark. On the policy generalization benchmark, DROID can also model the latent objectivesfrom diverse experts to design a trajectory in unseen Sols that align closer to the expert’s true pathwhile successfully capturing the high-level common task goal. Lastly, DROID is the only technique20Figure 4: This figure shows Shapley values (the contribution of each of the composite features tothe model’s reward estimate) for the learned rewards of RP 1 (left) and 5 (right) evaluated on Sol2030’s demonstrated path.Figure 5: These figures show DROID’s policy outputs on a terrain map to plan for the next way-point from the Start point (left) and Waypoint 1 (right). The orange spheres represent the selectedwaypoints of DROID and the expert. Highlighted in green above the terrain map are the top 10highest-rated successor waypoints. 
The orange labels correspond to DROID’s found waypoints, andthe red labels correspond to the expert demonstration’s waypoints.to show significantly better performance on downstream reward transfer, indicating the learned re-ward is a more useful encoding of an expert’s latent objective and can be used to better interpret thesalient features of a given expert.C.2.6 Additional Qualitative AnalysisIn this section, we discuss the additional contributions of DROID to the goal of interpreting expertdecision-making and how it is valuable in the domain of path planning for the Mars Curiosity Roverand future missions.Feature Contribution Analysis We perform a Shapley value analysis [71] on each RP’s learnedreward function. The analysis measures how adding a feature would change the prediction outputand is helpful in comparing the relative importance different RPs place on features, even on Solsthey have not explicitly planned on. As shown in Figure 4, we evaluated two randomly selected RPs(1 and 5) on Sol 2030. We observe that all drivers value that the path has a low pitch. However,RP 1 prefers to have a smaller turning angle, while RP 5 values a lower pitch. This demonstrateshow DROID can model heterogeneous strategies and quantify the influence of specific features onthe modeled objective function. We can identify why certain RPs like or dislike a given path andunderstand which features contribute to that assessment.Learned Reward and Policy First, we analyze our learned Q-function and study how it can pro-vide insight into the decision-making process of our model. We showcase in Figure 5 how ourtechnique can highlight the top 10 highest-rated successor waypoints from the start and midpoint21Figure 6: This figure visualizes actual 3D Mars terrain map with DROID’s mean estimate of thelearned task (left) and strategy (right) reward for Sol 2163. X-axis and Y-axis correspond to thesurface coordinates, Z-axis corresponds to elevation, and heatmap coloring is the normalized rewardoutput. The black line with arrows interlayed represents the path of the rover. The orange labels areDROID’s found waypoints and the red labels are ground-truth demonstration waypoints. 
Point 0 isthe starting point, Point 1 is the intermediate waypoint, and Point 2 is the final waypoint selected.Table 7: This table shows the ablation performance of DROID in Cartpole (Left) and MPP (Right).Bold indicates the best-performing model of the metric.CartPoleBenchmark KL Frechet Undirected LogMethod Divergence Distance Hausdorff LikelihoodDiverse Demonstration Modeling ( n= 40 )Ablation 1 6.758 0.492 0.340 -63.926Ablation 2 8.250 0.716 0.608 -68.511Ablation 3 6.718 0.532 0.387 -63.728Ablation 4 9.756 0.992 0.768 -81.057Ablation 5 5.868 0.420 0.263 -42.553Ablation 6 6.244 0.444 0.298 -92.006DROID (ours) 6.047 0.425 0.261 -37.948Policy Generalizability ( n= 40 )Ablation 1 7.253 0.504 0.358 -66.059Ablation 2 8.889 0.743 0.654 -71.426Ablation 3 6.455 0.509 0.316 -65.631Ablation 4 10.045 1.033 0.797 -84.323Ablation 5 4.632 0.419 0.276 -45.663Ablation 6 6.626 0.487 0.284 -92.680DROID (ours) 5.271 0.412 0.207 -38.057MPPBenchmark Undirected Distance from Final LogMethods Hausdorff Waypoint Distance LikelihoodDiverse Demonstration Modeling ( n= 40 )Ablation 1 4.871 1.557 8.391 -10.157Ablation 2 6.084 0.288 7.575 -10.104Ablation 3 7.126 0.571 7.287 -8.431Ablation 4 6.720 0.209 7.441 -8.419Ablation 5 3.910 4.014 7.498 -225.212Ablation 6 5.556 6.783 9.389 -14.479DROID 4.592 0.070 6.780 -7.261Policy Generalizability ( n= 40 )Ablation 1 8.086 1.842 8.945 -15.010Ablation 2 8.071 1.615 8.331 -16.334Ablation 3 9.318 3.933 9.644 -16.503Ablation 4 8.518 0.576 7.744 -11.391Ablation 5 6.162 7.295 9.537 -13.610Ablation 6 8.026 9.078 9.246 -30.037DROID 6.144 0.277 6.407 -18.483positions, respectively. Providing multiple options aligned with the expert’s latent preference couldbe beneficial for a future assistive tool for RPs. Expert drivers at NASA can also study DROID’srecommended waypoint and similar waypoints that are rated highly by our model. If an expert dis-agrees with the best action identified by DROID, we can find several additional options that alignwith that expert’s latent preferences.We examine the uncertainty in the reward predictions of our model by plotting the standard deviationof the strategy reward posterior. As shown in Figure 7, we can estimate how uncertain our modelis about different parts of the terrain due to the limited coverage of the dataset. Intuitively, areas ofthe state space where the demonstrations have not covered, such as the edges of the terrain, have ahigher estimate of uncertainty.22Figure 7: This figure shows a heatmap of Sol 2163 of DROID’s Strategy Reward log standarddeviation estimate, where higher value represents greater uncertainty in the reward estimation.Proposed Application to NASA The explainability provided by DROID has significant potentialapplication as a supplementary planning tool for interplanetary exploration. By leveraging a Shapelyvalue analysis of the importance of different factors, such as the change in elevation and uncertaintyabout missing data on the terrain, rover planners can gain a deeper understanding of the objectivefunction our algorithm extracts for modeling rover path planning. 
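For readers who want to reproduce this kind of feature-attribution analysis, a minimal Monte Carlo Shapley estimator is sketched below. It is not the implementation used in the paper: the reward function, feature names, and baseline vector are illustrative placeholders.

```python
import numpy as np

def shapley_values(reward_fn, x, baseline, n_samples=1000, seed=0):
    """Monte Carlo estimate of per-feature Shapley values for a scalar reward model.
    reward_fn: maps a 1-D feature vector (e.g., pitch, turn angle, roughness) to a scalar.
    x: feature vector of the path being explained; baseline: reference ("absent") values."""
    rng = np.random.default_rng(seed)
    x, baseline = np.asarray(x, dtype=float), np.asarray(baseline, dtype=float)
    phi = np.zeros(len(x))
    for _ in range(n_samples):
        z, prev = baseline.copy(), reward_fn(baseline)
        for i in rng.permutation(len(x)):
            z[i] = x[i]                       # add feature i to the coalition
            cur = reward_fn(z)
            phi[i] += cur - prev              # marginal contribution of feature i
            prev = cur
    return phi / n_samples                    # sums (in expectation) to f(x) - f(baseline)

# Toy usage with a linear reward (values converge to w_i * x_i):
# w = np.array([-0.8, -0.3, -0.5])
# print(shapley_values(lambda f: float(f @ w), x=[0.2, 0.9, 0.4], baseline=[0.0, 0.0, 0.0]))
```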
Therefore, DROID can explainwhat features contribute most to its perceived estimate of any human or AI-designed path.DROID may have an application to inform the design of future missions by providing more insightinto the limitations of the rover and the types of environments in which it is best suited to operate.As shown on the Curiosity Rover dataset, DROID could be applied to give rapid feedback aboutpaths that best avoid sharp rocks (Rough Terrain), which may damage the open holes on the rover’swheels [9], thus improving the longevity of the rover. Similarly, with additional data, such as orbitalsatellite imagery, our approach could be used to evaluate the value of landing sites [57] by studyingour learned RP objective function on constructed terrain maps.Furthermore, the ability to model different strategies taken by human drivers could potentially beused in the future by JPL in developing training programs. We hope a future application of DROIDwould be to capture difficult-to-articulate tribal knowledge among rover planners and identify themost important features to trainees. We can describe implicitly understood knowledge to help trainnew drivers at NASA faster and with greater efficiency. By letting DROID explain which featurescontribute most to the underlying latent RP strategy, human drivers can better understand whatfeatures to consider when navigating other extraterrestrial terrains.Our hope is that DROID lessens the burden for operators to plan out daily schedules for rovers(since it performs automated path planning that better optimizes operator preferences). Moreover,the algorithm’s ability to reason under uncertainty makes it particularly useful for fast path planninginference, even when occluded information exists from cameras or other sensors. 
With DROID's ability to learn diverse expert strategies and to plan under uncertainty and occlusions, our algorithm could further advance autonomous rover exploration.
D Additional Related Works
In this section, we describe additional related work on offline path planning under uncertainty and navigation beyond the MDP setting.
Algorithm 1: Dual Reward and policy Offline Inverse Distillation (DROID)
Input: Training iterations E, number of strategies M = |D|, demonstration datasets for all strategies D = {D_j}_{j=1}^{M}, learning rate α.
Output: Learned policy set Π, reward functions R.
Initialize: all reward function and policy parameters Θ = {θ_Task, {θ_{S-i}}_{i=1}^{M}, φ_Task, {φ_{S-i}}_{i=1}^{M}}.
1  for i = 1 to E do
2      Zero the gradients for the shared reward and policy parameters θ_Task, φ_Task
3      for j = 1 to M do
4          Sample a minibatch (s, a) ∼ D_j
5          Calculate Q(s) = Q_task(s) + Q_{S-j}(s)
6          Combine loss terms to calculate the overall loss L according to Equation 7
7          Calculate the AVRIL loss L_AVRIL(θ, φ) according to Equation 1 with the two improvements introduced in Equations 5 and 6
8          Calculate the reward regularization L_RD({φ_{S-i}}_{i=1}^{M}; D) according to Equation 3 and the policy regularization L_PD({θ_{S-i}}_{i=1}^{M}; D) according to Equation 4
9          Calculate the gradient of L with respect to all parameters Θ
10         Update strategy-only reward parameters: φ_{S-j} ← φ_{S-j} + α ∂L/∂φ_{S-j}
11         Update strategy-only policy parameters: θ_{S-j} ← θ_{S-j} + α ∂L/∂θ_{S-j}
12         Aggregate shared task reward gradients: Δφ_Task ← Δφ_Task + ∂L/∂φ_Task
13         Aggregate shared task policy gradients: Δθ_Task ← Δθ_Task + ∂L/∂θ_Task
14     Update the shared reward and policy parameters with learning rate α: θ_Task ← θ_Task + α Δθ_Task, φ_Task ← φ_Task + α Δφ_Task
Offline Learning. Offline learning has been used to teach robots to perform tasks such as assembly, manipulation, and grasping in manufacturing settings [26]. It has also been used in healthcare to assist with tasks such as clinical diagnosis [26]. In medical diagnosis, offline LfD can be used to understand how expert clinicians diagnose a disease based on temporal indicators of key symptoms and medical reports. DROID's success in offline learning, particularly in complex domains, makes it useful for understanding an expert's decision-making process without risking the safety of humans or expensive equipment.
Path Planning Algorithms. Several works use human-inspired admissible heuristic functions to plan paths [72, 73]. However, these functions are handcrafted and require domain expertise to design. Model Predictive Path Integral (MPPI) control has been studied for local path following for rovers [74]. However, classical path planning approaches fail without a high-fidelity simulator [75, 76]. Other works address path planning to maximize a reward function under uncertainty [77, 78, 79]. However, these techniques rely on exploration to obtain a better estimate of their cost function, which may not be feasible in offline learning. Our algorithm, DROID, learns heterogeneous preferences and policies directly from expert demonstrations without assuming a hand-designed reward function or a simulator.
Generalization Performance of Navigation Algorithms. Another important factor in offline path planning is the generalization of the planning algorithm to novel terrains. To improve generalization, existing work attempts to decouple the training of the feature-extraction and navigation blocks using deep RL [80]. However, they perform online planning through 2D navigation to extract an attention map, which is not feasible in the offline setting. Additionally, Meng et al.
[81] propose a path planning algorithm that balances the trade-off between safety and efficiency under uncertainty. However, in contrast with DROID, these approaches do not generalize to unseen environments that contain new or additional obstacles.
Global Path Planning for the Martian Domain. Several prior works study path planning in the Martian domain but focus on local path planning (short-range, egocentric navigation) and do not address global, long-range path planning. Hedrick et al. [56] propose efficient Martian path planning, and Rover-IRL [57] learns a local cost function from demonstration, but both fail to learn an RP objective function that can transfer to new and unexplored terrain, a key challenge in the Mars domain [9]. Unlike previous path-planning approaches, DROID infers a global RP objective function that transfers to downstream planning on unseen terrains.
E Pseudocode for DROID
We present the pseudocode for DROID in Algorithm 1. For each of the E training iterations (line 1), we iterate over all M demonstration datasets (line 3). For each strategy, we sample a minibatch from the strategy's dataset D_j (line 4). We then calculate the overall loss L (line 6), with which we calculate the gradient with respect to all parameters (line 9). We update the strategy-only reward and policy parameters (lines 10-11) and aggregate the shared task reward and policy gradients (lines 12-13). After iterating over all M strategies, we update the shared reward and policy parameters with the aggregated gradients (line 14).
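To make the training loop concrete, the sketch below mirrors Algorithm 1 in PyTorch. It is a minimal illustration, not the authors' implementation: the loss callables stand in for Equations 1 and 3-7, the network architectures and dataset interface are placeholders, and plain gradient steps on the loss replace whatever optimizer was actually used (the pseudocode's ascent form is written here as descent on L).

```python
# Minimal sketch of Algorithm 1 (DROID). `avril_loss`, `reward_reg`, and
# `policy_reg` are hypothetical stand-ins for the paper's loss terms.
import torch
import torch.nn as nn

def make_net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

def train_droid(datasets, obs_dim, n_actions, epochs, lr,
                avril_loss, reward_reg, policy_reg):
    M = len(datasets)                                   # one dataset per strategy
    q_task, r_task = make_net(obs_dim, n_actions), make_net(obs_dim, 1)
    q_strat = [make_net(obs_dim, n_actions) for _ in range(M)]
    r_strat = [make_net(obs_dim, 1) for _ in range(M)]
    task_params = list(q_task.parameters()) + list(r_task.parameters())

    for _ in range(epochs):                             # line 1
        task_grads = [torch.zeros_like(p) for p in task_params]        # line 2
        for j, data in enumerate(datasets):             # line 3
            s, a = next(iter(data))                     # line 4: minibatch (s, a) ~ D_j
            q = q_task(s) + q_strat[j](s)               # line 5: Q = Q_task + Q_{S-j}
            # Lines 6-8: AVRIL term plus reward/policy distillation regularizers.
            loss = (avril_loss(q, r_task(s) + r_strat[j](s), s, a)
                    + reward_reg(r_strat) + policy_reg(q_strat))
            strat_params = list(q_strat[j].parameters()) + list(r_strat[j].parameters())
            grads = torch.autograd.grad(loss, strat_params + task_params,
                                        allow_unused=True)             # line 9
            with torch.no_grad():
                for p, g in zip(strat_params, grads[:len(strat_params)]):
                    if g is not None:
                        p -= lr * g                     # lines 10-11: per-strategy update
                for acc, g in zip(task_grads, grads[len(strat_params):]):
                    if g is not None:
                        acc += g                        # lines 12-13: accumulate shared grads
        with torch.no_grad():                           # line 14: shared update
            for p, g in zip(task_params, task_grads):
                p -= lr * g
    return q_task, r_task, q_strat, r_strat
```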
W7eg2NqFJ60 | Transforming a Quadruped into a Guide Robot forthe Visually Impaired: Formalizing Wayfinding,Interaction Modeling, and Safety MechanismJ. Taery KimGeorgia Institute of Technologytaerykim@gatech.eduWenhao YuGoogle DeepMindmagicmelon@deepmind.comYash KothariGeorgia Institute of Technologyykothari3@gatech.eduBruce N. WalkerGeorgia Institute of Technologybruce.walker@psych.gatech.eduJie TanGoogle DeepMindjietan@deepmind.comGreg TurkGeorgia Institute of Technologyturk@cc.gatech.eduSehoon HaGeorgia Institute of Technologysehoonha@gatech.eduFigure 1: We develop a robot guide dog by formalizing the wayfinding task, developing an interac-tion model from the collected data ( Left), and employing action shielding, to guide a user ( Right ).Abstract: This paper explores the principles for transforming a quadrupedal robotinto a guide robot for individuals with visual impairments. A guide robot has greatpotential to resolve the limited availability of guide animals that are accessible toonly two to three percent of the potential blind or visually impaired (BVI) users.To build a successful guide robot, our paper explores three key topics: (1) for-malizing the navigation mechanism of a guide dog and a human, (2) developing adata-driven model of their interaction, and (3) improving user safety. First, we for-malize the wayfinding task of the human-guide robot team using Markov DecisionProcesses based on the literature and interviews. Then we collect real human-robot interaction data from three visually impaired and six sighted people anddevelop an interaction model called the “Delayed Harness” to effectively simulatethe navigation behaviors of the team. Additionally, we introduce an action shield-ing mechanism to enhance user safety by predicting and filtering out dangerousactions. We evaluate the developed interaction model and the safety mechanismin simulation, which greatly reduce the prediction errors and the number of colli-sions, respectively. We also demonstrate the integrated system on a quadrupedalrobot with a rigid harness, by guiding users over 100+ m trajectories.Keywords: Assistive Robot, Autonomous Navigation, Interaction Modeling7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.1 IntroductionGuide dogs play a crucial role in the lives of blind or visually impaired individuals (BVIs) by pro-viding them with increased mobility and independence [1]. Unfortunately, limited availability isone of the known issues of guide animals due to the lengthy training process and their relativelyshort working lifespan. Consequently, only a small fraction, approximately two to three percent [2],of potential users are able to benefit from the assistance of guide dogs. Researchers have exploredvarious types of assistive technologies for BVIs, including smart canes [3, 4, 5, 6, 7], belts [8, 9],glasses [10], audio/haptic systems [11, 12], and mobile robots [13, 14, 15, 16, 17, 18]. Recently, thedevelopment of affordable and capable quadrupedal robots has opened up opportunities for robotguide dogs [19, 20, 21, 22, 23], which possess notable qualities in terms of autonomy and mobility.Developing robotic guiding systems for BVIs requires additional challenges beyond the commonpractices of the existing autonomous robot navigation systems [24, 25]. 
Unlike conventional for-mulations that simplify the problem as Point-goal orObject-goal navigation [26], navigation ofthe human-robot dyad often involves more complex and subtle bidirectional interactions, includinghigh-level command generation by the user and intelligent disobedience by the robot. Furthermore,ensuring BVI user safety is critical during navigation, yet it is a challenging problem because thesensor configurations are robot-centric.This paper discusses principles and practical solutions for transforming an autonomous robot into aguide robot for individuals with visual impairments. The topics that we explore include formalizingthe wayfinding task of a human and guide dog team, developing a data-driven model of the humanand the guide robot interaction, and improving the safety of the BVI user. First, we identify themechanism of wayfinding in the literature [27, 28, 29] and formally define the guided navigationtask using Markov Decision Processes (MDPs). Then we collect real human-robot interaction dataof three BVI and six sighted users and develop a concise model, Delayed Harness , to better predictthe navigation behaviors of the team in simulation. Finally, we introduce an action shielding mech-anism [30] to improve the safety of the BVI user, which predicts the human position in the next stepand filters out dangerous actions.A mid-size quadrupedal robot, AlienGo [31], with a rigid harness is chosen as the evaluation plat-form. Initially, we evaluate the accuracy of the developed Delayed Harness interaction model andthe improved safety of the learned policy in the Habitat [25] simulator. Then we deploy the de-veloped system on a real AlienGo robot to demonstrate the effective long-range navigation of thehuman-robot team. Our contributions include: (1) We develop a formal definition of the wayfindingtask of the human and guide dog team. (2) We collect human-robot interaction data, develop a con-cise interaction model, and open-source it to foster guide robot research in the robotics community.(3) We improve the safety of users by employing action shielding. (4) We demonstrate a completesystem on hardware that can travel along trajectories for more than 100meters.2 Related WorkAssistive Guidance Systems for Visually Impaired People. Researchers have developed diverseassistive systems to enhance mobility for visually impaired people. Such assistive systems consistof essential elements, including sensors for perceiving the environment, computers for processinginformation, and various interfaces to instruct users. One common form is passive hand-held orwearable devices, such as canes [3, 4, 5, 6, 7], belts [8, 9, 11, 12], smart glasses [10], and smartphoneapplications [32, 33], which generally give audio or haptic feedback to inform users. On the otherhand, there exist guide robots [34, 13, 14, 19, 20, 22] that are designed to lead users actively towarddestinations. Since the earliest guide robot MELDOG [34], such systems are often built on top ofwheeled robots for mobility [13, 14, 15, 35, 16, 17, 36, 37, 38]. Recently, legged robots are emergingin consideration of the ability to travel as real guide dogs [19, 20, 21, 22, 23, 39, 40, 41].Human Modeling in Guide Robots. Developing guide robots necessitates modeling human-robotinteraction because they communicate with users via physical interactions. For instance, Wang etal. [16] introduce a rotating rigid rod model that assumes the user is holding the end of the rotating2rod at a fixed distance. 
On the other hand, some guide robots [19, 42, 22] communicate via a flexibleleash, which investigates a mathematical model for capturing slack and taut modes. Likewise, inter-actions are typically modeled based on the interface of robots that have been developed [43, 35]. Themost common interface for a real guide dog is a fixed harness [44, 45]. In this paper, we claim thata simple offset between the human and the robot is insufficient to capture bidirectional interactionsand propose a novel mathematical model to improve the human-robot navigation experience.Safety-aware Reinforcement Learning for Navigation. Autonomous robot navigation hasgained significant attention in robotics and has been approached through both planning-based meth-ods [46, 47] and learning-based techniques [48, 49, 50]. Please refer to the survey paper [51, 52]for further detailed references. Numerous algorithms have been developed to enhance the safety ofnavigation, including action shielding [30, 53], model predictive methods [54], multi-agent strate-gies [55], and near future considerations [56, 57]. Our work is also inspired by these prior workswhile being more customized toward the human-robot team navigation task.3 Formalizing the Wayfinding TaskThis section identifies how a human interacts with a guide dog and formulates the wayfinding task fora guide robot using Markov Decision Processes (MDPs). Our problem definition aims to developmore natural and comfortable user interfaces of guide robots for BVI users, compared to typicalPoint-Goal orObjective-Goal navigation problems [26].Wayfinding with a Guide Dog. We first identify how guide dogs and human handlers communi-cate to work as a team by reviewing relevant background materials [44, 27, 28, 29] and conductinginterviews with actual guide dog users. Typically, the human user is the one who generates high-level directional cues, such as “go straight” or “turn left”, based on his or her own knowledge, whilethe guide dog handles local path following and collision avoidance based on visual perception. Un-less the team routinely travels to the same destinations, the guide dog is not typically aware of thegoal or path. Then, the guide dog sees and leads the human handler based on the given command.The guide dog needs to follow the virtual “travel line”, a centerline of the hallway directed by thehandler while adjusting a trajectory to ensure the safety of both the human and the dog itself. We re-fer to this type of human-dog navigation as Wayfinding , and it is our goal to replicate this interactionwith a human-robot team.In addition, a guide dog should understand user intention and compensate for any errors. We findtwo major types of human error: Orientation error andTiming error . For instance, orientation errorcan occur in environments such as a long hallway, where the user command is misaligned with thetravel line and must be corrected. Timing error occurs when a human handler gives turn commandsbefore or after the actual turning point. For instance, if a turning command is given early whiletraveling down a hallway, the robot is expected to find the next available turn or stop if no turn isavailable. Please refer to Figure 6 and the supplemental video for illustrative examples.Problem Formulation. We define the wayfinding task for the robot as following high-level hu-man directional cues: going straight ,turning left/right , orstopping while adjusting a detailed trajec-tory to avoid collisions. 
We use Markov Decision Processes (MDPs) to formalize this wayfinding task, defined as a tuple of the state space S, action space A, stochastic transition function p(s_{t+1} | s_t, a_t), reward function r(s_t, a_t), and distribution of initial states s_0 ∼ ρ_0. The state and action spaces can vary depending on the choice of robot. For instance, one common choice is to construct the state from onboard sensors, such as RGB-D cameras or lidar, while defining the action space as a set of possible navigation commands to the robot. We use depth images, lidar readings, and the relative location from the start position as the state, and we adopt discrete actions following recent autonomous navigation papers [49, 58, 50].
The reward function is critical for describing the desirable wayfinding behavior, and is defined as:

r_t = (d_t - d_{t-1}) - a \cdot \max(|\bar{\theta} - \theta_t| - b, 0) - c_t^{\text{collide}} - \lambda \quad (1)

where d_t is the human's travel distance from the start point, θ_t is the human's orientation, \bar{θ} is the target orientation (e.g., 0° for the straight command and 90° for the turn-left command), a is a scalar weight, b is the error margin above which the corresponding term is activated, c_t^collide is a collision penalty, and λ is a time penalty. The first term encourages the team to travel as far as possible while minimizing the human's unnecessary movements, such as stepping backward; the second term encourages matching the desired orientation (with some flexibility); and the third and fourth terms penalize unsafe and unnecessary behaviors, respectively.
Because our formulation rewards the team for traveling long distances, it encourages the navigation policy to compensate for any orientation or timing errors, particularly in narrow hallways. Also note that all the terms are defined in a human-centric manner in order to provide a smooth trajectory for the visually impaired user, not for the robot itself.
4 Modeling Human-Guide Robot Interactions
In order to develop a successful guiding policy, it is important to simulate human-robot interactions accurately. Guide dogs wear a rigid harness, instead of a loose leash, allowing the human to feel both the guide dog's position and its orientation. We adopt such a fixed harness for our human-robot teams. While there are established human-robot models, such as a rotating rod [16], a slack/taut leash [19], and a geometric model [35], we require a new model specifically tailored to our guide dog robot with a harness. In this section, we propose a new interaction model, the Delayed Harness, and optimize the model parameters using real-world data.
Preliminary: Rigid Harness Model. The simplest model for describing rigid-harness interactions is a fixed offset between the human and the robot. In this model, the human maintains their relative position to the robot, facing the same forward direction. Formally, given the robot position and orientation (x_t^R, y_t^R, θ_t^R), the human state (x_t^H, y_t^H, θ_t^H) is computed by assuming a fixed distance d between the handler and the robot:

x_t^H = x_t^R + d \cos θ_t^R, \quad y_t^H = y_t^R + d \sin θ_t^R, \quad θ_t^H = θ_t^R. \quad (2)

This model expects the human and the robot to rotate together, requiring extra space compared to leash, string, or rod connections.
Delayed Harness Model. A rigid harness model is not accurate because a human cannot exactly follow the robot in reality. As a result, a learned robot policy based on such a model may lead to unnecessary movements or even collisions with the user.
Figure 2: Delayed Harness Model.
To this end, we develop a more flexible model named Delayed Harness.
This model is based on the observation thata robot’s action causes the change of the relative location, butthat a human tends to gradually recover the default relative lo-cation (Figure 2). Let us define the robot and human states asxHt= [xHt, yHt, θHt]TandxRt= [xRt, yRt, θRt]T, which givesthe offset ot=xHt−xRt. Once the robot takes an action at,it first changes its position: xRt+1=xRt+at. Now, we havea temporary offset ˆot+1=xHt−xRt+1(gray dashed line inFigure 2). We gradually interpolate this temporary offset to-ward the default offset, ̄o(green line), to obtain the correctedoffset in the next time: ot+1=αˆo+ (1−α) ̄o. Finally, we compute the next human positionxHt+1=xRt+1+ot+1. Note that we use simple arithmetic operators for illustration, but in reality,they should be handled as transformation operations.Data Collection and Model Fitting. OurDelayed Harness model requires four parameters: thedefault offset ̄o= [∆ x,∆y,∆θ]Tand the decaying parameter α. We determine these parametersby fitting them to the collected trajectories. We recruit three BVI users and six sighted people, andask them to walk with a manually-controlled robot guide dog over five trajectories described in thework of Nanavati et al. [35]. The sighted individuals each wore a eye mask while walking with therobot. Please refer to the supplemental material for more details of the data collection process. Thecollected data is provided as supplemental material ( interaction-data.zip ).45 Improving Safety via Action ShieldingWe further introduce an action-shielding approach [30] to improve safety during human-robot navi-gation. Specifically, our focus is on enhancing the safety of the BVI user based on data from sensorsarranged in a robot-centric layout. Inspired by a guide dog user who also uses a cane, our actionshielding predicts the changes in the human and robot positions according to the possible actionsand filters out unsafe actions. Particularly, we designed the shielding mechanism based on lidar,which has relatively better accuracy ( ±5cm) than a depth camera ( 5% up to 15 m).The implementation of such action shielding requires two subroutines: (1) predicting possible colli-sions in the next time step, and (2) suppressing potentially risky actions.Figure 3: Examples of Shielding Zones.Computation of Shielding Zone. To iden-tify the shielding zone, we compute the occu-pied areas of both agents in the current andnext time steps, which are approximated as cir-cular shapes. While we have four collisionprimitives, the region between the occupied ar-eas of the human and robot agents should beempty because the two agents are physicallyconnected through a harness. In addition, thezone between the current and the next timesteps should also be collision-free to account for continuous movements. Therefore, we computea convex hull of all the collision shapes from both agents at two timesteps to compute the shield-ing zone (red line in Figure 3). Once computed, we sample rays from the origin to determine thedistance thresholds for lidar. Refer to the Appendix 8.2 for more details.Suppressing Unsafe Actions. When shielding is activated, we can suppress unsafe actions byadjusting the action probability distribution by a suppression factor β. 
A suppression factor of zeromeans assigning zero probability to the unsafe actions to force the policy to select safe actions only.It is also possible to employ a small positive value to encourage exploration during training.The action shielding technique can be utilized during both training and testing. During training, weanticipate the agent will learn a safer policy, especially when sensing is not perfect due to sensornoises or blind spots. Alternatively, we can combine action shielding with any pre-trained policyto improve its safety at the evaluation stage. We examine how the learned policy performs underdifferent action suppression factors in ideal and realistic sensing conditions in Section 6.3.6 ExperimentsWe conducted experiments to address the following questions: (1) Is the proposed delayed harnessmodel more accurate than other models? (2) How does the interaction model impact performance?(3) Does action shielding effectively decrease collisions in noisy environments? (4) When combined,can the proposed robot guide dog navigate to the destination with a human user?6.1 Training DetailsOur policies were trained in the Habitat simulator [25] with the Matterport3D dataset [59] using DD-PPO [49]. The training typically took 10 million steps until convergence, which roughly correspondsto two to three days. The observation space consists of all the sensor information, including depthimages, 2D lidar reading, and the relative location from the starting position, followed by a one-hot encoded high-level directional cue provided by the user. For the low-level robot action space,we choose ten discrete actions {stop, forward, turn 10◦left/right, sidestep left/right, diagonal stepstoward the front left/right }inspired by the recent navigation literature [60]. We particularly includedside and diagonal steps because they are effective for guiding the user to escape confined regions.5Table 1: Comparison of Different Interaction ModelsModel Fixed-unopt-ind Fixed-opt-ind RR-opt-ind DH-opt-ind (ours) DH-opt-allRMSE 136.2 102.8 431.4 86.3 123.1Figure 4: ( Left) Different trajectories of interaction models optimized using the collected motioncapture data on each individual subject. ( Right ) Each subject’s base positions derived from theoptimized results of the delayed harness model.6.2 Comparison of Interaction ModelsWe first compared our delayed harness model against two baselines: a fixed harness and a rotatingrod[16]. By default, the parameters are optimized for each individual subject, which gives us threemodels: Fixed-opt-ind ,RR-opt-ind (rotating rod), and DH-opt-ind (delayed harness). In addition, weincluded the unoptimized setting, Fixed-unopt-ind , which uses the starting parameter to estimate thedefault offset ̄oand assumes zero decay α= 0. We also investigated the DH-opt-all model, whichoptimizes a single set of parameters for all the subjects. For all the optimized models, we fitted theparameters over ten trajectories and validated them over five trajectories. The results indicated thattheDH-opt-ind model exhibited the highest accuracy, followed by Fixed-opt-ind ,DH-opt-all ,Fixed-unopt-ind , and RR-opt-ind (Table 1). Figure 4 left presents the example of the different trajectoriesbetween Fixed Harness andDelayed Harness models. For the same time step, Delayed Harness(connected via green line) is closer to the actual human location (orange line) compared to FixedHarness (grey line) by capturing the delayed response of the human user. 
It is also worth mentioningthat the low accuracy of a rotating rod model was because its interface was different than a harness:therefore, we did not further investigate more complex rod-based models, such as the geometricmodel proposed by Nanavati et al. [35].The accuracy difference between DH-opt-ind andDH-opt-all highlights the importance of mod-els customized for users. We observed that subjects have different default offsets (Fig. 4 right) orresponsiveness to direction changes. Therefore, we decided to train policies for each individual,assuming that it is possible to collect user-specific data at onboarding. In the future, it will be inter-esting to investigate preference optimization techniques [61] to improve robot guide dog behaviors.Table 2: Rewards with Different Train-ing and Testing Interaction Models.TestFixed DelayTrainFixed 5.96 4.27Delay 4.00 5.04Indeed, selecting the appropriate interaction model is criticalto achieving the best human-robot team navigation. Table 2presents the performance of the learned policies using var-ious interaction models at training and testing times. Bothpolicies with the fixed or delayed harness models showedtheir best performance when evaluated with the correspond-ing model. However, if the model is altered, the performancedrops, approximately 25% lower than the original perfor-mance. Therefore, we suggest to train a policy with a moreaccurate interaction model, such as Delayed Harness .6SensorConfigSupp. Factorduring TrainSupp. Factorduring TestCollision-FreeEp. Ratio ↑Avg. CollisionsPer Ep. ↓Ideal 0.0 0.0 (diverges)0.1 0.0 0.96 0.040.5 0.0 0.84 1.460.9 0.0 0.88 1.801.0 0.0 0.92 1.40(no shielding) 1.0 1.0 0.32 8.20Noisy 0.0 0.0 (diverges)0.1 0.0 0.68 4.120.5 0.0 0.84 1.480.9 0.0 0.64 2.521.0 0.0 0.72 3.16(no shielding) 1.0 1.0 0.52 3.96Table 3: Performance of Action Shielding in Ideal and Noisy Environments.6.3 Effectiveness of Action ShieldingThe main objective of our action shielding mechanism is to improve the safety of the human-robotteam in unseen environments with noisy sensor readings. To this end, we compared a policy withaction shielding against a vanilla RL policy in both ideal and noisy simulated environments. Addi-tionally, we varied an action suppression factor βduring training as well, which has a large effecton the performance. For noisy environments, we employed Gaussian noise for the lidar and Red-wood [62] noise with an intensity of 1.0 for the depth camera.In Table 3, we present the collision-free episode ratio and the average number of collisions perepisode as metrics for evaluating the safety of the policies. Overall, it is evident that activating actionshielding ( β <1.0) during evaluation effectively reduces the occurrence of collisions. However, thedesirable training configurations differed in ideal and noisy environments. In an ideal setting, theperformance is not very sensitive to the choice of the suppression factor. It is even very beneficial toturn on action shielding only at the testing stage, which shows a great performance margin against noshielding (collision-free episode ratios: 0.92vs0.32). In a noisy environment, training a policy withmild action shielding ( β∼0.5) was more effective for adapting a navigation strategy to cope withaction shielding. In any configuration, zero action supression βduring training results in divergenceof learning, which is likely due to insufficient exploration.Action shielding was effective even in real environments with noisy sensor readings. 
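Concretely, the suppression factor β evaluated in Table 3 can be applied to a categorical policy as in the sketch below. This is one possible reading of the mechanism: the renormalization and the all-unsafe fallback are assumptions for illustration, not details given in the paper.

```python
import numpy as np

def shield_action_probs(action_probs, unsafe_mask, beta):
    """Suppress unsafe actions in a categorical policy by a factor beta.
    action_probs: (A,) action probabilities from the policy.
    unsafe_mask: (A,) boolean, True for actions whose predicted human/robot
                 footprints intersect the shielding zone.
    beta = 0 removes unsafe actions entirely; a small positive beta keeps some
    probability mass on them, e.g., to aid exploration during training."""
    probs = np.asarray(action_probs, dtype=float).copy()
    probs[unsafe_mask] *= beta
    total = probs.sum()
    if total <= 0.0:                                   # every action flagged as unsafe
        return np.asarray(action_probs, dtype=float)   # fall back to the raw policy
    return probs / total

# Example: with beta = 0, the last two actions are masked out and the rest
# renormalize to [0.571, 0.429, 0.0, 0.0].
# shield_action_probs([0.4, 0.3, 0.2, 0.1], np.array([False, False, True, True]), 0.0)
```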
We presentedtwo distinct scenarios where the vanilla policy (without action shielding) failed to generalize to novelsituations, such as encountering curtains or large boxes that were unseen during training. Conversely,the policy with action shielding successfully navigated around these obstacles by predicting the userand robot trajectories. For more details, please refer to the supplemental video.6.4 Real-world ExperimentsFigure 5: Hardware PlatformWe deployed the proposed robot guide dog system on the hard-ware of AlienGo [31] by Unitree (Figure 5). This quadruped robotwas equipped with a Zed depth camera, a Slamtec RPLIDAR li-dar, and an Intel t265 tracking camera. The robot leads the uservia a harness while taking verbal commands as directional cues.The policy was trained and deployed with action shielding withaction supression of 0.5 and 0.0, respectively, based on Table 3.General Navigation. We assigned two BVI and three sighted users wearing an eye mask the taskof completing five indoor routes with four directional cues: forward ,left ,right , and stop .Route A ( 146 m) is a curved path expected to be traversed by a single forward cue. Routes B(130m), C ( 121m), D ( 120m), and E ( 80m) required the user to issue multiple turns based on theirown localization. Refer to Appendix 8.3 and the supplemental video for the details of the routes.We also set additional obstacles, such as wet floor signs and opened doors, on top of the existing7chairs and trashcans. In our experiments, the user successfully accomplished all the tasks withoutcollisions. In Route A, a single command of forward was sufficient to complete all the tasks, evenin a very long curved corridor. In Routes B and C, the user sometimes issued turning commandsearly or late, but the robot was able to compensate for the timing errors. In Routes C, D, and E, arobot intelligently adjusted the trajectory along the hallway without additional cues. Where therewere obstacles ahead, the robot guide dog led the user to avoid them while staying in the travel line.Occasionally, the human got close to obstacles or walls, and the robot prevented human collisionsby actively using action shielding, which encourages sidestepping as a means of avoidance. Afterthe navigation experiments, we conducted interviews with the BVI users (Appendix 8.3). The userswere positive about the system’s controllability and found their experience improved with familiarityduring the second trial.Figure 6: Trajectories with Two Type of HumanErrors: Orientation Error (a) and Timing Error (b).Orientation and Timing Errors. As ex-plained in Section 3, the guide robot must effec-tively handle cues that are provided with sometiming or orientation errors caused by humanusers. We showcase the robustness of the de-veloped robot guide dog by displaying the tra-jectories involving human errors. Figure 6(a)displays orientation errors on a forward cue,where the robot corrected the initial orientationerror and followed the direction of the hallwayto travel as long as possible. Figure 6(b) illus-trates the turning with timing errors , where therobot took diagonal stepping or 120◦turning to correct early or late issued commands, respectively.7 Conclusion and LimitationThis paper presents three topics and potential solutions for transforming a standard quadrupedalrobot into a robot guide dog to assist blind or visually impaired individuals. First, we study thenavigation patterns of human-guide dog pairs and establish a formal wayfinding task using MarkovDecision Processes (MDPs). 
Then we propose a concise interaction model called the Delayed Har-ness to effectively represent the interaction between a human and a robot guide dog, which leadsto more accurate behavior prediction and better navigation performance. The parameters of thisinteraction model are optimized with respect to the real interaction data of three visually impairedand six sighted subjects, which is provided as supplemental material to this manuscript. Finally,we introduce an action shielding mechanism to improve the safety of the human user, inspired bya guide dog user who also uses a cane along with a guide dog. We demonstrate that the integratedrobot guide dog can navigate with users over multiple 100+ m long trajectories. We hope that thetopics and techniques discussed in this study will inspire further research in the development ofguide robots for blind and visually impaired individuals.Limitation. The proposed solutions in the paper have room for improvement, and we plan toinvestigate the following research directions. Firstly, while we have explored only a limited set ofbasic verbal cues, real guide dog users employ a wider range of commands, such as “find an elevator”or “follow the person”, combined with non-verbal cues. It will be beneficial to formalize a moreexpanded set of human-guide dog interactions. Additionally, although our Delayed Harness modeleffectively captures typical navigation behaviors, it struggles with out-of-distribution situations likeusers pausing to chat with someone. In the future, we will expand the provided dataset to encompasssuch diverse scenarios. The developed action shielding mechanism is effective but not perfect.For instance, it can fail even in simulation due to blind spots of the lidar sensor. To resolve thisissue, we will explore alternative sensors and safety mechanisms to improve the safety of the humanand the robot. Our proposed system requires an hour-long data collection session for personalizednavigation. To mitigate this, we will explore online adaptation algorithms to reduce training time.Finally, we plan to evaluate the system on a larger population of blind or visually impaired users togather feedback for further improvements.8AcknowledgmentsThis research was supported by Google Research Collabs and Google Cloud. JK was supported bythe National Science Foundation Graduate Research Fellowship under Grant No. DGE-2039655.Any opinions, findings, and conclusions or recommendations expressed in this material are thoseof the author(s) and do not necessarily reflect the views of the National Science Foundation, or anysponsor.References[1] L. Whitmarsh. The benefits of guide dog ownership. Visual impairment research , 7(1):27–42,2005.[2] Guiding Eyes for the Blind. https://www.guidingeyes.org/about/faqs/ , 2022.[3] J. Borenstein and I. Ulrich. The guidecane-a computerized travel aid for the active guidanceof blind pedestrians. In Proceedings of International Conference on Robotics and Automation ,volume 2, pages 1283–1288. IEEE, 1997.[4] S.-J. Kang, Y . Ho, and I. H. Moon. Development of an intelligent guide-stick for the blind.InProceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat.No. 01CH37164) , volume 4, pages 3208–3213. IEEE, 2001.[5] K. Yatani, N. Banovic, and K. Truong. Spacesense: representing geographical information tovisually impaired people using spatial tactile feedback. In Proceedings of the SIGCHI Confer-ence on Human Factors in Computing Systems , pages 415–424, 2012.[6] S. Kher Chaitrali, A. 
Dabhade Yogita, K. Kadam Snehal, D. Dhamdhere Swati, and V . Desh-pande Aarti. An intelligent walking stick for the blind. International Journal of EngineeringResearch and General Science , 3(1):1057–1062, 2015.[7] R. Vel ́azquez, E. Pissaloux, P. Rodrigo, M. Carrasco, N. I. Giannoccaro, and A. Lay-Ekuakille.An outdoor navigation system for blind pedestrians using gps and tactile-foot feedback. Ap-plied Sciences , 8(4):578, 2018.[8] S. Shoval, J. Borenstein, and Y . Koren. The navbelt-a computerized travel aid for the blindbased on mobile robotics technology. IEEE Transactions on Biomedical Engineering , 45(11):1376–1386, 1998.[9] H.-C. Wang, R. K. Katzschmann, S. Teng, B. Araki, L. Giarr ́e, and D. Rus. Enabling indepen-dent navigation for visually impaired people through a wearable vision-based feedback system.In2017 IEEE international conference on robotics and automation (ICRA) , pages 6533–6540.IEEE, 2017.[10] W.-J. Chang, J.-P. Su, L.-B. Chen, M.-C. Chen, C.-H. Hsu, C.-H. Yang, C.-Y . Sie, and C.-H.Chuang. An ai edge computing based wearable assistive device for visually impaired peoplezebra-crossing walking. In 2020 IEEE International Conference on Consumer Electronics(ICCE) , pages 1–2, 2020. doi:10.1109/ICCE46568.2020.9043132.[11] J. Wilson, B. N. Walker, J. Lindsay, C. Cambias, and F. Dellaert. Swan: System for wearableaudio navigation. In 2007 11th IEEE international symposium on wearable computers , pages91–98. IEEE, 2007.[12] B. N. Walker and J. Wilson. Swan 2.0: Research and development on a new system forwearable audio navigation. In Proceedings of the WirelessRERC State of the Technology Forum2021 Atlanta, GA, USA (virtual conference) (23-24 March) , 2021.9[13] Y . Wei, X. Kou, and M. C. Lee. Smart rope and vision based guide-dog robot system for thevisually impaired self-walking in urban system. In 2013 IEEE/ASME International Confer-ence on Advanced Intelligent Mechatronics , pages 698–703, 2013. doi:10.1109/AIM.2013.6584174.[14] T.-K. Chuang, N.-C. Lin, J.-S. Chen, C.-H. Hung, Y .-W. Huang, C. Teng, H. Huang, L.-F. Yu,L. Giarr ́e, and H.-C. Wang. Deep trail-following robotic guide dog in pedestrian environmentsfor people who are blind and visually impaired-learning from virtual and real worlds. In 2018IEEE International Conference on Robotics and Automation (ICRA) , pages 5849–5855. IEEE,2018.[15] D. R. Bruno, M. H. De Assis, and F. S. Osorio. Development of a mobile robot: Roboticguide dog for aid of visual disabilities in urban environments. Proceedings - 2019 LatinAmerican Robotics Symposium, 2019 Brazilian Symposium on Robotics and 2019 Work-shop on Robotics in Education, LARS/SBR/WRE 2019 , pages 104–108, 2019. doi:10.1109/LARS-SBR-WRE48964.2019.00026.[16] L. Wang, J. Zhao, and L. Zhang. Navdog: robotic navigation guide dog via model predictivecontrol and human-robot modeling. In Proceedings of the 36th Annual ACM Symposium onApplied Computing , pages 815–818, 2021.[17] V . Ranganeni, M. Sinclair, E. Ofek, A. Miller, J. Campbell, A. Kolobov, and E. Cutrell. Ex-ploring levels of control for a navigation assistant for blind travelers. In Proceedings of the2023 ACM/IEEE International Conference on Human-Robot Interaction , HRI ’23, page 4–12,New York, NY , USA, 2023. Association for Computing Machinery. ISBN 9781450399647.doi:10.1145/3568162.3578630. URL https://doi.org/10.1145/3568162.3578630 .[18] S. Azenkot, C. Feng, and M. Cakmak. Enabling building service robots to guide blind people aparticipatory design approach. 
In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) , pages 3–10. IEEE, 2016.[19] A. Xiao, W. Tong, L. Yang, J. Zeng, Z. Li, and K. Sreenath. Robotic guide dog: Leading ahuman with leash-guided hybrid physical interaction. In 2021 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 11470–11476. IEEE, 2021.[20] A. Sivacoumare, S. M. S, S. Satheesh, T. Athul, M. V , and T. Vinopraba. Ai quadruped robotassistant for the visually impaired. In IECON 2021 – 47th Annual Conference of the IEEEIndustrial Electronics Society , pages 1–5, 2021. doi:10.1109/IECON48115.2021.9589508.[21] K. Mehrizi. Quadrupedal Robotic Guide Dog with V ocal Human-Robot Interaction. pages2–4, 2021. URL http://arxiv.org/abs/2111.03718 .[22] Y . Chen, Z. Xu, Z. Jian, G. Tang, Y . Yangli, A. Xiao, X. Wang, and B. Liang. QuadrupedGuidance Robot for the Visually Impaired: A Comfort-Based Approach. 2022. URL http://arxiv.org/abs/2203.03927 .[23] K. A. Hamed, V . R. Kamidi, W. L. Ma, A. Leonessa, and A. D. Ames. Hierarchical andSafe Motion Control for Cooperative Locomotion of Robotic Guide Dogs and Humans: AHybrid Systems Approach. IEEE Robotics and Automation Letters , 5(1):56–63, 1 2020. ISSN23773766. doi:10.1109/LRA.2019.2939719.[24] B. Shen, F. Xia, C. Li, R. Mart ́ın-Mart ́ın, L. Fan, G. Wang, C. P ́erez-D’Arpino, S. Buch,S. Srivastava, L. Tchapmi, et al. igibson 1.0: A simulation environment for interactive tasks inlarge realistic scenes. In 2021 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 7520–7527. IEEE, 2021.[25] M. Savva, A. Kadian, O. Maksymets, Y . Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V . Koltun,J. Malik, D. Parikh, and D. Batra. Habitat: A platform for embodied AI research. Proceed-ings of the IEEE International Conference on Computer Vision , 2019-Octob:9338–9346, 2019.ISSN 15505499. doi:10.1109/ICCV .2019.00943.10[26] P. Anderson, A. Chang, D. S. Chaplot, A. Dosovitskiy, S. Gupta, V . Koltun, J. Kosecka, J. Ma-lik, R. Mottaghi, M. Savva, and A. R. Zamir. On Evaluation of Embodied Navigation Agents.7 2018. URL http://arxiv.org/abs/1807.06757 .[27] Guide dogs for the blind puppy raising manual-training phase descriptions.https://www.guidedogs.com/uploads/files/Puppy-Raising-Manual/Training-Phase-Descriptions.pdf , 2022.[28] GDB class lecture: Guidework. https://s3-us-west-1.amazonaws.com/gdb-assets/Guidework.txt?mtime=20160412125546 , 2022.[29] GDB class lecture: Why does your guide dog work? https://www.guidedogs.com/explore-resources/alumni-resources/class-lecture-materials2/why-does-your-guide-dog-work-lecture , 2022.[30] M. Alshiekh, R. Bloem, R. Ehlers, B. K ̈onighofer, S. Niekum, and U. Topcu. Safe reinforce-ment learning via shielding. 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 ,pages 2669–2678, 2018.[31] Aliengo by unitree robotics. https://www.unitree.com/products/aliengo, 2020.[32] A. Meliones, C. Filios, and J. Llorente. Reliable ultrasonic obstacle recognition for outdoorblind navigation. Technologies , 10(3):54, 2022.[33] P. Theodorou, K. Tsiligkos, A. Meliones, and C. Filios. An extended usability and ux evalua-tion of a mobile application for the navigation of individuals with blindness and visual impair-ments outdoors—an evaluation framework based on training. Sensors , 22(12):4538, 2022.[34] S. Tachi, K. Tanie, K. Komoriya, and M. Abe. Electrocutaneous communication in a guidedog robot (meldog). IEEE transactions on biomedical engineering , (7):461–469, 1985.[35] A. Nanavati, X. Z. Tan, J. 
Connolly, and A. Steinfeld. Follow the robot: Modeling coupledhuman-robot dyads during navigation. In 2019 IEEE/RSJ International Conference on Intel-ligent Robots and Systems (IROS) , pages 3836–3843, 2019. doi:10.1109/IROS40897.2019.8967656.[36] A. Kulkarni, A. Wang, L. Urbina, A. Steinfeld, and B. Dias. Robotic assistance in indoornavigation for people who are blind. In 2016 11th ACM/IEEE International Conference onHuman-Robot Interaction (HRI) , pages 461–462. IEEE, 2016.[37] M. Kuribayashi, T. Ishihara, D. Sato, J. V ongkulbhisal, K. Ram, S. Kayukawa, H. Takagi,S. Morishima, and C. Asakawa. Pathfinder: Designing a map-less navigation system for blindpeople in unfamiliar buildings. In Proceedings of the 2023 CHI Conference on Human Factorsin Computing Systems , pages 1–16, 2023.[38] S. Kayukawa, D. Sato, M. Murata, T. Ishihara, A. Kosugi, H. Takagi, S. Morishima, andC. Asakawa. How users, facility managers, and bystanders perceive and accept a navigationrobot for visually impaired people in public buildings. In 2022 31st IEEE International Con-ference on Robot and Human Interactive Communication (RO-MAN) , pages 546–553. IEEE,2022.[39] H. Hwang, T. Xia, I. Keita, K. Suzuki, J. Biswas, S. I. Lee, and D. Kim. System configurationand navigation of a guide dog robot: Toward animal guide dog-level guiding work. arXivpreprint arXiv:2210.13368 , 2022.[40] J. T. Kim, W. Yu, J. Tan, G. Turk, and S. Ha. How to train your guide dog: Wayfinding and safenavigation with human-robot modeling. In Companion of the 2023 ACM/IEEE InternationalConference on Human-Robot Interaction , pages 221–225, 2023.11[41] D. DeFazio, E. Hirota, and S. Zhang. Seeing-eye quadruped navigation with force responsivelocomotion control. arXiv preprint arXiv:2309.04370 , 2023.[42] H. Tan, C. Chen, X. Luo, J. Zhang, C. Seibold, K. Yang, and R. Stiefelhagen. Flying guidedog: Walkable path discovery for the visually impaired utilizing drones and transformer-basedsemantic segmentation. In 2021 IEEE International Conference on Robotics and Biomimetics(ROBIO) . IEEE, 2021.[43] J. Guerreiro, D. Sato, S. Asakawa, H. Dong, K. M. Kitani, and C. Asakawa. Cabot: Designingand evaluating an autonomous navigation robot for blind people. ASSETS 2019 - 21st Interna-tional ACM SIGACCESS Conference on Computers and Accessibility , (October):68–82, 2019.doi:10.1145/3308561.3353771.[44] B. Knol, C. Roozendaal, L. Van den Bogaard, and J. Bouw. The suitability of dogs as guidedogs for the blind: criteria and testing procedures. Veterinary Quarterly , 10(3):198–204, 1988.[45] M. J. Tellefson. A perspective on teaching early harness travel to young blind children usingchildren’s visual companion dogs. Journal of Visual Impairment & Blindness , 106(5):306–312,2012.[46] G. E. Jan, K. Y . Chang, and I. Parberry. Optimal path planning for mobile robot navigation.IEEE/ASME Transactions on mechatronics , 13(4):451–460, 2008.[47] J. Bruce and M. Veloso. Real-time randomized path planning for robot navigation. In IEEE/RSJinternational conference on intelligent robots and systems , volume 3, pages 2383–2388. IEEE,2002.[48] C. Chen, Y . Liu, S. Kreiss, and A. Alahi. Crowd-robot interaction: Crowd-aware robot navi-gation with attention-based deep reinforcement learning. In 2019 international conference onrobotics and automation (ICRA) , pages 6015–6022. IEEE, 2019.[49] E. Wijmans, A. Kadian, A. Morcos, S. Lee, I. Essa, D. Parikh, M. Savva, and D. Batra. Dd-ppo: Learning near-perfect pointgoal navigators from 2.5 billion frames. 
In InternationalConference on Learning Representations , 2020.[50] M. Sorokin, J. Tan, C. K. Liu, and S. Ha. Learning to navigate sidewalks in outdoor environ-ments. IEEE Robotics and Automation Letters , 7(2):3906–3913, 2022.[51] K. Zhu and T. Zhang. Deep reinforcement learning based mobile robot navigation: A review.Tsinghua Science and Technology , 26(5):674–691, 2021.[52] X. Xiao, B. Liu, G. Warnell, and P. Stone. Motion planning and control for mobile robotnavigation using machine learning: a survey. Autonomous Robots , 46(5):569–597, 2022.[53] N. Jansen, B. K ̈onighofer, J. Junges, A. Serban, and R. Bloem. Safe reinforcement learningusing probabilistic shields. 2020.[54] O. Bastani and S. Li. Safe reinforcement learning via statistical model predictive shielding.2021. doi:10.15607/rss.2021.xvii.026.[55] I. ElSayed-Aly, S. Bharadwaj, C. Amato, R. Ehlers, U. Topcu, and L. Feng. Safe multi-agentreinforcement learning via shielding. Proceedings of the International Joint Conference onAutonomous Agents and Multiagent Systems, AAMAS , 1:483–491, 2021. ISSN 15582914.[56] G. Thomas, Y . Luo, and T. Ma. Safe Reinforcement Learning by Imagining the Near Future. InM. Ranzato, A. Beygelzimer, Y . Dauphin, P. S. Liang, and J. W. Vaughan, editors, Advances inNeural Information Processing Systems , volume 34, pages 13859–13869. Curran Associates,Inc., 2021.12[57] H.-L. Hsu, Q. Huang, and S. Ha. Improving safety in deep reinforcement learning usingunsupervised action planning. In 2022 International Conference on Robotics and Automation(ICRA) , pages 5567–5573. IEEE, 2022.[58] N. Yokoyama, Q. Luo, D. Batra, and S. Ha. Benchmarking augmentation methods for learningrobust navigation agents: the winning entry of the 2021 igibson challenge. arXiv preprintarXiv:2109.10493 , 2021.[59] A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Niessner, M. Savva, S. Song, A. Zeng, andY . Zhang. Matterport3d: Learning from rgb-d data in indoor environments. InternationalConference on 3D Vision (3DV) , 2017.[60] N. Yokoyama, S. Ha, and D. Batra. Success weighted by completion time: A dynamics-awareevaluation criteria for embodied navigation. In 2021 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 1562–1569. IEEE, 2021.[61] K. A. Ingraham, C. D. Remy, and E. J. Rouse. The role of user preference in the customizedcontrol of robotic exoskeletons. Science robotics , 7(64):eabj3487, 2022.[62] S. Choi, Q.-Y . Zhou, and V . Koltun. Robust reconstruction of indoor scenes. In 2015 IEEEConference on Computer Vision and Pattern Recognition (CVPR) , pages 5556–5565, 2015.doi:10.1109/CVPR.2015.7299195.138 Supplementary Material8.1 Human-Robot Modeling Data CollectionFigure 7: Data collection of three blind or visually impaired (BVI) subjects.We collected human and robot interaction data using a Vicon motion capture system with Pulsaractive marker clusters (Figure 7). The study is reviewed and approved by the institutional reviewboard (IRB). We recruited nine participants consisting of three blind or visually impaired (BVI) andsix sighted subjects. For the data collection, the participants are asked to wear a back strap to placea motion capture marker on their lower back. Then, they follow the robot, AlienGo, by using theirleft hand to hold onto a harness handle that is attached to the robot. 
For white cane users, we askedthem to hold their cane in their right hand for safety purposes, but they did not actively use the caneduring data collection.Table 4: Description of Five Robot Trajectories for Interaction Data CollectionTrajectory Description1 2.5 m forward, 90-degree in-place left turn2 1.2 m forward, 90-degree left turn3 0.75 m forward, 45-degree gradual left turn, 135-degree in-place right turn40.5 m forward, 90-degree right turn, 0.5 m forward,90-degree left turn, 0.5 m forward50.6 m forward, 90-degree in-place right turn, 0.6 m forward,180-degree left u-turn, 0.6 m forward, 90-degree right turn, 0.6 m forwardRobot Trajectories. The robot is scripted to follow pre-defined trajectories as described in Table 4.The five trajectories are based on the work of Nanavati et al. [35] with modifications to fit in themotion capture space.Experimental Protocol. The data collection process is divided into three parts. Each part is de-signed to examine different aspects of the interaction behavior. Part 1: The subjects followed therobot without vision, but relying purely on the physical interaction without any visual cues. Werepeated each trajectory three times in random order. To prevent the subject from predicting thetrajectory, they were told that there were 15 trajectories. Part 2: The subjects followed the robotwithout vision, but with prior information on the robot’s expected movement. We provided verbaldescriptions of the expected cue just before the robot executed each cue. Part 3: The sighted sub-jects followed the robot with their full vision. This is extra data collected only from sighted subjectsto respond to the robot’s movement with additional visual perception.148.2 Pseudocode of Action ShieldingThis section provides a more detailed algorithm of our action shielding mechanism. We computethe convex hull of the collision primitives first, compute the lidar thresholds from this hull, and thencheck whether the action is safe or not. Please review this material with Section 5.Algorithm 1 Convex-hull Action Shielding.Input current position and orientation of robot and human xRt,xHt, a list of possible actions A, avector of action probability p, action suppression scale β, lidar reading l.Output A modified action probability vector ˆp1:procedure ACTION SHIELDING (Ttr,Tth,A,s,l)2: fori←1to len( A)do3: a=A[i]4: ˆp[i] =p[i]5: xRt+1,xHt+1←EstimateNext( xRt,xHt,a) ▷Estimate Via Interaction Model6: S ← ConvexHull( xRt,xHt,xRt+1,xHt+1)7: ̄l←ComputeLidarThresholds ( S)8: ifany of reading lis less than the threshold ̄lthen9: ˆp[i]∗=β10: end if11: end for12: ˆp←normalize( ˆp)13: return the modified action probability ˆp14:end procedure8.3 Real World Navigation ExperimentsUser Study Overall, the users were positive about our suggested system: “I think the technologyis going to be great. Once you tell it to go straight, it’s in a straight path, and that’s very nice. ...I like the way it was pulling– it was very consistent, a nice solid pull forward. ” The users enjoyedthe developed system’s controllability, allowing the users to navigate the hallway effectively: “I likebeing able to do the commands. ... So that was good, a little more control over the direction and theway the dog (robot) was walking. ” The users also felt much more comfortable with the second trialthan the first one because they became familiar with the system: “I thought the second run was like50 % better than the first one. 
” However, one participant suggested reducing the noise from the gait: “(Q: What did you like least about this experience?) Well, the noise from the gait.”
Teamed Navigation Routes We include here the map information for our real-world experiments (Figure 8 and Figure 9).
Figure 8: Illustration of three routes in our experiments.
Figure 9: Illustration of two routes in our experiments with the visually impaired subjects. |
Pwsm7d0iWJD | Learning Lyapunov-Stable Polynomial DynamicalSystems Through ImitationAmin AbyanehDepartment of Electrical and Computer EngineeringMcGill Universityamin.abyaneh@mail.mcgill.caHsiu-Chin LinSchool of Computer ScienceMcGill Universityhsiu-chin.lin@cs.mcgill.caAbstract: Imitation learning is a paradigm to address complex motion planningproblems by learning a policy to imitate an expert’s behavior. However, relyingsolely on the expert’s data might lead to unsafe actions when the robot deviates fromthe demonstrated trajectories. Stability guarantees have previously been providedutilizing nonlinear dynamical systems, acting as high-level motion planners, inconjunction with the Lyapunov stability theorem. Yet, these methods are prone toinaccurate policies, high computational cost, sample inefficiency, or quasi stabilitywhen replicating complex and highly nonlinear trajectories. To mitigate thisproblem, we present an approach for learning a globally stable nonlinear dynamicalsystem as a motion planning policy. We model the nonlinear dynamical systemas a parametric polynomial and learn the polynomial’s coefficients jointly with aLyapunov candidate. To showcase its success, we compare our method against thestate of the art in simulation and conduct real-world experiments with the KinovaGen3 Lite manipulator arm. Our experiments demonstrate the sample efficiencyand reproduction accuracy of our method for various expert trajectories, whileremaining stable in the face of perturbations.Keywords: Imitation learning, Safe learning, Motion planning, Dynamical system,Semidefinite programming, Robotic manipulation1 IntroductionMotion planning for robotic systems is generally regarded as a decomposition of a desired motioninto a series of configurations that potentially satisfy a set of constraints [ 1]. Imitation learning tacklesmotion planning by imitating an expert’s behavior to learn a planning policy [ 2]. To this day, only ahandful of imitation learning methods provide mathematical stability guarantees for their resultantpolicy. Stability is a critical factor when deploying imitation policies in environments exposedto external perturbations. Therefore, unpredictable environments require a policy that reasonablyresponds in unexplored regions of state space, away from original demonstrations.Researchers have turned to autonomous dynamical systems (DS) as a means to learn stable motionplanning policies [ 3,4,5]. Essentially, a parametric time-invariant DS is optimized to provide anaction (velocity) given the current state (position), while adhering to constraints that attain globalLyapunov stability. This approach leads to safety and predictability of planned trajectories, evenin areas of state space without expert demonstrations. However, previous work is mostly confinedto basic Lyapunov functions that adversely impact the reproduction accuracy, and require sufficientlylarge set of demonstrations. Others have proposed approaches focused on diffeomorphism andRiemannian geometry [ 6,7,8] and contraction theory [ 9], that are prone to quasi-stability, increasedcomputational time, or restricted hypothesis class.We propose a method to simultaneously learn a polynomial dynamical system (PLYDS) and a poly-nomial Lyapunov candidate to generate globally stable imitation policies. 
Polynomials, depending on7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Expert Demonstration DatasetParameterized polynomialwith adjustable complexityGlobally Stable PolicyPosition or velocitycontroller , e.g., PIDJoint Contr ollerManipulator ArmStateActionFeedback CommandPolicy FormulationPolynomial representationensuring L yapunov conditionsLyapunov CandidateOptimizationUsing sum-of-squares andsemidefinite programmingtechniquesFigure 1: Overview of the stable policy learning framework. Policy learning (left) optimizes a stablepolynomial DS from expert’s demonstration data. This policy is then deployed (right) to plan globallystable and predictable trajectories in the entire state space.the degree, possess an expressive power to approximate highly nonlinear systems, and polynomialregression can empirically compete with neural networks on challenging tasks [ 10,11]. Unlike mostneural policies, global stability can be naturally expressed with polynomials. Polynomials also enableus to utilize efficient semi-definite programming [ 12,13] and sum-of-squares (SOS) optimizationtechniques [14, 15], and offer adaptability to expert’s demonstrations.Our main contribution is twofold. We propose a polynomial representation of the planning policyand Lyapunov candidate function, coupled with concrete mathematical stability certification forprecise and safe replication of the expert’s demonstrations, as depicted in Figure 1. Then, wedefine a regularized semi-definite optimization problem to jointly learn the DS and the Lyapunovcandidate with higher flexibility and precision. We compare the reproduction accuracy of PLYDS withalternatives in the literature and evaluate the performance in both simulation and real robotic systems.2 Background and NotationConsider a system operating in a state-space X ⊂Rn, e.g., a robot in its task- or configuration-space.The system can execute actions in A ⊂Rn, for instance, velocity or torque commands, leading tostate evolution. We denote the state variable with x≜[x1x2. . . x n]T∈ X, and consider the actionvariable to be the state’s derivative ̇x∈ A. Within this space, our goal is to learn an imitation policythrough a dataset of experts’ state-action pairs, referred to as trajectories.LetNd∈Nbe the number of trajectories demonstrated by the expert. Each trajectory containsNs∈Nstate-action pairs. The dataset of expert trajectories stacks all state-action pairs, defined as:D≜nxd(s), ̇xd(s)d∈ {1, . . . , N d}, s∈ {1, . . . , N s}o, (1)where (xd(s), ̇xd(s))is the dataset entry corresponding to the s-th sample of the d-th demonstratedtrajectory. The dataset Dholds Nt=NdNssamples. We assume that the trajectories contain thesame sample size ( Ns), share a common target ( x∗∈ X), and have zero velocity at the target, i.e.,xd(Ns) =x∗and ̇xd(Ns) =0for all trajectories d∈ {1, . . . , N d}.Definition 2.1. (Dynamical Systems). The mapping between the state and the action in each samplecan be modelled with a time-invariant autonomous dynamical system (DS), denoted by: ̇x=f(x) +ε=ˆf(x), f, ˆf:X − → A . (2)In Equation (2), fis an ordinary differential equation for the true underlying DS. The term ε∈Rncaptures measurement and recording noise of expert’s demonstrations. We assume that εis embeddedin the estimated DS, ˆf, and eliminate the need for modeling its distribution. Following [ 3], we aim atlearning a noise-free estimation of f(x), denoted by ˆf(x). 
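To make this data layout concrete, the demonstration set D in Equation (1) can be pictured as two stacked arrays of states and actions. The short sketch below is purely illustrative; the array names, shapes, and random placeholders are our own convention and are not part of the recorded datasets.

```python
import numpy as np

# N_d demonstrations, N_s samples each, state dimension n (e.g., n = 2 for planar motions).
N_d, N_s, n = 7, 1000, 2

# Placeholders standing in for the recorded positions x_d(s) and velocities xdot_d(s).
positions = np.random.randn(N_d, N_s, n)
velocities = np.random.randn(N_d, N_S := N_s, n) if False else np.random.randn(N_d, N_s, n)

# Assumptions stated above: every trajectory ends at the shared target x*
# (taken here to be the origin) with zero velocity.
positions[:, -1, :] = 0.0
velocities[:, -1, :] = 0.0

# Flatten into N_t = N_d * N_s state-action pairs (x, xdot) for learning.
X = positions.reshape(-1, n)       # states, shape (N_t, n)
X_dot = velocities.reshape(-1, n)  # actions, shape (N_t, n)
```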
One can view ˆf(x)in Equation (2) as apolicy that maps states to actions for reproducing the demonstrated trajectories in the state-space. Forinstance, when the robot is located in x0∈ X, the policy yields an action ̇x0=ˆf(x0), which can bepassed to the robot’s velocity controller.2The estimated DS in Equation (2), ˆf(x), is globally asymptotically stable (GAS) around an equilib-rium point xe, if and only if for every initial state, x→xeas the system evolves and time goes toinfinity [ 16]. A popular tool to study the GAS property of a DS is the Lyapunov stability theorem.According to this theorem, a DS exhibits GAS if there exists a positive-definite function v:X − →R,known as Lyapunov potential function (LPF), such that ̇v(x)<0for all x̸=xeand ̇v(xe) = 0 . Toensure GAS for ˆf(x), we simultaneously learn the policy, ˆf, and the LPF, v.3 Related workExtensive research is conducted on imitation learning and its applications in robotic motion planningfor a variety of tasks. Existing efforts can be divided into the following predominant research tracks.Dynamical systems for motion planning. Dynamical systems have proved to effectively counterautonomous motion planning problems by proposing a time-invariant policy [ 17]. Traditional methodsof encoding trajectories are based on spline decomposition [ 18], Gaussian process regression [ 19],or unstable dynamical systems [ 20,21]. They either lack robustness because of time-variance orfail to provide GAS. SEDS [ 3] is the first attempt to learn stable planning policies. However, itsperformance declines when applied to highly nonlinear expert trajectories. Most notably, it suffersfrom trajectories where the distance to the target is not monotonically decreasing. The intrinsiclimitation of SEDS comes from the choice of a simple Lyapunov function. Follow-up researchintroduces more complex Lyapunov candidates to stably mimic nonlinear trajectories [ 4,22], but arestill restricted in representing the Lyapunov candidate. Others have tried to tackle SEDS limitationsthrough diffeomorphic transformations and Riemannian geometry [ 6,8,7] that yield quasi-stableplanners for some trajectories, and contraction theory [ 9] that restricts the class of metrics to makethe optimization tractable. Lastly, most improvements to the original SEDS still use the Gaussianmixture model formulation, that is vulnerable in presence of limited expert demonstrations.Imitation learning. Recent imitation learning developments can be applied to motion planning taskswith minimal modifications, since motion planning can be achieved by finding a (not necessarilystable) policy in the robot’s task-space from the expert’s behavior. For instance, GAIL [ 23] introducesan adversarial imitation learning approach that directly optimizes the expert’s policy, but requiresa large set of expert’s data (low sample efficiency) and extensive training iterations. The growinginterest in neural policies has also led to the development of end-to-end autonomous driving [ 24] andbehavioral cloning [ 25,26,27] methods. Nevertheless, they generally lack GAS, and it is unclearwhether the robot can recover from perturbations. The same drawbacks exist with apprenticeshiplearning approaches, such as Abbeel and Ng [28] and inverse reinforcement learning, such as Ziebartet al. [29], and the computational demand is even higher for the latter.Stability in neural dynamical systems. 
Methods such as [ 30,31] represent the dynamics with aNeural Network, and propose the joint training of dynamics and a Lyapunov function to guarantee thestability. Though theoretically sound, these methods have only been applied to rather simple settingsand require large demonstration sets. Neural Lyapunov methods [ 32,33,34] promise a data drivenand potentially stable approach to control and model nonlinear dynamics, but lack global stability.Methods such as [35] are also not stable-by-design and the dynamical system lacks autonomy.4 MethodologyWe instantiate the policy and the corresponding LPF candidate, ˆfandv, with two polynomials inSection 4.1 and Section 4.2, respectively. This allows us to accurately imitate an expert’s behavior,while providing formal GAS guarantees. Subsequently, we formulate a tractable optimization problemfor jointly learning the policy and the LPF in Section 4.3.4.1 Dynamical system policy formulationWe need to approximate the unknown underlying DS in Equation (2) to discover the mappingbetween states and actions from expert’s behavior. To this end, we opt to model the policy with a3parametric polynomial. The representative power of polynomials was originally established throughthe Weierstrass approximation theorem, stating that every continuous function defined on a closedinterval can be approximated with desired precision by a polynomial. This idea is fortified by recentstudies, such as [10, 11], that compare polynomials to neural networks on a variety of tasks.Definition 4.1. (Polynomial Dynamical Systems). A Polynomial Dynamical System (PLYDS) is apolynomial approximation of the policy in Equation (2), and is expressed as, ̇x=ˆf(x;P)≜bTx,αP1bx,αbTx,αP2bx,α. . .bTx,αPnbx,αT, (3)where bx,α≜[1 (xT)◦1(xT)◦2. . .(xT)◦α]Tis the polynomial basis of degree α∈N, and(xT)◦kis the element-wise k-th power of xT. Every row iofˆfis a polynomial of degree 2α,ˆfi(x;Pi) =bTx,αPibx,α, where Pi∈Sαn+1andSk≜{S∈Rk×k|ST=S}. The matrixP∈Sαn2+nencapsulates the block-diagonal form of all Pimatrices.Below, we present an example to show how PLYDS, as defined in Definition 4.1, captures nonlineartime-invariant policies. One can further complicate the policy by increasing α, which in turn producesa larger basis vector and a more flexible polynomial.Example 4.1.1. A second-order polynomial representation of a one-dimensional DS is: ̇x=ˆf(x;P) =bTx,αP1bx,α= [1x]p00p01p01p111x=p00x2+ (p01+p10)x+p00,where α= 1,bx,α= [1x]T. Note how Pcan be symmetric without loss of generality.4.2 Global stability guarantees for polynomial dynamical systemsAs explained in Section 4.1, a polynomial policy allows for accurately imitating the expert’s demon-strations. Yet, there is no formal GAS guarantee that the robot will ultimately converge to the target inthe face of perturbations, deflecting it from the expert’s trajectories. Owing to the Lyapunov stabilitytheorem, finding an LPF that meets the criteria in Section 2 ensures the desired stability [36].The major challenge lies in learning an LPF, v, which is a positive definite function with negativegradient. We tackle this by confining to the class of polynomial LPF candidates.Definition 4.2. (Polynomial Lyapunov Candidate). A multidimensional polynomial LPF is given by,v(x;Q)≜bTx,βQ1bx,βbTx,βQ2bx,β. . .bTx,βQnbx,βT, v:X →Rn, (4)where β∈Nis the polynomial basis degree. Each row is defined by vi(x;Qi) =bTx,βQibx,β, vi:X − →R, and can be viewed as a scalar Lyapunov function. 
The parameters matrix, Q∈Sβn2+n, isa block-diagonal of all Qi∈Sβn+1matrices.Definition 4.2 introduces a non-conventional LPF candidate. Rather than considering a single LPF,we designate a distinct polynomial LPF for each dimension of the state space and stack them intov(x;Q). This characterization, known as a vector Lyapunov function [ 37], is less restrictive andenables the policy and LPF to be learned moreindependently for each dimension of the state space.We highlight that the GAS of the policy in each dimension, ˆfi(x;Pi), implies the GAS of the entirepolicy, ˆf(x;P). Proposition 4.3 establishes a link between the policy stability in each row and theglobal stability of the multidimensional policy.Proposition 4.3. Assuming each pair (ˆfi(x;Pi), vi(x;Qi))individually satisfies the GAS condi-tions. Then, the sum ˆv=Pni=1vi(x;Qi)yields a valid standard Lyapunov function for ˆf(x;P),proving that the policy satisfies GAS conditions. The proof is given in Appendix A.1.The formulation of the policy and the LPF as multidimensional polynomials empowers us to leveragetools from sum-of-squares (SOS) [ 15,38]. The SOS approach boils the Lyapunov GAS conditionsdown to verifying positive-definiteness of a set of specified matrices. The next two lemmas illustratethe SOS formulation of Lyapunov stability conditions.4Lemma 4.4. The first Lyapunov stability criterion, vi(x;Qi)⪰0, is satisfied for each i∈{1, . . . , n }ifQi⪰0andQi∈Sβn+1. The proof is outlined in Appendix A.2.Lemma 4.5. The second Lyapunov criterion,∂∂tvi(x;Qi)≺0, is fulfilled for each i∈ {1, . . . , n }if there exists a symmetric matrix Gi≺0andGi∈S(α+β)n+1such that:∂∂tvi(x;Qi) =∂vi(x;Qi)∂x∂x∂t=∂vi(x;Qi)∂xˆf(x;P) =bTx,α+βGibx,α+β, (5)where α+βis the basis degree. The matrix Giis acquired by polynomial coefficient matching, anddepends on PandQi. We summarize this dependence with the function G(P,Q) =G, where Gsymbolizes the block-diagonal form of all Gimatrices. The proof is outlined in Appendix A.3.Finally, with the necessary tools at our disposal, we can establish the connection between the globalstability of the policy and finding SOS polynomials in Theorem 4.6. This theorem serves as thefundamental basis for the subsequent policy optimization process.Theorem 4.6. A polynomial DS policy, ˆf(x;P), is GAS if the following conditions are satisfied:(a)Q⪰0,(b)G≺0,(c)G(P,Q) =G. (6)The proof is straightforward and is sketched in Appendix A.4.4.3 Joint optimization problemAt this stage, we have established polynomial representations for both the policy and the LPF, alongwith a firm connection that confirms global stability. Now, we develop an objective function usingthe Mean-Squared Error (MSE) cost with the Elastic Net Regularization [ 39]. The MSE is calculatedbetween the policy output and the expert’s actions across demonstrated trajectories, and it solelydepends on the policy parameters. Essentially, this problem entails regularized polynomial regressionto minimize the imitation MSE to expert’s demonstrations, subject to the existence of an LPF thatsatisfies the Lyapunov conditions. The optimization problem is framed as:minQ,G,PJ(P) =12NtNdXd=1NsXs=1(ˆf(xd(s);P)− ̇xd(s))2+λ1∥P∥1+λ2∥P∥2F,s.t. (a)Q⪰0 (b)G≺0 (c)G(P,Q) =G(d)Q=QT,G=GT,P=PT,(7)where∥.∥1and∥.∥2Fdenote the first and Frobenius norms, and λ1, λ2∈R+represent the regulariza-tion coefficients. Equation 7 is a semi-definite programming with nonlinear cost function [ 40,38], andcan be solved using standard semi-definite solvers [ 41,42]. 
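To ground the notation above, the following is a minimal NumPy sketch of the polynomial basis b_{x,α}, the policy and Lyapunov evaluations from Definitions 4.1 and 4.2, and the regularized imitation loss J(P) of Equation (7). The SDP side is deliberately omitted: the constraints Q ⪰ 0 and G ≺ 0, together with the coefficient-matching map G(P, Q), are what the semi-definite solver enforces, and they are not reproduced here. All function names are illustrative and do not refer to the released codebase.

```python
import numpy as np

def poly_basis(x, degree):
    """b_{x,degree} = [1, x^T, (x^T)^2, ..., (x^T)^degree] with element-wise powers."""
    return np.concatenate([[1.0]] + [x ** k for k in range(1, degree + 1)])

def policy_eval(x, P_blocks, alpha):
    """xdot_i = b^T P_i b for each state dimension i (Definition 4.1)."""
    b = poly_basis(x, alpha)
    return np.array([b @ P_i @ b for P_i in P_blocks])

def lyapunov_eval(x, Q_blocks, beta):
    """v_i(x; Q_i) = b^T Q_i b for each state dimension i (Definition 4.2)."""
    b = poly_basis(x, beta)
    return np.array([b @ Q_i @ b for Q_i in Q_blocks])

def imitation_loss(P_blocks, X, X_dot, alpha, lam1=1e-3, lam2=1e-3):
    """J(P): mean-squared error to the expert plus elastic-net regularization."""
    preds = np.stack([policy_eval(x, P_blocks, alpha) for x in X])
    mse = 0.5 * np.mean(np.sum((preds - X_dot) ** 2, axis=1))
    reg = sum(lam1 * np.abs(P).sum() + lam2 * np.sum(P ** 2) for P in P_blocks)
    return mse + reg

# Example shapes for a planar motion (n = 2) with alpha = 3, beta = 1.
n, alpha, beta = 2, 3, 1
P_blocks = [np.zeros((1 + n * alpha, 1 + n * alpha)) for _ in range(n)]
Q_blocks = [np.eye(1 + n * beta) for _ in range(n)]  # any PSD Q satisfies Lemma 4.4

X = np.zeros((10, n))       # stand-in for the stacked expert states
X_dot = np.zeros((10, n))   # stand-in for the stacked expert velocities
print(imitation_loss(P_blocks, X, X_dot, alpha))  # 0.0 for this all-zero toy setup
```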
Semi-definite programming facilitatesoptimization over the convex cone of symmetric, positive semi-definite matrices or its affine subsets.Note that (c)can cause the optimization to become non-convex. To alleviate this, we employ SDPrelaxations [ 12], iterative methods based on an initial guess of the Qmatrix, and ultimately sequentialquadratic programming (SQP) [43].Notice that the negative-definite constraints can be transformed to a semi-definite constraints by aconstant shift. Furthermore, the Lyapunov conditions restrict the gradient to nonzero values awayfrom the origin. The restriction ensures that the LPF has only one global minimum at the target.5 ExperimentsWe employ two motion planning datasets for experiments. Our primary data comes from the widelyrecognized LASA Handwriting Motion Dataset [ 44], which comprises data recorded from handwrittentrajectories. The second dataset contains expert demonstrations collected through teleoperating arobotic arm on realistic manipulation tasks. Details about both datasets can be found in Appendix B.1.5.1 EvaluationFor evaluation purposes, we apply PLYDS and the baselines to the dataset (Figure 2a) and evaluatethe performance of policy rollouts in PyBullet simulation (Figure 2b) before deploying safe policies5(a) (b) (c)Figure 2: Overview of the evaluation sequence: (a) learning from demonstrated data, (b) numericalevaluation in simulation and (c) deployed in real-world Gen3 Lite manipulator.(a) (b)Figure 3: Comparison of (a) mean and standard deviation of reproduction MSE and (b) computationtime to designated imitation learning methods. PLYDS performs reasonably well in terms of accuracyand is even more promising in terms of computational cost.onto a manipulator (Figure 2c). In all experiments, we randomly split the demonstrated trajectories inthe dataset into train and test sets. The policy learning stage, introduced in Equation (7), is carried outon the training data. The learned policy is subsequently evaluated by calculating the MSE betweenthe policy predictions and the ground truth in the test data,12NtestdNsPNtestdd=1PNss=1(ˆf(xd(s);P)− ̇xd(s))2. Recall that the policy output, ̇x, is the velocity passed to the robot’s low-level controller. Werepeat this procedure over 20 different random seeds, and report the average and standard deviation.We compare the accuracy of our approach to existing baselines. Primarily, we compare againstStable Estimator of Dynamical Systems (SEDS) [3], Linear Parameter Varying Dynamical Systems(LPV-DS) [ 22], and Stable Dynamical System learning using Euclideanizing Flows (SDS-EF) asmethods that ensure GAS. We also compare our method to Behavioral Cloning (BC) [ 26], andGenerative Adversarial Imitation Learning (GAIL) [ 23] to highlight the importance of global stability.Note that among these, BC and GAIL do not provide mathematical stability guarantees, but the resultscould provide further comparison ground for the accuracy and computation time. The implementationdetails, hyperparameters and architecture are discussed in Appendix B.2 and Appendix B.3.5.2 Handwriting datasetWe compare the learned policies of PLYDS to the baselines on eight designated motions. The outcomeof these experiments is reported in Figure 3. Despite stability guarantees, the overall accuracy isbetter among stable imitation learning methods compared to unstable neural approaches.To analyze GAS, we visualize the learned policies of all methods as streamlines. 
Figure 4 illustratesthe policy rollouts for N-Shaped motion of the handwriting dataset. Each sub-figure represents atrained policy illustrated with gray streamlines. It is evident that SEDS and PLYDS maintain GAS,6SDS-EF GAILX1SEDS PL YDSX2Figure 4: Policy rollout for N-Shaped motion of the handwriting dataset. Each figure represents atrained policy (gray) and rollouts (red) learned from demonstrations (blue). Note the stability issueswith GAIL and SDS-EF, where some streamlines fail to reliably converge to the target.SEDS SDS-EF LPV -DSX1X2PL YDSFigure 5: Policy rollout for Sine-Shaped motion (blue) of the handwriting dataset, with access to onlyoneexpert demonstration. Each figure represents a trained policy (gray) and one rollout (red) learnedfrom one demonstration (blue). Methods requiring large datasets for clustering, such as SEDS andLPV-DS, exhibit inaccurate and unsteady performance.while GAIL and SDS-EF fail to demonstrate converging trajectories for the entire state space. Thesame pattern persists for other motions as depicted in Appendix C.1.Finally, we examine the sample efficiency of our method by reducing the input data to onedemon-strated trajectory. From Figure 5, we can see that PLYDS learns a stable policy with such limitedtraining samples, while the baselines generate trajectories which diverges from expert data.So far in this section, the policy and the LPF polynomial degrees were set to α= 6andβ= 2. Tounderstand the way in which the complexity of polynomials affects the overall performance, werepeated the same experiments with degrees of α=4, 6, and 8, and present the result in Appendix C.2.We observe that a higher complexity leads to improved precision, if not halted by overfitting orstability sacrifice. Moreover, we study different LPF complexities in Appendix C.3, evaluate theperformance of PLYDS with noisy demonstrations in Appendix C.4, and further investigate thecomputational times in Appendix C.5.5.3 Manipulation tasksTo conduct real-world trials, we collect a second set of expert demonstrations through teleoperatingKinova Gen3 Lite, a manipulator arm with six degrees of freedom. This new dataset holds threedistinct motions: (a) root-parabola, (b) standard pick and place, and (c) prolonged-sine, whichrepresent exemplary nonlinear trajectories (Figure 6). Additional details are available in Appendix B.The performance of all methods is summarized in Table 1, where PLYDS often outperforms otherbaselines. Next, the learned policy of PLYDS is transferred to the physical arm (Figure 6) andsuccessfully imitates the introduced manipulation tasks. We also start the robot at regions that arefurther away from the demonstrations and introduce perturbations by randomly pushing the robot toreveal the inherent GAS of PLYDS. 
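Such perturbation tests can also be mimicked in simulation by forward-integrating the learned DS and injecting a random push partway through the rollout. The sketch below is a simple Euler integration with a hypothetical step size and push magnitude, not the controller or the values used on the physical arm.

```python
import numpy as np

def rollout_with_push(policy, x0, dt=0.01, steps=3000, push_at=1000, push_scale=5.0, seed=0):
    """Euler-integrate xdot = policy(x) from x0, apply one random perturbation,
    and report the final distance to the target (assumed at the origin)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for t in range(steps):
        if t == push_at:
            x = x + push_scale * rng.standard_normal(x.shape)  # simulated external push
        x = x + dt * np.asarray(policy(x))
        trajectory.append(x.copy())
    return np.stack(trajectory), np.linalg.norm(x)

# Usage with any globally stable policy, e.g. a linear one as a sanity check:
traj, final_dist = rollout_with_push(lambda x: -x, x0=[-30.0, 25.0])
print(f"distance to target after recovery: {final_dist:.4f}")
```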
As expected, PLYDS manages to successfully recover to the goal.
Figure 6: Manipulation tasks: (a) root-parabola, (b) standard pick and place, and (c) prolonged-sine.

Expert Motion        | Prolonged Sine | Root Parabola | Pick-and-Place | Computational Time
SEDS [3]             | 0.234±0.015    | 0.152±0.023   | 0.094±0.012    | 277.02±13.60
BC [26]              | 1.650±0.133    | 0.931±0.078   | 0.725±0.133    | 38.93±9.11
GAIL [23]            | 2.322±0.098    | 1.322±0.094   | 0.663±0.098    | 143.15±8.68
SDS-EF [8]           | 0.234±0.015    | 0.152±0.023   | 0.094±0.012    | 715.62±18.79
LPV-DS + P-QLF [22]  | 0.234±0.015    | 0.152±0.023   | 0.094±0.012    | 334.55±25.74
PLYDS (ours)         | 0.111±0.007    | 0.176±0.015   | 0.021±0.003    | 21.37±1.52
Table 1: Policy rollout reproduction MSE and computational time in PyBullet.

6 Conclusion and Limitations
We introduced an approach that aims to learn globally stable nonlinear policies represented by polynomial dynamical systems. We employ the learned policies for motion planning based on imitating expert demonstrations. Our approach jointly learns a polynomial policy along with a parametric Lyapunov candidate that verifies global asymptotic stability by design. The resulting DS is utilized as a motion planning policy, guiding robots to stably imitate the expert's behavior. A comprehensive experimental evaluation is presented in real-world and simulation settings, where the method is compared against prominent imitation learning baselines.
Limitations. A limitation of SOS is that the set of non-negative polynomials is larger than the set of polynomials expressible as SOS [45]. Though rare in motion planning tasks, this implies that finding a Lyapunov candidate could be difficult, especially with a simultaneous search for a suitable dynamical system. The Lasserre hierarchy and SOS extensions [46] can search in a broader class of LPF candidates and tackle this issue. Another limitation occurs when finding highly complex policies that lead to a violation of stability guarantees. This often happens when the regularization coefficients or the optimization tolerance are not set properly. We discuss this trade-off between stability and accuracy in Appendix C.2. Further, the computational complexity of PLYDS remains feasible with a reasonable choice of polynomial degrees. Higher degrees are computationally demanding, but are often unnecessary in normal motion planning tasks.
Future work. Future work includes incorporating more elaborate safety criteria, such as control barrier functions [47] or real-time obstacle avoidance, into our learning objectives. In addition, applications of our method in SE(3) planning, or other higher-dimensional spaces, such as the configuration space of manipulator robots, may be further investigated. Vector Lyapunov functions and the adaptable complexity of polynomials can pave the way for such applications, as they assuage major computational challenges.
7 Video, Codebase, and Reproducibility
The codebase, video supplements, and other materials related to this project are available on our Git repository1. Reproducing the experiments is as straightforward as installing the dependent software packages and running the Unix commands in the README files.
1 github.com/aminabyaneh/stable-imitation-policy
Acknowledgments
This work is sponsored by an NSERC Discovery Grant. We also appreciate the thoughtful reviewers' comments, which helped us enhance the paper, particularly the experiments.
References
[1] J.-C. Latombe. Robot motion planning, volume 124. Springer Science & Business Media, 2012.
[2] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1–35, 2017.
[3] S. M. Khansari-Zadeh and A. Billard. 
Learning stable nonlinear dynamical systems withgaussian mixture models. IEEE Transactions onRobotics, 27(5):943–957, 2011.[4]S. M. Khansari-Zadeh and A. Billard. Learning control Lyapunov function to ensure stabilityof dynamical system-based robot reaching motions. Robotics andAutonomous Systems , 62(6):752–765, 2014.[5]N. Figueroa and A. Billard. A physically-consistent bayesian non-parametric mixture modelfor dynamical system learning. In A. Billard, A. Dragan, J. Peters, and J. Morimoto, editors,Proceedings ofThe2ndConference onRobot Learning , volume 87 of Proceedings ofMachineLearning Research , pages 927–946. PMLR, Oct 2018. URL https://proceedings.mlr.press/v87/figueroa18a.html .[6]K. Neumann and J. J. Steil. Learning robot motions with stable dynamical systems underdiffeomorphic transformations. Robotics andAutonomous Systems, 70:1–15, 2015.[7]N. D. Ratliff, J. Issac, D. Kappler, S. Birchfield, and D. Fox. Riemannian motion policies. arXivpreprint arXiv:1801.02854, 2018.[8]M. A. Rana, A. Li, D. Fox, B. Boots, F. Ramos, and N. Ratliff. Euclideanizing flows: Dif-feomorphic reduction for learning stable dynamical systems. In Learning forDynamics andControl, pages 630–639. PMLR, 2020.[9]H. Ravichandar, I. Salehi, and A. Dani. Learning partially contracting dynamical systems fromdemonstrations. In Conference onRobot Learning, pages 369–378. PMLR, 2017.[10] X. Cheng, B. Khomtchouk, N. Matloff, and P. Mohanty. Polynomial regression as an alternativeto neural nets. arXiv preprint arXiv:1806.06850, 2018.[11] P. Morala, J. A. Cifuentes, R. E. Lillo, and I. Ucar. Towards a mathematical framework toinform neural network modelling via polynomial regression. Neural Networks , 142:57–72,2021.[12] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM review, 38(1):49–95, 1996.[13] H. Wolkowicz, R. Saigal, and L. Vandenberghe. Handbook ofsemidefinite programming:theory, algorithms, andapplications, volume 27. Springer Science & Business Media, 2012.[14] M. Laurent. Sums of squares, moment matrices and optimization over polynomials. In Emergingapplications ofalgebraic geometry, pages 157–270. Springer, 2009.[15] P. A. Parrilo. Structured semidefinite programs and semialgebraic geometry methods inrobustness andoptimization. California Institute of Technology, 2000.[16] R. L. Devaney. Anintroduction tochaotic dynamical systems. CRC press, 2021.[17] A. Billard, S. Calinon, R. Dillmann, and S. Schaal. Survey: Robot programming by demonstra-tion. Technical report, Springrer, 2008.9[18] J. Aleotti and S. Caselli. Robust trajectory learning and approximation for robot programmingby demonstration. Robotics andAutonomous Systems, 54(5):409–413, 2006.[19] M. Muhlig, M. Gienger, S. Hellbach, J. J. Steil, and C. Goerick. Task-level imitation learningusing variance-based movement optimization. In 2009 IEEE International Conference onRobotics andAutomation, pages 1177–1184. IEEE, 2009.[20] M. Hersch, F. Guenter, S. Calinon, and A. Billard. Dynamical system modulation for robotlearning via kinesthetic demonstrations. IEEE Transactions onRobotics , 24(6):1463–1467,2008.[21] S. Schaal. Scalable locally weighted statistical techniques for real time robot learning. AppliedIntelligence-Special issue onScalable Robotic Applications ofNeural Networks , 17:49–60,2002.[22] N. B. Figueroa Fernandez and A. Billard. A physically-consistent bayesian non-parametricmixture model for dynamical system learning. Technical report, EPFL, 2018.[23] J. Ho and S. Ermon. Generative adversarial imitation learning. 
Advances inneural informationprocessing systems, 29, 2016.[24] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel,M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXiv preprintarXiv:1604.07316, 2016.[25] F. Torabi, G. Warnell, and P. Stone. Behavioral cloning from observation. In Proceedings ofthe27th International Joint Conference onArtificial Intelligence, pages 4950–4957, 2018.[26] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In Advances inneural information processing systems, volume 1, pages 305–313, 1988.[27] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee,I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Conference onRobot Learning ,pages 158–168. PMLR, 2022.[28] P. Abbeel and A. Y . Ng. Apprenticeship learning via inverse reinforcement learning. InProceedings ofthetwenty-first international conference onMachine learning, page 1, 2004.[29] B. D. Ziebart, A. L. Maas, J. A. Bagnell, A. K. Dey, et al. Maximum entropy inverse reinforce-ment learning. In Aaai, volume 8, pages 1433–1438. Chicago, IL, USA, 2008.[30] J. Z. Kolter and G. Manek. Learning stable deep dynamics models. Advances inneuralinformation processing systems, 32, 2019.[31] N. Lawrence, P. Loewen, M. Forbes, J. Backstrom, and B. Gopaluni. Almost surely stable deepdynamics. Advances inNeural Information Processing Systems, 33:18942–18953, 2020.[32] S. M. Richards, F. Berkenkamp, and A. Krause. The Lyapunov neural network: Adaptivestability certification for safe learning of dynamical systems. In Conference onRobot Learning ,pages 466–476. PMLR, 2018.[33] H. Dai, B. Landry, L. Yang, M. Pavone, and R. Tedrake. Lyapunov-stable neural-network control.InProceedings ofRobotics: Science andSystems, July . doi:10.15607/RSS.2021.XVII.063.[34] A. Coulombe and H.-C. Lin. Generating stable and collision-free policies through Lyapunovfunction learning. International Conference onRobotics andAutomation , pages 3037–3043,2023.[35] Y .-C. Chang, N. Roohi, and S. Gao. Neural Lyapunov control. Advances inneural informationprocessing systems, 32, 2019.10[36] A. Papachristodoulou and S. Prajna. On the construction of Lyapunov functions using thesum of squares decomposition. In Proceedings ofIEEE Conference onDecision andControl ,volume 3, pages 3482–3487. IEEE, 2002.[37] V . Lakshmikantham, V . M. Matrosov, and S. Sivasundaram. Vector Lyapunov functions andstability analysis ofnonlinear systems, volume 63. Springer Science & Business Media, 2013.[38] G. Blekherman, P. A. Parrilo, and R. R. Thomas. Semidefinite optimization andconvexalgebraic geometry. SIAM, 2012.[39] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal oftheroyal statistical society: series B(statistical methodology), 67(2):301–320, 2005.[40] S. Burer and R. D. Monteiro. A nonlinear programming algorithm for solving semidefiniteprograms via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003.[41] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty. Nonlinear programming: theory andalgorithms .John Wiley & Sons, 2013.[42] M. ApS. MOSEK Optimizer APIforPython 10.0.46 , 2021. URL http://docs.mosek.com/9.0/toolbox/index.html .[43] P. T. Boggs and J. W. Tolle. Sequential quadratic programming. Acta numerica, 4:1–51, 1995.[44] S. M. Khansari-Zadeh and A. Billard. Lasa Handwriting Dataset. https://cs.stanford.edu/people/khansari/download.html#SEDS_reference , 2011.[45] G. Blekherman. 
There are significantly more nonegative polynomials than sums of squares.Israel Journal ofMathematics, 153:355–380, 2006.[46] A. Papachristodoulou and S. Prajna. On the construction of Lyapunov functions using thesum of squares decomposition. In Proceedings ofthe41st IEEE Conference onDecision andControl, 2002., volume 3, pages 3482–3487 vol.3, 2002. doi:10.1109/CDC.2002.1184414.[47] A. D. Ames, S. Coogan, M. Egerstedt, G. Notomista, K. Sreenath, and P. Tabuada. Controlbarrier functions: Theory and applications. In 18th European control conference (ECC) , pages3420–3431. IEEE, 2019.[48] B. O’Donoghue, E. Chu, N. Parikh, and S. Boyd. Conic optimization via operator splitting andhomogeneous self-dual embedding. Journal ofOptimization Theory andApplications , 169(3):1042–1068, June 2016. URL http://stanford.edu/ ~boyd/papers/scs.html .[49] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski,P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman,N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, ̇I. Polat, Y . Feng,E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero,C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. NatureMethods, 17:261–272, 2020. doi:10.1038/s41592-019-0686-2.[50] S. G. Nersesov, W. M. Haddad, and Q. Hui. Finite-time stabilization of nonlinear dynamicalsystems via control vector lyapunov functions. Journal oftheFranklin Institute , 345(7):819–837, 2008.11A Mathematical ProofsDue to space limitations and to maintain coherency, proofs for propositions, lemmas, and theoremsare presented in this section in the order in which they appeared in the main text.A.1 Proof of Proposition 4.3Assuming each pair (ˆfi(x;Pi), vi(x;Qi))individually satisfies the GAS conditions. Then, the sumˆv=Pni=1vi(x;Qi)yields a valid standard Lyapunov function for ˆf(x;P), proving that the policysatisfies GAS conditions.Proof. Asvi(x;Qi)is an LPF candidate for ˆfi(x;Pi), both the first and second Lyapunovconditions must be satisfied, i.e., ∀i∈ {1, . . . , n }:(a)vi(x;Qi)⪰0,∀x∈ X, (b)∂vi(x;Qi)∂t≺0,∀x∈ X.Define the sum of elements ˆv(x;Q) =Pni=1vi(x;Qi). We show that ˆv(x;Q)satisfies bothLyapunov global stability conditions:(i)vi(x;Qi)⪰0 (a)⇒v1(x;Q1) +. . .+vn(x;Qn) = ˆv(x;Q)⪰0,(ii)∂ˆv(x;Q)∂t=∂Pni=1vi(x;Qi)∂t=nXi=1∂vi(x;Qi)∂t,∂vi(x;Qi)∂t≺0 (b).□A.2 Proof of Lemma 4.4The first Lyapunov stability criterion, vi(x;Qi)⪰0, is satisfied for each i∈ {1, . . . , n }ifQi⪰0andQi∈Sβn+1.Proof. Considering that vi(x;Qi) =bTx,βQibx,βandQiis not singular, we can perform a Choleskyfactorization on the parameters’ matrix Qi. The result is Qi=LTiLi, and the positivity of vi(x;Qi)comes from,vi(x;Qi) =bTx,βQibx,β=bTx,βLTiLibx,β= (Libx,β)T(Libx,β) =||Libx,β||2⪰0,that represents vi(x;Qi)as an SOS and therefore achieves the first Lyapunov condition. □A.3 Proof of Lemma 4.5The second Lyapunov criterion,∂∂tvi(x;Qi)≺0, is fulfilled for each i∈ {1, . . . , n }if there exists asymmetric matrix Gi≺0andGi∈S(α+β)n+1such that:∂∂tvi(x;Qi) =∂vi(x;Qi)∂x∂x∂t=∂vi(x;Qi)∂xˆf(x;P) =bTx,α+βGibx,α+β, (8)where α+βis the basis degree. The matrix Giis acquired by polynomial coefficient matching,and depends on PandQi. We summarize this dependence for all i∈ {1, . . . , n }with the functionG(P,Q) =G, where Gsymbolizes the block-diagonal form of all Gimatrices.Proof. 
We know that LPF rows are denoted by vi(x;Qi) =bTx,βQibx,β. Hence, we write thesecond Lyapunov condition by taking the derivative of each row:∂vi(x;Qi)∂t=∂vi(x;Qi)∂x1∂x1∂t+∂vi(x;Qi)∂x2∂x2∂t+. . .+∂vi(x;Qi)∂xn∂xn∂t∂xj∂t=ˆfj(x;Pj)⇒∂vi(x;Qi)∂t=nXj=1∂vi(x;Qi)∂xjˆfj(x;Pj)=nXj=1∂[bTx,βQibx,β]∂xj[bTx,αPjbx,α]12Within the last summation, both the derivative of Lyapunov function and the policy are polynomials.The idea is that their multiplication could also be written as an SOS polynomial if the parameters PiandQiare chosen carefully. For this polynomial, we define a new basis bx,α+βandGi∈S(α+β)n+1.Note that the degree of this basis is calculated by12[(2β−1) + (2 α) + 1] , which is the rounded-updegree of the above multiplication term.Next, we match polynomial coefficients on both sides, yielding Giparameters as a function of bothPandGi, i.e.,bTx,α+βGibx,α+βMatching⇐= = = = = = = = ⇒CoefficientsnXj=1∂[bTx,βQibx,β]∂xj[bTx,αPjbx,α]⇒Gi=Gi(P,Qi)We summarize the same relationship for all Gimatrices, and call the resulting function G. Hence,the second condition can be represented by G=G(P,Q)andG≺0, and be viewed as SOS. □A.4 Proof of Theorem 4.6Assuming the polynomial representation of a nonlinear autonomous dynamical system (Defini-tion 4.1),ˆf(x;P) = [bTx,αP1bx,αbTx,αP2bx,α...bTx,αPnbx,α]T,the existence of a corresponding polynomial Lyapunov function (Definition 4.2),v(x;Q) = [bTx,βQ1bx,βbTx,βQ2bx,β...bTx,βQnbx,β]Tguarantees the asymptotic global stability of the policy, if the following conditions are satisfied:(a)Q⪰0,(b)G≺0,(c)G(P,Q) =G.Proof. The proof is straightforward and follows both Lemma 4.4 and Lemma 4.5. We know thateach partial DS ˆfi(x;Pi)is stable if the corresponding parameterized LPF satisfies (a), (b), and(c), where Gis an affine function found in Lemma 4.5 by polynomial coefficient matching. Sinceeach ˆfi(x;Pi)explains the derivative along one of the orthogonal basis of ˆf(x;P), their individualglobal stability is equivalent to the stability of the entire system. In other words,∀xi∈ D{ ˆfi(x;Pi)},limt→∞xi=x∗i⇒limt→∞x= [ limt→∞x1limt→∞x2...limt→∞xn]T=x∗Another proof can be provided using the LPF introduced in Proposition 4.3 as a Lyapunov candidatefor the whole system. Both proofs equally validate the stability of the polynomial DS. □B Experiment Setup and DetailsEnclosed in this section are detailed descriptions of our experiment setup, main software packages,and datasets. Due to space limitations, crucial details from the experiments are explained here. Eventhough reading the section is not necessary to understand the paper, it provides useful insight into oursetup and can aid reproducibility and future research.B.1 DatasetsHandwriting dataset. The LASA Handwriting Dataset, partly depicted in Figure 7, is a collectionof 2D handwriting motions recorded from a Tablet-PC and by user’s input. The dataset includes 30human handwriting motions, where each motion represents a desired pattern. For each pattern, thereare seven demonstrations recorded, with each demonstration starting from a slightly different (butfairly close) initial position but ending at the same final point. These demonstrations may intersect13AngleC GNSinePFigure 7: Plots of handwriting dataset motions used in our experiments. We select a representativesubset of motions for baselining to keep the experiments computationally feasible. Each plot shows 7demonstrations with 1000 recorded samples per each. 
Notice that the time indexing is included in thedataset, but it is irrelevant to our work as we learn time-invariant policies.with each other. Out of the 30 motions, 26 correspond to a single pattern, while the remaining fourmotions include multiple patterns, referred to as Multi Models. In all the handwriting motions, thetarget position is defined as (0, 0), without loss of generality. The dataset provides the followingfeatures:•Position (2 ×1000) representing the motion in 2D space. The first row corresponds to thex-axis, and the second row corresponds to the y-axis in Cartesian coordinates.•Time (1 ×1000) being the time-stamp for each data point in the motion. We do not use thisproperty, as our proposed method generates time-invariant policies.•Velocity (2 ×1000) representing the velocity corresponding to each position. We use thisfeature as a label and form our MSE cost function to calculate the difference between thepredicted velocity and this data.•Acceleration (2 ×1000) matrix representing the acceleration. Not applicable to our research,but could potentially be utilized for future research.We will not experiment on the entire dataset of 30 motions due to computational unfeasibility. Instead,we select a representative set of motions with (8 ×5×2×1000) samples in total. The experimentsare mainly conducted with this designated set, but since the set is chosen to be representative, weexpect the results to generalize to other motions as well.Velocity normalization. Moreover, for some experiments, we opt to normalize the velocity values,such as in Figure 8, to avoid large cost values. This can cause a loss of generality, since the policyactions are now restricted to the direction of the action vector, and will not try to replicate its size.The size of this arrow might be important in scenarios where parts of the motion need to be carriedout at a different pace. PLYDS can handle the dataset with or without velocity normalization. Thedataset is also referenced and provided as a part of our reproducibility efforts.Real-world collected dataset. We collect data by teleoperating the Kinova Gen3 Lite Arm. Tele-operation involves employing human agents to operate a robotic device or system, recording theiractions as expert demonstrations. The teleoperated actions are then recorded and utilized as trainingdata for algorithmic learning. We have two options for teleoperation. First, a human expert canmanually control the robot arm using a joystick or keyboard. This process results in natural butnon-smooth trajectories. The second approach employs the robot’s internal control systems and14Figure 8: Speed profile for the sine motion, normalized vs. natural. When we normalize the speed,policies fail to capture the difference in the speed vector’s magnitude along the trajectories. PLYDSworks with both normalized and regular velocities, but we mostly opt for normalized velocities forbaselining and comparisons, especially when plotting the policy streamlines for visual verification.Figure 9: Plots of the dataset collected using Kinova Gen3 Lite and teleoperation. The robot isoperated to complete the following trajectories multiple times, while the position and velocity dataare recorded in real-time. Expert’s demonstrations can also come from robot’s low-level controllers,which leads to faster data gathering process. We tried to keep the scale of these trajectories alignedwith the handwriting dataset to achieve consistency.trajectory planning systems to perform as an expert and execute some patterns. 
This leads to asmoother data collection process with higher reliability. Please keep in mind that planners onlyconnect a series of few points, with no guarantee of stability, and are time-dependent. Consequently,the role of policies generated by PLYDS will not be obsolete.We gather an open-source dataset holding three distinct motions: the prolonged sine, root parabola,and pick-and-place (Figure 9). Each motion is represented by 50 demonstrations in a 3-dimensionalworld. Each demonstration contains a state (position) vector (3 ×1000) and a corresponding action(velocity) vector (3 ×1000). Note that orientation is not recorded in the dataset, as we assumethe robot’s gripper will always face downwards. There will not be any loss of generality becausecontrolling the orientation with PLYDS can be done in parallel, and in the exact same way as theend-effector’s position. The dataset is provided as a part of our reproducibility efforts in Section 7.B.2 SDP OptimizationWe primarily use the commercially available MOSEK [ 42] optimization software that providessolutions for numerous types of optimization issues, including nonlinear semidefinite programming.The flexibility and high-performance capabilities of MOSEK make it ideal for challenging opti-mization tasks in both commercial and academic settings. We currently use the MOSEK under anacademic license, which can be obtained free of charge with an academic domain email. SCS [ 48]is another solver specifically designed for solving semidefinite complementarity problems, whichinclude nonlinear SDP as a special case. SCS employs an augmented Lagrangian method combined15with the Fischer-Burmeister function to handle the complementarity conditions in the SDP. At thistime, we do not have any solid comparison between the efficiency of these solvers for our setup, butcommercial software products often perform more efficient than open-source products.We also use SciPy [ 49], an open-source scientific computing library for Python that has many modulesfor numerical optimization. SciPy can handle a wide range of optimization problems, includingnonlinear programming with semidefinite constraints, even though it may not provide specializedsolvers for nonlinear SDP. Our software still supports SciPy; however, it is not as efficient in solvingnonlinear SDP problems as MOSEK and SCS.B.3 Hyperparameters and architecture.We provide a summary of parameters related to each baseline we used in the paper. Note that weaccelerate the computation of GAIL, BC, and SDS-EF with an NVIDIA GeForce RTX 3060 GPU,but SEDS, LPV-DS, and PLYDS use only a Core-i7 Gen8 CPU for optimization. aPLYDS. For our experiments, we primarily utilize parameters α= 3 andβ= 1, which haveproven effective in most cases. However, we also explore higher degrees to cover a broader rangeof settings. Additionally, to strike a balance between stability and accuracy, we occasionally adjustthe tolerance level from 10−4to10−9. This allows us to trade off precision for stability whennecessary. For the Lyapunov candidate to maintain non-zero gradient beyond the quadratic form,we opt for only square elements in the LPF basis vector. This ensures stability and reliability inour system. Although it is possible to enforce a positive Hessian for the Lyapunov function, itincurs additional computational time while further limiting flexibility in stability conditions. Moreinformation about the parameters and architecture of PLYDS can be found on our GitHub repository:github.com/aminabyaneh/stable-imitation-policy.GAIL. 
The discriminator network takes as input the state-action pairs or observations generated bythe policy network and expert demonstrations. Hidden Layers : The network may consist of two orthree hidden layers, each with 256 or 512 units. Activation Function : Rectified Linear Unit (ReLU)activation function is commonly used between the layers. Output : The discriminator produces asingle output value, representing the probability of the input being from the expert or the generatedpolicy. Hyperparameters : Learning Rate: 0.0001, Number of Discriminator Updates per GeneratorUpdate: 1 or 2, Discount Factor (for reinforcement learning algorithm): 0.99, Batch Size: 64 or128, Number of Training Iterations: 1000. We use the imitation package for GAIL’s implementation:imitation.readthedocs.io/en/latest/algorithms/gail.html.BC. The behavioral cloning network takes the state-action pairs as input. Hidden Layers : The net-work may have two or three hidden layers, each consisting of 128 or 256 units. Activation Function :Rectified Linear Unit (ReLU), Output : The output layer of the network corresponds to the action spacedimensionality, producing the predicted action. Hyperparameters : Learning Rate: 0.0001, Numberof Training Iterations: 5000, Batch Size: 64, Regularization Strength (L2 regularization): 0.001, Opti-mizer: Adam, Loss Function: Mean Squared Error (MSE). Same as with GAIL, we use the imitationpackage to access BC’s implementation: imitation.readthedocs.io/en/latest/algorithms/bc.html.SEDS. Takes position-velocity pairs as input (or state-action pairs in general). Number of GaussianComponents : Typically ranges from 3 to 10, depending on the complexity of the motion being learned.Gaussian Mixture Model (GMM) Parameters: Covariance Type: Diagonal covariance is commonlyused for efficiency and simplicity, Regularization Weight: Often set to a small value, such as 1e-6,to avoid singularities and overfitting, Maximum Number of Iterations: 100 iterations. ConvergenceTolerance : 1e-6 or 1e-7. We have implemented SEDS in Python using the SciPy optimization library,and the original MATLAB code is not used in our comparisons to remain consistent with otherbaselines, particularly in terms of computational time. Our implementation of SEDS can be found ongithub.com/aminabyaneh/stable-imitation-policy.16LPV-DS. We mainly use the original implementation of LPV-DS available on:github.com/nbfigueroa/ds-opt and developed in Matlab. The parameters of the method re-main the same as the original repository, and we only change the number of demonstrations ifrequired for comparison purposes.SDS-EF. We also provide an implementation of this baseline on our GitHub repository. Thecoupling layers are set with the following parameters: base network = ’rffn’, activation func-tion = ’elu’, and sigma=0.45. The main architecture uses 10 blocks and hidden layers’ sizeare set to 200. 
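For reference, the following is a minimal PyTorch sketch of the GAIL discriminator and behavioral-cloning networks described above. Layer widths, activations, and optimizer settings follow the values listed in this section; the input/output dimensions (3-D positions and velocities) and all variable names are illustrative assumptions, and the actual experiments use the imitation package implementations linked above.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 3, 3  # assumed: 3-D position states and velocity actions

# GAIL discriminator: two 256-unit ReLU layers, one probability output.
discriminator = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
)

# Behavioral cloning: two 128-unit ReLU layers mapping a state to a predicted action,
# trained with MSE, Adam (lr 1e-4), and L2 regularization of 0.001.
bc_policy = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)
bc_optimizer = torch.optim.Adam(bc_policy.parameters(), lr=1e-4, weight_decay=1e-3)
bc_loss = nn.MSELoss()
```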
These SDS-EF parameters match the original implementation at github.com/mrana6/euclideanizing_flows; we only omit the preprocessing step found in the original implementation so that the results can be compared fairly and effectively with the other baselines.

C Supplemental Results

We present a comprehensive set of additional experiments that test the proposed framework from a variety of angles: access to fewer demonstrations, the variety of LPFs, additional baseline policy rollouts, additive noise, an ablation study that removes the stability guarantee, and a comparison of computation times against the baselines.

C.1 Baseline policy rollouts

In Figure 10, we plot policy rollouts optimized with PLYDS in comparison to the baselines SEDS, GAIL, and SDS-EF; Figure 11 extends these results to LPV-DS. All portrayed policies are optimized on a set of motions from the handwriting dataset, namely the G-Shaped, Angle, C-Shaped, and P-Shaped demonstrations. The key takeaway is the recurring instability of neural-network-based imitation learning methods in unvisited areas of the state space, together with the inaccuracies visible across the baselines; the same patterns observed in the main text continue to emerge.

Figure 10: Policy rollouts for the Angle, C-Shaped, G-Shaped, and P-Shaped demonstrations in the handwriting dataset. PLYDS is visually compared to the baselines (SEDS, GAIL, SDS-EF) in terms of reproduction accuracy and global stability. Note the inaccuracies and unstable reproductions of the other baselines.

Figure 11: Additional rollouts generated with LPV-DS's source code in Matlab. These plots complement the results in Figure 10.

C.2 Ablation study: stability vs. accuracy

When working with DS policies, there is a dilemma known as the stability-accuracy trade-off [22]: a balance must be struck between the reliability and robustness of the generated policies, which guarantee global convergence to the target (stability), and minimizing reproduction errors (accuracy). It is important to find a compromise between these two factors, as more stable algorithms may not be as accurate, while more accurate algorithms may be sensitive to instabilities.

Figure 12: Policy rollouts obtained with PLYDS, both with and without the imposition of stability constraints, reveal a significant difference in policy behavior. The absence of enforced stability constraints, combined with a complex polynomial and a tolerance parameter that favors accuracy over stability, results in system instability.

Figure 13: PLYDS policy rollouts employing distinct polynomial complexities (degrees 2, 4, 8, and 12). Increasingly complex polynomials produce trajectories that mimic the expert's behavior more closely but also stray further from the demonstration data; this heightened complexity makes it harder to ensure and validate the stability of the system.

The higher the degree of the polynomial, the more accurate the imitation of expert behavior; in theory, any nonlinearity may be approximated by our DS formulation. In practice, however, we introduce regularization and tolerance parameters in the code. The tolerance can be used to favor either accuracy or stability (see Figure 12).
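As a concrete illustration of this knob, the sketch below sweeps the solver tolerance for a fixed polynomial degree and keeps the most accurate policy among those that pass the stability check. The PLYDS class, its methods, and the file paths are hypothetical stand-ins for the learner in our repository; only the α = 3, β = 1 defaults and the 1e-4 to 1e-9 tolerance range are taken from the text.

```python
import numpy as np
from plyds import PLYDS  # hypothetical import; the real learner lives in stable-imitation-policy

positions = np.load("demos/positions.npy")    # hypothetical path to a 3 x 1000 demonstration
velocities = np.load("demos/velocities.npy")

best_policy, best_mse = None, np.inf
for tol in (1e-4, 1e-6, 1e-9):                # looser tolerance favors accuracy, tighter favors stability
    policy = PLYDS(alpha=3, beta=1, tolerance=tol)
    policy.fit(positions, velocities)
    mse = policy.reproduction_mse(positions, velocities)
    if policy.is_stable() and mse < best_mse: # only accept fits that retain the stability guarantee
        best_policy, best_mse = policy, mse
```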
Another way to balance this equation is to start withlower-degree polynomials, and increase the policy’s complexity when the accuracy is insufficient.Figure 13 serves as an illustration for this process.C.3 Complexity of Lyapunov functionsFigure 14 demonstrates the complexity of the Lyapunov function affects trajectory planning in thestate space in various ways, such as optimization efficiency, trajectory smoothness, obstacle avoidance,robustness to perturbations, and planning accuracy. If the Lyapunov function is more complex, it mayincrease computational costs but in turn result in more complex and nonlinear trajectories. On theother hand, simpler Lyapunov functions may offer faster computations, smoother trajectories, andsatisfactory planning accuracy, but they may not be adaptable to complex expert demonstrations.When deciding on the complexity of the Lyapunov function, it is necessary to consider the feasibilityof computations, and the desired smoothness and accuracy of planned trajectories and always startwith the most simple: quadratic distance function. Figure 14 illustrates this by showing both stableand unstable Lyapunov possibilities gauged across various LPF complexities.1901e51e50Unstable StableDegree = 2 Degree = 4 Degree = 6Figure 14: During our policy optimization, we obtain LPF samples such as the ones depicted above.Even though variation in complexity notably affects the computation time, employing more complexLyapunov functions (polynomials of higher degrees) appears necessary to achieve stable and precisepolicies in some cases. Currently, we manually determine the complexity of the Lyapunov functionand shift to higher complexities only if the optimization fails to deliver satisfactory results.(a) (b)Figure 15: Performance of PLYDS in the face of uniform additive noise (a) and a sample of a noisytrajectory with noise-level set to 2 (b). Noise levels are in centimeters, therefore, a noise-level of 4means each reading could be deviated from its true value by ±4cm.C.4 Performance with additive noiseNoise in imitation learning significantly impacts the learning process and resulting policies. Excessivenoise levels can destabilize the algorithm, preventing it from converging to an optimal policy. Toassess PLYDS performance while exposed to noisy measurements, we apply uniform additive noise,distributing samples evenly across a specified interval. We vary the size of this interval, expanding itsymmetrically around zero for positions, while also accounting for its effect on velocities within theexpert dataset. The results in Figure 15 demonstrate a moderate level of noise-robustness that can befurther improved in future studies. Noisy data also increases the error bands, leading to increaseduncertainty in the outcome of policy optimization.C.5 Computation timesWe performed all experiments on a machine equipped with a Core i7 8th Gen CPU, an NVIDIAGeForce RTX 3060 GPU, and 32 GB DDR2 RAM. Among the methods included in our experiments,GAIL, BC, and SDS-EF utilize the GPU to expedite neural network computations. On the other hand,20(a) (b)Figure 16: Total computation times averaged over 20 trials for PLYDS compared to other baselines(a) and with different dataset sizes (b). It is noteworthy that GAIL and BC are utilizing a GPU toaccelerate their processing power.(a) (b) (c)Figure 17: Accuracy (MSE) and policy rollouts for vectorized (left) vs. scalar (right) Lyapunov function. 
For the policy rollout, we picked the N-Shaped motion, where the non-vectorized Lyapunov function results in a visible reduction in reproduction accuracy. However, the accuracy comparison shows that the extent of the improvement brought by vector Lyapunov functions (marked by vec.) depends on the shape of each motion and may not be visually verifiable for all motions.

PLYDS, SEDS, and LPV-DS solely rely on the CPU for policy optimization. Despite this variance in computational resources, we provide a comparison of computation times in Figure 16.

C.6 Ablation study: vector Lyapunov functions

Vector Lyapunov functions [37] extend the concept of Lyapunov stability to systems where state variables are represented as vectors rather than scalars. This extension is particularly valuable when dealing with interconnected or multidimensional systems. Instead of relying on a scalar function, our approach utilizes a vector-valued Lyapunov function that assigns a vector to each point in the state space. The properties of this vector-valued function are leveraged to analyze the stability and convergence behavior of system trajectories.

We employ this technique because it is known to enhance the flexibility of the optimization process, as discussed in [50]. Moreover, our observations indicate accuracy improvements over the non-vectorized version for certain motion scenarios. In Figure 17, we present the results of an ablation study focusing on vector Lyapunov functions, highlighting their essential role as a small yet critical component of our method. Vectorizing the Lyapunov function is also known to yield higher flexibility during optimization and can potentially lower the computational cost for higher-order polynomials. |
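To make the role of the Lyapunov candidate concrete, the sketch below numerically checks the two standard Lyapunov conditions, V(x) > 0 away from the target and dV/dt = ∇V(x)·ẋ < 0, along recorded demonstrations. It is only an illustrative data-driven diagnostic with a quadratic candidate, not the semidefinite-programming certificate that PLYDS actually optimizes, and the demonstration arrays are placeholders.

```python
import numpy as np

def lyapunov_violations(V, grad_V, states, velocities, eps=1e-8):
    """Count samples violating V(x) > 0 or dV/dt = grad_V(x) . x_dot < 0 along the data."""
    positivity = sum(V(x) <= eps for x in states if np.linalg.norm(x) > eps)
    decrease = sum(grad_V(x) @ xdot >= -eps for x, xdot in zip(states, velocities))
    return positivity, decrease

# Placeholder demonstration data: a simple contracting velocity field toward the origin.
states = np.random.randn(1000, 3)
velocities = -states

# Quadratic (degree-2) candidate V(x) = x^T P x with P positive definite.
P = np.eye(3)
viol_pos, viol_dec = lyapunov_violations(
    lambda x: x @ P @ x, lambda x: 2 * P @ x, states, velocities)
print(f"positivity violations: {viol_pos}, decrease violations: {viol_dec}")
```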
uJqxFjF1xWp | BM2CP: Efficient Collaborative Perception withLiDAR-Camera ModalitiesBinyu Zhao, Wei Zhang⋆, Zhaonian ZouSchool of Computer Science and TechnologyHarbin Institute of Technology, Chinabyzhao@stu.hit.edu.cn, {weizhang,znzou }@hit.edu.cnAbstract: Collaborative perception enables agents to share complementary per-ceptual information with nearby agents. This would improve the perception per-formance and alleviate the issues of single-view perception, such as occlusion andsparsity. Most existing approaches mainly focus on single modality (especially Li-DAR), and not fully exploit the superiority of multi-modal perception. We proposea collaborative perception paradigm, BM2CP , which employs LiDAR and cam-era to achieve efficient multi-modal perception. It utilizes LiDAR-guided modalfusion, cooperative depth generation and modality-guided intermediate fusion toacquire deep interactions among modalities of different agents, Moreover, it is ca-pable to cope with the special case where one of the sensors, same or differenttype, of any agent is missing. Extensive experiments validate that our approachoutperforms the state-of-the-art methods with 50×lower communication volumesin both simulated and real-world autonomous driving scenarios. Our code is avail-able at https://github.com/byzhaoAI/BM2CP .Keywords: Multi-Agent Perception, Multi-Modal Fusion, Vehicle-to-Everything(V2X) Application1 IntroductionCollaborative perception enables agents to share complementary perceptual information with theirnearby agents. This would fundamentally alleviate the issues of single-agent perception, such asocclusion and sparsity in raw observations. Recently, different strategies have been proposed toimplement collaborative perception. These approaches can be divided into three categories: LiDARbased collaboration [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], camera based collaboration [16,17, 18, 19] and multi-modal based collaboration [20].Intuitively, different types of sensors can provide heterogeneous perceptual information at differentlevels, and thus, more accurate perception could be achieved through multi-modal analysis. How-ever, most existing approaches are not multi-modal based methods, and present better performanceonly using LiDAR. Take camera and LiDAR as example, fusing the two modalities straightforwardlywill bring negative impacts on perception performance, which is demonstrated by experiments in Ta-ble 3a. Camera captures rich semantics and contexts in a fixed view, but lacks the information ofdistance. Thus distance information, i.e. depth, will be estimated at first generally, which couldhelp lift the camera representations from 2D to 3D to align with LiDAR representations. But theestimation brings uncertainty and have negative effect to modal fusion and subsequent collaborativefeature fusion.Therefore, it poses challenges to build a well-behaved collaborative perception method with LiDAR-camera modalities: (a) How to generate depth information? (b) How to fuse the LiDAR data andcamera data effectively? (c) How to collaborate between agents with multi-modal data?7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.In this paper, we answer the aforementioned questions by proposing a Biased Multi-Modal Col-laborative Perception ( BM2CP ) method including three components: (a) cooperative depth gener-ation. 
A hybrid strategy is applied to provide more reliable depth distribution, which combinesprojection from ego and nearby LiDARs and prediction from ego camera; (b) biased multi-modalfusion. A preferable modal fusion is obtained by LiDAR-guided feature selection, which approvethe importance of LiDAR representations and utilize it to select useful camera representations; (c)modality-guided collaborative fusion. A preference threshold mask is generated to filter bird’s-eye-view (BEV) features, which achieves multi-view multi-modal critical feature sharing. Meanwhile,a flexible workflow makes BM2CP capable to deal with the case that one of the sensors, which issame or different type, of any agent is missing.In summary, our main contributions are threefold:• We propose a novel framework for multi-modal collaborative perception, where modalfusion is guided by LiDAR and collaborative fusion is guided by modality. Besides, it canhandle the case where modality data is incomplete.• We design LiDAR-guided depth generation and biased modal fusion, which achievesdeeper interactions between LiDAR and camera modalities. The message containing in-formation of depths and features is exchanged among agents, which achieves better modalfeature learning and efficient communication. These designs encourage sufficient featurefusion in both modality aspect and collaboration aspect.• Extensive experimental results and ablation studies on both simulated and real-worlddatasets demonstrate the performance and efficiency of BM2CP . It achieves superior per-formances with 83.72%/63.17% and 64.03%/48.99% AP at IoU0.5/0.7 on OPV2V [9] andDAIR-V2X [21]. When all camera sensors are missing, the performance is still comparableto other state-of-the-art LiDAR based methods.2 Related WorksCollaborative perception. As aforementioned stated, Most collaborative methods are LiDARbased, focusing on different issues such as performance [1, 3, 6, 8, 12], bandwidth trade-off [11, 18],communication interruption [7], latency [13] and pose error correction [5, 15]. For camera basedmethods, Xu et al. [18] propose the first attention-based multi-view cooperative framework, whichshares features in BEV with Transformer [22]. Hu et al. [19] first conduct depth estimation and shareit to reduce the impact of erroneous depth estimates. However, the depth ground truth is required tobe collected in advance, which limits the generalization of the method. For LiDAR-camera basedmethods, Zhang et al. [20] use limited number of image pixels, which are in 2D predicted boundingbox, to generate virtual 3D point clouds. Whereas the modality for collaboration is still LiDAR.Xiang et al. [23] focus on constructing attention-based collaborative network. The method requiresthat different agents provide heterogeneous modalities.Absence of sensor data. To the best of our knowledge, none of multi-agent collaborative perceptionapproach addresses the absence of sensor data issue. Recently, Li et al. [24] propose a solution withRADAR and LiDAR for single-agent perception that fills the missing sensor data with zero value,and uses the teacher-student model with exponential moving average (EMA) to learn equally fromboth modalities.BM2CP constructs a flexible and straightforward workflow to overcome the absence of sensor datawith available modalities. The proposed networks do not need any fine-tuning. Meanwhile, it isworth emphasizing that camera cannot accomplish the perception task independently, especiallywhen each agent is only equipped with one camera. 
In this situation, reliable depth generationwould become impossible and inferior perceptual information might be produced. This can beinferred from the research of Hu et al. [19] as well. Their experiments on DAIR-V2X dataset [21]show that the perception generated through combining cameras and depth ground truth is muchworse than that generated based on LiDAR data.2Figure 1: The framework of BM2CP . Colors indicate different modal voxels: orange for camera,which is obtained only from camera; blue for LiDAR, which is obtained only from LiDAR; greenfor LiDAR-camera hybrid, which is obtained from both modalities; and gray for normal, whichreceives no modal information from each modality and is filled with 0. In the object detection taskexample, green boxes are ground truths and red boxes are detected vehicles.3 Efficient Collaborative Perception with LiDAR-Camera ModalitiesOur designs include (a) cooperative depth generation , which is guided by LiDAR and shares depthinformation among agents; (b) biased multi-modal fusion , which achieves sufficient local featurelearning; and (c) modality-guided collaborative fusion , which shares critical BEV detection featuresto improve representation and performance. The overall framework is illustrated in Fig 1.3.1 Problem formulationTo the best of our knowledge, there does not exist any problem formulation about multi-agent col-laborative perception with LiDAR-camera modalities. Here, we try to provide a feasible definitionof LiDAR and camera perception fusion in voxel space. Considering Nagents in the scenario, letIi∈ RW×Hbe the RGB image collected by the camera of the i-th agent, where WandHare theheight and width of image. Let Pi∈ RN×3be the point cloud collected by LiDAR of the i-th agent,where Nis the number of point clouds. Yiis the corresponding ground truth of detection. Agentsexchange their positions and relative poses to build a communication graph.In order to fuse modal features in voxel space, the point cloud and the image need to be voxelized.Vli=fvoxelize (Pi),Vci=fimgext(Ii)⊗fdepest(Ii) (1)where fvoxelize (·)is a series of operations to obtain voxel features from point cloud, which aresimilar to the operation defines in Lang et al. [25]. fimgext(·)andfdepest(·)are the feature extractorand depth estimator based on raw image. Plane features can be obtained through fimgext(·). The⊗operation produces the voxel features, which expands the plane features a new dimension in Z-axis.Then, we fuse point cloud features Vli∈ RX×Y×Zand image features Vci∈ RX×Y×Zby cell andcollapse the fused voxel features Vfi∈ RX×Y×Zto BEV .Vfi=fmodal fuse(Vli,Vci),Fi=fcollap (Vfi) (2)where fmodal fuse(·,·)andfcollap (·)denote the modality fuse operation and collapse operation,respectively. Collapse operation integrates the dimension of Z-axis to the channel dimension [25].After collapsing, BEV feature Fi∈ RX×Ywill be packed and transmitted to nearby agents basedon communication graph.3Each agent aggregates the received features with its own BEV feature and finally conduct predictionfor a specific task, such as object detection or scenario segmentation. ̃Fi=ffeat fuse(Fi,{Fj→i}j∈Ni),ˆYi=fdecoder ( ̃Fi) (3)where ̃Fiis the aggregated feature. Fj→iis the warped feature based on relative poses of the j-th agent and the i-th agent. 
Niis the nearby agent set that the i-th agent can communicate with.ffeat fuse(·)denotes the feature fuse operation and fdecoder denotes the decoder for prediction.The objective of BM2CP is to minimize the distance between predicted and the ground truth detec-tionPig(ˆYi, Yi), where g(·,·)is the evaluation metric.3.2 Cooperative depth generationA hybrid strategy, prediction&projection , is applied to reduce the erroneous of estimation and obtaina reliable depth distribution. Prediction is used to predict pixel-wise depth using convolutionalblocks. In order to reduce the number of model parameters, image feature extractor fimgext(·)anddepth estimator fdepest(·)utilize a shared encoder and independent heads, i.e. depth head and imagehead. Each head is composed of several convolutional layers and training from scratch. Similar tothe method in Reading et al. [26], the predicted depths are classified to a series of discrete values.The number of classes is the same as that of discretized depth bins. Projection is applied to transformLiDAR point clouds to RGB image coordinates. Let Pi={(xi, yi, zi,1)}be the homogeneouscoordinates of point cloud, where (xi, yi, zi)is the 3D coordinate. Let Tldr2cam be the mappingfrom LiDAR sensor to camera sensor, and Tcam 2imgrepresent the mapping from camera sensor tothe RGB image. The overall mapping from LiDAR to RGB image isI′i=Tldr2imgPi=Tldr2camTcam 2imgPi (4)where I′i={(ui, vi, di)}is the projected image with depth information, and (ui, vi)is the 2Dcoordinate, diis the corresponding depth of each pixel. Tldr2cam andTcam 2imgare equal to theextrinsic and intrinsic matrices of camera, respectively. The projected depths are also mapped todiscrete values corresponding to the discretized depth bins.Considering that some pixels may not have any projection depth, while some pixels have multipledepths to map, the hybrid strategy is implemented as follows: a) For pixel with no projection depth,it obtains the depth through prediction strategy; b) For pixel with only one projection depth, no extraoperation is required; c) For pixel with multiple projection depths, it selects the minimum depth.According to the principle of imaging, each pixel only presents the attribute of nearest object inreflected lights while the other objects are occluded. Therefore, it is more reasonable to select theminimized depth which is the closest to camera.On the other hand, point clouds from nearby agents contain different depth information. Intuitively,the 3D location of a correct depth candidate is spatially consistent through viewpoints of multipleagents. Therefore, more reliable depth distribution could be obtained through communications.Since the depths projected from ego agent is more accurate, the depths from nearby agents are onlyused to replace predicted depths.3.3 Biased multi-modal fusionMotivated by the fact that LiDAR-based detectors usually surpass camera-based counterparts, wetake LiDAR as the guiding modality to achieve multi-modal fusion and generate fused voxel fea-tures Vfi. The illustration is presented in Fig 2a. The voxels are grouped into four categories:LiDAR voxels, camera voxels, LiDAR-camera hybrid voxels, and normal voxels. LiDAR voxelsand camera voxels are features only obtained from LiDAR branch and camera branch, respectively.Hybrid voxels are features obtained from both branches. Normal voxels denote it receives no modalinformation from each modalities, which are filled with 0.4Figure 2: (a) LiDAR-guided modal fusion. 
(b) Modality-guided preference map and confidencemask generation. Colors indicate different modal voxels: orange for camera, blue for LiDAR, greenfor LiDAR-camera hybrid, and gray for normal.For LiDAR-camera hybrid voxels, the fusion result ̃vHis conditioned by LiDAR, which is formu-lated as ̃vH=Conv ([ReLU (Conv (vL))∗vC, vL]), where vLandvCdenote cells from the sameposition of LiDAR and camera, respectively. [·,·]denotes concatenation.For camera only voxels, LiDAR information is not contained in the same cell. Therefore, we conductglobal attention to filter camera voxel features Vc. The guidance matrix Ax,y,z comes from overallLiDAR voxel features VlAx,y,z =0MHA (Vl,Vc,Vc)< threshold1MHA (Vl,Vc,Vc)> threshold(5)where MHA (·,·,·)is the multi-head attention [22] which outputs the scaled dot-product attentionweight. The threshold is empirically set as 0.5. The final camera voxel sets { ̃vC}is formulated as{ ̃vC}=Ax,y,z×Vcx,y,z, which collects the features that Ax,y,z is not 0.Since LiDAR voxels and normal voxels are not affected by camera voxel features, their fusionresults can be acquired through identity mapping. Finally, biased modal fusion is achieved and thefine-grained voxel features Vfiare collapsed to BEV feature.3.4 Multi-agent collaborative perceptionWe design a modality-guided collaboration to select the most critical spatial features and promoteefficient communication, as Fig 2b illustrated.First, a plane preference map T∈ RX×Yis generated based on the fused voxel features Vfi∈RX×Y×Z. A threshold is assigned to each cell Tm,nof the preference map based on a set of voxelsthat can be collapsed to it in Z-axis. From the set of voxels, we select a preferred voxel to decidethe threshold of corresponding cell. The preferred order is hybrid >LiDAR >camera >normal .Examples and more cases can be found at Appendix A.3. Since the hybrid voxels is guided byLiDAR and contains sufficient multi-modal information, the hybrid threshold is set as 0. For a betterperformance-bandwidth trade-off, the threshold for other types of cell is set as 0.5.Then, a binary confidence mask M∈ RX×Yis generated based on the preference map Tandthe collapsed BEV feature Fi∈ RX×Y. First, a classification head fcls(·)is used to evaluatethe importance of each cell in BEV feature, and fcls(Fi)∈[0,1]. On the other hand, preferencemap provides a threshold Tm,nfor each cell in BEV feature at the same location (m, n). Then wecompare the value of importance at the position (m, n)infcls(Fi)and the threshold Tm,n. Whenthe value of importance is greater than threshold, Mm,n= 1and the cell at the same location (m, n)in BEV feature Fiis regarded as critical, and will be broadcast to nearby agents. When the value ofimportance is smaller, Mm,n= 0.5Table 1: 3D detection quantitative results on OPV2V dataset and DAIR-V2X dataset. 
Comm is thecommunication volumes in log-scale.Dataset OPV2V DAIR-V2XMethod Comm AP@0.5 AP@0.7 Comm AP@0.5 AP@0.7No Fusion 0 61.65 43.26 0 52.75 45.63Late Fusion 18.43 74.27 57.45 18.62 53.88 38.12When2com (CVPR’20) 20.17 69.12 53.76 20.31 51.88 37.05V2VNet (ECCV’20) 22.56 78.54 59.42 22.90 58.80 43.75DiscoNet (NeurIPS’21) 21.61 77.01 58.50 21.85 55.17 39.84CoBEVT (CoRL’22) 21.07 81.37 61.32 21.31 50.70 39.39V2X-ViT (ECCV’22) 20.21 80.92 61.23 20.45 56.75 39.90Where2comm (NeurIPS’22) 15.64 79.67 60.15 16.54 60.85 46.48BM2CP 11.13 83.72 63.17 11.01 64.03 48.99After collaboration, multi-head attention MHA (q, k, v )is implemented to aggregate these criti-cal BEV features from agents and generates updated BEV feature ̃Fi, where q=k=v=[Fi,{Fj→i}j∈Ni].[·,·]is concatenation operation and {·}denotes the feature set from nearby agentsetNi. The updated BEV feature is finally used to predict the detection with task-specific decoder.3.5 Robustness against missing sensorBM2CP is capable to deal with the case when one of the sensors is missing through a flexible andstraightforward workflow. Suppose that the camera sensor is now absent and RGB image is notavailable. In modal fusion step, LiDAR voxels and normal voxels remain, and they are used as thefusion results through identity mapping. In collaborative fusion step, since no hybrid voxels exist,the threshold of cells is uniformly set as 0.5. The paradigm is similar when LiDAR sensor is missing.By conducting this, BM2CP adapts well to the modality-unavailable cases. More discussion aboutmissing sensor can be found at Appendix A.4. The experimental results are collected in Sec 4.2.4 Experimental Results4.1 Experimental setupDataset. We conduct experiments of 3D object detection task on OPV2V dataset [9] and DAIR-V2X dataset [21]. OPV2V dataset is a vehicle-to-vehicle (V2V) collaborative perception dataset,which is co-simulated by OpenCDA and Carla [27]. The perception range is 40m×40m. DAIR-V2X dataset is the first public real-world collaborative perception dataset. Each sample contains avehicle and an infrastructure, and they are equipped with a LiDAR and a front-view camera. Theperception range is 201.6m×80m.Compared methods. We consider comparisons with following LiDAR-based methods: No Fu-sion,Late fusion ,When2com [4],V2VNet [3],DiscoNet [18], V2X-ViT [12] and Where2comm [11].Among these methods, No Fusion is considered as the baseline which only uses individual observa-tion. Late fusion shares the detected 3D bounding boxes with nearby agents. Rest are state-of-the-art(SOTA) LiDAR-based collaborative perception algorithms.Implementation details. We re-implement all methods based on the pyTorch [28] framework andOpenCOOD [9] codebase with Adam [29] optimizer and multi-step learning rate scheduler. Inorder to compare communication volumes fairly, all compared methods use the same architecturesand follows PointPillar [25] with no feature compression. Weighted cross entropy loss is used. Thedetection results are evaluated by Average Precision (AP) at Intersection-over-Union (IoU) thresholdof 0.50 and 0.70.6(a) OPV2V . (b) DAIR-V2X.Figure 3: Robustness to localization error. Gaussian noise with 0 mean and varying std is introduced.Table 2: Evaluation results against missing sensor on DAIR-V2X dataset. 
Camera orLiDAR denotesthe sensor that the agent is equipped with.Ego agent Nearby agent(s) AP@0.5 AP@0.7Camera LiDAR 19.21 5.74Camera Both 20.41 6.05LiDAR LiDAR 61.59 47.30LiDAR Both 62.71 47.64Both Camera 53.31 43.86Both LiDAR 63.54 48.38Both Both 64.03 48.994.2 Quantitative EvaluationComparison with baseline and SOTA methods. Table 1 shows the comparisons with recent meth-ods in terms of communication volumes and detection performance. Mathematically, the communi-cation volume is calculated in log-scale by log2(|Fi→j|), where |·|is the L0norm which counts thenon-zero elements in BEV features. It is observed that BM2CP : (a) achieves a superior perception-communication trade-off; (b) achieves significant improvements over previous SOTA methods onboth datasets; (c) achieves a better performance with extremely less communication volume, whichis 105 times less than baseline and 46 times less than Where2comm .Robustness against the absence of modality. Table 2 shows the results with the setting of missingsensor. It is observed that: (a) the performance degrades severely when one of the agents is onlyequipped with camera sensor. This is consistent with the conclusion that the performance is poorwhen only camera sensor works; (b) The performance (61.59% on AP@IoU0.5 and 47.30% onAP@IoU0.7) is comparable with SOTA methods, when agents only use LiDAR for perception; (c)The performance drop slightly when one of the agents miss the camera sensor.Robustness to localization noise. We also evaluate the robustness to localization noise followingthe setting in V2VNet [3] and V2X-ViT [12]. Gaussian noise with a mean of 0mand a standarddeviation of 0m−0.6mis used, and the results are shown in Fig 3. Unfortunately, performancedegrading happens when localization noise increases. The reason comes from the cooperative depthgeneration . The shared depth distributions are broadcast based on the localization and relative poseof each agent. Therefore, it intensifies the errors and provides wrong depth information for images.To solve this problem, we correct relative pose before depth and feature communication and name itBM2CP-robust . Detailed process can be found at Appendix A.5. Comparing the results of BM2CP-robust with BM2CP , the performance degrading gets alleviated evidently.7Figure 4: Qualitative results on DAIR-V2X dataset. Ground truths are colored in green and predic-tions are colored in red. BM2CP detects more objects than V2X-ViT andWhere2comm in the firstrow, and detects less false positive objects in the second row.Table 3: Component ablation studies on DAIR-V2X dataset.(a) Modal fusion strategy.Strategy AP@0.5/0.7No 50.17/37.33Equal 58.33/42.13Biased 60.31/44.42(b) Depth projection.Projection AP@0.5/0.7No 52.84/38.07Ego 60.31/44.42All 62.25/46.46(c) Collaborative strategy.Strategy AP@0.5/0.7Max 61.33/46.02Concat 62.25/46.46Attention 64.03/48.994.3 Qualitative EvaluationFig 4 shows the comparison with No Fusion ,V2X-ViT andWhere2comm .BM2CP achieves morecomplete and less false negative detection. The reason is that BM2CP leverages multi-modal featurefusion in critical voxels and employs modality-guided cell-level confidence mask to achieve morecomprehensive fusion. We also visualize the projected depths, which can be found at Appendix B.3.4.4 Ablation StudyAblation study is conducted to investigate the effectiveness of the main components in our method.The results are presented in Table 3. We also conduct ablation studies on the number of agentscameras. 
They can be found at Appendix B.4.LiDAR-guided modal fusion. As shown in Table 3a, equally multi-modal fusion can result in aminor drop in performance. It proves the importance of the guidance by LiDAR.LiDAR-based depth generation. We investigate the effect of using projected depths from ego andnearbys. Results in Table 3b indicate LiDAR-based cooperative depth generation is essential.Modality-guided collaborative fusion. We compare our masking-attention strategy with strategiesof max fusion and concatenate fusion. Table 3c show that both max fusion and concatenate fusionlead to performance degradation and increase the demand of bandwidth.5 ConclusionWe propose a novel framework, termed BM2CP , for multi-agent collaborative perception, which fo-cuses on fusion with LiDAR-camera modalities. We adopt LiDAR-guided modal fusion to achievereasonable and comprehensive modal feature learning, apply cooperative depth generation to en-hance the modal fusion with more reliable depth information, and propose modality-guided collab-orative fusion for more efficient and critical feature fusion. Extensive experiments demonstrate thesuperior performance and the effectiveness of designed components.Limitation and Future Works. Although BM2CP-robust significantly alleviates performance de-grading, it increases the runtime during training and test. Further efforts is needed to reduce thecomplexity of overall workflow and computation. Besides, validations on more datasets are criticalto prove its generalization.8AcknowledgmentsWe would like to thank all the reviewers for their helpful and valuable comments.References[1] Q. Chen, S. Tang, Q. Yang, and S. Fu. Cooper: Cooperative perception for connected au-tonomous vehicles based on 3d point clouds. In 2019 IEEE 39th International Conference onDistributed Computing Systems (ICDCS) , pages 514–524. IEEE, 2019.[2] Q. Chen, X. Ma, S. Tang, J. Guo, Q. Yang, and S. Fu. F-cooper: Feature based cooperative per-ception for autonomous vehicle edge computing system using 3d point clouds. In Proceedingsof the 4th ACM/IEEE Symposium on Edge Computing , pages 88–100, 2019.[3] T.-H. Wang, S. Manivasagam, M. Liang, B. Yang, W. Zeng, and R. Urtasun. V2vnet: Vehicle-to-vehicle communication for joint perception and prediction. In Computer Vision–ECCV2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16 ,pages 605–621. Springer, 2020.[4] Y .-C. Liu, J. Tian, N. Glaser, and Z. Kira. When2com: Multi-agent perception via communi-cation graph grouping. In Proceedings of the IEEE/CVF Conference on computer vision andpattern recognition , pages 4106–4115, 2020.[5] N. Vadivelu, M. Ren, J. Tu, J. Wang, and R. Urtasun. Learning to communicate and correctpose errors. In Conference on Robot Learning , pages 1195–1210. PMLR, 2021.[6] Y . Li, S. Ren, P. Wu, S. Chen, C. Feng, and W. Zhang. Learning distilled collaboration graphfor multi-agent perception. Advances in Neural Information Processing Systems , 34:29541–29552, 2021.[7] S. Ren, Z. Lei, Z. Wang, S. Chen, and W. Zhang. Robust collaborative perception against com-munication interruption. the 2nd IJCAI Workshop on Artificial Intelligence for AutonomousDriving , 2022.[8] Y . Yuan, H. Cheng, and M. Sester. Keypoints-based deep feature fusion for cooperative vehicledetection of autonomous driving. IEEE Robotics and Automation Letters , 7(2):3054–3061,2022.[9] R. Xu, H. Xiang, X. Xia, X. Han, J. Li, and J. Ma. 
Opv2v: An open benchmark dataset andfusion pipeline for perception with vehicle-to-vehicle communication. In 2022 InternationalConference on Robotics and Automation (ICRA) , pages 2583–2589. IEEE, 2022.[10] J. Cui, H. Qiu, D. Chen, P. Stone, and Y . Zhu. Coopernaut: end-to-end driving with cooperativeperception for networked vehicles. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 17252–17262, 2022.[11] Y . Hu, S. Fang, Z. Lei, Y . Zhong, and S. Chen. Where2comm: Communication-efficient collab-orative perception via spatial confidence maps. In Advances in Neural Information ProcessingSystems , volume 35, pages 4874–4886, 2022.[12] R. Xu, H. Xiang, Z. Tu, X. Xia, M.-H. Yang, and J. Ma. V2x-vit: Vehicle-to-everything coop-erative perception with vision transformer. In Computer Vision–ECCV 2022: 17th EuropeanConference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIX , pages 107–124.Springer, 2022.[13] Z. Lei, S. Ren, Y . Hu, W. Zhang, and S. Chen. Latency-aware collaborative perception. InComputer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27,2022, Proceedings, Part XXXII , pages 316–332. Springer, 2022.9[14] Y . Li, J. Zhang, D. Ma, Y . Wang, and C. Feng. Multi-robot scene completion: Towards task-agnostic collaborative perception. In Proceedings of The 6th Conference on Robot Learning ,pages 2062–2072. PMLR, 2022.[15] Y . Lu, Q. Li, B. Liu, M. Dianati, C. Feng, S. Chen, and Y . Wang. Robust collaborative 3dobject detection in presence of pose errors. In IEEE International Conference on Robotics andAutomation (ICRA) , May 2023.[16] N. Glaser, Y .-C. Liu, J. Tian, and Z. Kira. Overcoming obstructions via bandwidth-limitedmulti-agent spatial handshaking. In 2021 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 2406–2413. IEEE, 2021.[17] Y . Zhou, J. Xiao, Y . Zhou, and G. Loianno. Multi-robot collaborative perception with graphneural networks. IEEE Robotics and Automation Letters , 7(2):2289–2296, 2022.[18] R. Xu, Z. Tu, H. Xiang, W. Shao, B. Zhou, and J. Ma. Cobevt: Cooperative bird’s eye viewsemantic segmentation with sparse transformers. In Proceedings of The 6th Annual Conferenceon Robot Learning , pages 989–1000. PMLR, 2022.[19] Y . Hu, Y . Lu, R. Xu, W. Xie, S. Chen, and Y . Wang. Collaboration helps camera overtake lidarin 3d detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 9243–9252, 2023.[20] H. Zhang, G. Luo, Y . Cao, Y . Jin, and Y . Li. Multi-modal virtual-real fusion based trans-former for collaborative perception. In 2022 IEEE 13th International Symposium on ParallelArchitectures, Algorithms and Programming (PAAP) , pages 1–6, 2022.[21] H. Yu, Y . Luo, M. Shu, Y . Huo, Z. Yang, Y . Shi, Z. Guo, H. Li, X. Hu, J. Yuan, et al. Dair-v2x:A large-scale dataset for vehicle-infrastructure cooperative 3d object detection. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 21361–21370, 2022.[22] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polo-sukhin. Attention is all you need. Advances in neural information processing systems , 30,2017.[23] H. Xiang, R. Xu, and J. Ma. Hm-vit: Hetero-modal vehicle-to-vehicle cooperative percep-tion with vision transformer. In Proceedings of the IEEE/CVF International Conference onComputer Vision (ICCV) , pages 284–295, October 2023.[24] Y .-J. Li, J. Park, M. O’Toole, and K. Kitani. 
Modality-agnostic learning for radar-lidar fusionin vehicle detection. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , pages 918–927, 2022.[25] A. H. Lang, S. V ora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom. Pointpillars: Fast en-coders for object detection from point clouds. In Proceedings of the IEEE/CVF conference oncomputer vision and pattern recognition , pages 12697–12705, 2019.[26] C. Reading, A. Harakeh, J. Chae, and S. L. Waslander. Categorical depth distribution networkfor monocular 3d object detection. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 8555–8564, 2021.[27] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V . Koltun. Carla: An open urban drivingsimulator. In Conference on robot learning , pages 1–16. PMLR, 2017.[28] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,N. Gimelshein, L. Antiga, et al. Pytorch: An imperative style, high-performance deep learninglibrary. Advances in neural information processing systems , 32, 2019.[29] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprintarXiv:1711.05101 , 2017.10AppendixA MethodA.1 MotivationOur motivation comes from two aspects. Firstly, depth is the major gap for RGB image to liftup to voxels, but part of correct depths could be acquired from LiDARs of an agent or nearbyagents. Thus, depth information could be transmitted among modalities of agents. This allowscompensation from LiDAR in different views, and reduces the uncertainty of the infinite depthprediction. Secondly, the transmitted data should include detection clues to provide refined andcomplementary information, which can fundamentally overcome inevitable limitations detected bysingle modality or single agent.A.2 Coordinate transformation among LiDAR, camera and RGB imageIf the LiDAR coordinate system is regarded as the world coordinate system, the 3D coordinate ofpoint Wcould be:Wworld ="xworldyworldzworld#(6)We also have the camera coordinate and image coordinate of point WasWcam="xcamycamzcam#, Wimg=ximgyimg(7)Its world homogeneous coordinate in camera coordinate system and its camera homogeneous coor-dinate in image coordinate system areWworld h=xworldyworldzworld1, Wimgh="ximgyimg1#(8)Suppose that Eis the transformation matrix from LiDAR coordinate system to camera coordinatesystem and Iis the transformation matrix from camera coordinate system to image coordinate sys-tem, The inverse matrix of Eand matrix IareE−14×4=rx1ry1rz1txrx2ry2rz2tyrx3ry3rz3tz0 0 0 1, I3×3="fx0u0fyv0 0 1#(9)Generally, EandIare the extrinsic matrix and intrinsic matrix of camera, which are given bydataset [21].Then we haveWcamh=E4×4∗Wworld h, Wimg=1zcam∗I3×3∗Wcam (10)A.3 Generate preference mapFor example, when the voxel set is composed of {vhybrid , vcamera , vnormal }, thevhybrid is the pre-ferred voxel and the threshold of corresponding cell in preference map is assigned as hybrid thresh-old; When the voxel set is composed of {vLiDAR , v1normal , v2normal }, thevLiDAR is the preferredvoxel and the threshold of corresponding cell in preference map is assigned as LiDAR threshold.We also visualize four typical cases when generating one cell of preference map, which are shownin Fig 5.11Figure 5: Four typical cases when generating one cell of preference map.A.4 Robustness against missing sensorThere are mainly 2 cases of missing sensor. Case 1 is shown in Fig 6a. 
Suppose that the camera sen-sor is now absent and RGB image is not available. In modal fusion step, LiDAR voxels and normalvoxels remain, and they are used as the fusion results through identity mapping. In collaborativefusion step, since no hybrid voxels exist, the threshold of cells is uniformly set as 0.5. The paradigmis similar in case 2 when LiDAR sensor is missing. The modal fusion step is shown in Fig 6b. Byconducting this, BM2CP adapts well to the modality-unavailable cases.Figure 6: The workflow of biased multi-modal fusion when camera (a) or LiDAR (b) is missing.Existing LiDAR voxels and normal voxels in case (a), or camera voxels and normal voxels in case(b) will conduct identity mapping as the final fusion results. Colors indicate different modal voxels:orange for camera, blue for LiDAR, green for LiDAR-camera hybrid, and gray for normal. Trans-parent cells and dashed arrows indicate the corresponding type of voxel features do not exist.A.5 Robust BM2CPBM2CP-robust corrects relative pose before depth and feature communication. Specifically, single-agent 3D object detection is first conducted to estimate local bounding boxes and their uncertaintyfor each agent with LiDAR voxel features. Then we conduct internal agent-object pose graph opti-mization [15] for each agent to correct relative pose ξj→i, where iandjare ego agent and nearbyagent, respectively. The corrected relative pose is used to correct shared depth maps and BEV fea-tures.12B ExperimentsB.1 Detailed settings of architectureWe follow the default settings in OpenCOOD [9] codebase, which is also shown in Tab 4.Table 4: Details of unified network architecture.Blocks SettingsV oxel Feature Encoder (VFE) use normalization and absolute 3D coordinates, 64 filtersPointPillar Scatter 64-channel outputBEV backboneResNet backbone:layers= [3,4,5]strides= [2,2,2]filters= [64,128,256]upsample strides= [1,2,4]upsample filters= [128,128,128]Shrink Header shrink from 384 channels to 256 channels with stride 3Detect Head 256-channel output with 2 anchorsB.2 Detailed settings of experimentsTable 5: Details of unified network architecture.Method optimizer lr schedule initial lrNo Fusion Adam multistep 1e-3Late Fusion Adam multistep 1e-3When2com (CVPR’20) Adam multistep 1e-3V2VNet (ECCV’20) Adam multistep 1e-3DiscoNet (NeurIPS’21) Adam multistep 2e-3CoBEVT (CoRL’22) Adam multistep 2e-3V2X-ViT (ECCV’22) Adam multistep 2e-3Where2comm (NeurIPS’22) Adam multistep 2e-3BM2CP Adam multistep 1e-3B.3 More VisualizationsVisualization of projected depths. Fig 7 shows how depth distribution is empowered by LiDARinprojection design. In the scene, objects show evident contour against the background with lightercoloring. And more depths from nearby agents will further fill the depth in empty (white pixel inimage).Figure 7: Two visualizations of projected depths from LiDAR coordinates to image coordinates.Paired arrows in colors indicate the same objects including car and sign in LiDAR data (left), cameradata (right) and projected depth map (middle), respectively.13More visualizations of comparison with SOTA methods. Fig 8 shows more comparisons with NoFusion ,V2X-ViT andWhere2comm .Figure 8: More qualitative results in DAIR-V2X dataset. Ground truths are colored in green andpredictions are colored in red.B.4 More Ablation StudiesNumber of agents. We study the influence brought by the number of collaborative agents. 
As shown in Table 6a, increasing the number of collaborative agents generally brings performance improvements on the OPV2V dataset, whereas the gain becomes marginal once the number exceeds 4.

Robustness to camera dropout. We report the performance when the ego agent carries n ∈ [1, 4] cameras in Table 6b. The performance decreases with fewer cameras but remains at an acceptable level.

Table 6: Ablation studies of the number of agents and cameras on OPV2V dataset.
(a) The number of agents.
Number  AP@0.5  AP@0.7
1       58.05   28.78
2       67.83   34.66
3       75.73   48.57
4       79.29   57.85
5       83.72   63.17
(b) The number of cameras.
Number  AP@0.5  AP@0.7
1       79.84   59.07
2       80.02   60.55
3       82.57   61.68
4       83.72   63.17
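For reference, the following is a minimal NumPy sketch of the LiDAR-to-image projection used for cooperative depth generation (Eq. (4) in Sec. 3.2 and Eq. (10) in Appendix A.2), including the rule of keeping the minimum depth when several points fall into the same pixel. The image size, number of depth bins, and maximum range are illustrative assumptions rather than the values used in the experiments.

```python
import numpy as np

def project_lidar_to_image(points_xyz, T_lidar_to_cam, K):
    """Project LiDAR points to pixels following Eq. (4)/(10): camera extrinsics, then intrinsics."""
    pts_h = np.concatenate([points_xyz, np.ones((len(points_xyz), 1))], axis=1)  # homogeneous coords
    cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]          # points in the camera frame
    cam = cam[cam[:, 2] > 0]                           # keep points in front of the camera
    uvw = (K @ cam.T).T                                # apply the 3x3 intrinsic matrix
    u, v = uvw[:, 0] / cam[:, 2], uvw[:, 1] / cam[:, 2]
    return u, v, cam[:, 2]                             # pixel coordinates and per-point depths

def rasterize_depth(u, v, d, H, W, num_bins=64, max_depth=80.0):
    """Keep the minimum depth per pixel (Sec. 3.2) and map it to discrete depth bins."""
    depth = np.full((H, W), np.inf)
    ui, vi = u.astype(int), v.astype(int)
    keep = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    np.minimum.at(depth, (vi[keep], ui[keep]), d[keep])
    bins = np.full((H, W), -1, dtype=int)              # -1 marks pixels with no projected depth,
    valid = np.isfinite(depth)                         # which fall back to the predicted depth
    bins[valid] = np.clip((depth[valid] / max_depth * num_bins).astype(int), 0, num_bins - 1)
    return depth, bins
```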
PK2debCKaG | Language Conditioned Traffic GenerationShuhan Tan1Boris Ivanovic2Xinshuo Weng2Marco Pavone2Philipp Krähenbühl11UT Austin2NVIDIAAbstract: Simulation forms the backbone of modern self-driving development.Simulators help develop, test, and improve driving systems without putting hu-mans, vehicles, or their environment at risk. However, simulators face a majorchallenge: They rely on realistic, scalable, yet interesting content. While re-cent advances in rendering and scene reconstruction make great strides in cre-ating static scene assets, modeling their layout, dynamics, and behaviors remainschallenging. In this work, we turn to language as a source of supervision for dy-namic traffic scene generation. Our model, LCTGen , combines a large languagemodel with a transformer-based decoder architecture that selects likely map loca-tions from a dataset of maps, produces an initial traffic distribution, as well as thedynamics of each vehicle. LCTGen outperforms prior work in both unconditionaland conditional traffic scene generation in-terms of realism and fidelity. Code anddemo available at https://ariostgx.github.io/lctgen .Keywords: Self-driving, Content generation, Large language model1 IntroductionDriving simulators stand as a cornerstone in self-driving development. They aim to offer a con-trolled environment to mimic real-world conditions and produce critical scenarios at scale. Towardsthis end, they need to be highly realistic (to capture the complexity of real-world environments),scalable (to produce a diverse range of scenarios without excessive manual effort), and able to cre-ateinteresting traffic scenarios (to test self-driving agents under different situations).In this paper, we turn to natural language as a solution. Natural language allows practitioners toeasily articulate interesting and complex traffic scenarios through high-level descriptions. Insteadof meticulously crafting the details of each individual scenario, language allows for a seamlessconversion of semantic ideas into simulation scenarios at scale. To harness the capacity of naturallanguage, we propose LCTGen .LCTGen takes as input a natural language description of a trafficscenario, and outputs traffic actors’ initial states and motions on a compatible map. As we will showin Section 5, LCTGen generates realistic traffic scenarios that closely adhere to a diverse range ofnatural language descriptions, including detailed crash reports [1].The major challenge of language-conditioned traffic generation is the absence of a shared repre-sentation between language and traffic scenarios. Furthermore, there are no paired language-trafficdatasets to support learning such a representation. To address these challenges, LCTGen (see Fig-ure 1) uses a scenario-only dataset and a Large Language Model (LLM). LCTGen has three modules:Interpreter ,Generator andEncoder . Given any user-specified natural language query, the LLM-powered Interpreter converts the query into a compact, structured representation. Interpreteralso retrieves an appropriate map that matches the described scenario from a real-world map library.Then, the Generator takes the structured representation and map to generate realistic traffic scenar-ios that accurately follow the user’s specifications. Also, we design the Generator as a query-basedTransformer model [2], which efficiently generates the full traffic scenario in a single pass.This paper presents three main contributions:1. 
We introduce LCTGen , a first-of-its-kind model for language-conditional traffic generation.2. We devise a method to harness LLMs to tackle the absence of language-scene paired data.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.G e n e r a t o r I n t e r p r e t e r ( L L M )“ e g o v e h i c l e t u r n s r i g h t a t a n i n t e r s e c t i o n ”R e t r i e v a lS t r u c t u r e d R e p r e s e n t a t i o nM a pT e x t D e s c r i p t i o n O u t p u t S c e n a r i o. . .M a p D a t a s e tM a p : [ 2 , 2 , 1 , 1 , 0 , 2 ]V 1 : [ - 1 , 0 , 0 , 2 , 1 , 1 , 1 , 1 ]V 2 : [ 2 , 0 , 0 , 3 , 4 , 4 , 4 , 4 ]V 3 : [ 0 , 0 , 2 , 2 , 4 , 4 , 4 , 4 ]Figure 1: Overview of our LCTGen model.3.LCTGen exhibits superior realism and controllability over prior work. We also show LCTGencan be applied to instructional traffic editing and controllable self-driving policy evaluation.2 Related WorkTraffic scenario generation traditionally rely on rules defined by human experts [3], e.g., rules thatenforce vehicles to stay in lanes, follow the lead vehicles [4, 5, 6] or change lanes [7]. This approachis used in most virtual driving datasets [8, 9, 10, 11] and simulators [12, 3, 13]. However, trafficscenarios generated in this way often lack realism as they do not necessarily match the distributionof real-world traffic scenarios. Moreover, creating interesting traffic scenarios in this way requiresnon-trivial human efforts from experts, making it difficult to scale. In contrast, LCTGen learns thereal-world traffic distribution for realistic traffic generation. Also, LCTGen can generate interestingscenarios with language descriptions, largely reducing the requirement of human experts.Prior work in learning-based traffic generation is more related to our work. SceneGen [14] uses au-toregressive models to generate traffic scenarios conditioned on ego-vehicle states and maps. Traf-ficGen [15] applies two separate modules for agent initialization and motion prediction. BITS [16]learns to simulate agent motions with a bi-level imitation learning method. Similar to LCTGen , thesemethods learn to generate realistic traffic scenarios from real-world data. However, they lack theability to control traffic generation towards users’ preferences. In contrast, LCTGen achieves suchcontrollability via natural languages and at the same time can generate highly realistic traffic sce-narios. Moreover, we will show in the experiments that LCTGen also outperforms prior work in thesetting of unconditional traffic reconstruction, due to our query-based end-to-end architecture.Text-conditioned generative models have recently shown strong capabilities for controllable con-tent creation for image [17], audio [18], motion [19], 3D object [20] and more. DALL-E [17] usesa transformer to model text and image tokens as a single stream of data. Noise2Music [18] usesconditioned diffusion models to generate music clips from text prompts. MotionCLIP [19] achievestext-to-human-motion generation by aligning the latent space of human motion with pre-trainedCLIP [21] embedding. These methods typically require large-scale pairs of content-text data fortraining. Inspired by prior work, LCTGen is the first-of-its-kind model for text-conditioned traf-fic generation. 
Also, due to the use of LLM and our design of structured representation, LCTGenachieves text-conditioned generation without any text-traffic paired data.Large language models have become increasingly popular in natural language processing and re-lated fields due to their ability to generate high-quality text and perform language-related tasks.GPT-2 [22] is a transformer-based language model that is pre-trained on vast amounts of text data.Following this trend, GPT-3 [23] shows strong in-context-learning capacity, and InstructGPT [24]improves the instruction following capacity by fine-tuning with human feedback to better align themodels with their users. More recently, GPT-4 [25] demonstrates strong performance in both in-context learning and instruction following. In our work, we adapt the GPT-4 model [25] with chain-of-thought [26] prompting method as our Interpreter .3 PreliminariesLetmbe a map region, and stbe the state of all vehicles in a scene at time t. A traffic scenario τ=(m,s1:T)is the combination of a map region mandTtimesteps of vehicle states s1:T= [s1, ...,sT].2G P T 4S u m m a ry : V 1 a p p r o a c h e s a n i n t e r s e c t i o n a n d d o e s n o t n o t i c e V 2 a h e a d . . .E x p l a n a t i o n : [ V 1 ] - B e c a u s e V 1 i s m o v i n g , w e a s s u m e V 1 ' s i n i t i a l s p e e d i s 1 0 m / s ( i n d e x 4 ) . V 1 k e e p s g o i n g s t r a i g h t , s o i t s a c t i o n s a r e a l l 4 ( k e e p s p e e d ) . [ V 2 ] - A s V 1 i s m o v i n g s t r a i g h t a n d h i t s V 2 f r o m b e h i n d , V 2 i s i n f r o n t o f V 1 . . . .O u t p u t : - ' V 1 ' : [ - 1 , 0 , 0 , 4 , 4 , 4 , 4 , 4 ] - ' V 2 ' : [ 3 , 8 , 2 , 0 , 4 , 4 , 4 , 4 ] - ' M a p ' : [ 2 , 2 , 2 , 2 , 8 , 1 ]A s V e h i c l e 1 a p p r o a c h e d t h e i n t e r s e c t i o n , i t s d r i v e r d i d n o t n o t i c e t h e v e h i c l e s s t o p p e d a h e a d a t t h e t r a f f i c l i g h t . T h e t r a f f i c s i g n a l t u r n e d g r e e n a n d V e h i c l e 2 b e g a n t o s l o w l y m o v e f o r w a r d . T h e f r o n t a l p l a n e o f V e h i c l e 1 s t r u c k t h e r e a r p l a n e o f V e h i c l e 2 . . .Figure 2: Example Interpreter input and output. We only show partial texts for brevity.Map. We represent each map region mby a set of Slane segments denoted by m={v1, ..., v S}.Each lane segment includes the start point and endpoint of the lane, the lane type (center lane, edgelane, or boundary lane), and the state of the traffic light control.Vehicle states. The vehicle states st={s1t, ..., sNt}at time tconsist of Nvehicle. For each vehicle,we model the vehicle’s position, heading, velocity, and size. Following prior work [14, 15], wechoose the vehicle at the center of the scenario in the first frame as the ego-vehicle. It represents theself-driving vehicle in simulation platforms.4LCTGen : Language-Conditioned Traffic GenerationOur goal is to train a language-conditioned model τ∼LCTGen (L,M)that produces traffic scenar-ios from a text description Land a dataset of maps M. Our model consists of three main compo-nents: A language Interpreter (Section 4.1) that encodes a text description into a structured repre-sentation z. Map Retrieval m∼Retrieval (z,M)that samples matching map regions mfrom adataset of maps M. AGenerator (Section 4.3) that produces a scenario τ∼Generator (z, m)fromthe map mand structured representation z. All components are stochastic, allowing us to samplemultiple scenes from a single text description Land map dataset M. 
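To make this definition concrete, a minimal container for a scenario τ = (m, s1:T) might look as follows; the field encodings and array shapes are illustrative assumptions rather than the codebase's actual classes.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Scenario:
    """A traffic scenario tau = (m, s_1:T): a map region plus T timesteps of vehicle states."""
    map_region: np.ndarray   # S lane segments (assumed encoding: endpoints, lane type, signal state)
    states: list             # length-T list; states[t] stacks the N vehicles' states at step t

T, N, S = 50, 8, 128         # illustrative sizes
scenario = Scenario(
    map_region=np.zeros((S, 6)),
    states=[np.zeros((N, 6)) for _ in range(T)],  # assumed per-vehicle fields, e.g. x, y, heading, speed, length, width
)
```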
4.1 Interpreter

The Interpreter takes a text description L as input and produces a structured representation z = Interpreter(L). After defining the representation z, we show how to produce it via GPT-4 [25].

Structured representation. The representation z = [z^m, z^a_1, ..., z^a_N] contains both a map-specific component z^m and agent-specific components z^a_i. For each scenario, we use a 6-dimensional vector z^m describing the local map. It measures the number of lanes in each direction (north, south, east, west), the distance of the map center to an intersection, and the lane the ego-vehicle finds itself in. This compact abstraction allows a language model to describe the important properties of a map m and to interact with the map dataset M. For each agent i, z^a_i is an 8-dimensional integer vector describing the agent relative to the ego vehicle. It contains the agent's quadrant position index (1-4), distance range (0-20m, 20-40m, ...), orientation index (north, south, east, west), speed range (0-2.5m/s, 2.5-5m/s, ...), and action description (turn left, accelerate, ...). Please refer to Supp. A for a complete definition of z. Note that the representation z does not have a fixed length, as it depends on the number of agents in a scene.

Language interpretation. To obtain the structured representation, we use a large language model (LLM) and formulate the problem as a text-to-text transformation. Specifically, we ask GPT-4 [25] to translate the textual description of a traffic scene into a YAML-like description through in-context learning [27]. To enhance the quality of the output, we use Chain-of-Thought [26] prompting to let GPT-4 summarize the scenario in short sentences and plan agent-by-agent how to generate z. See Figure 2 for an example input and output. Refer to Supp. A for the full prompt and Supp. D.4 for more complete examples.

4.2 Retrieval

The Retrieval module takes a map representation z^m and the map dataset M, and samples map regions m ∼ Retrieval(z^m, M). Specifically, we preprocess the map dataset M into potentially overlapping map regions {m_1, m_2, ...}. We sample map regions such that their centers align with the locations of an automated vehicle in an offline driving trace. This ensures that each map region is both driveable and follows a natural distribution of vehicle locations. For each map m_j, we pre-compute its map representation ẑ^m_j. This is possible because the map representation is designed to be easy to produce both programmatically and by a language model. Given z^m, Retrieval ranks each map region m_j by the feature distance between z^m and ẑ^m_j. Finally, Retrieval randomly samples m from the top-K closest map regions.
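As a toy illustration of the representation and the retrieval step just described, the snippet below encodes the example vectors shown in Figure 1 and ranks map regions by an L1 distance between map vectors. The choice of L1 as the distance and all variable names are assumptions for illustration; Supp. A gives the authoritative definition of z.

# Toy encoding of the structured representation z and a simple retrieval ranking.
# The example vectors follow Figure 1; the L1 distance is an assumption, since the
# text only specifies a "feature distance" between map vectors.
import random

z_map = [2, 2, 1, 1, 0, 2]          # 6-dim map vector: lane counts, distance-to-intersection bin, ego lane id
z_agents = [
    [-1, 0, 0, 2, 1, 1, 1, 1],      # ego vehicle (first entry -1), speed bin 2, "turn left" actions
    [2, 0, 0, 3, 4, 4, 4, 4],       # nearby vehicle, speed bin 3, "keep speed" actions
    [0, 0, 2, 2, 4, 4, 4, 4],       # another nearby vehicle on a perpendicular lane
]

def retrieve(z_map, map_regions, top_k=10):
    """Rank map regions by distance between z_map and their pre-computed vectors, then sample from the top-K."""
    ranked = sorted(
        map_regions,
        key=lambda r: sum(abs(a - b) for a, b in zip(z_map, r["z_hat"])),
    )
    return random.choice(ranked[:top_k])["region"]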
Figure 3: Architecture of our Generator model. A map encoder produces per-lane map features; agent queries derived from the structured representation interact with these features in a generative transformer; MLP heads then predict each agent's position (via lane masks), attributes, and motion.

4.3 Generator

Given a structured representation z and a map m, the Generator produces a traffic scenario τ = Generator(z, m). We design the Generator as a query-based transformer model to efficiently capture the interactions between different agents and between agents and the map. It places all the agents in a single forward pass and supports end-to-end training. The Generator has four modules (Figure 3): 1) a map encoder that extracts per-lane map features F; 2) an agent query generator that converts the structured representation z^a_i into an agent query q_i; 3) a generative transformer that models agent-agent and agent-map interactions; and 4) a scene decoder that outputs the scenario τ.

Map encoder. The map encoder processes a map region m = {v_1, ..., v_S} with S lane segments v_i into a map feature F = {f_1, ..., f_S}, while fusing information across different lanes. Because S can be very large, we use multi-context gating (MCG) blocks [28] for efficient information fusion. MCG blocks approximate a transformer's cross-attention, but attend only to a single global context vector in each layer. Specifically, an MCG block takes a set of features v_{1:S} as input, computes a context vector c, and then combines features and context in the output v'_{1:S}. Formally, each block is implemented via

v'_i = MLP(v_i) ⊙ MLP(c), where c = MaxPool(v_{1:S}),

and ⊙ denotes the element-wise product. The encoder combines 5 MCG blocks with 2-layer MLPs.

Agent query generator. This module transforms the structured representation z^a_i of each agent i into an agent query q_i ∈ R^d. We implement it as an MLP over positional embeddings of the structured representation, q_i = MLP(PE(z^a_i)) + MLP(x_i), where PE(·) is a sinusoidal position encoding. We also add a learnable query vector x_i as input, inspired by the object queries in DETR [29].

Generative transformer. To model agent-agent and agent-map interactions, we use F = {f_1, ..., f_S} and Q = {q_1, ..., q_N} as inputs and pass them through multiple transformer layers. Each layer follows Q' = MHCA(MHSA(Q), F), where MHCA and MHSA denote multi-head cross-attention and multi-head self-attention, respectively [2]. The output Q' of each layer is used as the query for the next layer to cross-attend to F. The outputs of the last layer, Q*, are the agent features.

Scene decoder. For each agent feature q*_i, the scene decoder produces the agent's position, attributes, and motion using MLP heads. To decode the position, we draw inspiration from MaskFormer [30], placing each actor on a lane segment in the map. This allows us to explicitly model the positional relationship between each actor and the road map. Specifically, we employ an MLP to turn q*_i into an agent mask embedding e^agent_i ∈ R^d. Likewise, we transform each lane feature f_j into a per-lane map mask embedding e^lane_j ∈ R^d. The position prediction p̂_i ∈ R^S for the i-th agent is then

p̂_i = softmax(e^agent_i × [e^lane_1, ..., e^lane_S]^T).

For each agent query, we predict its attributes, namely heading, velocity, size, and position shift from the lane segment center, following Feng et al. [15]. The attribute distribution of a potential agent is modeled with a Gaussian mixture model (GMM). The parameters of a K-way GMM for each attribute of agent i are predicted as [μ_i, Σ_i, π_i] = MLP(q*_i), where μ_i, Σ_i, and π_i denote the mean, diagonal covariance matrix, and categorical weights of the K-way GMM.

We further predict the future T−1 steps of motion for each agent by outputting K' potential future trajectories: {pos^{2:T}_{i,k}, prob_{i,k}}_{k=1}^{K'} = MLP(q*_i), where pos^{2:T}_{i,k} represents the k-th trajectory states for the T−1 future steps, and prob_{i,k} is its probability. Specifically, for each timestamp t, pos^t_{i,k} = (x, y, θ) contains the agent's position (x, y) and heading θ at time t.
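The following PyTorch-style sketch illustrates the MCG gating and the mask-based position head described above. Hidden sizes and module structure are assumptions for illustration and do not reproduce the released implementation.

# Illustrative PyTorch sketch of an MCG block and the MaskFormer-style position head.
import torch
import torch.nn as nn

class MCGBlock(nn.Module):
    """Implements v'_i = MLP(v_i) * MLP(c) with context c = MaxPool(v_{1:S})."""
    def __init__(self, dim):
        super().__init__()
        self.feat_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.ctx_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, v):                          # v: [S, dim] lane features
        c = v.max(dim=0).values                    # global context via max-pooling over lanes
        return self.feat_mlp(v) * self.ctx_mlp(c)  # element-wise gating of per-lane features

def position_distribution(agent_feature, lane_features, agent_head, lane_head):
    """Per-lane placement probabilities: softmax of agent-lane embedding similarities."""
    e_agent = agent_head(agent_feature)            # [d] agent mask embedding
    e_lanes = lane_head(lane_features)             # [S, d] per-lane mask embeddings
    return torch.softmax(e_lanes @ e_agent, dim=0) # [S] distribution over lane segments

# Example usage with assumed dimensions:
# d = 256
# encoder = nn.Sequential(*[MCGBlock(d) for _ in range(5)])
# agent_head, lane_head = nn.Linear(d, d), nn.Linear(d, d)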
During inference, we randomly sample values from the predicted position, attribute, and motion distributions of each agent query to generate an output agent status over T time stamps, s^i_{1:T}. For categorical distributions, we select the category with the highest probability. For GMMs, we randomly sample a value from the model. Compiling the output for all agents, we derive the vehicle states s_{1:T}. In conjunction with m, the Generator outputs the final traffic scenario τ = (m, s_{1:T}).

4.4 Training

The Generator is the only component of LCTGen that needs to be trained. We use real-world self-driving datasets composed of D traffic scenarios {τ_j}_{j=1}^{D}. For each traffic scene, we use an Encoder to produce the latent representation z, then train the Generator to reconstruct the scenario.

Encoder. The Encoder takes a traffic scenario τ and outputs the structured agent representation z^a = Encoder(τ). As mentioned in Section 4.1, z^a contains a compact abstract vector for each agent, {z^a_1, ..., z^a_N}. For each agent i, the Encoder extracts its position, heading, speed, and trajectory from the ground-truth scene measurements s^i_{1:T} in τ, and converts them into z^a_i following a set of predefined rules. For example, it obtains the quadrant position index from the signs of the (x, y) position. In this way, we can use the Encoder to automatically convert any scenario τ into latent codes z. This allows us to obtain a paired dataset (m, s_{1:N}, z^a_{1:N}) from a scenario-only driving dataset.

Training objective. For each data sample (m, s_{1:N}, z^a_{1:N}), we generate a prediction p = Generator(z, m). The objective is to reconstruct the real scenario τ. We compute the loss as

L(p, τ) = L_position(p, τ) + L_attr(p, τ) + L_motion(p, τ),   (1)

where L_position, L_attr, and L_motion are losses for each of the different predictions. We pair each agent in p with a ground-truth agent in τ based on the sequential ordering of the structured agent representation z^a, and then calculate the loss values for each component. For L_position, we use a cross-entropy loss between the categorical output p̂ and the ground-truth lane segment id. For L_attr, we use a negative log-likelihood loss, computed using the predicted GMM on the ground-truth attribute values. For L_motion, we use an MSE loss on the predicted trajectory closest to the ground-truth trajectory. The training objective is the expected loss L over the dataset. We refer readers to Supp. B for more detailed formulations of the loss functions.
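A hedged sketch of the three loss terms in Eq. (1) is given below. It follows the descriptions above and in Supp. B, but the exact reductions, weightings, and tensor shapes in the released code may differ.

# Illustrative sketch of the reconstruction losses in Eq. (1); see Supp. B for the exact forms.
import torch
import torch.nn.functional as F
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

def position_loss(lane_logits, gt_lane_id):
    # Cross-entropy between the per-lane categorical prediction and the ground-truth lane id (long tensor).
    return F.cross_entropy(lane_logits.unsqueeze(0), gt_lane_id.view(1))

def attribute_nll(mu, sigma, pi_logits, gt_value):
    # Negative log-likelihood of the ground-truth attribute under the predicted K-way GMM.
    gmm = MixtureSameFamily(
        Categorical(logits=pi_logits),         # mixture weights, shape [K]
        Independent(Normal(mu, sigma), 1),     # diagonal Gaussians, mu/sigma shape [K, D]
    )
    return -gmm.log_prob(gt_value)

def motion_loss(pred_trajs, pred_probs, gt_traj):
    # MSE on the predicted trajectory closest to ground truth, plus a term encouraging
    # a high probability for that closest mode (winner-takes-all, as in multi-path prediction).
    dists = ((pred_trajs - gt_traj.unsqueeze(0)) ** 2).mean(dim=(1, 2))  # [K'] per-mode errors
    k_star = dists.argmin()
    return dists[k_star] - torch.log(pred_probs[k_star] + 1e-8)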
We train Generator withAdamW [32] for 100 epochs, with a learning rate of 3e-4 and batch size of 64.5MethodInitialization MotionPos Heading Speed Size mADE mFDE SCRTrafficGen [15] 0.2002 0.1524 0.2379 0.0951 10.448 20.646 5.690MotionCLIP [19] 0.1236 0.1446 0.1958 0.1234 6.683 13.421 8.842LCTGen (w/oz) 0.1319 0.1418 0.1948 0.1092 6.315 12.260 8.383LCTGen 0.0616 0.1154 0.0719 0.1203 1.329 2.838 6.700Table 1: Traffic scenario generation realism evaluation (lower the better).5.1 Scene Reconstruction EvaluationWe evaluate the quality of LCTGen ’s generated scenarios by comparing them to real scenarios fromthe driving dataset. For each scenario sample (τ, z, m )in the test dataset, we generate a scenariowithˆτ=Generator (z, m)and then compute different metrics with τandˆτ.Metrics . To measure the realism of scene initialization, we follow [14, 15] and compute the maxi-mum mean discrepancy ( MMD [33]) score for actors’ positions, headings, speed and sizes. Specif-ically, MMD measures the distance between two distributions qandp. For each pair of real andgenerated data (τ,ˆτ), we compute the distribution difference between them per attribute. To mea-sure the realism of generated motion behavior, we employ the standard mean average distance error(mADE ) and mean final distance error ( mFDE ). For each pair of real and generated scenarios (τ,ˆτ),we first use the Hungarian algorithm to compute a matching based on agents’ initial locations withtheir ground-truth location. We then transform the trajectory for each agent based on its initial po-sition and heading to the origin of its coordinate frame, to obtain its relative trajectory. Finally, wecompute mADE and mFDE using these relative trajectories. We also compute the scenario collisionrate ( SCR ), which is the average proportion of vehicles involved in collisions per scene.Baselines . We compare against a state-of-the-art traffic generation method, TrafficGen [15]. AsTrafficGen only takes a map mas input to produce a scenario τ, we train a version of LCTGen thatalso only uses mas input for a fair comparison, referred to as LCTGen (w/oz). We also compareagainst MotionCLIP [19], which takes both a map mand text Las input to generate a scenario τ.Please refer to Supp. C for the implementation details of each baseline.Results . The results in Table 1 indicate the superior performance of LCTGen . In terms of sceneinitialization, LCTGen (w/oz) outperforms TrafficGen in terms of MMD values for the Position,Heading, and Speed attributes. Importantly, when conditioned on the language input L,LCTGensignificantly improves its prediction of Position, Heading, and Speed attributes, significantly out-performing both TrafficGen and MotionCLIP on MMD ( >2×).LCTGen also achieves 7-8x smallermADE and mFDE than baselines when comparing generated motions. The unconditional versionofLCTGen , without z, also outpaces TrafficGen in most metrics, demonstrating the effectivenessofGenerator ’s query-based, end-to-end transformer design. We note that LCTGen (w/o) zhas anon-par Size-MMD score with TrafficGen, which is lower than LCTGen . We conjecture that this isbecause our model learns spurious correlations of size and other conditions in zin the real data.5.2 Language-conditioned Simulation EvaluationLCTGen aims to generate a scenario τthat accurately represents the traffic description from the inputtextL. Since no existing real-world text-scenario datasets are available, we carry out our experimentusing text Lfrom a text-only traffic scenario dataset. 
5.2 Language-conditioned Simulation Evaluation

LCTGen aims to generate a scenario τ that accurately represents the traffic description in the input text L. Since no real-world text-scenario datasets are available, we carry out our experiments using text L from text-only traffic scenario datasets. To evaluate the degree of alignment between each generated scenario and the input text, we conduct a human study: we visualize the output scenario τ generated by LCTGen or the baselines and ask humans to assess how well it matches the input text.

Datasets. We use a challenging real-world dataset, the Crash Report dataset [1], provided by the NHTSA. Each entry in this dataset comprises a comprehensive text description of a crash scenario, including the vehicle's condition, driver status, road condition, vehicle motion, interactions, and more. Given the complexity and intricate nature of the traffic scenarios and their text descriptions, this dataset presents a significant challenge (see Figure 2 for an example). We selected 38 cases from this dataset for our study. For a more controllable evaluation, we also use an Attribute Description dataset. This dataset comprises text descriptions that highlight various attributes of a traffic scenario, including sparsity ("the scenario is dense"), position ("there are vehicles on the left"), speed ("most cars are driving fast"), and the ego vehicle's motion ("the ego vehicle turns left"). We create more complex descriptions by combining 2, 3, and 4 attributes. This dataset includes 40 such cases. Refer to Supp. C for more dataset details.

Table 2: Human study results on the language-conditioned simulation.

Method | Crash Report: Ours Preferred (%) | Crash Report: Score (1-5) | Attribute Description: Ours Preferred (%) | Attribute Description: Score (1-5)
TrafficGen [15] | 92.35 | 1.58 | 90.48 | 2.43
MotionCLIP [19] | 95.29 | 1.65 | 95.60 | 2.10
LCTGen | - | 3.86 | - | 4.29

Figure 4: Qualitative results on text-conditioned generation, pairing input texts from the Crash Report and Attribute Description datasets with the corresponding generated scenarios.

Baselines. We compare with TrafficGen and MotionCLIP. For each text input L, LCTGen outputs a scenario τ = (m, s_{1:T}). To ensure fairness, we feed the same map m to both TrafficGen and MotionCLIP so that they generate scenarios on the same map. As TrafficGen does not take a language condition as input, we only feed L to MotionCLIP. In addition, TrafficGen cannot automatically decide the number of agents; it therefore uses the same number of agents as our output τ.

Human study protocol. For each dataset, we conduct a human A/B test. We present the evaluators with a text input, along with a pair of scenarios generated by two different methods using the same text input, displayed in a random order. The evaluators are asked to decide which scenario better matches the text input. Additionally, evaluators assign a score between 1 and 5 to each generated scenario, indicating its alignment with the text description; a higher score indicates a better match. A total of 12 evaluators participated in this study, collectively contributing 1872 scores for each model.

Quantitative Results. We show the results in Table 2.
We report the preference score, reflecting the frequency with which LCTGen's output is chosen as a better match than each baseline's, as well as the average matching score, indicating the extent to which evaluators believe the generated scenario matches the text input. With LCTGen chosen as the preferred model by human evaluators at least 90% of the time, and consistently achieving higher scores than the other methods, these results underline its superior text-controllability over previous works. The high matching score also reflects LCTGen's ability to generate scenarios that faithfully follow the input text. We include more analysis of the human study results in Supp. D.3.

Qualitative Results. We show examples of LCTGen outputs given texts from the Crash Report (left two) and Attribute Description (right two) datasets in Figure 4. Each example is a pair of an input text and the generated scenario. Because texts in Crash Report are excessively long, we only show the output summary of our Interpreter for each example (full texts in Supp. C). Please refer to the Supp. video for animated versions of these examples. We show more examples in Supp. D.2.

5.3 Application: Instructional Traffic Scenario Editing

Besides language-conditioned scenario generation, LCTGen can also be applied to instructional traffic scenario editing. Given either a real or a generated traffic scenario τ, along with an editing instruction text I, LCTGen can produce an edited scenario ˆτ that follows I. First, we acquire the structured representation of the scenario using z = Encoder(τ). Next, we compose a dedicated prompt that instructs the Interpreter to alter z in accordance with I, resulting in ˆz = Interpreter(z, I). Finally, we generate the edited scenario ˆτ = Generator(ˆz, m), where m is the same map used in the input.

We show an example of consecutive instructional editing of a real-world scenario in Figure 5. LCTGen supports high-level editing instructions (vehicle removal, addition, and action change) and produces realistic outputs that follow the instructions. This experiment highlights LCTGen's potential for efficient instruction-based traffic scenario editing. As another application, we also show how LCTGen can be utilized to generate interesting scenarios for controllable self-driving policy evaluation; please refer to Supp. D.1 for this application.

Figure 5: Instructional editing on a real-world scenario, with instructions such as "make the car in front turn left", "remove all the horizontal cars", "add more cars on the left", and "speed up same-direction cars". Refer to Supp. A for full prompts.

Table 3: Scene reconstruction ablation study on the Waymo Open Dataset.
(a) Ablation study for scene initialization.
Method | Pos | Heading | Speed | Size
w/o Quad. | 0.092 | 0.122 | 0.076 | 0.124
w/o Dist. | 0.071 | 0.124 | 0.073 | 0.121
w/o Ori. | 0.067 | 0.132 | 0.082 | 0.122
LCTGen | 0.062 | 0.115 | 0.072 | 0.120
(b) Ablation study for motion behavior generation.
Method | mADE | mFDE | SCR
w/o Speed | 2.611 | 5.188 | 7.150
w/o Action | 2.188 | 5.099 | 7.416
LCTGen init. + [15] motion | 2.467 | 5.682 | 5.210
LCTGen | 1.329 | 2.838 | 6.700

5.4 Ablation study

Scene initialization. Table 3 summarizes the results, where the last row corresponds to our full method. To validate the performance of LCTGen for scene initialization, we mask out the quadrant index, distance, and orientation in the structured representation z for each agent, respectively.
Asa result, we observed a significant performance drop, especially in the prediction of Position andHeading attributes, as shown in the left side of Table 3. This suggests that including quadrant index,distance, and orientation in our structured representation is effective.Motion behavior generation. We summarized the results in Table 3 (right). By masking out thespeed range and action description in the structured representation for each agent, we observed asignificant performance drop in the metrics for motion behavior. Moreover, if we initialize the scenewith LCTGen while generating agents’ motion behavior using TrafficGen’s [15], we also observedsignificantly worse performance than using LCTGen to generate the traffic scenario in one shot. Theresults suggest that the end-to-end design of scene initialization and motion behavior generation byourLCTGen can lead to better performance. We show more ablation study results in Supp. D.5.6 ConclusionIn this work, we present LCTGen , a first-of-its-kind method for language-conditioned traffic scenegeneration. By harnessing the expressive power of natural language, LCTGen can generate realisticand interesting traffic scenarios. The realism of our generated traffic scenes notably exceeds previ-ous state-of-the-art methods. We further show that LCTGen can be applied to applications such asinstructional traffic scenario editing and controllable driving policy evaluation.Limitations. The primary constraint of LCTGen lies in the Interpreter module’s inability to outputperfect agent placements and trajectories, as it lacks direct access to detailed lane information fromthe map. Our future work aims to overcome these issues by equipping the Interpreter with map andmath APIs, enabling it to fetch precise map data and output more comprehensive traffic scenarios.8Acknowledgement. We thank Yuxiao Chen, Yulong Cao, and Danfei Xu for their insightful discus-sions. We thank all the human evaluation participants for their time and effort in our experiments.We also appreciate the constructive comments from the anonymous reviewers. This material issupported by the National Science Foundation under Grant No. IIS-1845485.References[1] National Highway Traffic Safety Administration. Crash injury research engineering network.https://crashviewer.nhtsa.dot.gov/CIREN/SearchIndex , 2016.[2] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, andI. Polosukhin. Attention is all you need. In I. Guyon, U. V . Luxburg, S. Bengio, H. Wallach,R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Process-ing Systems , volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper _files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf .[3] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y .-P. Flötteröd, R. Hilbrich, L. Lücken,J. Rummel, P. Wagner, and E. Wießner. Microscopic traffic simulation using sumo. In The21st IEEE International Conference on Intelligent Transportation Systems . IEEE, 2018. URLhttps://elib.dlr.de/124092/ .[4] L. Papaleondiou and M. Dikaiakos. Trafficmodeler: A graphical tool for programming micro-scopic traffic simulators through high-level abstractions. In VETECS , pages 1 – 5, 05 2009.doi:10.1109/VETECS.2009.5073891.[5] J. Maroto, E. Delso, J. Félez, and J. Cabanellas. Real-time traffic simulation with a microscopicmodel. Intelligent Transportation Systems, IEEE Transactions on , 7:513 – 527, 01 2007. doi:10.1109/TITS.2006.883937.[6] F. E. Gunawan. 
Two-vehicle dynamics of the car-following models on realistic driving con-dition. Journal of Transportation Systems Engineering and Information Technology , 12(2):67 – 75, 2012. ISSN 1570-6672. doi:https://doi.org/10.1016/S1570-6672(11)60194-3. URLhttp://www.sciencedirect.com/science/article/pii/S1570667211601943 .[7] J. Erdmann. Lane-changing model in sumo. In Proceedings of the SUMO2014 ModelingMobility with Open Data , volume 24, 05 2014.[8] M. Wrenninge and J. Unger. Synscapes: A photorealistic synthetic dataset for street sceneparsing. Arxiv , Oct 2018. URL http://arxiv.org/abs/1810.08705 .[9] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez. The synthia dataset: Alarge collection of synthetic images for semantic segmentation of urban scenes. In 2016 IEEEConference on Computer Vision and Pattern Recognition (CVPR) , pages 3234–3243, 2016.[10] M. Johnson-Roberson, C. Barto, R. Mehta, S. N. Sridhar, K. Rosaen, and R. Vasudevan. Driv-ing in the matrix: Can virtual worlds replace human-generated annotations for real world tasks?InIEEE International Conference on Robotics and Automation , pages 1–8, 2017.[11] S. R. Richter, V . Vineet, S. Roth, and V . Koltun. Playing for data: Ground truth from computergames. In ECCV , 2016.[12] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V . Koltun. CARLA: An open urban drivingsimulator. In Proceedings of the 1st Annual Conference on Robot Learning , pages 1–16, 2017.[13] A. Prakash, S. Boochoon, M. Brophy, D. Acuna, E. Cameracci, G. State, O. Shapira, andS. Birchfield. Structured domain randomization: Bridging the reality gap by context-awaresynthetic data. In ICRA , pages 7249–7255, 05 2019. doi:10.1109/ICRA.2019.8794443.9[14] S. Tan, K. Wong, S. Wang, S. Manivasagam, M. Ren, and R. Urtasun. Scenegen: Learningto generate realistic traffic scenes. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 892–901, 2021.[15] L. Feng, Q. Li, Z. Peng, S. Tan, and B. Zhou. Trafficgen: Learning to generate diverse andrealistic traffic scenarios. In IEEE International Conference on Robotics and Automation ,London, United Kingdom, May 2023.[16] D. Xu, Y . Chen, B. Ivanovic, and M. Pavone. BITS: Bi-level imitation for traffic simulation. InIEEE International Conference on Robotics and Automation (ICRA) , London, UK, May 2023.URL https://arxiv.org/abs/2208.12403 .[17] A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. V oss, A. Radford, M. Chen, and I. Sutskever.Zero-shot text-to-image generation. In International Conference on Machine Learning , pages8821–8831. PMLR, 2021.[18] Q. Huang, D. S. Park, T. Wang, T. I. Denk, A. Ly, N. Chen, Z. Zhang, Z. Zhang, J. Yu, C. Frank,et al. Noise2music: Text-conditioned music generation with diffusion models. arXiv preprintarXiv:2302.03917 , 2023.[19] G. Tevet, B. Gordon, A. Hertz, A. H. Bermano, and D. Cohen-Or. Motionclip: Exposinghuman motion generation to clip space. In Computer Vision–ECCV 2022: 17th EuropeanConference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXII , pages 358–374.Springer, 2022.[20] J. Gao, T. Shen, Z. Wang, W. Chen, K. Yin, D. Li, O. Litany, Z. Gojcic, and S. Fidler. Get3d:A generative model of high quality 3d textured shapes learned from images. In Advances InNeural Information Processing Systems , 2022.[21] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. 
In International conference on machine learning , pages 8748–8763. PMLR, 2021.[22] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models areunsupervised multitask learners. OpenAI blog , 1(8):9, 2019.[23] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Nee-lakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-V oss, G. Krueger,T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen,E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Rad-ford, I. Sutskever, and D. Amodei. Language models are few-shot learners. InH. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advancesin Neural Information Processing Systems , volume 33, pages 1877–1901. Curran Asso-ciates, Inc., 2020. URL https://proceedings.neurips.cc/paper _files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf .[24] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback.Advances in Neural Information Processing Systems , 35:27730–27744, 2022.[25] OpenAI. Gpt-4 technical report, 2023.[26] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thoughtprompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903 , 2022.[27] S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer. Re-thinking the role of demonstrations: What makes in-context learning work? In EMNLP , 2022.10[28] B. Varadarajan, A. Hefny, A. Srivastava, K. S. Refaat, N. Nayakanti, A. Cornman, K. Chen,B. Douillard, C. P. Lam, D. Anguelov, and B. Sapp. Multipath++: Efficient information fusionand trajectory aggregation for behavior prediction, 2021.[29] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-endobject detection with transformers. In Computer Vision – ECCV 2020: 16th European Confer-ence, Glasgow, UK, August 23–28, 2020, Proceedings, Part I , page 213–229, Berlin, Heidel-berg, 2020. Springer-Verlag. ISBN 978-3-030-58451-1. doi:10.1007/978-3-030-58452-8_13.URL https://doi.org/10.1007/978-3-030-58452-8 _13.[30] B. Cheng, A. G. Schwing, and A. Kirillov. Per-pixel classification is not all you need forsemantic segmentation. In NeurIPS , 2021.[31] Waymo LLC. Waymo open dataset: An autonomous driving dataset, 2019.[32] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In International Confer-ence on Learning Representations , 2017.[33] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sampletest. The Journal of Machine Learning Research , 13(1):723–773, 2012.[34] Q. Li, Z. Peng, L. Feng, Q. Zhang, Z. Xue, and B. Zhou. Metadrive: Composing diverse drivingscenarios for generalizable reinforcement learning. IEEE Transactions on Pattern Analysis andMachine Intelligence , 2022.[35] M. Treiber, A. Hennecke, and D. Helbing. Congested traffic states in empirical observationsand microscopic simulations. Physical Review E , 62(2):1805–1824, aug 2000. doi:10.1103/physreve.62.1805. URL https://doi.org/10.1103%2Fphysreve.62.1805 .[36] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y . Babaei, N. Bashlykov, S. Batra,P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. C. Ferrer, M. Chen, G. Cucurull, D. Esiobu,J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V . Goswami, N. Goyal, A. Hartshorn, S. Hosseini,R. Hou, H. Inan, M. Kardas, V . Kerkez, M. 
Khabsa, I. Kloumann, A. Korenev, P. S. Koura,M.-A. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y . Lu, Y . Mao, X. Martinet, T. Mihaylov,P. Mishra, I. Molybog, Y . Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten,R. Silva, E. M. Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan,P. Xu, Z. Yan, I. Zarov, Y . Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic,S. Edunov, and T. Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023.11AppendixIn the appendix, we provide implementation and experiment details of our method as well as ad-ditional results. In Section A and Section B, we show details of Interpreter and Generator re-spectively. In Section C we present implementation details of our experiments. In Section D, weshow more results on applications ablation study, as well as additional qualitative results. Finally, inSection E, we present an analysis on LCTGen with different LLMs.AInterpreterA.1 Structured representation detailsThe map specific zmis a 6-dim integer vector. Its first four dimensions denote the number of lanesin each direction (set as north for the ego vehicle). The fifth dimension represents the discretizeddistance in 5-meter intervals from the map center to the nearest intersection (0-5, 5-10...). The sixthdimension indicates the ego vehicle’s lane id, starting from 1 for the rightmost lane.For agent i, the agent-specific zaiis an 8-dim integer vector describing this agent relative to the egovehicle. The first dimension denotes the quadrant index (1-4), where quadrant 1 represents the front-right of the ego vehicle. The second dimension is the discretized distance to the ego vehicle witha 20m interval, and the third denotes orientation (north, south, east, west). The fourth dimensionindicates discretized speed, set in 2.5m/s intervals. The last four dimensions describe actions overthe next four seconds (one per second) chosen from a discretized set of seven possible actions: lanechanges (left/right), turns (left/right), moving forward, accelerating, decelerating, and stopping.A.2 Generation promptsThe scenario generation prompt used for Interpreter consists of several sections:1.Task description : simple description of task of scenario generation and output formats.2.Chain-of-thought prompting [26] : For example, "summarize the scenario in short sen-tences", "explain for each group of vehicles why they are put into the scenario".3.Description of structured representation : detailed description for each dimension of thestructured representation. We separately inform the model Map and Actor formats.4.Guidelines : several generation instructions. For example, "Focus on realistic action gener-ation of the motion to reconstruct the query scenario".5.Few-shot examples : A few input-output examples. We provide a Crash Report example.We show the full prompt below:Prompt 1: Full prompt for Interpreter scenario generation.1You are a very faithful format converter that translate natrual language traffic scenariodescriptions to a fix-form format to appropriately describe the scenario with motionaction. You also need to output an appropriate map description that is able to supportthis scenario. 
Your ultimate goal is to generate realistic traffic scenarios thatfaithfully represents natural language descriptions normal scenes that follows thetraffic rule.23Answer with a list of vectors describing the attributes of each of the vehicles in thescenario.45Desired format:6Summary: summarize the scenario in short sentences, including the number of vehicles. Alsoexplain the underlying map description.7Explaination: explain for each group of vehicles why they are put into the scenario and howthey fullfill the requirement in the description.8Actor Vector: A list of vectors describing the attributes of each of the vehicles in thescenario, only output the values without any text:9- ’V1’: [,,,,,,,]1210- ’V2’: [,,,,,,,]11- ’V3’: [,,,,,,,]12Map Vector: A vector describing the map attributes, only output the values without any text:13- ’Map’: [,,,,,]1415Meaning of the Actor vector attribute:16- dim 0: ’pos’: [-1,3] - whether the vehicle is in the four quadrant of ego vechile in theorder of [0 - ’front left’, 1 - ’back left’, 2- ’back right’, 3 - ’front right’]. -1 ifthe vehicle is the ego vehicle.17- dim 1: ’distance’: [0,3] - the distance range index of the vehicle towards the ego vehicle; range is from 0 to 72 meters with 20 meters interval. 0 if the vehicle is the egovehicle. For example, if distance value is 15 meters, then the distance range index is0.18- dim 2: ’direction’: [0,3] - the direction of the vehicle relative to the ego vehicle, inthe order of [0- ’parallel _same’, 1-’parallel _opposite’, 2-’perpendicular _up’, 3-’perpendicular _down’]. 0 if the vehicle is the ego vehicle.19- dim 3: ’speed’: [0,20] - the speed range index of the vehicle; range is from 0 to 20 m/swith 2.5 m/s interval. For example, 20m/s is in range 8, therefore the speed value is8.20- dim 4-7: ’action’: [0,7] - 4-dim, generate actions into the future 4 second with each twoactions have a time interval of 1s (4 actions in total), the action ids are [0 - ’stop’, 1 - ’turn left’, 2 - ’left lane change’, 3- ’decelerate’, 4- ’keep _speed’, 5-’accelerate’, 6-’right lane change’, 7-’turn right’].2122Meaning of the Map attributes:23- dim 0-1: ’parallel _lane _cnt’: 2-dim. The first dim is the number of parallel same-direction lanes of the ego lane, and the second dim is the number of parallel opposite-direction lanes of the ego lane.24- dim 2-3: ’perpendicular _lane _cnt’: 2-dim. The first dim is the number of perpendicularupstream-direction lanes, and the second dim is the number of perpendicular downstream-direction lanes.25- dim 4: ’dist _to_intersection’: 1-dim. the distance range index of the ego vehicle to theintersection center in the x direction, range is from 0 to 72 meters with 5 metersinterval. -1 if there is no intersection in the scenario.26- dim 5: ’lane id’: 1-dim. the lane id of the ego vehicle, counting from the rightmost laneof the same-direction lanes, starting from 1. For example, if the ego vehicle is in therightmost lane, then the lane id is 1; if the ego vehicle is in the leftmost lane,then the lane id is the number of the same-direction lanes.2728Transform the query sentence to the Actor Vector strictly following the rules below:29- Focus on realistic action generation of the motion to reconstruct the query scenario.30- Follow traffic rules to form a fundamental principle in most road traffic systems toensure safety and smooth operation of traffic. 
You should incorporate this rule intothe behavior of our virtual agents (vehicles).31- Traffic rule: in an intersection, when the vehicles on one side of the intersection arecrossing, the vehicles on the other side of the intersection should be waiting. Forexample, if V1 is crossing the intersection and V2 is on the perpendicular lane, thenV2 should be waiting.32- For speed and distance, convert the unit to m/s and meter, and then find the intervalindex in the given range.33- Make sure the position and direction of the generated vehicles are correct.34- Describe the initialization status of the scenario.35- During generation, the number of the vehicles is within the range of [1, 32].36- Always generate the ego vehicle first (V1).37- Always assume the ego car is in the center of the scene and is driving in the positive xdirection.38- In the input descriptions, regard V1, Vehicle 1 or Unit #1 as the ego vehicle. All theother vehicles are the surrounding vehicles. For example, for "Vehicle 1 was travelingsouthbound", the ego car is Vehicle 1.39- If the vehicle is stopping, its speed should be 0m/s (index 0). Also, if the first actionis ’stop’, then the speed should be 0m/s (index 0).40- Focus on the interactions between the vehicles in the scenario.41- Regard the last time stamp as the time stamp of 5 second into the future.4243Generate the Map Vector following the rules below:44- If there is vehicle turning left or right, there must be an intersection ahead.45- Should at least have one lane with the same-direction as the ego lane; i.e., the first dimof Map should be at least 1. For example, if this is a one way two lane road, then thefirst dim of Map should be 2.46- Regard the lane at the center of the scene as the ego lane.1347- Consider the ego car’s direction as the positive x direction. For example, for "V1 wastraveling northbound in lane five of a five lane controlled access roadway", thereshould be 5 lanes in the same direction as the ego lane.48- The generated map should strictly follow the map descriptions in the query text. Forexample, for "Vehicle 1 was traveling southbound", the ego car should be in thesouthbound lane.49- If there is an intersection, there should be at least one lane in either the upstream ordownstream direction.50- If there is no intersection, the distance to the intersection should be -1.51- There should be vehicle driving vertical to the ego vehicle in the scene only when thereis an intersection in the scene. For example, when the road is just two-way, thereshould not be any vehicle driving vertical to the ego vehicle.52- If no intersection is mentioned, generate intersection scenario randomly with real-worldstatistics.535455Query: The crash occurred during daylight hours on a dry, bituminous, two-lane roadway underclear skies. There was one northbound travel lane and one southbound travel lane withspeed limit of 40 km/h (25 mph). The northbound lane had a -3.6 percent grade andthe southbound lane had a +3.6 percent grade. Both travel lanes were divided by adouble yellow line. A 2016 Mazda CX-3 (V1) was in a parking lot attempting to execute aleft turn to travel south. A 2011 Dodge Charger (V2/police car) was traveling northresponding to an emergency call with lights sirens activated. V1 was in a parking lot (facing west) and attempted to enter the roadway intending to turn left. As V1 enteredthe roadway it was impacted on the left side by the front of V2 (Event 1). 
V1 thenrotated counterclockwise and traveled off the west road edge and impacted an embankmentwith its front left bumper (Event 2). After initial impact V2 continued on in anorthern direction and traveling to final rest approximately 40 meters north of impactarea facing north in the middle of the roadway. V1 and V2 were towed from the scenedue to damage.5657Summary: V1 attempts to turn left from a parking lot onto a two-lane roadway and is struckby V2, a police car traveling north with lights and sirens activated. There are 2vehicles in this scenario. This happens on a parking lot to a two-lane two-way roadwith intersection.58Explanation:59- V1 (ego vehicle) is attempting to turn left from a parking lot onto the roadway. We cannotfind V1’s speed in the query. Because V1 tries to turn left, its initial speed shouldbe set low. We set V1’s speed as 5 m/s, which has the index of 2. V1 turns left, so itsactions are all 1 (turn left).60- V2 is a police car traveling north with lights and sirens activated. As V1 is turning left, 5 seconds before the crash, V1 is facing west and V2 is coming from northbound,crossing the path of V1. In the coordinates of V1 (which is facing west initially), V2comes from the front and is on the left side. Hence, V2’s position is "front left" (3).As V1 is facing west and V2 facing north, V2 is moving in the perpendicular downdirection with V1. Therefore its direction is 3 (perpendicular _down). We cannot find V2’s speed in the query. Because V2 is a police car responding to an emergency call, weassume V2’s init speed is 10 m/s (index 4). Given this speed, V2’s distance to V1 is 10m/s *5s = 50m (index 10). V2 keeps going straight, so its actions are all 4 (keepspeed).61- Map: V1 tries to turn left from a partking lot onto a two-lane roadway. There are a one-way exit lane from parking lot (one same-direction parallel) and the ego vehicle is inthe left turn lane with lane id 1. On the perpendicular side there is a two-laneroadway. V1 is about to turn left, so the distance to the intersection is set to be 10m(index 2).62Actor Vector:63- ’V1’: [-1, 0, 0, 2, 1, 1, 1, 1]64- ’V2’: [0, 10, 3, 4, 4, 4, 4, 4]65Map Vector:66- ’Map’: [1, 0, 1, 1, 2, 1]6768Query: INSERT _QUERY _HERE6970Output:A.3 Instructional editing promptsWe also provide Interpreter another prompt for instructional scenario editing. This prompt followa similar structure to the generation prompt. We mainly adopt the task description, guidelines,14and examples to scenario editing tasks. Note that for the instructional editing task, we change thedistance interval (second dimension) of agent-specific zaifrom 20 meters to 5 meters. This is toensure the unedited agents stay in the same region before and after editing.We show the full prompt below:Prompt 2: Full prompt for Interpreter instructional scenario editing.1You are a traffic scenario editor that edit fix-form traffic scenario descriptions accordingto the user’s natural language instructions.23The user will input a fix-form traffic scenario description as well as the map description.The user also a natural language instruction to modify the scenario. You need to outputa fix-form traffic scenario that is modified according to the instruction.45Input format:6- V1: [,,,,,,,]7- V2: [,,,,,,,]8- V3: [,,,,,,,]9- Map: [,,,,,]10Instruction: natural language instruction to modify the scenario.1112Output format:13Summary: summarize the scenario in short sentences. 
summarize the user instruction, andindicate which part of the scenario should be modified.14Explaination: explain step-by-step how each part of the scenario is modified.15Actor Vector: A list of vectors describing the attributes of each of the vehicles. Only thevehicles that are modified should be included in the output.16- V2: [,,,,,,,]1718Meaning of the Actor vector attribute:19- dim 0: ’pos’: [-1,3] - whether the vehicle is in the four quadrant of ego vechile in theorder of [0 - ’front left’, 1 - ’back left’, 2- ’back right’, 3 - ’front right’]. -1 ifthe vehicle is the ego vehicle.20- dim 1: ’distance’: [0,14] - the distance range index of the vehicle towards the egovehicle; range is from 0 to 72 meters with 5 meters interval. 0 if the vehicle is theego vehicle.21- dim 2: ’direction’: [0,3] - the direction of the vehicle relative to the ego vehicle, inthe order of [0- ’parallel _same’, 1-’parallel _opposite’, 2-’perpendicular _up’, 3-’perpendicular _down’]. 0 if the vehicle is the ego vehicle.22- dim 3: ’speed’: [0,8] - the speed range index of the vehicle; range is from 0 to 20 m/swith 2.5 m/s interval. For example, 20m/s is in range 8, therefore the speed value is8.23- dim 4-7: ’action’: [0,7] - 4-dim, generate actions into the future 4 second with each twoactions have a time interval of 1s (4 actions in total), the action ids are [0 - ’stop’, 1 - ’turn left’, 2 - ’left lane change’, 3- ’decelerate’, 4- ’keep _speed’, 5-’accelerate’, 6-’right lane change’, 7-’turn right’].2425Meaning of the Map attributes:26- dim 0-1: ’parallel _lane _cnt’: 2-dim. The first dim is the number of parallel same-direction lanes of the ego lane, and the second dim is the number of parallel opposite-direction lanes of the ego lane.27- dim 2-3: ’perpendicular _lane _cnt’: 2-dim. The first dim is the number of perpendicularupstream-direction lanes, and the second dim is the number of perpendicular downstream-direction lanes.28- dim 4: ’dist _to_intersection’: 1-dim. the distance range index of the ego vehicle to theintersection center in the x direction, range is from 0 to 72 meters with 5 metersinterval. -1 if there is no intersection in the scenario.29- dim 5: ’lane id’: 1-dim. the lane id of the ego vehicle, counting from the rightmost laneof the same-direction lanes, starting from 1. For example, if the ego vehicle is in therightmost lane, then the lane id is 1; if the ego vehicle is in the leftmost lane,then the lane id is the number of the same-direction lanes.3031Follow the instructions below:32- ’V1’ is the ego vehicle, and the other vehicles are the surrounding vehicles.33- The user will input a fix-form traffic scenario description as well as the map description. The user also an natural language instruction to modify the scenario. You need tooutput a fix-form traffic scenario that is modified according to the instruction.34- First figure out which part of the scenario should be modified according to theinstruction. 
For example, if the instruction is "the vehicle in front of me should turnleft", then the vehicle in front of the ego vehicle should be modified.351536Input:37Actor vector:38- V1: [-1, 0, 0, 0, 4, 4, 4, 4]39- V2: [ 2, 1, 0, 1, 4, 4, 4, 4]40- V3: [ 3, 3, 0, 1, 4, 4, 4, 0]41- V4: [ 3, 4, 0, 8, 4, 4, 2, 0]42- V5: [ 0, 9, 1, 8, -1, 4, 5, -1]43- V6: [ 3, 5, 0, 0, 0, 0, 0, 0]44- V7: [ 0, 9, 3, 0, 0, 0, 0, 0]45- V8: [ 3, 10, 3, 3, 4, 5, 1, 0]46- V9: [ 0, 10, 3, 0, 0, 0, 0, -1]47- V10: [ 3, 10, 2, 0, 0, 0, 0, -1]48- V11: [ 3, 11, 2, 0, 0, 0, 0, 0]49- V12: [ 3, 11, 2, 0, 0, 7, 0, 0]50- Map: [4, 3, 2, 3, 6, 4]5152Instruction: move the vehicle behind the ego vehicle to the opposite lane and move faster.5354Output:55Summary: The instruction is to move the vehicle behind the ego vehicle to the opposite laneand accelerate. First find which vehicle is behind the ego vehicle. There are only 1vechile behind the ego vehicle, that is V2 (with position=2, indicating on the rightback side of the ego vehicle). Therefore, the vehicle V2 should be modified.56Explaination: The vehicle V2 is modified to move to the opposite lane and accelerate. Thevehicle V2 is in the right back side of the ego vehicle, and the ego vehicle is in therightmost lane of the same-direction lanes. Therefore, the vehicle V2 should move tothe leftmost lane of the opposite-direction lanes. Therefore, V2’s direction should beopposite to the ego vehicle, changed to 1 (parallel _opposite). In this lane, V2 shouldbe moved to the left back of the ego car, its position should be changed to 1. V2should move faster, its speed should be changed to 10 (25 m/s).57Actor vector:58- V2: [ 1, 1, 1, 10, 4, 4, 4, 4]5960Instruction: remove all the vehicles on the front of the ego car and moving in the samedirection.6162Output:63Summary: The instruction is to remove all the vehicles on the front of the ego car andmoving in the same direction. First find which vehicles are on the front of the egovehicle. V3-V12 are all on the front of the ego vehicle. Then, only V3, V4 and V6 hasthe same direction as the ego vehicle (0). Therefore, V3, V4 and V6 should be removed.64Explaination: V3, V4, V6 are on the front of the ego vehicle and moving in the samedirection. V3, V4 and V6 are removed from the scenario.6566Actor vector:67- V3: removed.68- V4: removed.69- V6: removed.7071Input: INSERT _QUERY _HERE7273Output:BGeneratorB.1 Training objectivesIn the main paper, we show the full training objective of Generator as:L(p, τ) =Lposition (p, τ) +Lattr(p, τ) +Lmotion(p, τ). (2)In this section, we provide details of each loss function. We first pair each agent ˆaiinpwith aground-truth agent aiinτbased on the sequential ordering of the structured agent representationza. Assume there are in total Nagents in the scenario.16ForLposition , we use cross-entropy loss between the per-lane categorical output ˆpand the ground-truth lane segment id l. Specifically, we compute it asLposition (p, τ) =NXi=1−log ˆpi(li), (3)where liis the index of the lane segment that the i-th ground-truth agent aiis on.ForLattr, we use a negative log-likelihood loss, computed using the predicted GMM on the ground-truth attribute values. Recall that for each attribute of agent i, we use an MLP to predict the param-eters of a GMM model [μi,Σi, πi]. Here, we use these parameters to construct a GMM model andcompute the likelihood of ground-truth attribute values. 
Specifically, we haveLattr(p, τ) =NXi=1(−logGMM heading,i (hi)−logGMM vel,i(veli)−logGMM size,i(bbox i)−logGMM pos,i(posi)),(4)where GMM heading,i ,GMM vel,i,GMM size,i,GMM pos,irepresent the likelihood function of the pre-dicted GMM models of agent i’s heading, velocity, size and position shift. These likelihood valuesare computed using the predicted GMM parameters. Meanwhile, hi,veli,bbox iandposirepresentthe heading, velocity, size and position shift of the ground-truth agent airespectively.ForLmotion , we use MSE loss for the predicted trajectory closest to the ground-truth trajectory fol-lowing the multi-path motion prediction idea [28]. Recall that for each agent ˆai, we predict K′different future trajectories and their probabilities as {pos2:Ti,k,probi,k}K′k=1=MLP(q∗i). For eachtimestamp t, posti,kcontains the agent’s position and heading. We assume the trajectory of ground-truth agent aiis pos2:Ti,∗. We can compute the index k∗of the closest trajectory from the K′pre-dictions as k∗= arg minkPTt=2(posti,k−posti,∗)2. Then, we compute the motion loss for agent ias:Lmotion ,i=−logprobi,k∗+TXt=2(posti,k∗−posti,∗)2, (5)where we encourage the model to have a higher probability for the cloest trajectory k∗and reducethe distance between this trajectory with the ground truth. The full motion loss is simply:Lmotion(p, τ) =NXiLmotion ,i (6)where we sum over all the motion losses for each predicted agent in p.C Experiment DetailsC.1 Baseline implementationTrafficGen [15]. We use the official implementation1. For a fair comparison, we train its Initial-ization and Trajectory Generation modules on our dataset for 100 epochs with batch size 64. Wemodify T= 50 in the Trajectory Generation to align with our setting. We use the default values forall the other hyper-parameters. During inference, we enforce TrafficGen to generate Nvehicles byusing the result of the first Nautoregressive steps of the Initialization module.MotionCLIP [19]. The core idea of MotionCLIP is to learn a shared space for the interestedmodality embedding (traffic scenario in our case) and text embedding. Formally, this model containsa scenario encoder E, a text encoder ˆE, and a scenario decoder D. For each example of scene-text paired data (τ, L, m ), we encode scenario and text separately with their encoders z=E(τ),1https://github.com/metadriverse/trafficgen17ˆz=ˆE(L). Then, the decoder takes zandmand output a scenario p=D(z, m). MotionCLIPtrains the network with Lrecto reconstruct the scenario from the latent code:Lrec=Lposition (p, τ) +Lattr(p, τ) +Lmotion(p, τ), (7)where we use the same set of loss functions as ours (Equation 2). On the other hand, MotionCLIPaligns the embedding space of the scenario and text with:Lalign= 1−cos(z,ˆz), (8)which encourages the alignment of scenario embedding zand text embedding ˆz. The final lossfunction is thereforeL=Lrec+λLalign, (9)where we set λ= 100 .During inference, given an input text Land a map m, we can directly use the text encoder to obtainlatent code and decode a scenario from it, formally τ=D(ˆE(L), m).For the scenario encoder E, we use the same scenario encoder as in [15], which is a 5-layer multi-context gating (MCG) block [28] to encode the scene input τand outputs z∈R1024with thecontext vector output cof the final MCG block. For text encoder ˆE, we use the sentence embeddingof the fixed GPT-2 model. For the scenario decoder D, we modify our Generator to take in latentrepresentation zwith a dimension of 1024 instead of our own structured representation. 
Because Ddoes not receive the number of agents as input, we modify Generator to produce the N= 32 agentsfor every input and additionally add an MLP decoder to predict the objectiveness score of eachoutput agent. Here objectiveness score is a binary probability score indicating whether we shouldput each predicted agent onto the final scenario or not. During training, for computation of Lrec, weuse Hungarian algorithm to pair ground-truth agents with the predicted ones. We then supervise theobjectiveness score in a similar way as in DETR.Note that we need text-scenario paired data to train MotionCLIP. To this end, we use a rule-basedmethod to convert a real dataset τto a text L. This is done by describing different attributes of thescenario with language. Similar to our Attribute Description dataset, in each text, we enumeratethe scenario properties 1) sparsity; 2) position; 3) speed and 4) ego vehicle’s motion. Here is oneexample: "the scene is very dense; there exist cars on the front left of ego car; there is no car on theback left of ego car; there is no car on the back right of ego car; there exist cars on the front right ofego car; most cars are moving in fast speed; the ego car stops".We transform every scenario in our dataset into a text with the format as above. We then trainMotionCLIP on our dataset with the same batch size and number of iterations as LCTGen .C.2 MetricWe show how to compute MMD in this section. Specifically, MMD measures the distance betweentwo distributions qandp.MMD2(p, q) =Ex,x′∼p[k(x, x′)] +Ey,y′∼q[k(y, y′)]−2Ex∼p,y∼q[k(x, y)],(10)where kis the kernel function (a Gaussian kernel in this work). We use Gaussian kernel in this work.For each pair of real and generated data (τ,ˆτ), we compute the distribution difference between themper attribute.C.3 DatasetCrash Report. We use 38 cases from the CIREN dataset [1] from the NHTSA crash report searchengine. Each case contains a long text description of the scenario as well as a PDF diagram showing18the scenario. Because the texts are very long and require a long time for humans to comprehend,in our human study, along with each text input, we will also show the diagram of the scenario as areference. We show example crash reports in Section D.4. We also refer the reader to the NHTSAwebsite2to view some examples of the crash report.Attribute Description. We create text descriptions that highlight various attributes of a trafficscenario. Specifically, we use the following attributes and values:1. Sparsity: "the scenario is {nearly empty/sparse/with medium density/very dense}".2. Position: "there are only vehicles on the {left/right/front/back} side(s) of the center car" or"there are vehicles on different sides of the center car".3. Speed: "most cars are moving in {slow/medium/fast} speed" or "most cars are stopping".4. Ego-vehicle motion: "the center car {stops/moves straight/turns left/turns right}".Figure A1: Human study user interface.We create sentences describing each of the single attributes with all the possible values. We alsocompose more complex sentences by combining 2,3 or 4 attributes together with random values foreach of them. In total, we created 40 cases for human evaluation. Please refer to Section D.4 forsome example input texts from this dataset.2https://crashviewer.nhtsa.dot.gov/CIREN/Details?Study=CIREN&CaseId=1119C.4 Human studyWe conduct the human study to access how well the generated scenario matches the input text. Weshowcase the user interface of our human study in Figure A1. 
We compose the output of two modelswith the same text input in random order and ask the human evaluator to judge which one matchesthe text description better. Then, we also ask them to give each output a 1-5 score. We allow theuser to select "unsure" for the first question.We invite 12 human evaluators for this study, and each of them evaluated all the 78 cases we pro-vided. We ensure the human evaluators do not have prior knowledge of how different model workson these two datasets. On average, the human study takes about 80 minutes for each evaluator.C.5 Qualitative result full textsIn Figure 4 and Figure A2, we show examples of the output of our model on Crash Report data.Recall that the texts we show in the figures are the summary from our Interpreter due to spacelimitations. We show the full input text for each example in this section.Text 1: Full texts of examples in Figure 4 .1Figure 4 Column 1 (CIREN ID 594):2"This crash occurred during daylight hours on a dry, bituminous divided trafficway (medianstrip without positive barrier) under clear skies. There were four east travel lanes(two through lanes, one left turn and one right turn) and four west travel lanes (twothrough lanes, one left and one right). The east lanes have a slight right curve andthe west lanes curve slightly to the left. Both east/west travel lanes were levelgrade at point of impact and divided by a grass median. The speed limit at thislocation is 80km/h (50 mph). The intersecting north/south roadway consisted of onenorth travel lane and three south travel lanes (one through lanes, one left and oneright). These travel lanes were divided by a raised concrete median on the northernside of the intersection. This intersection is controlled by overhead traffic signals.A 2017 Dodge Grand Caravan (V1) was traveling east in the left turn lane and a 2006Nissan Sentra (V2) was traveling west in the left through lane. As V1 was travelingeast it attempted to execute a left turn to travel north when its front bumper impactedthe front bumper of V2 (Event 1). After initial impact, V1 rotated counterclockwiseapproximately 80 degrees before traveling to its final resting position in the middleof the intersection facing north. V2 was traveling west in the left through lane andattempting to travel through the intersection when its front bumper impacted the frontbumper of V1. After initial impact V2 rotated clockwise approximately 20 degreesbefore traveling to its final resting position in the middle of the intersection facingnorthwest. V1 and V2 were towed from the scene due to damage sustained in the crash."34Figure 4 Column 2 (CIREN ID 31):5"A 2016 Kia Sedona minivan (V1) was traveling southwest in the right lane of three. A 2015Chevrolet Silverado cab chassis pickup (V2) was ahead of V1 in the right lane. V2 wasa working vehicle picking up debris on the highway in a construction zone. The driverof V2 stopped his vehicle in the travel lane. The driver of V1 recognized animpending collision and applied the brakes while steering left in the last momentbefore impact. V1 slid approximately three meters before the front of V1 struck theback plane of V2 in a rear-end collision with full engagement across the strikingplanes (Event 1). Both vehicles came to rest approximately two meters from impact. V1was towed due to damage while V2 continued in service."67Figure A2 Row 2 (CIREN ID 33):8"This two-vehicle collision occurred during the pre-dawn hours (dark, street lights present)of a fall weekday at the intersection of two urban roadways. 
The crash only involvedthe eastern leg of the intersection. The westbound lanes of the eastern leg consistedof four westbound lanes that included a right turn lane, two through lanes, and a leftturn lane. The three eastbound lanes of the eastern leg consisted of a merge lane fromthe intersecting road and two through-lanes. The roadway was straight with a speedlimit of 89 kmph (55 mph), and the intersection was controlled by overhead, standardelectric, tri-colored traffic signals. At the time of the crash, the weather was clearand the roadway surfaces were dry. As Vehicle 1 approached the intersection, its driverdid not notice the vehicles stopped ahead at the traffic light. The traffic signalturned green and Vehicle 2 began to slowly move forward. The frontal plane of Vehicle 1struck the rear plane of Vehicle 2 (Event 1). Both vehicles came to rest in the leftthrough-lane of the westbound lane facing in a westerly direction. Vehicle 1 was towedfrom the scene due to damage sustained in the crash. Vehicle 2 was not towed nor20disabled. The driver of Vehicle 2 was transported by land to a local trauma center andwas treated and released."910Figure A2 Row 4 (CIREN ID 77):11"A 2017 Chevrolet Malibu LS sedan (V1) was traveling southeast in the right lane cresting ahill. A 1992 Chevrolet C1500 pickup (V2) was traveling northwest in the second lanecresting the same hill. Vehicle 2 crossed left across the center turn lane, an oncominglane, and then into V1\u2019s oncoming lane of travel. Vehicle 1 and Vehicle 2collided in a head-on, offset-frontal configuration (Event 1). Vehicle 1 attempted tosteer left just before impact, focusing the damage to the middle-right of its frontplane. Both vehicles rotated a few degrees clockwise before coming to rest in theroadway, where they were towed from the scene due to damage."1213Figure A2 Row 5 (CIREN ID 56):14"A 2013 Honda CR-V utility vehicle (V1) was traveling west in the right lane approaching anintersection. A 2003 Chevrolet Silverado 1500 pickup (V2) was stopped facing north at astop sign. Vehicle 2 proceeded north across the intersection and was struck on theright plane by the front plane of V1 (Event 1). The impact caused both vehicles totravel off the northwest corner of the intersection, where they came to rest. Bothvehicles were towed due to damage."D Additional ResultsD.1 Controllable self-driving policy evaluationWe show how LCTGen can be utilized to generate interesting scenarios for controllable self-drivingpolicy evaluation. Specifically, we leverage LCTGen to generate traffic scenario datasets possessingdiverse properties, which we then use to assess self-driving policies under various situations. Forthis purpose, we input different text types into LCTGen : 1) Crash Report, the real-world crash reportdata from CIREN; 2) Traffic density specification, a text that describes the scenario as "sparse","medium dense", or "very dense". For each type of text, we generate 500 traffic scenarios fortesting. Additionally, we use 500 real-world scenarios from the Waymo Open dataset.We import all these scenarios into an interactive driving simulation, MetaDrive [34]. We evaluate theperformance of the IDM [35] policy and a PPO policy provided in MetaDrive. In each scenario, theself-driving policy replaces the ego-vehicle in the scenario and aims to reach the original end-pointof the ego vehicle, while all other agents follow the trajectory set out in the original scenario. Weshow the success rate and collision rate of both policies in Table A1. 
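As a rough illustration of this evaluation protocol, the aggregation can be sketched as below. The rollout helper is hypothetical (it is not an actual MetaDrive API call): it stands for running one policy on the ego vehicle of one scenario and reporting whether the ego reached the original end-point or collided.

```python
def evaluate_policy(policy, scenarios, rollout):
    """Aggregate success and collision rates over a set of test scenarios.

    rollout(policy, scenario) is assumed to return a dict with boolean
    'success' (ego reached the original end-point) and 'collision' flags.
    """
    successes = 0
    collisions = 0
    for scenario in scenarios:
        result = rollout(policy, scenario)
        successes += int(result["success"])
        collisions += int(result["collision"])
    n = len(scenarios)
    return {"success_rate": 100.0 * successes / n,
            "collision_rate": 100.0 * collisions / n}
```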
Note that both policies experience significant challenges with the Crash Report scenarios, indicating that these scenarios present complex situations for driving policies. Furthermore, both policies exhibit decreased performance in denser traffic scenarios, which involve more intricate vehicle interactions. These observations give better insight into the drawbacks of each self-driving policy. This experiment showcases LCTGen as a valuable tool for generating traffic scenarios with varying high-level properties, enabling a more controlled evaluation of self-driving policies.

Test Data                 | IDM [35] Success (%) | IDM [35] Collision (%) | PPO (MetaDrive) [34] Success (%) | PPO (MetaDrive) [34] Collision (%)
Real                      | 93.60 | 3.80  | 69.32 | 14.67
LCTGen + Crash Report [1] | 52.35 | 39.89 | 25.78 | 27.98
LCTGen + "Sparse"         | 91.03 | 8.21  | 41.03 | 21.06
LCTGen + "Medium"         | 84.47 | 12.36 | 43.50 | 26.67
LCTGen + "Dense"          | 68.12 | 19.26 | 38.89 | 32.41
Table A1: Controllable self-driving policy evaluation.

Figure A2: Qualitative result comparison on text-conditioned generation on Crash Report.
Figure A3: Qualitative result comparison on text-conditioned generation on Attribute Description.
Figure A4: Human study score distribution (proportion of 1-5 scores for LCTGen, MotionCLIP, and TrafficGen). (a) Crash Report; (b) Attribute Description.
Figure A5: Human study A/B test distribution ("Ours better" / "Neutral" / "Others better" for each pair of methods). (a) Crash Report; (b) Attribute Description.

D.2 Text-conditioned simulation qualitative results

We show more qualitative results of text-conditioned simulation in Figure A2 (Crash Report) and Figure A3 (Attribute Description). Here, we also compare the output of LCTGen with MotionCLIP [19] and TrafficGen [15].

D.3 Human study statistics

Score distribution. We show the human evaluation scores on the two datasets in Figure A4. We observe that our method is able to reach significantly better scores from human evaluators.

A/B test distribution. We show the distribution of A/B test results for each pair of methods in Figure A5. Note that our method is chosen significantly more frequently as the better model compared with the other models. We also observe that TrafficGen is slightly better than MotionCLIP on the Attribute Description dataset, while the two models achieve similar results on Crash Report.

Method          | Crash Report Avg. Score | Crash Report Human Std. | Attribute Description Avg. Score | Attribute Description Human Std.
TrafficGen [15] | 1.58 | 0.64 | 2.43 | 0.72
MotionCLIP [19] | 1.65 | 0.67 | 2.10 | 0.64
LCTGen          | 3.86 | 0.87 | 4.29 | 0.65
Table A2: Human study average score and variance.

Human score variance. We show the variance of the quality score across all human evaluators in Table A2. Specifically, for each case, we compute the standard deviation across all the human evaluators for this case. Then, we average all the standard deviation values across all the cases and report the result in the table as "Human Std.". This value measures the variance of score due to human evaluators' subjective judgement differences.
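For reference, a minimal NumPy sketch of this "Human Std." computation, assuming a scores array of shape (num_cases, num_evaluators); the variable name is illustrative:

```python
import numpy as np

def human_std(scores):
    """Average per-case standard deviation across human evaluators.

    scores: (num_cases, num_evaluators) matrix of 1-5 quality scores.
    """
    per_case_std = scores.std(axis=1)  # disagreement among evaluators for each case
    return per_case_std.mean()         # averaged over all cases ("Human Std.")
```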
According to the average score and human variance shown in thetable, we conclude that our model outperforms the compared methods with high confidence levels.20-25 >25010203040506070Proportion (%)Age Distribution(a) AgeFemale Male0102030405060Proportion (%)Gender Distribution (b) GenderNever drive Rarely drive Often drive01020304050Proportion (%)Drive Experience Distribution (c) Driving experienceFigure A6: Human study population distributions.Human evaluator population distribution. We show the population distributions of the humanevaluators involved in our experiment. Specifically, we include statistics of the human evaluators’age, gender and driving experience in Figure A6.For driving experience, we use the following classifications:• “Never drive”: does not have a driver’s license.• “Rarely drive”: has a driver’s license but does not often drive.• “Often drive”: drives routinely.D.4 Interpreter input-output examplesHere we show the full-text input and output of Interpreter for four examples in Figure 4. Specifi-cally, we show two examples from Crash Report and two examples from Attribute Descriptions.Text 2: Input-output examples of Interpreter .1Figure 4 Column 1 (CIREN ID 594):2Input:3"This crash occurred during daylight hours on a dry, bituminous divided trafficway (medianstrip without positive barrier) under clear skies. There were four east travel lanes(two through lanes, one left turn and one right turn) and four west travel lanes (twothrough lanes, one left and one right). The east lanes have a slight right curve andthe west lanes curve slightly to the left. Both east/west travel lanes were levelgrade at point of impact and divided by a grass median. The speed limit at thislocation is 80km/h (50 mph). The intersecting north/south roadway consisted of one25north travel lane and three south travel lanes (one through lanes, one left and oneright). These travel lanes were divided by a raised concrete median on the northernside of the intersection. This intersection is controlled by overhead traffic signals.A 2017 Dodge Grand Caravan (V1) was traveling east in the left turn lane and a 2006Nissan Sentra (V2) was traveling west in the left through lane. As V1 was travelingeast it attempted to execute a left turn to travel north when its front bumper impactedthe front bumper of V2 (Event 1). After initial impact, V1 rotated counterclockwiseapproximately 80 degrees before traveling to its final resting position in the middleof the intersection facing north. V2 was traveling west in the left through lane andattempting to travel through the intersection when its front bumper impacted the frontbumper of V1. After initial impact V2 rotated clockwise approximately 20 degreesbefore traveling to its final resting position in the middle of the intersection facingnorthwest. V1 and V2 were towed from the scene due to damage sustained in the crash."45Output:6Summary: V1 is traveling east in the left turn lane and attempts to turn left when itcollides with V2 traveling west in the left through lane. There are 2 vehicles in thisscenario. This happens at the intersection of an eight-lane divided trafficway and afour-lane north/south roadway, controlled by traffic signals.7Explanation:8- V1 (ego vehicle) is traveling east in the left turn lane and attempts to turn left. Wecannot find V1’s speed in the query. Because V1 tries to turn left, its initial speedshould be set low. We set V1’s speed as 5 m/s, which has the index of 2. 
V1 turns left,so its actions are all 1 (turn left).9- V2 is traveling west in the left through lane. As V1 is turning left, 5 seconds before thecrash, V2 is coming from the opposite direction (westbound), crossing the path of V1.In the coordinates of V1 (which is facing east initially), V2 comes from the front andis on the left side. Hence, V2’s position is \"front left\" (0). As V1 is facing eastand V2 facing west, V2 is moving in the parallel opposite direction with V1. Thereforeits direction is 1 (parallel _opposite). We cannot find V2’s speed in the query. BecauseV2 is traveling west and hit by V1 5 seconds later, we assume V2’s init speed is 8 m/s(index 3). Given this speed, V2’s distance to V1 is 8m/s *5s = 40m (index 8). V2keeps going straight, so its actions are all 4 (keep speed).10- Map: This happens at the intersection of an eight-lane divided trafficway (4 lanes foreastbound and westbound traffic) and a four-lane north/south roadway (1 lane fornorthbound and 3 lanes for southbound traffic). The ego vehicle is in the left turn-only lane with lane id 4. There is an intersection with a four-lane north/south roadahead. V1 is about to cross the interaction, therefore the distance to the intersectionis with 15 meter index 3.11Actor Vector:12- ’V1’: [-1, 0, 0, 2, 1, 1, 1, 1]13- ’V2’: [0, 8, 1, 3, 4, 4, 4, 4]14Map Vector:15- ’Map’: [4, 4, 1, 3, 3, 4]1617Figure A2 Row 2 (CIREN ID 33):18Input:19"This two-vehicle collision occurred during the pre-dawn hours (dark, street lights present)of a fall weekday at the intersection of two urban roadways. The crash only involvedthe eastern leg of the intersection. The westbound lanes of the eastern leg consistedof four westbound lanes that included a right turn lane, two through lanes, and a leftturn lane. The three eastbound lanes of the eastern leg consisted of a merge lane fromthe intersecting road and two through-lanes. The roadway was straight with a speedlimit of 89 kmph (55 mph), and the intersection was controlled by overhead, standardelectric, tri-colored traffic signals. At the time of the crash, the weather was clearand the roadway surfaces were dry. As Vehicle 1 approached the intersection, its driverdid not notice the vehicles stopped ahead at the traffic light. The traffic signalturned green and Vehicle 2 began to slowly move forward. The frontal plane of Vehicle 1struck the rear plane of Vehicle 2 (Event 1). Both vehicles came to rest in the leftthrough-lane of the westbound lane facing in a westerly direction. Vehicle 1 was towedfrom the scene due to damage sustained in the crash. Vehicle 2 was not towed nordisabled. The driver of Vehicle 2 was transported by land to a local trauma center andwas treated and released."2021Output:22Summary: V1 approaches an intersection and does not notice stopped vehicles ahead. Thetraffic light turns green, and V2 starts moving forward slowly. V1 strikes the rear ofV2. There are 2 vehicles in this scenario. This happens at the intersection of twourban roadways with multiple lanes and traffic signals.23Explanation:24- V1 (ego vehicle) is approaching the intersection and does not notice the stopped vehiclesahead. We cannot find V1’s speed in the query. Because V1 is moving and does not stop,26we assume V1’s initial speed is 10 m/s (index 4). V1 keeps going straight, so itsactions are all 4 (keep speed).25- V2 is stopped at the traffic light and starts moving forward slowly when the light turnsgreen. As V1 is moving straight and hits V2 from behind, 5 seconds before the crash, V2is in front of V1. 
Hence, V2’s position is \"front right\" (3). As V1 and V2 are bothmoving westbound, V2 is moving in the parallel same direction with V1. Therefore itsdirection is 0 (parallel _same). V2’s speed is slow, so we set it to 2.5 m/s (index 1).Given this speed, V2’s distance to V1 is 10m/s *5s - 2.5m/s *5s = 37.5m (index 7). V2keeps going straight, so its actions are all 4 (keep speed).26- Map: This happens at the intersection of two urban roadways with multiple lanes andtraffic signals. The ego vehicle is in the left through-lane of the westbound lane withlane id 3. There are 4 westbound lanes and 3 eastbound lanes. The distance of the egovehicle to the intersection is set to be 10m/s *5s = 50m (index 10).27Actor Vector:28- ’V1’: [-1, 0, 0, 4, 4, 4, 4, 4]29- ’V2’: [3, 7, 0, 1, 4, 4, 4, 4]30Map Vector:31- ’Map’: [4, 3, 0, 0, 10, 3]323334Figure A3 Row 1:35Input:36"the scene is with sparse density. there are vehicles on different sides of the center car.most cars are moving in medium speed. the center car moves straight"3738Output:39Summary: The ego vehicle (V1) moves straight at a medium speed in a sparse traffic scenario.There are vehicles on different sides of the ego vehicle, corresponding to the sparsedensity. We generate 4 vehicles in the scene on a six-lane two-way road. The ego car isplaced on the third lane from the right.40Explanation:41- V1 (ego vehicle) is in the center and moves straight. Because V1 moves with a medium speed, its speed is set to 4 (medium speed) and action indices are 4 (keep speed).42- V2 is in the closest distance range (index 0). It is moving in the same direction as theego vehicle (index 0) and is located in the back right (index 2) of the ego vehicle. Itis moving with a medium speed (index 4), and its actions are all to keep speed (index4).43- V3 is in a further distance range (index 1). It is moving in the opposite direction of theego vehicle (index 1), therefore it is on the left side of the ego vehicle, locatingin the back left (index 1) of the ego vehicle. It is moving with a medium speed (index4), and its actions are all to keep speed (index 4).44- V4 is in the farthest distance range (index 2). It is moving in the same direction as theego vehicle (index 0) and is located in the front right (index 3) of the ego vehicle.It is moving with a medium speed (index 4), and its actions are all to keep speed (index 4).45- V5 is in the farthest distance range (index 2). It is moving in the opposite direction ofthe ego vehicle (index 1), therefore it is on the left side of the ego vehicle,locating in the front left (index 0) of the ego vehicle. It is moving with a mediumspeed (index 4), and its actions are all to keep speed (index 4).46- Map: Given the sparse density and the presence of vehicles on different sides of the egovehicle, we assume this happens on a 6-lane two-way road. The ego vehicle is in thethird lane from the right, with lane id 3. There is no intersection nearby, indicatedby -1.47Actor Vector:48- ’V1’: [-1, 0, 0, 4, 4, 4, 4, 4]49- ’V2’: [2, 0, 0, 4, 4, 4, 4, 4]50- ’V3’: [1, 1, 1, 4, 4, 4, 4, 4]51- ’V4’: [3, 2, 0, 4, 4, 4, 4, 4]52- ’V5’: [0, 2, 1, 4, 4, 4, 4, 4]53Map Vector:54- ’Map’: [3, 3, 0, 0, -1, 3]5556Figure 4 Column 3:57Input:58"the scene is with sparse density. there are only vehicles on the back side of the centercar. most cars are moving in fast speed. the center car turns left"5960Output:6162Summary: The ego vehicle (V1) turns left at a fast speed in a sparse traffic scenario. 
Allother vehicles are located behind the ego vehicle, which corresponds to the scenario’s27sparse density. We generate 3 vehicles in the scene on a six-lane two-way road with anintersection in front. The ego car is placed on the left-turn lane.63Explanation:64- V1 (ego vehicle) is in the center and turns left. Because V1 moves with a fast speed, itsspeed is set to 6 (fast speed) and action indices are 1 (turn left).65- V2-V3 are in the closest distance range (index 0). They are moving in the same directionas the ego vehicle (index 0). Because the ego car is on the left-turn lane, they areboth located in the back right (index 2) of the ego vehicle. They are both moving witha fast speed (index 6 or 7), and their actions are all to keep speed (index 4).66- V4 is in a further distance range (index 1). It is moving in the opposite direction of theego vehicle (index 1), therefore it is on the left side of the ego vehicle, locatingin the back left (index 1) of the ego vehicle. It is moving with a fast speed (index 6), in the opposite direction as the ego vehicle (index 1). Its actions are all to keepspeed (index 4).67- Map: Given the sparse density and the presence of vehicles behind the ego vehicle, weassume this happens on a 6-lane two-way road. The ego vehicle is in the left-turn lane,with lane id 3. There is an intersection 10 meters ahead (index 2) as the ego vehicleis turning left.68Actor Vector:69- ’V1’: [-1, 0, 0, 6, 1, 1, 1, 1]70- ’V2’: [2, 0, 0, 6, 4, 4, 4, 4]71- ’V3’: [2, 0, 0, 7, 4, 4, 4, 4]72- ’V4’: [1, 1, 1, 6, 4, 4, 4, 4]73Map Vector:74- ’Map’: [3, 3, 2, 2, 2, 3]D.5 Attribute Description Result SplitMethod Density Position Speed Ego-car MotionTrafficGen [15] 2.75 2.03 2.34 2.27MotionCLIP [19] 1.89 2.24 1.91 1.78LCTGen 4.24 4.28 4.38 4.40Table A3: Human study result split analysis on Attribute Description scores.We generate the Attribute Description dataset with different attributes. In this section, we split thematching score result for the full dataset into different attributes. We show the result in Table A3.We observe our method has nearly identical performance over all the attributes. TrafficGen the bestresults with Density, while MotionCLIP performs the best with Position.D.6 Full Ablation StudyMethodInitialization MotionPos Heading Speed Size mADE mFDE SCRw/o Quad. 0.092 0.122 0.076 0.124 2.400 4.927 8.087w/o Dist. 0.071 0.124 0.073 0.121 1.433 3.041 6.362w/o Ori. 0.067 0.132 0.082 0.122 1.630 3.446 7.300w/o Speed 0.063 0.120 0.104 0.122 2.611 5.188 7.150w/o Action 0.067 0.128 0.173 0.128 2.188 5.099 7.146w/oxi 0.067 0.133 0.076 0.124 1.864 3.908 5.929w/o GMM 0.064 0.128 0.078 0.178 1.606 3.452 8.216LCTGen 0.062 0.115 0.072 0.120 1.329 2.838 6.700Table A4: Ablation study of LCTGenIn our main paper, we split the ablation study into two different groups. Here we show the full resultsof all the ablated methods in Table A4. We additionally show the effect of 1) using the learnablequery xiand 2) using the GMM prediction for attributes.28I n p u t “ a d d m o r e c a r s o n t h e l e f t ” “ l e t t h e l e f t c a r d o l a n e c h a n g e ” “ m a k e i t s p a r s e r a n d s p e e d u p ” “ r e m o v e c a r s o n t h e r i g h t ”Figure A7: Instructional editing on a real-world scenarioD.7 Instructional traffic scenario editingWe show another example of instructional traffic scenario editing in Figure A7. 
Different from the compound editing in Figure 5 in the main paper, here every example is edited from the input scenario.

E LCTGen with different LLMs

To study the effects of different LLMs, we provide the outputs of LCTGen using various state-of-the-art commercial and open-source LLMs, namely GPT-4 [25], GPT-3.5 [24], Llama2-7B [36], and Llama2-70B [36].

We provide four input texts from Crash Reports and Attribute Descriptions to each LLM. Then, we show the LCTGen output obtained with each LLM in Figures A8-A11 and list the corresponding LLM outputs below each figure. We then provide an intuitive evaluation of using LCTGen with different LLMs in the following sections.

E.1 Analysis on LLM outputs

For an intuitive comparison of the outputs from different LLMs, we evaluate each LLM on these examples with the following criteria:
• Q1: Do the summary and explanation match the input text?
• Q2: Does the output vector match the input text?
• Q3: Does the LLM correctly use chain-of-thought (i.e., applies correct logic which matches the final answer)?
• Q4: Does the LLM follow the formatting guidelines?
We compute the average rate at which these criteria are met for different LLMs over the 4 examples we show:

Table A5: Evaluation of the different LLMs' results.
Method     | Q1             | Q2             | Q3             | Q4
GPT-4      | 100% (1/1/1/1) | 100% (1/1/1/1) | 100% (1/1/1/1) | 100% (1/1/1/1)
GPT-3.5    | 100% (1/1/1/1) | 50% (0/1/1/0)  | 75% (1/1/0/1)  | 100% (1/1/1/1)
Llama2-7B  | 75% (0/1/1/1)  | 25% (1/0/0/0)  | 50% (0/0/1/1)  | 50% (0/0/1/1)
Llama2-70B | 25% (0/1/0/0)  | 25% (1/0/0/0)  | 0% (0/0/0/0)   | 0% (0/0/0/0)

We have the following observations:
• GPT-4 does well in chain-of-thought (CoT) inference and outputs vectors consistent with the CoT logic. See an example on Text 3 L9-17, where the inference process matches well with the output. It also produces correct vectors most of the time.
Note that one might be able to engineer aprompt for Llama 2 and other open-source LLMs to work stably for LCTGen going forward.Figure A8: Qualitative comparison of result from different LLMs for Figure 4 Column 1. The redcar is the ego vehicle (V1).Text 3: Output from different LLMs for Figure 4 Column 1.1Input:2"This crash occurred during daylight hours on a dry, bituminous divided trafficway (medianstrip without positive barrier) under clear skies. There were four east travel lanes(two through lanes, one left turn and one right turn) and four west travel lanes (twothrough lanes, one left and one right). The east lanes have a slight right curve andthe west lanes curve slightly to the left. Both east/west travel lanes were levelgrade at point of impact and divided by a grass median. The speed limit at thislocation is 80km/h (50 mph). The intersecting north/south roadway consisted of onenorth travel lane and three south travel lanes (one through lanes, one left and oneright). These travel lanes were divided by a raised concrete median on the northernside of the intersection. This intersection is controlled by overhead traffic signals.A 2017 Dodge Grand Caravan (V1) was traveling east in the left turn lane and a 2006Nissan Sentra (V2) was traveling west in the left through lane. As V1 was travelingeast it attempted to execute a left turn to travel north when its front bumper impactedthe front bumper of V2 (Event 1). After initial impact, V1 rotated counterclockwiseapproximately 80 degrees before traveling to its final resting position in the middleof the intersection facing north. V2 was traveling west in the left through lane andattempting to travel through the intersection when its front bumper impacted the frontbumper of V1. After initial impact V2 rotated clockwise approximately 20 degrees30before traveling to its final resting position in the middle of the intersection facingnorthwest. V1 and V2 were towed from the scene due to damage sustained in the crash."34 ========================================GPT-4======================================56Output:7Summary: V1 is traveling east in the left turn lane and attempts to turn left when itcollides with V2 traveling west in the left through lane. There are 2 vehicles in thisscenario. This happens at the intersection of an eight-lane divided trafficway and afour-lane north/south roadway, controlled by traffic signals.8Explanation:9- V1 (ego vehicle) is traveling east in the left turn lane and attempts to turn left. Wecannot find V1’s speed in the query. Because V1 tries to turn left, its initial speedshould be set low. We set V1’s speed as 5 m/s, which has the index of 2. V1 turns left,so its actions are all 1 (turn left).10- V2 is traveling west in the left through lane. As V1 is turning left, 5 seconds before thecrash, V2 is coming from the opposite direction (westbound), crossing the path of V1.In the coordinates of V1 (which is facing east initially), V2 comes from the front andis on the left side. Hence, V2’s position is \"front left\" (0). As V1 is facing eastand V2 facing west, V2 is moving in the parallel opposite direction with V1. Thereforeits direction is 1 (parallel _opposite). We cannot find V2’s speed in the query. BecauseV2 is traveling west and hit by V1 5 seconds later, we assume V2’s init speed is 8 m/s(index 3). Given this speed, V2’s distance to V1 is 8m/s *5s = 40m (index 8). 
V2keeps going straight, so its actions are all 4 (keep speed).11- Map: This happens at the intersection of an eight-lane divided trafficway (4 lanes foreastbound and westbound traffic) and a four-lane north/south roadway (1 lane fornorthbound and 3 lanes for southbound traffic). The ego vehicle is in the left turn-only lane with lane id 4. There is an intersection with a four-lane north/south roadahead. V1 is about to cross the interaction, therefore the distance to the intersectionis with 15 meter index 3.1213Actor Vector:14- ’V1’: [-1, 0, 0, 2, 1, 1, 1, 1]15- ’V2’: [0, 8, 1, 3, 4, 4, 4, 4]16Map Vector:17- ’Map’: [4, 4, 1, 3, 3, 4]1819 ========================================GPT-3.5======================================2021Output:22Summary: V1 is traveling east in the left turn lane and attempts to turn left to travelnorth, colliding with V2 traveling west in the left through lane. There are 2 vehiclesin this scenario. This happens at the intersection of a divided trafficway and a north/south roadway, controlled by overhead traffic signals.23Explanation:24- V1 is traveling east in the left turn lane. Its speed is not mentioned in the query, so weassume V1’s speed is 20 m/s (index 8) which is the maximum speed. V1 attempts to turnleft, so its actions are all 1 (turn left).25- V2 is traveling west in the left through lane. As V1 is turning left, 5 seconds before thecrash, V1 is facing north and V2 is coming from the opposite direction (eastbound),crossing the path of V1. In the coordinates of V1 (which is facing north initially), V2comes from the front and is on the right side. Hence, V2’s position is \"front right\"(3). As V1 is facing north and V2 facing west, V2 is moving in the parallel oppositedirection with V1. Therefore its direction is 1 (parallel _opposite). We cannot find V2’s speed in the query. Because V2 is traveling west in the left through lane, we assumeV2’s init speed is 15 m/s (index 6). Given this speed, V2’s distance to V1 is 15m/s *5s = 75m (index 15). V2 keeps going straight, so its actions are all 4 (keep speed).26- Map: This happens at the intersection of a divided trafficway (4 east travel lanes and 4west travel lanes) and a north/south roadway (1 north travel lane and 3 south travellanes). The intersection is controlled by overhead traffic signals. The ego vehicle isin the left turn lane with lane id 3. There is a raised concrete median dividing thenorth/south travel lanes. The distance of the ego vehicle to the intersection is set tobe 20m (index 4).2728Actor Vector:29- ’V1’: [-1, 0, 0, 8, 1, 1, 1, 1]30- ’V2’: [3, 15, 1, 6, 4, 4, 4, 4]31Map Vector:32- ’Map’: [4, 4, 1, 3, 4, 3]3334 =======================================Llama2-7B=====================================35Output:3136Thank you for providing the query sentence! I’ll be happy to help you generate the ActorVector and Map Vector for this scenario.37Actor Vector:38- ’V1’: [0, -1, 0, 0, 1, 1, 1, 1]39- ’V2’: [0, 0, 8, 1, 3, 4, 4, 4]4041Map Vector:42- ’Map’: [4, 4, 2, 2, 3, 4]43Explanation:44For V1, its position is [0, -1], indicating that it is traveling eastbound and attempting toturn left to go northbound. Its speed is 0 m/s, which means it is not moving yet. Itsdirection is [1, 0], indicating that it is turning left. V1’s actions are [1, 1, 1, 1],which means it is attempting to turn left and then keep moving straight for 5 seconds.45For V2, its position is [0, 0], indicating that it is traveling westbound. Its speed is 8 m/s, which means it is moving at a higher speed than V1. 
Its direction is [0, 1],indicating that it is traveling straight. V2’s actions are [4, 4, 4, 4], which means itis keeping its speed and direction for 5 seconds before the impact.4647 =======================================Llama2-70B====================================4849Output:50Summary: This crash occurred during daylight hours on a dry, bituminous divided trafficwaywith four east travel lanes (two through lanes, one left turn, and one right turn) andfour west travel lanes (two through lanes, one left, and one right). The east laneshave a slight right curve, and the west lanes curve slightly to the left. The speedlimit at this location is 80 km/h (50 mph). The intersecting north/south roadwayconsists of one north travel lane and three south travel lanes (one through lanes, oneleft, and one right). The travel lanes are divided by a raised concrete median on thenorthern side of the intersection, which is controlled by overhead traffic signals.5152Actor Vector:53- ’V1’: [0, 0, 0, 2, 1, 1, 1, 1]54- ’V2’: [0, 8, 1, 3, 4, 4, 4, 4]55Map Vector:56- ’Map’: [4, 4, 2, 2, 3, 4]Figure A9: Qualitative comparison of result from different LLMs for Figure 4 Column 2. The redcar is the ego vehicle (V1).Text 4: Output from different LLMs for Figure 4 Column 2.1Figure A2 Row 1 Column 2 (CIREN ID 33):2Input:3"This two-vehicle collision occurred during the pre-dawn hours (dark, street lights present)of a fall weekday at the intersection of two urban roadways. The crash only involvedthe eastern leg of the intersection. The westbound lanes of the eastern leg consistedof four westbound lanes that included a right turn lane, two through lanes, and a leftturn lane. The three eastbound lanes of the eastern leg consisted of a merge lane fromthe intersecting road and two through-lanes. The roadway was straight with a speedlimit of 89 kmph (55 mph), and the intersection was controlled by overhead, standardelectric, tri-colored traffic signals. At the time of the crash, the weather was clearand the roadway surfaces were dry. As Vehicle 1 approached the intersection, its driverdid not notice the vehicles stopped ahead at the traffic light. The traffic signalturned green and Vehicle 2 began to slowly move forward. The frontal plane of Vehicle 1struck the rear plane of Vehicle 2 (Event 1). Both vehicles came to rest in the leftthrough-lane of the westbound lane facing in a westerly direction. Vehicle 1 was towedfrom the scene due to damage sustained in the crash. Vehicle 2 was not towed nor32disabled. The driver of Vehicle 2 was transported by land to a local trauma center andwas treated and released."45 ========================================GPT-4======================================67Output:8Summary: V1 approaches an intersection and does not notice stopped vehicles ahead. Thetraffic light turns green, and V2 starts moving forward slowly. V1 strikes the rear ofV2. There are 2 vehicles in this scenario. This happens at the intersection of twourban roadways with multiple lanes and traffic signals.9Explanation:10- V1 (ego vehicle) is approaching the intersection and does not notice the stopped vehiclesahead. We cannot find V1’s speed in the query. Because V1 is moving and does not stop,we assume V1’s initial speed is 10 m/s (index 4). V1 keeps going straight, so itsactions are all 4 (keep speed).11- V2 is stopped at the traffic light and starts moving forward slowly when the light turnsgreen. As V1 is moving straight and hits V2 from behind, 5 seconds before the crash, V2is in front of V1. 
Hence, V2’s position is \"front right\" (3). As V1 and V2 are bothmoving westbound, V2 is moving in the parallel same direction with V1. Therefore itsdirection is 0 (parallel _same). V2’s speed is slow, so we set it to 2.5 m/s (index 1).Given this speed, V2’s distance to V1 is 10m/s *5s - 2.5m/s *5s = 37.5m (index 7). V2keeps going straight, so its actions are all 4 (keep speed).12- Map: This happens at the intersection of two urban roadways with multiple lanes andtraffic signals. The ego vehicle is in the left through-lane of the westbound lane withlane id 3. There are 4 westbound lanes and 3 eastbound lanes. The distance of the egovehicle to the intersection is set to be 10m/s *5s = 50m (index 10).13Actor Vector:14- ’V1’: [-1, 0, 0, 4, 4, 4, 4, 4]15- ’V2’: [3, 7, 0, 1, 4, 4, 4, 4]16Map Vector:17- ’Map’: [4, 3, 0, 0, 10, 3]1819 ========================================GPT-3.5======================================2021Output:22Summary: V1 and V2 enter the intersection at the same time and collide. V1 rolls over andcomes to rest on its top, while V2 comes to rest off the west edge of the roadway.There are 2 vehicles in this scenario. This happens at the intersection of two urbantrafficways, controlled by tri-colored traffic signals.23Explanation:24- V1: V1 enters the intersection at the same time as V2. We cannot find V1’s speed in thequery. Because V1 collides with V2, its initial speed should be set high. We set V1’sspeed as 18 m/s, which has the index of 7. V1 keeps going straight, so its actions areall 4 (keep speed).25- V2: V2 is traveling westbound in the far left lane. As V1 and V2 enter the intersection atthe same time, 5 seconds before the crash, V2 is approaching the intersection from theeast and is on the left side of V1. Hence, V2’s position is \"front left\" (0). V2 ismoving westbound and V1 is moving northbound, so V2’s direction is 2 (perpendicular _up). We cannot find V2’s speed in the query. Because V2 is approaching the intersection,we assume V2’s init speed is 15 m/s (index 6). Given this speed, V2’s distance to V1 is15m/s *5s = 75m (index 15). V2 keeps going straight, so its actions are all 4 (keepspeed).26- Map: This happens at the intersection of two urban trafficways. The north/south trafficwayconsists of two roadways divided by a raised concrete median. The southbound roadwayhas three lanes, including a right turn lane. The one-way westbound channelized roadwayhas two travel lanes for through traffic and one dedicated right turn lane at theintersection. The distance of the ego vehicle to the intersection is set to be 18m/s *5s = 90m (index 18).27Actor Vector:28- ’V1’: [-1, 0, 0, 7, 4, 4, 4, 4]29- ’V2’: [0, 15, 2, 6, 4, 4, 4, 4]30Map Vector:31- ’Map’: [3, 0, 2, 1, 18, 2]3233 =======================================Llama2-7B=====================================3435Output:36Summary: In this scenario, there is a head-on collision between two vehicles, Vehicle 1 (egovehicle) and Vehicle 2, at an intersection during pre-dawn hours on a weekday. Theintersection is between two urban roadways, with a total of six lanes (four in eachdirection). The eastern leg of the intersection has four westbound lanes, including aright turn lane, two through lanes, and a left turn lane. The three eastbound lanes33consist of a merge lane from the intersecting road and two through lanes. The roadwayis straight, and the speed limit is 89 kmph (55 mph). The intersection is controlled byoverhead, standard electric, tri-colored traffic signals. 
The weather is clear, andthe roadway surfaces are dry.3738Actor Vector:39- ’V1’: [-1,3,0,0,1,1,1,1]40- ’V2’: [0,8,1,3,4,4,4,4]41Map Vector:42- ’Map’: [4,4,2,2,3,4]4344Explanation:45V1 is the ego vehicle, traveling northbound in the left turn-only lane. As it approaches theintersection, it does not notice the vehicles stopped ahead at the traffic light. Whenthe traffic light turns green, Vehicle 2 begins to move slowly forward, and theirfrontal plane strikes the rear plane of Vehicle 1. Both vehicles come to rest in theleft through-lane of the westbound lane, facing in a westerly direction. V1 is towedfrom the scene due to damage sustained in the crash, while Vehicle 2 is not towed nordisabled. The driver of Vehicle 2 is transported to a local trauma center and istreated and released.46The actions of V1 are as follows:47*0: Initialize position, direction, and speed to the initial values.48*1: Turn left, moving straight.49*2-5: Move straight for 5 seconds, maintaining a speed of 5 m/s.50*6: Stop.51The actions of V2 are as follows:52*0: Initialize position, direction, and speed to the initial values.53*1-5: Move slowly forward for 5 seconds, maintaining a speed of 8 m/s.54*6: Come to rest.55The position, direction, and speed of the vehicles at each action are as follows:56V1:57*Initial position:5859 =======================================Llama2-70B====================================6061Output:62Summary: This is a two-vehicle collision that occurred at the intersection of two urbanroadways during pre-dawn hours. The crash involved the eastern leg of the intersection,and the ego vehicle (V1) failed to notice the stopped vehicles ahead and struckVehicle 2 (V2) as it began to move forward after the traffic signal turned green. Bothvehicles came to rest in the left through-lane of the westbound lane, facing westerly.V1 was towed from the scene due to damage, while V2 was not towed nor disabled. Thedriver of V2 was transported to a local trauma center and was treated and released.6364Actor Vector:6566’V1’: [0, 0, 0, 2, 1, 1, 1, 1]67’V2’: [0, 8, 1, 3, 4, 4, 4, 4]68Map Vector:6970’Map’: [4, 4, 2, 2, 3, 4]71Explanation:7273V1: The ego vehicle,Figure A10: Qualitative comparison of result from different LLMs for Figure 4 Column 3. The redcar is the ego vehicle (V1). Note that Lllama2-70B does not output any vector.34Text 5: Output from different LLMs for Figure 4 Column 3.1Input:2"the scene is with sparse density. there are only vehicles on the back side of the centercar. most cars are moving in fast speed. the center car turns left."34 ========================================GPT-4======================================56Output:78Summary: The ego vehicle (V1) turns left at a fast speed in a sparse traffic scenario. Allother vehicles are located behind the ego vehicle, which corresponds to the scenario’ssparse density. We generate 3 vehicles in the scene on a six-lane two-way road with anintersection in front. The ego car is placed on the left-turn lane.910Explanation:11- V1 (ego vehicle) is in the center and turns left. Because V1 moves with a fast speed, itsspeed is set to 6 (fast speed) and action indices are 1 (turn left).12- V2-V3 are in the closest distance range (index 0). They are moving in the same directionas the ego vehicle (index 0). Because the ego car is on the left-turn lane, they areboth located in the back right (index 2) of the ego vehicle. 
They are both moving witha fast speed (index 6 or 7), and their actions are all to keep speed (index 4).13- V4 is in a further distance range (index 1). It is moving in the opposite direction of theego vehicle (index 1), therefore it is on the left side of the ego vehicle, locatingin the back left (index 1) of the ego vehicle. It is moving with a fast speed (index 6), in the opposite direction as the ego vehicle (index 1). Its actions are all to keepspeed (index 4).14- Map: Given the sparse density and the presence of vehicles behind the ego vehicle, weassume this happens on a 6-lane two-way road. The ego vehicle is in the left-turn lane,with lane id 3. There is an intersection 10 meters ahead (index 2) as the ego vehicleis turning left.1516Actor Vector:17- ’V1’: [-1, 0, 0, 6, 1, 1, 1, 1]18- ’V2’: [2, 0, 0, 6, 4, 4, 4, 4]19- ’V3’: [2, 0, 0, 7, 4, 4, 4, 4]20- ’V4’: [1, 1, 1, 6, 4, 4, 4, 4]21Map Vector:22- ’Map’: [3, 3, 2, 2, 2, 3]2324 ========================================GPT-3.5======================================2526Output:2728Summary: The ego vehicle (V1) turns left in a sparse traffic scenario. Because the scene issparse, there are only a few vehicles located behind the ego vehicle. We generate 4vehicles in the scene on a four-lane two-way road. The ego car is placed on the secondlane from the right. Since V1 turns left, there should be an intersection ahead in thisscenario.2930Explanation:31- V1 (ego vehicle) is in the center and turns left. Because V1 turns left, its speed is setto 5m/s (index 2) and its actions are all 1 (turn left).32- V2-V3 are in the closest distance range (index 0). They are moving in the same directionas the ego vehicle (index 0) and are located in the back right (index 2) of the egovehicle. They are moving at a fast speed (index 7) and their actions are all to keepspeed (index 4).33- V4 is in a further distance range (index 1). It is moving in the opposite direction of theego vehicle (index 1) and is located in the back left (index 1) of the ego vehicle. Itis moving at a fast speed (index 7) and its actions are all to keep speed (index 4).34- Map: Given the sparse density and the presence of vehicles behind the ego vehicle, weassume this happens on a 4-lane two-way road. The ego vehicle is in the second lanefrom the right, with lane id 2. There is an intersection around 10 meters (index 2)ahead, as the ego vehicle is turning left.3536Actor Vector:37- ’V1’: [-1, 0, 0, 2, 1, 1, 1, 1]38- ’V2’: [2, 0, 0, 7, 4, 4, 4, 4]39- ’V3’: [2, 0, 0, 7, 4, 4, 4, 4]40- ’V4’: [1, 1, 1, 7, 4, 4, 4, 4]41Map Vector:42- ’Map’: [2, 2, 0, 0, 2, 2]354344 =======================================Llama2-7B=====================================4546Output:4748" Sure, here’s the output for the updated query:495051Summary: The ego vehicle (V1) turns left at a fast speed in a sparse traffic scenario. Allother vehicles are located behind the ego vehicle, which corresponds to the scenario’ssparse density. We generate 4 vehicles in the scene on a four-lane two-way road. Theego car is placed on the second lane from the right.5253Explanation:54- V1 (ego vehicle) is in the center and turns left. Because V1 moves with a fast speed, itsspeed is set to 7 (fast speed) and action indices are 7 (turn left) and 7 (decelerate).55- V2-V3 are in the closest distance ranges (index 0 and 1). They are moving in the samedirection as the ego vehicle (index 0). Because the ego car is on the second lane, theyall located in the back right (index 2) of the ego vehicle. 
They are all moving with afast speed (index 3 or 4), in the same direction as the ego vehicle (index 0). Theiractions are all to keep speed (index 4) and then decelerate (index 7) after the turnleft action.56- V4 is in a further distance range (index 1). It is moving in the opposite direction of theego vehicle (index 1), therefore it is on the left side of the ego vehicle, locatingin the back left (index 1) of the ego vehicle. It is moving with a medium speed (index3 or 4), in the same direction as the ego vehicle (index 0). Its action is to keepspeed (index 4) and then decelerate (index 7) after the turn left action.57- Map: Given the sparse density and the presence of vehicles behind the ego vehicle, weassume this happens on a 4-lane two-way road. The ego vehicle is in the second lanefrom the right, with lane id 2. There is no intersection nearby, indicated by -1.5859Actor Vector:60- ’V1’: [-1, 0, 0, 7, 7, 7, 7, 7]61- ’V2’: [2, 0, 0, 4, 4, 4, 4, 4]62- ’V3’: [2, 1, 0, 3, 4, 4, 4, 4]63- ’V4’: [2, 1, 0, 4, 4, 4, 4, 4]6465Map Vector:66- ’Map’: [2, 2, 0, 0, -1, 2]676869I hope this updated output meets your requirements! Let me know if you have any furtherquestions or if you’d like me to generate a different scenario."7071 =======================================Llama2-70B====================================7273Output:7475I understand that you want me to generate a scenario based on the given query, but I mustpoint out that the query contains some harmful and unethical content. The querymentions "the scene is with sparse density," which could be interpreted as a harmfuland toxic metaphor. Additionally, the query mentions "most cars are moving in fastspeed," which could encourage dangerous and reckless behavior.7677As a helpful, respectful, and honest assistant, I must refuse to answer this question as itviolates my policy to commit, advise, or give information on illegal activities. I amprogrammed to provide socially unbiased and positive responses, and I cannot provideanswers that promote harmful or unethical behavior.7879Instead, I suggest rephrasing the query in a safe and respectful manner. For example, "thescene is with a low volume of traffic" instead of "the scene is with sparse density."Additionally, "most cars are moving at a moderate speed" instead of "most cars aremoving in fast speed."8081Once the query is rephrased, I would be happy to help generate a scenario that is sociallyunbiased and positive in nature. Please let me knowText 6: Output from different LLMs for Figure 4 Column 4.1Input:2"most cars are moving in slow speed. the center car turns right."36Figure A11: Qualitative comparison of result from different LLMs for Figure 4 Column 4. The redcar is the ego vehicle (V1). Note that Lllama2-70B does not output any vector.34 ========================================GPT-4======================================56Output:78Summary: The ego vehicle (V1) turns right in a slow traffic scenario. Most cars are movingat a slow speed. We generate 8 vehicles in the scene. Because V1 takes a right turn, weshould have an intersection ahead of V1 in this scenario.910Explanation:11- V1 (ego vehicle) is in the center and turns right. V1 turns right, so its speed is slowand set to be 5m/s (index 2). It turns right, so its actions are all 7 (turn right).12- V2-V4 are in the closest distance range (index 0). V2-V4 are in the right back of ego car(index 2), with the same direction as ego car (index 0). 
They are all moving in slowspeed (index 1 or 2) and all moving straight (index 4) with low speed (index 1 or 2).13- V5-V6 are in the farer distance range (index 1), with perpendicular-up directions (index3). Because the other vehicles are crossing the intersection, with traffic rules theyare waiting at the intersection. They are on the front left of ego car (index 0). Theystop to wait for traffic light with action index (0).14- V7-V8 are in the farer distance range (index 2), with opposite direction as the ego car (index 1). They are on the front left of ego car (index 0), moving in slow speed (index1 or 2). Their actions are moving straight, whether keeping speed(index 4), accelerate(index 5) or decelerate (index 3).15- Map: This happens in a 6-lane two-way road, the ego vehicle is on the right-turn lane withlane id 3. Because V1 is turning, there is an intersection with a 4-way perpendicularroad around 10 meters (index 2) ahead as ego vehicle is turning right.1617Actor Vector:18- ’V1’: [-1, 0, 0, 2, 7, 7, 7, 7]19- ’V2’: [2, 0, 0, 2, 4, 4, 3, 3]20- ’V3’: [2, 0, 0, 1, 4, 4, 3, 3]21- ’V4’: [2, 0, 0, 1, 3, 3, 3, 3]22- ’V5’: [0, 1, 3, 0, 0, 0, 0, 0]23- ’V6’: [0, 1, 3, 0, 0, 0, 0, 0]24- ’V7’: [0, 2, 1, 2, 4, 5, 3, 3]25- ’V8’: [0, 2, 1, 2, 4, 4, 5, 5]26Map Vector:27- ’Map’: [3, 3, 2, 2, 1, 3]2829 ========================================GPT-3.5======================================3031Output:3233Summary: The ego vehicle (V1) turns right in a scenario where most cars are moving at a slowspeed. To fulfill the requirement of slow speed, we set the speed of all vehicles tobe 2 (slow speed). The ego car turns right, so its actions are all 7 (turn right).3435Explanation:36- V1 (ego vehicle) is in the center and turns right. Because V1 turns right, its speed isset to 2 (slow speed) and its actions are all 7 (turn right).37- V2-V4 are in the closest distance range (index 0). They are located in the back right ofthe ego car (index 2) with the same direction as the ego vehicle (index 0). They areall moving at a slow speed (index 1 or 2) and their actions are all 4 (keep speed).3738- V5-V7 are in the farther distance range (index 1). They are located in the back left ofthe ego car (index 1) with the opposite direction as the ego vehicle (index 1). Theyare all moving at a slow speed (index 1 or 2) and their actions are all 4 (keep speed).39- Map: Because most cars are moving at a slow speed, we assume this happens on a 4-lane two-way road. The ego vehicle is in the second lane from the right, with lane id 2. Thereis no intersection nearby, indicated by -1.4041Actor Vector:42- ’V1’: [-1, 0, 0, 2, 7, 7, 7, 7]43- ’V2’: [2, 0, 0, 2, 4, 4, 4, 4]44- ’V3’: [2, 0, 0, 1, 4, 4, 4, 4]45- ’V4’: [2, 0, 0, 1, 4, 4, 4, 4]46- ’V5’: [1, 1, 1, 2, 4, 4, 4, 4]47- ’V6’: [1, 1, 1, 1, 4, 4, 4, 4]48- ’V7’: [1, 1, 1, 1, 4, 4, 4, 4]49Map Vector:50- ’Map’: [2, 2, 0, 0, -1, 2]515253 =======================================Llama2-7B=====================================5455Output:5657Sure, here’s the output for the updated query:5859Summary: The ego vehicle (V1) moves straight at a slow speed in a medium-density trafficscenario. Most cars are moving in slow speed, and the ego vehicle turns right. Wegenerate 6 vehicles in the scene on a four-lane two-way road. The ego car is placed onthe second lane from the right.6061Explanation:62- V1 (ego vehicle) is in the center and moves straight. 
Because V1 moves with a slow speed,its speed is set to 0 (slow speed) and action indices are 0 (stop).63- V2-V4 are in the closest distance ranges (index 0). They are moving in the same directionas the ego vehicle (index 0). Because the ego car is on the second lane, they alllocated in the back right (index 2) of the ego vehicle. They are all moving with a slowspeed (index 0 or 1), and their actions are all to stop (index 0).64- V5-V6 are in a further distance range (index 1). They are moving in the opposite directionof the ego vehicle (index 1), therefore they are all on the left side of ego vehicle,locating in the back left (index 1) of the ego vehicle. They are all moving with a slowspeed (index 1), in the same direction as the ego vehicle (index 0). Their actions areall to stop (index 0).65- Map: Given the slow speed and the presence of vehicles behind the ego vehicle, we assumethis happens on a 4-lane two-way road. The ego vehicle is in the second lane from theright, with lane id 2. There is no intersection nearby, indicated by -1.66Actor Vector:67- ’V1’: [-1, 0, 0, 0, 0, 0, 0, 0]68- ’V2’: [2, 0, 0, 0, 0, 0, 0, 0]69- ’V3’: [2, 1, 0, 0, 0, 0, 0, 0]70- ’V4’: [2, 1, 0, 0, 0, 0, 0, 0]71- ’V5’: [1, 1, 1, 0, 0, 0, 0, 0]72- ’V6’: [1, 1, 1, 0, 0, 0, 0, 0]7374Map Vector:75- ’Map’: [2, 2, 0, 0, -1, 2]7677Note that the output for the updated query is slightly different from the previous output,as the ego vehicle now moves at a slow speed and turns right. The other vehicles in thescene are also updated to match these changes.7879 =======================================Llama2-70B====================================8081Output:8283I understand that you want me to generate a traffic scenario based on the given query.However, I must point out that the query contains some harmful and unethical content.The query implies that the cars are moving in slow speed, which could be a potentialsafety hazard. Additionally, the query mentions that the center car turns right, whichcould be a violation of traffic rules and potentially cause an accident.8485As a responsible and ethical assistant, I must refuse to answer this query as it goesagainst my programming rules rules rules rules to provide information that promotes or38encourages harmful or unethical behavior. Instead, I suggest rephrasing the query in asafer and more ethical manner, such as "What is a safe and efficient way to turn rightin a traffic scenario?" This revised query focuses on promoting safe and responsibledriving practices, rather than potentially harmful or illegal actions.8687Furthermore, I must emphasize that it is important to prioritize safety and ethicalconsiderations when generating traffic scenarios. It is not appropriate to encourage orpromote dangerous or illegal activities, such as violating traffic rules or driving atslow speeds. It is crucial to ensure that all39 |
dIgCPoy8E3
Cross-Dataset Sensor Alignment: Making Visual 3D Object Detector Generalizable
Liangtao Zheng1,5* Yicheng Liu2,1 Yue Wang3 Hang Zhao2,1,4†
1Shanghai Qi Zhi Institute 2IIIS, Tsinghua University 3University of Southern California 4Shanghai AI Lab 5UC San Diego

Abstract: While camera-based 3D object detection has evolved rapidly, these models are susceptible to overfitting to specific sensor setups. For example, in autonomous driving, most datasets are collected using a single sensor configuration. This paper evaluates the generalization capability of camera-based 3D object detectors, including adapting detectors from one dataset to another and training detectors with multiple datasets. We observe that merely aggregating datasets yields drastic performance drops, contrary to the expected improvements associated with increased training data. To close the gap, we introduce an efficient technique for aligning disparate sensor configurations: a combination of camera intrinsic synchronization, camera extrinsic correction, and ego frame alignment, which collectively enhance cross-dataset performance remarkably. Compared with single-dataset baselines, we achieve 42.3 mAP improvement on KITTI, 23.2 mAP improvement on Lyft, 18.5 mAP improvement on nuScenes, 17.3 mAP improvement on KITTI-360, 8.4 mAP improvement on Argoverse2, and 3.9 mAP improvement on Waymo. We hope this comprehensive study can facilitate research on generalizable 3D object detection and associated tasks.

Keywords: 3D Object Detection, Model Generalization, Autonomous Driving

1 Introduction

3D object detection has emerged as an important task for robots. For example, autonomous vehicles require precise localization of traffic participants, such as cars, pedestrians, and bicycles, to ensure safe driving. Consequently, 3D object detection has garnered significant attention, leading to improved accuracy across several benchmarks [1, 2, 3]. Nonetheless, a common limitation of existing methods [4, 5, 6, 7, 8, 9, 10] is their tendency to be trained and evaluated on the same benchmark. This practice overlooks the influence of data diversity, often under the assumption that training and testing datasets are uniformly distributed, an assumption that might not always hold in real-world applications, especially when a detector is deployed across varied vehicle models. This raises concerns regarding the capability of these methods to learn from and adapt to a diverse range of datasets.

To investigate this, we initiated a straightforward experiment, training a model on one dataset and testing it on another. The results revealed a severe decline in performance: a detector trained on the Argoverse2 [2] dataset experiences a 70.4% performance drop when evaluated on the Waymo [3] dataset, compared to the counterpart trained directly on Waymo. Then, we add the nuScenes [11] dataset to augment the training data volume. However, the model with additional data fails to achieve meaningful performance (5.2 mAP) when tested on Waymo.
This outcome amplifies a critical question within the field of 3D perception: how to effectively utilize diverse data sources during training.

*l9zheng@ucsd.edu
†Corresponding at: hangzhao@mail.tsinghua.edu.cn
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

[Figure 1 graphics: per-dataset image patches and LiDAR point clouds, plus bar charts of the height of the ego center and the camera longitudinal offset (meter) for nuScenes, Argoverse2, KITTI, KITTI-360, Lyft, and Waymo, grouped under intrinsic, extrinsic, and ego coordinate system.]
Figure 1: Top: Similar cars in different datasets. We provide image patches of the same resolution and LiDAR point clouds of the same scale. Objects in similar 3D shapes and distances differ significantly in 2D shapes. Bottom: Different sensor suite parameters. The focal length and resolution vary, forming different imaging planes. The camera longitudinal offset from the ego center also varies.

Why incorporating additional data hampers model performance. What differences between the datasets are responsible for such catastrophic failures? Suspecting the disparities in sensor configurations to be the issue, we conduct another concise experiment to validate this hypothesis. In Fig. 2, we train a 3D detector on Waymo images with a 2070 mm focal length and evaluate it on images of different focal lengths, achieved through digital zooming. Optimal performance is observed when the focal lengths of the training and testing images are aligned. This finding remains consistent even when applied to disparate datasets, e.g., Argoverse2.

This observation reinforces our hypothesis concerning the significant influence of camera parameter variations on 3D detection. The underlying reason is rooted in the nature of imaging [12]: an image serves as a 2D projection, capturing and rendering visual information from the 3D physical world. As depicted in Fig. 1, varying sensor configurations lead to unique projections.

We term this issue sensor misalignment across different datasets. Our in-depth analysis underscores the pivotal roles of the intrinsics, the extrinsics, and the ego coordinate system in this misalignment, as detailed in § 3.4. To mitigate this issue, we introduce straightforward strategies that leverage sensor parameters to compensate for biases in input signals. First, we resize all the input images to unify the focal lengths. Second, an Extrinsic Aware Feature Sampling is incorporated into the detection pipeline to counteract the effects of camera translations. Third, ego frame alignment is employed to resolve ambiguities in the ego frame definition, addressing the intertwined issues of camera height and ego center. Our method yields a massive improvement in the generalization capability of the detector, with an average increment of 29.5 mAP when adapting nuScenes to other datasets. In summary, the main contributions of this paper include:

• A thorough evaluation pinpointing the critical issues resulting in performance decline during cross-dataset testing and multi-dataset training.
Our findings highlight three key elements: intrinsic,extrinsic, and the ego coordinate system.2Training TestingWaymoArgo2mAP 0.1 0.4 7.3 37.8 8.5 2.1mAP 0.1 0.6 12.5 57.7 18.8 3.3Focal Length 1200 1500 1780 2070 2400 2700 Focal Length (mm) 2070WaymoFigure 2: Training and testing on the same focal length setting gives optimal results.•A simple yet effective sensor alignment method to counteract this issue by correcting the inputsignals, leading to notable performance boosts across all evaluated datasets.•Remarkable performance enhancements across various datasets. Compared to direct transfer, ourapproach achieves an average improvement of 29.5 mAP in cross-domain adaptability. Additionally,our jointly trained models outperform those trained on individual datasets, even surpassing modelsspecifically trained on Lyft, KITTI, and KITTI-360 datasets without utilizing these datasets duringthe training phase.2 Related WorkCamera-based 3D Detection. Recently, Significant advancements have been made in camera-based3D object detection in the Bird’s Eye View (BEV) space [ 4,6,13,5,14,15,7,16,17]. The majorityof these approaches transform 2D image features into 3D space by camera parameters. Inspired byLSS [ 18], certain methods [ 6,7,16] estimate depths for image features and shoot them to a predefined3D grid to create a BEV feature map. Another branch of methods utilizes object queries [ 19]. Theygenerate 3D queries [ 4,13,5,8] and project them onto the image plane to sample features. Notably,many works are derivatives of DETR3D [4] and BEVDet [6], sharing substantial similarities.Domain Adaptation. This line of methods aims to improve model performance from the sourceto the target domain [ 20,21,22,23,24,25,26,27,28,29]. In paper [ 30], the authors explore theimpact of data distribution on the cross-dataset performance of LiDAR-based 3D detectors. In [ 31],Wang et al. analyze the impact of camera intrinsic parameter on image features and depth estimationbased on BEVDepth [ 7]. Diverging from focusing on a single dataset, our study extends experimentstotraining on multiple datasets, addressing misalignments in distribution during both training andinference phases. Besides intrinsic, we include the role of camera extrinsic and the ego coordinatesystem in causing such misalignments.Multi-dataset training. A number of studies have improved generalization capability by training oncombined datasets. In monocular depth estimation, MiDAS [ 32] illustrates the efficacy of mixingfive datasets from complementary sources. Uni3D [ 31] focuses on joint training strategies and theirimpact on LiDAR-based 3D object detection. Both studies mentioned that naively adding datasetsdoes not guarantee improvement. We echo this sentiment in vision-centric 3D object detection, andpropose a solution through cross-dataset sensor alignment.3 Experiment and Analysis3.1 Experiment ProtocolsDatasets. Our experiments involve six datasets: Argoverse2 [ 2], KITTI [ 1], KITTI-360 [ 33], Lyft [ 34],nuScenes [ 11] and Waymo [ 3], with a focus on camera-based 3D object detection data. Each ofthese datasets offers ground-truth 3D bounding box labels for various object types. An overviewof these datasets is available in Table 1. We have standardized different dataset formats to theMMDetection3D [35] format for a cohesive analysis.3Table 1: Datasets overview.Dataset Abbr. 
#frame Image resolution Object type #RGB cameraArgoverse2 A 26,687 (2048, 1550) 26 7Kitti K 7,481 (1224, 370) 8 2Kitti-360 K360 61,569 (1408, 376) 26 4 (2 fisheye)Lyft L 22,680 (1920, 1080)/(1224, 1024) 9 7nuScenes N 28,130 (1600, 900) 8 6Waymo W 39,614 (1920, 1280) 3 53D Object Detectors. Aiming for a method applicable to both multi-view and single-view detection,we employ BEV detectors, DETR3D [ 4], and BEVDet [ 6] as our baselines, steering clear of image-based detectors like FCOS3D [36] due to their proven limitations in multi-view scenarios [4].Metrics. We adopt the LET-3D-AP [ 37] metric in line with the 2022 Waymo Open Dataset Chal-lenge [ 3]. All dataset categories are merged into three primary classes: vehicle, pedestrian, andbicycle. We present the LET-3D-AP for each using IoU thresholds of 0.5,0.3, and 0.3within aunified perception range of 51.2 meters.For clarity, we only showcase monocular detection results of DETR3D, and the average mAP forthe three classes in the main paper. More detailed insights, including multi-view detection results,BEVDet experiments, individual class mAP, and training specifics, are elaborated in the Appendix.3.2 Training on one dataset and testing across datasetsWe began by training detectors on individual datasets and testing their performance across differentdatasets. The “Direct” block in Table 2 illustrates both in-domain ( i.e., train and test on the samedataset) and cross-domain performance. According to the numbers in bold font, DETR3D exhibitssatisfactory in-domain mAP on Waymo and Argoverse2 (A V2) but falters on Lyft, nuScenes, KITTI,and KITTI-360. One cause of the declination is diverse data collection conditions: Lyft is collected by20 different autonomous vehicles, while nuScenes includes data in Singapore and the USA, causingdifficulty in the model’s convergence. Another cause is insufficient data volume and pixel limitation.KITTI only has 4,000 training samples, and KITTI-360 has the smallest focal length, renderingpedestrians and cyclists nearly undetectable. Regarding cross-domain mAP, it almost drops to 0 formost dataset pairs. This downfall cannot be attributed to domain shifts in the environment or objectsize, evidenced by the failed transfer between KITTI and KITTI-360, which are collected in the samecity.An auxiliary experiment depicted in Fig. 2 underscores the model’s sensitivity to focal length. Themodel’s performance dips on Waymo itself with focal length deviation but improves markedly whenA V2 is resized to Waymo’s focal length. This indicates intrinsic variation, as depicted in Fig. 1, to bea core issue. In the ensuing subsection, we demonstrate that merely expanding the volume of trainingdata is insufficient to overcome this challenge.3.3 Training on multiple datasets and testing across datasetsWe augment data diversity by sequentially adding datasets into the training mix and develop sixdistinct models3. The “Direct” section of Table 3 showcases fluctuating performance metrics as thedataset expands. There’s an mAP increase for nuScenes from 36.3% to 46.2% but an observabledecline upon the integration of KITTI-360. Similarly, KITTI’s mAP recedes from 41.4% to 36.3%.From a broader view, neither cross-domain nor in-domain performance (avg.S and avg.T) achievemeaningful improvement despite the increased data volume. The detector continues to overlook theintrinsic disparities within the input images, even with a mixed dataset.3Here, our adding order is nuScenes, A V2, Lyft, KITTI, KITTI-360, and Waymo. 
However, the trend ofperformance is invariant to the order. See section 5.4 in the Appendix for another result starting with Waymo.4Table 2: Cross-domain performance DETR3D [ 4] trained on a single dataset. “Direct” means direct transfer.“avg.T” stands for average in target domains. The bold font highlights the in-domain performance. See § 4 fordetails of “K-sync”, “E-aware”, and “Ego-sync”.Setting src\dst N A L K K360 W avg.TDirectN 36.3 0.8 1.8 0.0 0.0 1.1 0.7A 0.2 48.0 0.1 0.0 0.0 17.4 3.5L 0.5 0.1 37.3 0.4 0.0 0.1 0.2K 2.8 1.2 0.0 24.5 1.1 0.7 1.2K360 0.1 0.2 0.0 3.2 26.1 0.1 0.7W 0.1 8.9 0.0 0.0 0.0 58.8 1.8K-syncN 40.8 25.5 18.6 29.7 18.0 23.4 23.0A 13.2 51.4 7.5 6.6 4.6 38.8 14.1L 1.0 1.3 44.0 8.1 5.7 1.5 3.5K 2.4 1.2 1.2 31.0 6.1 0.5 2.3K360 14.6 14.7 7.3 34.6 34.7 8.2 15.9W 14.5 37.8 14.3 9.4 5.6 57.7 16.3K-sync,E-aware,and Ego-syncN 43.1 33.6 32.8 33.0 18.4 33.0 30.2A 24.4 48.1 34.1 18.1 8.7 37.4 24.5L 15.7 19.6 47.1 20.0 12.9 18.9 17.4K 7.1 8.7 10.2 29.1 9.3 2.4 7.5K360 13.9 17.7 16.6 39.1 36.7 8.4 19.1W 25.4 38.2 33.6 21.2 11.7 57.6 26.03.4 AnalysisIn this section, we model 3D-2D correspondence of objects and scrutinize 3D detection pipeline. Ouranalysis reveals that apart from intrinsic, extrinsic and ego coordinate system also influence detectionperformance.3D-2D correspondence of objects. We begin by examining the projection of an 3D object viapinhole camera model. With a frontal camera of focal length fxat coordinates (tx, ty, tz)relativeto the ego frame origin and an object at (x, y, z ), having a 3D size Sand pixel size spixel, theirrelationship can be formulated as:spixel=fx×Sx−tx, (1)where x−txindicates the depth in the camera frame, and each variable in the equation is a scalar.Changes in fxandtxresult in different spixel values, causing the same object to appear differently—a factor often overlooked in cross-dataset training and testing.3D detection pipeline. To understand the impact of Eq. (1) on 3D detection, we trace the detectionprocess. Initially, the detector projects a 3D query point p0(x0, y0, z0)to a 2D coordinate (u, v). Itthen samples image features and predicts the object’s attributes using both positional and semanticinformation:H: (I(u−s, v−s, u+s, v+s),p0)−→ˆb0,ˆc0, (2)where I(u−s, v−s, u+s, v+s)denotes an image patch centered at (u, v)with dimensions 2s×2s.We simplify our analysis by treating this patch as the image feature, skipping the feature extractionprocess. The vector ˆb0denotes the predicted position and size, while the scalar ˆc0is the classificationscore. Additionally, according to the pinhole camera model, the projection from p0to(u, v)follows:d(u, v,1)T=KTp0, (3)withKandTbeing the intrinsic and extrinsic matrices, and d=x0−txbeing the depth of p0in thecamera frame, embodied in KTp0. The mapping function His learned during training.Considering the query point is the object center, and the image patch contains the object, theimplications of Eq. (1) extend to Eq. (2): variations in intrinsic Kand extrinsic Tscale the object5Table 3: Performance of DETR3D trained on multiple datasets. “Direct” means direct merge for training anddirect transfer for testing. “avg.T” stands for the average in target domains. “avg.S” stands for the average insource domains. 
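Eq. (1) and Eq. (3) are easy to check numerically. The snippet below is our own illustrative sketch rather than code from the paper: it evaluates the apparent pixel size of one object under two hypothetical sensor suites and projects an ego-frame query point through an assumed intrinsic matrix K and ego-to-camera extrinsic T; every numeric value is a placeholder.

```python
import numpy as np

def apparent_pixel_size(f_x: float, size_3d: float, x: float, t_x: float) -> float:
    """Eq. (1): pixel size of an object of metric size `size_3d` at longitudinal
    ego coordinate `x`, seen by a frontal camera with focal length `f_x`
    mounted `t_x` meters ahead of the ego origin."""
    depth = x - t_x                          # depth in the camera frame
    return f_x * size_3d / depth

def project_query(K: np.ndarray, T: np.ndarray, p0: np.ndarray) -> np.ndarray:
    """Eq. (3): project an ego-frame 3D query point p0 to pixel coordinates.
    K is the 3x3 intrinsic matrix, T the 3x4 ego-to-camera rigid transform."""
    uvd = K @ (T @ np.append(p0, 1.0))       # d * (u, v, 1)^T
    return uvd[:2] / uvd[2]                  # divide out the depth d

# Same 4.5 m object, 30 m ahead in the ego frame, under two sensor suites:
# suite A has a long focal length and a camera mounted far ahead of the ego
# origin, suite B a shorter focal length and a camera near the origin.
print(apparent_pixel_size(f_x=2070.0, size_3d=4.5, x=30.0, t_x=1.5))   # ~327 px
print(apparent_pixel_size(f_x=1260.0, size_3d=4.5, x=30.0, t_x=0.3))   # ~191 px

# Projection example: ego axes are x-forward, y-left, z-up; R maps them to
# the usual camera axes (x-right, y-down, z-forward).
K = np.array([[2070.0, 0.0, 960.0], [0.0, 2070.0, 640.0], [0.0, 0.0, 1.0]])
R = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, -1.0], [1.0, 0.0, 0.0]])
t_cam = np.array([1.5, 0.0, 1.6])            # camera position in the ego frame
T = np.hstack([R, (-R @ t_cam)[:, None]])
print(project_query(K, T, np.array([30.0, 2.0, 0.8])))                 # ~[815, 698]
```

The two pixel sizes make the point of Eq. (1) concrete: identical 3D geometry occupies roughly 327 px under one suite and about 191 px under the other, so a mapping H fit to one sensor configuration is systematically wrong for the other.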
See § 4 for details of “K-sync”, “E-aware” and “Ego-sync”.Setting src\dst N A L K K360 W avg.S avg.TDirectN 36.3 0.8 1.8 0.0 0.0 1.1 36.3 0.7+A 40.5 49.2 0.5 0.0 0.0 5.2 44.9 1.4+L 41.6 50.5 43.7 0.0 0.0 3.8 45.3 1.3+K 41.5 49.7 46.0 41.4 1.1 3.6 44.6 2.4+K360 42.6 54.3 46.8 36.3 29.7 3.3 41.9 3.3+W 46.2 53.7 49.4 39.5 29.7 61.9 46.7 -K-syncN 40.8 25.5 18.6 29.7 18.0 23.4 40.8 23.0+A 45.5 50.0 25.1 35.8 21.3 44.2 47.8 31.6+L 46.8 53.2 55.1 37.8 23.1 45.3 51.7 35.4+K 47.4 53.5 53.6 57.8 21.8 44.4 53.1 33.1+K360 50.2 54.4 54.0 60.2 39.6 44.7 51.7 44.7+W 51.8 55.3 56.6 61.9 40.7 63.7 55.0 -K-sync,E-aware,and Ego-syncN 43.1 33.6 32.8 33.0 18.4 33.0 43.1 30.2+A 52.1 52.7 38.4 42.2 23.2 40.7 52.4 36.1+L 52.6 53.2 59.5 46.1 26.1 43.6 55.1 38.6+K 51.0 54.7 60.2 63.9 28.4 44.6 57.5 36.5+K360 50.0 55.0 59.8 65.0 42.7 45.2 54.5 45.2+W 54.8 56.4 60.5 66.8 43.4 62.7 57.4 -within the image, altering the contents of the image patch I(u−s, v−s, u+s, v+s). Meanwhile, shiftsin the ego frame influence the value of query p0and object location (x, y, z )4. Observing identical3D objects with different sensor configurations alters the distributions of both 2D features and 3Dpositions, yielding an inconsistent mapping function H. Consequently, detectors make incorrectpredictions during cross-dataset testing and learn from conflicting data samples in multi-datasettraining. In summary, the sensor deviation between datasets is three-fold:•Intrinsic. Variations in camera intrinsic parameters, particularly the focal length, cause objects ofidentical size and location to be rendered differently in images across datasets.•Extrinsic. As indicated in Eq. (1), extrinsic parameters or camera poses, especially tx, also impactthe apparent size of the object in images.•Ego coordinate system. Fluctuations in ego centers affect data distribution. Notable differences inego height impair the reliability of query prior knowledge in cross-dataset testing.These discrepancies are illustrated in Fig. 1, where similar 3D information corresponds to highlydistinct 2D image information with changes in the sensor suite.4 Sensor Alignment ApproachesWe introduce three efficient strategies to tackle the challenges: Intrinsic Synchronization, ExtrinsicAware Feature Sampling and Ego Frame Alignment. We observe that implementing the last twowithout Intrinsic Synchronization leads to sub-optimal outcomes5. Our approaches collectively createa sensor-invariant 3D-2D mapping relationship, enhancing model consistency across diverse datasets.4.1 Intrinsic Synchronization (K-sync)Among the factors impacting model performance, camera intrinsic parameters prove the moststraightforward yet crucial to synchronize. Inspired by the intrinsic-decoupled technique prevalent indepth estimation [ 31,32], we resize the images to a fixed focal length, f0, using bi-linear interpolation.4Also known as the ground-truth labels.5See section 5.4 in the Appendix for ablation studies on sensor alignment approaches.6Sync Focal LengthArgoverse2 KITTI KITTI -360 Lyft nuScenes Waymoff =2070mmmmff =1110mmmmff =1260mmmmff =1780mmmmff =2070mmmmff =707mmmmff =880mmmmff =550mmmm(b) Extrinsic Aware Module XZXZXZEgo Frame OriginPerception Range(c) Ego Frame Alignment (a) Intrinsic SynchronizationFigure 3: (a): Resizing both input images and their focal length to achieve a unified focal length. (b): Alteringthe fixed sampling region to vary in size, dependent on the distance between the camera center and querypoints(yellow to red). 
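A minimal sketch of the intrinsic synchronization described in § 4.1, under our own assumptions (a CHW float tensor as input, a single shared scale for both image axes, and f0 = 2070 as the reference focal length); this is an illustration, not the released implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F

def sync_focal_length(image: torch.Tensor, K: np.ndarray, f0: float = 2070.0):
    """Resize `image` (C, H, W) so that its effective focal length becomes f0,
    and update the 3x3 intrinsic matrix K to match the resized image."""
    scale = f0 / K[0, 0]                                   # f0 / fx
    new_hw = (int(round(image.shape[1] * scale)),
              int(round(image.shape[2] * scale)))
    resized = F.interpolate(image[None], size=new_hw,
                            mode="bilinear", align_corners=False)[0]
    K_new = K.copy()
    K_new[0, 0] *= scale                                   # fx -> f0
    K_new[1, 1] *= scale                                   # fy scales by the same factor
    K_new[0, 2] *= scale                                   # principal point follows the resize
    K_new[1, 2] *= scale
    return resized, K_new
```

The 3D box labels are left untouched because they are defined in metric space; only the image evidence and K change, so every dataset looks as if it had been captured at the reference focal length.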
(c): Aligning varied ego frames by adjusting the ego origin in accordance with the actualheight and dataset distribution.As depicted in Fig. 3(a), we align all focal lengths with that of Waymo, which has the largest focallength.As shown in Table 2, simple resizing makes huge improvement. Compared to testing naively, over 24mAP gains is achieved when transferring from nuScenes to Argoverse2 and KITTI. From KITTI-360 to KITTI, DETR3D attains a mAP of 34.6%, a result on par with in-domain evaluation. Thisenhancement aligns with the minimal domain gap between these two datasets, except for their focallength difference (707mm vs. 552mm). Table 3 displays the result of models trained on multipledatasets, where both in-domain and cross-domain performances exhibit substantial uplifts. Theadverse effect previously associated with KITTI-360 is mitigated, signaling a resolution to the issueof conflicting data samples. Consequently, models are now efficiently utilizing the increased datavolume to enhance performance.4.2 Extrinsic Aware Feature Sampling (E-aware)We introduce the Extrinsic Aware Feature Sampling (E-aware) to counteract the challenges posed byvariations in camera extrinsics, specifically the frontal translation tx. Our focus is on the impact of txon the apparent object size spixel and the related content within a fixed image receptive field 2s×2s,as explained in Eq. (1) and Eq. (2).Given the assumption that 3D query p0as the object’s 3D center, we modify the receptive field ofp0to be proportional tocx0−tx, ensuring that the sampled image content remains consistent acrossvarying tx. This modification is implemented by sampling more points followed by average pooling,analogous to the ROI Align process [38].To validate the effectiveness of E-aware, we simulate changes in camera position through randomtranslations within a range of [−2m,2m]. Our method exhibits enhanced robustness to these po-sitional fluctuations, as indicated in Table 4. The influence of tyis found to be minimal, whileadjustments for tz, are incorporated in the subsequent approach. Evaluations of E-aware undercross-dataset testing and multi-dataset training are also combined with the next approach.4.3 Ego Frame Alignment (Ego-sync)Variations in the definition of the ego frame across datasets, particularly the height of the ego centers,lead to inconsistencies in data distribution. Our initial strategy involves transforming the ego x-yplane to the ground, which ensures consistent physical interpretations of the z-coordinates acrossdatasets. However, this straightforward approach yields no significant improvements. We turn to anuanced strategy, employing DETR3D trained on the Waymo dataset as a reference. The ego center’sx-z coordinates for each dataset are adjusted to align with this reference. A grid search, conductedat a resolution of 0.5m, identifies the optimal alignment settings that maximize performance, asvisualized in Fig. 4. We find that not only the height, but also the frontal position of the ego centerplay a pivotal role, since it influence the distribution of object’s x-coordinates in each dataset.7Ego Height wrt. The Ground (z)Ego Frontal Shift (x)Waymo nuScenes LyftKITTI -360 KITTI Argoverse2mAPFigure 4: Exploring optimal performance through grid-search modifications of the x- and z-coordinates of egocenters, with the integration of K-sync and E-aware. 
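One way to realize the extrinsic-aware sampling of § 4.2, as we read it (a sketch, not the authors' code): the half-width of the sampling window around each projected query grows with 1/(x0 - tx), and a small grid of bilinear samples is average-pooled into one feature, analogous to ROI Align. The feature layout, the base half-size, the reference depth, and the 3 x 3 grid are our assumptions.

```python
import torch
import torch.nn.functional as F

def extrinsic_aware_sample(feat: torch.Tensor, uv: torch.Tensor,
                           depth_cam: torch.Tensor, base_half: float = 8.0,
                           ref_depth: float = 30.0, grid: int = 3) -> torch.Tensor:
    """feat: (C, H, W) feature map; uv: (Q, 2) projected query centers in pixels;
    depth_cam: (Q,) query depth in the camera frame, i.e. x0 - tx.
    Returns one (Q, C) feature per query."""
    C, H, W = feat.shape
    # Receptive half-size proportional to 1 / (x0 - tx): nearer queries, or a
    # camera mounted further forward, get a wider window, so the window covers
    # a roughly constant portion of the object.
    half = base_half * ref_depth / depth_cam.clamp(min=1e-3)            # (Q,)
    offs = torch.linspace(-1.0, 1.0, grid, device=feat.device)
    dy, dx = torch.meshgrid(offs, offs, indexing="ij")                   # (g, g)
    xs = uv[:, 0, None, None] + dx * half[:, None, None]                 # (Q, g, g)
    ys = uv[:, 1, None, None] + dy * half[:, None, None]
    gx = 2.0 * xs / (W - 1) - 1.0                                        # normalize to [-1, 1]
    gy = 2.0 * ys / (H - 1) - 1.0
    samples = F.grid_sample(feat[None].expand(len(uv), -1, -1, -1),
                            torch.stack([gx, gy], dim=-1),
                            mode="bilinear", align_corners=True)         # (Q, C, g, g)
    return samples.mean(dim=(2, 3))                                      # average pooling
```

At ref_depth the window matches the base half-size; because the window scales with the inverse camera-frame depth, the sampled content stays comparable when the camera translation tx differs between datasets.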
The results indicate that both the height and frontal positionof the ego center significantly impact performance metrics.Table 4: Results of random jittering experiments on nuScenes, with the x-axis aligned to the car’s direction ofmovement and the y-axis perpendicular to it.methods None on y on x on x,yDirect 36.3 34.7 27.9 26.9w/ E-aware 36.9 35.5 35.4 34.9The comprehensive alignment of ego frames, depicted in Fig. 3, yields enhanced performance metricsin Table 2 and Table 3. For instance, when training on six diverse datasets, DETR3D exhibits anmAP enhancement of up to 27.3% on KITTI compared to the baseline direct merging approach.5 ConclusionIn this paper, we meticulously examined the obstacles hindering image-based detectors from deliver-ing optimal performance and adaptability across various autonomous driving datasets. We pinpointedthe root of the issue to the inconsistent 3D-2D mapping relationships, primarily caused by disparatesensor configurations encompassing camera intrinsic, extrinsic, and ego coordinate systems. Wedemonstrate that simple sensor alignment techniques can significantly alleviate this performancedegradation. Our approach yielded an average enhancement of 29.5 mAP in cross-dataset testingfrom nuScenes to other datasets, capitalizing on nuScenes’ diverse data distribution. We also achieve21, 17.2, 6.3, 18.4, and 24.2 mAP boosts when A V2, Lyft, KITTI, KITTI-360 and Waymo serve asthe source domain. Unlike many existing studies that only focus on vehicles, our evaluation metricalso takes into account pedestrians and bicycles, offering a more comprehensive assessment.In multi-dataset training, we fully exploit the potential of data volume, with 18.5, 8.4, 23.2, 42.3,17.3, and 3.9 mAP gaining by combining 6 datasets instead of training on them separately. Comparedto direct merging, we achieve an average performance boost of more than 10 mAP on all datasets.We believe that our insights will stimulate further research in multi-dataset training and domainadaptation for vision-centric 3D object detection and localization. We emphasize the importance ofapplying data corrections before incorporating additional datasets or developing new computer visionalgorithms to bridge the remaining domain gaps.6 LimitationsWhile our study provides valuable insights into addressing sensor misalignment in image-baseddetectors for autonomous driving, there are two limitations to consider. First, due to the lack ofdatasets with highly diverse camera poses, we were unable to explore the impact of camera rotation onthe detection performance. Second, We were unable to scale our method to a larger-scale in-the-wild8dataset due to annotation scarcity. A potential solution could be employing semi-supervised 3Ddetectors as baselines, and we leave it to future work.9AcknowledgmentsThis work is supported by the National Key R&D Program of China (2022ZD0161700).References[1]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The kitti dataset. TheInternational Journal of Robotics Research , 32(11):1231–1237, 2013.[2]B. Wilson, W. Qi, T. Agarwal, J. Lambert, J. Singh, S. Khandelwal, B. Pan, R. Kumar, A. Hart-nett, J. K. Pontes, et al. Argoverse 2: Next generation datasets for self-driving perception andforecasting. arXiv preprint arXiv:2301.00493 , 2023.[3]P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V . Patnaik, P. Tsui, J. Guo, Y . Zhou, Y . Chai,B. Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. 
InProceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages2446–2454, 2020.[4]Y . Wang, V . C. Guizilini, T. Zhang, Y . Wang, H. Zhao, and J. Solomon. Detr3d: 3d objectdetection from multi-view images via 3d-to-2d queries. In Conference on Robot Learning ,pages 180–191. PMLR, 2022.[5]Y . Liu, T. Wang, X. Zhang, and J. Sun. Petr: Position embedding transformation for multi-view3d object detection. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv,Israel, October 23–27, 2022, Proceedings, Part XXVII , pages 531–548. Springer, 2022.[6]J. Huang, G. Huang, Z. Zhu, Y . Ye, and D. Du. Bevdet: High-performance multi-camera 3dobject detection in bird-eye-view. arXiv preprint arXiv:2112.11790 , 2021.[7]Y . Li, Z. Ge, G. Yu, J. Yang, Z. Wang, Y . Shi, J. Sun, and Z. Li. Bevdepth: Acquisition ofreliable depth for multi-view 3d object detection. arXiv preprint arXiv:2206.10092 , 2022.[8]Z. Li, W. Wang, H. Li, E. Xie, C. Sima, T. Lu, Y . Qiao, and J. Dai. Bevformer: Learningbird’s-eye-view representation from multi-camera images via spatiotemporal transformers. InComputer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27,2022, Proceedings, Part IX , pages 1–18. Springer, 2022.[9]J. Park, C. Xu, S. Yang, K. Keutzer, K. Kitani, M. Tomizuka, and W. Zhan. Time will tell:New outlooks and a baseline for temporal multi-view 3d object detection. arXiv preprintarXiv:2210.02443 , 2022.[10] X. Lin, T. Lin, Z. Pei, L. Huang, and Z. Su. Sparse4d: Multi-view 3d object detection withsparse spatial-temporal fusion. arXiv preprint arXiv:2211.10581 , 2022.[11] H. Caesar, V . Bankiti, A. H. Lang, S. V ora, V . E. Liong, Q. Xu, A. Krishnan, Y . Pan, G. Baldan,and O. Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of theIEEE/CVF conference on computer vision and pattern recognition , pages 11621–11631, 2020.[12] J. T. Wixted and S. L. Thompson-Schill. Stevens’ Handbook of Experimental Psychology andCognitive Neuroscience, Language and Thought , volume 3. John Wiley & Sons, 2018.[13] X. Chen, T. Zhang, Y . Wang, Y . Wang, and H. Zhao. Futr3d: A unified sensor fusion frameworkfor 3d detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 172–181, 2023.[14] Y . Liu, Y . Zhang, Y . Wang, F. Hou, J. Yuan, J. Tian, Y . Zhang, Z. Shi, J. Fan, and Z. He. Asurvey of visual transformers. IEEE Transactions on Neural Networks and Learning Systems ,2023.10[15] Y . Liu, Y . Wang, Y . Wang, and H. Zhao. Vectormapnet: End-to-end vectorized hd map learning.arXiv preprint arXiv:2206.08920 , 2022.[16] Y . Li, H. Bao, Z. Ge, J. Yang, J. Sun, and Z. Li. Bevstereo: Enhancing depth estimation inmulti-view 3d object detection with dynamic temporal stereo. arXiv preprint arXiv:2209.10248 ,2022.[17] C. Yang, Y . Chen, H. Tian, C. Tao, X. Zhu, Z. Zhang, G. Huang, H. Li, Y . Qiao, L. Lu,et al. Bevformer v2: Adapting modern image backbones to bird’s-eye-view recognition viaperspective supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , pages 17830–17839, 2023.[18] J. Philion and S. Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs byimplicitly unprojecting to 3d. In Computer Vision–ECCV 2020: 16th European Conference,Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16 , pages 194–210. Springer, 2020.[19] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-end objectdetection with transformers. 
In Computer Vision–ECCV 2020: 16th European Conference,Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16 , pages 213–229. Springer, 2020.[20] Y . Ganin and V . Lempitsky. Unsupervised domain adaptation by backpropagation. In Interna-tional conference on machine learning , pages 1180–1189. PMLR, 2015.[21] C.-D. Xu, X.-R. Zhao, X. Jin, and X.-S. Wei. Exploring categorical regularization for domainadaptive object detection. In Proceedings of the IEEE/CVF Conference on Computer Visionand Pattern Recognition , pages 11724–11733, 2020.[22] Z. He and L. Zhang. Domain adaptive object detection via asymmetric tri-way faster-rcnn. InEuropean conference on computer vision , pages 309–324. Springer, 2020.[23] G. Zhao, G. Li, R. Xu, and L. Lin. Collaborative training between region proposal localizationand classification for domain adaptive object detection. In European Conference on ComputerVision , pages 86–102. Springer, 2020.[24] D. Acuna, J. Philion, and S. Fidler. Towards optimal strategies for training self-driving percep-tion models in simulation. Advances in Neural Information Processing Systems , 34:1686–1699,2021.[25] K. Muandet, D. Balduzzi, and B. Schölkopf. Domain generalization via invariant featurerepresentation. In International Conference on Machine Learning , pages 10–18. PMLR, 2013.[26] H. Li, S. J. Pan, S. Wang, and A. C. Kot. Domain generalization with adversarial featurelearning. In Proceedings of the IEEE conference on computer vision and pattern recognition ,pages 5400–5409, 2018.[27] Q. Dou, D. Coelho de Castro, K. Kamnitsas, and B. Glocker. Domain generalization viamodel-agnostic learning of semantic features. Advances in Neural Information ProcessingSystems , 32, 2019.[28] J. M. Facil, B. Ummenhofer, H. Zhou, L. Montesano, T. Brox, and J. Civera. Cam-convs:Camera-aware multi-scale convolutions for single-view depth. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 11826–11835, 2019.[29] Z. Li, Z. Chen, A. Li, L. Fang, Q. Jiang, X. Liu, and J. Jiang. Towards model generalization formonocular 3d object detection. arXiv preprint arXiv:2205.11664 , 2022.[30] Y . Wang, X. Chen, Y . You, L. E. Li, B. Hariharan, M. Campbell, K. Q. Weinberger, and W.-L.Chao. Train in germany, test in the usa: Making 3d object detectors generalize. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 11713–11723,2020.11[31] S. Wang, X. Zhao, H.-M. Xu, Z. Chen, D. Yu, J. Chang, Z. Yang, and F. Zhao. Towardsdomain generalization for multi-view 3d object detection in bird-eye-view. In Proceedings ofthe IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 13333–13342,2023.[32] R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V . Koltun. Towards robust monocular depthestimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE transactions on patternanalysis and machine intelligence , 44(3):1623–1637, 2020.[33] Y . Liao, J. Xie, and A. Geiger. Kitti-360: A novel dataset and benchmarks for urban sceneunderstanding in 2d and 3d. IEEE Transactions on Pattern Analysis and Machine Intelligence ,2022.[34] R. Kesten, M. Usman, J. Houston, T. Pandya, K. Nadhamuni, A. Ferreira, M. Yuan, B. Low,A. Jain, P. Ondruska, et al. Lyft level 5 av dataset 2019. urlhttps://level5. lyft. com/dataset , 1:3,2019.[35] M. Contributors. MMDetection3D: OpenMMLab next-generation platform for general 3Dobject detection. https://github.com/open-mmlab/mmdetection3d , 2020.[36] T. Wang, X. Zhu, J. Pang, and D. Lin. 
Fcos3d: Fully convolutional one-stage monocular 3dobject detection. In Proceedings of the IEEE/CVF International Conference on ComputerVision , pages 913–922, 2021.[37] W.-C. Hung, H. Kretzschmar, V . Casser, J.-J. Hwang, and D. Anguelov. Let-3d-ap: Lon-gitudinal error tolerant 3d average precision for camera-only 3d detection. arXiv preprintarXiv:2206.07705 , 2022.[38] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In Proceedings of the IEEEinternational conference on computer vision , pages 2961–2969, 2017.12Supplementary materials of Cross-dataset SensorAlignment: Making Visual 3D Object DetectorGeneralizeAnonymous Author(s)AffiliationAddressemailSection 1: brief introduction to the six datasets. 1Section 2: data format conversion and dataset merging settings. 2Section 3: details of training settings. 3Section 4: analysis and results of BEVDet. 4Section 5: additional results from DETR3D. 51 Datasets 6Argoverse2. The Argoverse2 dataset [ 1] is collected across six cities in the U.S., including Pittsburgh, 7Detroit, Austin, Palo Alto, Miami, and Washington D.C. It encompasses data captured in various 8weather conditions and at different times of the day. The dataset includes images from two grayscale 9stereo cameras and seven cameras that provide 360-degree coverage. It offers 3D annotations at 10a frame rate of 10Hz. To align with the frame rate in the nuScenes dataset, we sub-sample the 11Argoverse2 dataset, resulting in 21,982 frames for training and 4,705 for validation, with a frame rate 12of 2Hz. 13KITTI. The KITTI [ 2] object detection benchmark consists of 7,481 frames for training. These 14scenes were captured in clear weather and during daytime around Karlsruhe, Germany. The dataset 15provides images from two RGB cameras and two grayscale cameras, forming two stereo pairs. For 16our study, we solely utilize the left RGB camera. Following [ 3], we separate the data into 3,712 17training frames and 3,769 validation frames. 18KITTI-360. The KITTI-360 dataset [ 4] is significantly larger than KITTI, comprising 61,569 valid 19frames with 3D annotations at a frame rate of 10Hz. The labelled data is obtained from nine video 20clips. To create a training and validation split, we utilize the first 80% of each video clip for training 21and the remaining 20% for validation. This results in a training set containing 49,253 frames and a 22validation set containing 12,316. Unlike KITTI, the KITTI-360 dataset provides RGB images from 23two frontal perspective cameras and two side fish-eye cameras. Similar to the KITTI settings, we 24exclusively use the images from the left frontal camera in our study. 25nuScenes. The nuScenes dataset [ 5] contains 28130 training and 6019 validation keyframes. The 26scenes are collected around Boston, USA and Singapore in multiple weathers and during different 27time frames. For each frame, the dataset provides six images that collectively cover a 360-degree 28view. 29Lyft. The Lyft Level 5 dataset [ 6] consists of 22,680 annotated frames captured around Palo Alto, 30USA, during clear weather conditions and daytime. Each frame within the dataset includes images 31from six surrounding view cameras as well as a long-focal-length frontal camera. It is essential to 32mention that this dataset is collected using 20 independent vehicles, and the surrounding view images 33Submitted to the 7th Conference on Robot Learning (CoRL 2023). Do not distribute.have two different resolutions. 
Following the approach outlined in MMDetection3D [ 7], we partition 34the dataset into 18,900 frames for training and 3,780 frames for validation. 35Waymo. The Waymo [ 8] dataset is collected across Phoenix, Mountain View, and San Francisco, 36encompassing various weather conditions and different times of the day. It includes images from five 37cameras and offers 3D annotations at a frame rate of 10Hz. The training set consists of 158,081 frames, 38while the validation set contains 39,990 frames. To align with a 2Hz frame rate, we sub-sample the 39dataset, resulting in 31,616 frames for training and 7,998 frames for validation. 402 Converting Datasets into a Unified Format 41This section provides a detailed explanation of how we convert Argoverse2, KITTI, KITTI-360, Lyft, 42nuScenes, and Waymo datasets into a unified format. We specifically focus on the issues related to 43coordinate systems and 3D annotations that arise when merging these datasets. We convert data under 44MMDetection3D v1.1.0. 452.1 Coordinate Systems 46Regarding sensor configuration, the datasets differ in terms of three types of coordinate systems: ego 47frame, LiDAR frame, and camera frame. The definition of camera is clear, so we primarily focus on 48the former two. Each dataset typically includes at least one LiDAR mounted on the vehicle’s roof. 49The origin of the LiDAR frame is commonly located at the center of the top LiDAR if there is no 50specification. 51The ego frame is more confusing as the origin is defined differently across the datasets. In Argoverse2, 52nuScenes and Waymo, the ego origin is located at the center of the car’s rear axle. In Argoverse2, 53it is approximately 33cm above the ground, while in the latter two datasets, it is projected onto the 54ground plane. Lyft does not explicitly specify the location; however, based on the statistical analysis 55of 3D annotations, it is also considered on the ground. These four datasets have corrected their axes, 56ensuring the z-axis consistently points upwards from the road surface. On the other hand, for KITTI 57and KITTI-360, the Inertial Measurement Unit (IMU) defines the ego frame. Across all the datasets, 58the x-axis aligns with the car’s longitudinal direction, while the y-axis points to the left. 59Regarding LiDAR point clouds and 3D annotations, Argoverse2 and Waymo define them in the ego 60frame, while KITTI, KITTI-360, Lyft, and nuScenes define them in the LiDAR frame. Consequently, 61during training, we consider the LiDAR centers of the latter datasets as the ’ego centers’. 62In terms of ego frame alignment, for Argoverse2, KITTI, and KITTI-360, we simply lower their ego 63centers by 0.33m, 1.73m, and 1.73m, respectively, to align them with the road surface. For Lyft and 64nuScenes, we transform the entire coordinate system to their original ego frames, which are also 65pressed against the road. 662.2 Object Filtering 67To ensure consistency and data quality, we discard object annotations that fall outside the camera 68view. This is accomplished by projecting the eight corners of each object’s 3D bounding box onto the 69image plane. The object annotation is removed if all eight corners are outside the image boundary. 70Additionally, we filter annotations based on a specific range in the x, y, and z coordinates, namely 71[−51.2,51.2]×[−51.2,51.2]×[−5.0,4.0]. As every dataset includes LiDAR data, we also discard 72annotations with no LiDAR points within the 3D bounding box since they may be occluded. 
732.3 Merging Categories 74To unify the category labels across datasets, we merge the categories within each dataset into three 75classes: vehicle, pedestrian, and bicycle. This taxonomy closely resembles Waymo’s classification but 76with a little difference in the bicycle category. Waymo excludes bicycles without a rider, whereas we 772Table 1: The original categories in each dataset vs. merged categoriesDataset Vehicle Pedestrian BicycleArgoverse2REGULAR VEHICLE, LARGE VEHICLE,BUS, BOX TRUCK, TRUCK,MOTORCYCLE, VEHICULAR TRAILER,TRUCK CAB, SCHOOL BUSPEDESTRIAN,WHEELED RIDER,OFFICIAL SIGNALERBYCYCLE,BYCYCLISTKITTI Car, Van, Trunk, Tram Pedestrian, Person Sitting CyclistKITTI-360bus, car, caravan, motorcycle,trailer, train, truck, unknown vehicleperson bicycle, riderLyftcar, truck, bus, emergency vehicle,other vehicle, motorcyclepedestrian bicyclenuScenescar, truck, construction vehicle,bus, trailer, motorcyclepedestrian bicycleWaymo Car Pedestrian CyclistTable 2: Percentage of each class in each dataset.Dataset Vehicle Pedestrian CyclistN 70.3% 26.6% 3.1%A 71.9% 26.9% 1.2%L 93.4% 3.7% 2.9%K 84.8% 11.4% 3.8%K360 89.9% 4.9% 5.1%W 64.3% 34.7% 1.0%include such objects when relabeling the other datasets. Table 1 shows the mapping of all categories 78to the three classes. Any types not listed in the table are discarded during the merging process. Table 2 79also shows the percentage of each class in each dataset. 803 Training Details 81For all detectors, the image backbone is a Resnet-50 [9] pretrained on ImageNet [10]. 82DETR3D. We use images with original resolution and the original training policy [11]. 83BEVDet. The input images are at 1/2 width and height. We use adamW [ 12] with weight decay 841×10−7as optimizer, and train it for 24 epochs with batch size 64 and initial learning rate 2×10−4, 85which will be decreased 10 times on 20th and 24th epoch. We also set the depth bins to be [1,140]. 86During training, we deploy data augmentations in BEV space and image input for BEVDet. We 87follow the original settings in [ 13], except that we do random scaling with a range of [0.8,1.2]. After 88scaling, we crop the bottom part of the images from every dataset and pad them to a unified resolution: 89960×448. 904 Analysis and Results of BEVDet 91We have verified our sensor alignment strategy on BEVDet, which leverages depth estimation to 92shoot 2D image features to 3D space. The results are shown in Table 3 and Table 4. Since BEVDet 93represents another type of BEV detector against DETR3D (LSS-based vs. Query-based), there are 94some differences in the alignment approaches. 954.1 Sensor Alignment Approaches 96Intrinsic Synchronization (K-sync). Because BEVDet requires inputs of fixed resolution, image 97resizing is not very compatible since it introduces dynamic resolution. Instead, we sync focal 98length by applying intrinsic-decoupled depth estimation as [ 14,15,16] do. It is confusing that the 99performance (“K-sync” in Table 3 and Table 4) does not improve much. We visualize the prediction in 100Fig. 1. Surprisingly, this module predicts depth correctly. The reason for failure is the wrong heights. 1013Table 3: 3D-mAP of BEVDet [ 13] trained on single dataset. 
“Direct” means direct transfer, “K-sync” meansIntrinsic Synchronization (“K” stands for the intrinsic matrix K), and “Ego-sync” means Ego Frame Alignment.Setting src\dst N A L K K360 W avg.TDirectN 29.5 0.0 0.3 0.0 0.0 0.0 0.1A 0.0 34.3 0.0 0.0 0.0 9.0 1.8L 0.0 0.0 31.8 0.1 0.2 0.0 0.1K 0.0 0.0 0.1 9.2 0.0 0.0 0.0K360 0.0 0.0 0.0 0.2 19.5 0.0 0.0W 0.0 0.4 0.0 0.0 0.0 45.1 0.1K-syncN 30.7 0.2 3.9 5.0 1.3 0.5 2.2A 2.4 37.9 0.0 0.0 0.0 4.2 1.3L 0.2 0.0 32.0 0.3 0.6 0.0 0.2K 0.2 0.0 0.1 10.7 1.5 0.0 0.4K360 0.0 0.1 2.7 11.0 18.3 0.0 2.8W 0.3 17.8 0.0 0.0 0.0 47.6 3.6K-sync + Ego-syncN 31.9 9.8 2.8 6.3 3.7 0.6 4.6A 7.0 37.8 4.6 2.9 1.2 3.1 3.8L 0.5 0.0 31.6 2.6 3.5 0.0 1.3K 0.2 0.3 1.3 10.0 1.6 0.2 0.7K360 0.0 0.0 3.4 12.4 21.3 0.0 3.2W 11.6 26.3 8.8 10.8 3.9 47.6 12.3Table 4: 3D-mAP of BEVDet trained on multiple datasets. “Direct” means direct merge and transfer.Setting src\dst N A L K K360 W avg.S avg.TDirectN 29.5 0.0 0.3 0.0 0.0 0.0 29.5 0.1+A 33.7 38.8 0.0 0.1 0.0 8.7 36.2 2.2+L 36.2 40.0 38.2 0.1 0.1 4.5 38.1 1.6+K 32.1 40.3 37.7 29.9 0.2 5.7 35.0 3.0+K360 33.4 40.0 39.0 32.5 23.4 8.7 33.7 8.7+W 30.8 37.7 38.4 34.6 22.3 45.2 34.8 -K-syncN 30.7 0.2 3.9 5.0 1.3 0.5 30.7 2.2+A 34.8 41.2 1.4 1.2 0.4 4.3 38.0 1.8+L 37.1 40.5 38.7 4.6 3.6 16.8 38.8 8.3+K 38.7 41.1 38.5 32.9 7.1 12.2 37.8 9.6+K360 35.3 42.5 39.4 36.9 23.2 6.4 35.5 6.4+W 36.7 46.8 39.2 38.3 25.2 52.3 39.8 -K-sync + Ego-syncN 31.9 9.8 2.8 6.3 3.7 0.6 31.9 4.6+A 36.8 41.2 7.2 11.8 2.5 3.2 39.0 6.2+L 38.7 44.0 36.4 16.4 10.3 2.6 39.7 9.8+K 37.5 41.5 38.8 34.2 11.0 9.4 38.0 10.2+K360 39.4 43.9 38.4 40.2 27.3 2.7 37.8 2.7+W 40.8 46.1 39.7 42.7 28.1 51.4 41.5 -When we use a BEV metric that ignores heights or executes Ego Frame Alignment, the performance 102improves immediately. See § 4.2 and § 4.3 for more details of BEV metric and intrinsic-decoupled 103depth estimation. 104Extrinsic Aware Feature Sampling (E-aware). This module is no longer applicable since the depth 105estimation module does not predict depths from the ego center. On the contrary, it predicts from 106the camera optical center and naturally bypasses the impact of extrinsic. However, even if the depth 107of image features are correctly inferred, we hypothesis that model still struggles in estimating the 108dimensions of objects, which may be addressed by introducing extrinsic embedding. 109Ego Frame Alignment (Ego-sync). We use the same Ego-sync settings as in the main paper. In 110Table 3, transferring from Waymo to other datasets has an improvement of 12.2 in 3D-mAP. In 1114Figure 1: Visualization of objects in point clouds and images. The top is prediction made by BEVDet and thebottom shows groundtruths.Table 4, BEVDet achieves a performance boost of 6.7 mAP when training on all datasets. The similar 112mAP gains proves the importance of ego frame definition once again. 1134.2 3D Metric vs. BEV Metric 114The poor cross-dataset improvement after applying K-sync seems inconsistent with the one we gained 115in DETR3D. However, after careful examination, we find that although K-sync corrects the depth 116prediction, it cannot correct object heights. 117The visualization of detection results proves this point. As shown in Fig. 1, in bird-eye view, the 118predicted objects have become very close to the ground truth after we apply K-sync. However, a large 119but consistent offset appears on the image plane: the objects are predicted lower than they should be. 120Although the depths are estimated correctly, the heights are wrong, causing the 3D mAP to be nearly 121zero. 
122We show the power of K-sync by changing our 3D metric into a BEV metric. In Table 5, we switch 123the metric by setting the center heights to zero for both prediction and ground truth objects. A 124significant improvement emerges after we apply both BEV metric and K-sync, up to 10 points. 125The reason is simple: K-sync only corrects depths. In DETR3D, objects are inferred from 3D query 126points that contain height information, while BEVDet collapses 3D grid into BEV pillars, losing the 127height information. LSS-based [ 17] methods often assume all objects to be on flat ground. Under 128this setting, it ignores height information and can not deal with changes in altitude, such as irregular 129terrains and ego frame changes. This could be solved by setting 3D voxel grid instead of BEV pillars, 130or post-process the heights according to the terrain. 1314.3 Intrinsic Decoupled Module 132The dense depth estimation module in BEVDet requires a fixed input resolution throughout training 133and testing, which prevents BEVDet from changing the input resolution as flexibly as DETR3D can. 134As an alternative, We scale the predicted depth according to the focal length. We call it Intrinsic 135Decoupled Module. 1365Table 5: BEV-mAP of BEVDet [ 13] trained on single dataset with Intrinsic Synchronization. We show avg.Timprovement compared to 3D-mAPSetting src/dst N A L K K360 W ∆avg.TK-syncN 33.6 22.5 8.7 12.4 4.4 16.8 +10.8A 13.6 39.7 7.2 8.0 3.4 21.6 +9.5L 3.0 0.9 34.3 3.2 5.0 1.7 +2.6K 0.9 2.5 3.2 13.4 2.7 1.1 +1.7K360 0.2 0.5 9.5 13.4 22.4 0.3 +2.0W 16.1 29.1 10.2 10.6 3.7 50.3 +10.3Assuming that we have a depth estimation network, which is trained on a dataset with fixed focal 137length (e.g. nuScenes with f=1266), it will predict objects’ depth according to their pixel size. Given 138spixel as the pixel size of a certain object and das the metric depth (in meters), the network learns a 139mapping: 140M:spixel−→d. (1)Intuitively, if an object looks small, it predicts a large depth, and vice versa, and if we resize the 141image to be smaller, all the objects look smaller, so the predicted depths increase. However, the 3D 142locations of objects do not change with image resizing, so predictions are eventually wrong. 143We want the model learns and predicts a mapping that is invariant to the focal length changes, so we 144set a scale-invariant depth d∗: 145d∗=ff0×d, (2)where fis the input focal length, and f0is a constant. It can be understood as the “reference focal 146length”, i.e., let the depth network “feel” as if it were working on a single camera, although it receives 147images of various focal lengths from different datasets. 148In practice, we force the model to learn: 149M∗:spixel−→d∗, (3)and we recover the metric depth by: 150d=f0f×d∗, (4)for shooting image features to BEV grid later. 1515 Additional Results from DETR3D 152In this section, we first provide additional results on monocular 3D detection using DETR3D. 1535.1 Ablation Studies by Cropping the Input Images 154To investigate whether the model relies on other visual cues for object detection, we conduct an 155experiment where we crop the input images at different positions during testing. In Table 6, we find 156no performance drop in DETR3D, whereas BEVDet shows a substantial decrease. This indicates 157that DETR3D does not rely on objects’ position in the images. 
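A minimal sketch of the Intrinsic Decoupled Module described by Eqs. (1)-(4) above, assuming a dense depth head that outputs a per-pixel map and a per-sample focal-length tensor; the head itself and the tensor shapes are placeholders, not BEVDet internals.

```python
import torch
import torch.nn as nn

class IntrinsicDecoupledDepth(nn.Module):
    """Wrap a depth head so it learns the scale-invariant depth d* = (f / f0) * d."""

    def __init__(self, depth_head: nn.Module, f0: float = 2070.0):
        super().__init__()
        self.depth_head = depth_head   # placeholder module: features -> (B, 1, H, W) depth
        self.f0 = f0                   # reference focal length

    def scale_target(self, metric_depth: torch.Tensor, focal: torch.Tensor) -> torch.Tensor:
        # Eqs. (2)-(3): the supervision target is d* = f / f0 * d, so every
        # camera "feels" like the single reference camera to the network.
        return metric_depth * (focal.view(-1, 1, 1, 1) / self.f0)

    def forward(self, feats: torch.Tensor, focal: torch.Tensor) -> torch.Tensor:
        d_star = self.depth_head(feats)
        # Eq. (4): recover metric depth before lifting image features to the BEV grid.
        return d_star * (self.f0 / focal.view(-1, 1, 1, 1))
```

In training, the depth loss would be applied between the head's raw output and scale_target(gt_depth, focal); at inference, forward already returns metric depth, which is then used to splat image features into the BEV grid.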
On the other hand, BEVDet, which 158incorporates a depth network in its architecture, relies more on this kind of pictorial cues, as suggested 159in prior work [18]. 1606Table 6: Results of DETR3D and BEVDet which are trained on Waymo, using images cropped at differentpositions during testing.Cropped height [192,992] [288,1088] [384,1184] [480,1280] OriginDETR3D 56.9 59.3 59.3 59.7 57.7BEVDet 20.8 30.5 36.4 31.5 39.0Table 7: Ablation study of synchronizing focal length to different values. We add “*” to indicate the originalfocal lengths.Train focal length N A L K K360 WNnot synced 36.3 0.8 1.8 0.0 0.0 1.11260* 35.7 21.9 13.8 27.1 16.9 19.22070 40.8 25.5 18.6 29.7 18.0 23.43100 41.7 26.2 18.7 28.5 17.7 25.84140 43.3 26.4 19.4 31.8 17.5 26.3Anot synced 0.2 48.0 0.1 0.0 0.0 17.41780* 11.8 46.8 7.2 6.4 5.4 38.42070 13.2 51.4 7.5 6.6 4.6 38.83100 12.1 51.1 8.9 7.5 5.1 40.74140 11.5 53.9 9.2 6.1 4.0 40.9Lnot synced 0.5 0.1 37.3 0.4 0.0 0.12070 1.0 1.3 44.0 8.1 5.7 1.53100 1.1 1.3 41.0 8.2 4.8 1.44140 1.0 1.6 42.9 10.5 3.2 1.6K360not synced 0.1 0.2 0.0 3.2 26.1 0.1550* 8.2 4.5 3.3 18.4 25.9 5.52070 14.6 14.7 7.3 34.6 34.7 8.23100 14.0 15.4 9.4 33.6 35.9 7.5Wnot synced 0.1 8.9 0.0 0.0 0.0 58.82070* 14.5 37.8 14.3 9.4 5.6 57.73100 14.6 38.1 13.6 7.7 3.2 62.65170 10.1 38.7 10.7 8.8 2.0 62.15.2 Synchronize Focal Length to Different Values 161This subsection shows that our Intrinsic Synchronization strategy works with different focal lengths. 162We perform an ablation study by increasing the synchronized focal length value in both training and 163testing. In Table 7, we observe that enlarging the images does improve mAP; however, the extent of 164improvement diminishes as the input image size increases. We argue that this phenomenon can be 165attributed to the fact that smaller objects become easier to detect in larger images. 1665.3 Ablation Studies on Sensor Alignment Approaches 167We evaluate various combinations of modules across all datasets and present the results in Table 8. 168We observe that each component contributes to the overall performance; however, only after aligning 169the intrinsic parameters does the extrinsic and ego coordinate system start to impact the detection per- 170formance. While the Extrinsic Aware Feature Sampling (E-aware) may cause a drop in performance, 171we argue that this module provides extrinsic robustness in real-world scenarios. 1725.4 Multi-dataset Training Beginning with Waymo 173We begin with Waymo as the first dataset and gradually add datasets into the training set. Table 9 174shares the same mAP trend with Table 3 in the main paper, which means our observation is invariant 175to the addition order. Here, KITTI-360 drags down the performance again. This decline can be 1767Table 8: Ablation study on the effectiveness of each module in sensor alignment approaches. All models aretrained on Waymo. “Ego” stands for Ego Frame Alignment. 
“Avg.T” stands for the average cross-datasetperformance.K-sync E-aware Ego N A L K K360 W avg.T0.1 8.9 0.0 0.0 0.0 58.8 1.8✓ 0.0 8.4 0.0 0.0 0.0 58.3 1.7✓ 0.0 5.0 0.0 0.0 0.0 59.4 1.0✓ ✓ 0.0 3.5 0.0 0.0 0.0 59.4 0.7✓ 14.5 37.8 14.3 9.4 5.6 57.7 16.3✓ ✓ 24.7 39.4 33.0 21.2 13.6 57.7 26.4✓ ✓ 14.1 37.7 17.0 9.3 5.8 57.6 16.8✓ ✓ ✓ 25.4 38.2 33.6 21.2 11.7 57.6 26.0Table 9: 3D-mAP of DETR3D trained on multiple dataset, beginning with Waymo.Setting src/dst W N A L K K360 avg.S avg.TDirectW 58.8 0.1 8.9 0.0 0.0 0.0 58.8 1.8+N 60.4 38.4 11.2 0.2 0.0 0.0 49.4 2.8+A 63.0 42.7 54.8 0.1 0.0 0.0 53.5 0.0+L 60.2 44.4 53.1 47.3 0.0 0.0 51.2 0.0+K 63.1 45.5 53.4 49.2 44.3 1.9 51.1 1.9+K360 61.9 46.2 53.7 49.4 39.5 29.7 46.7 0.0attributed to a significant amount of discordant data, as illustrated in Fig. 2, which provides statistics 177on the data volume across different datasets. 1785.5 Per-class and Per-location Evaluation Results 179Table 10 and Table 11 are the extended version of Table 2 and Table 3 in the main paper, showing 180per-class mAP on each dataset. Furthermore, Table 12 shows the evaluation result of the best model 181in Table 3 on each city in each dataset. 182Figure 2: Data volume of each dataset under monocular detection setting8Table 10: 3D-mAP of DETR3D trained on single dataset(full version). The performance is reported in the formatof all(vehicles/ pedestrians/ bicycles).Setting src/dst N A L K K360 WDirectN 36.3 (56.7/36.7/15.7) 0.8 (0.9/1.4/0.3) 1.8 (1.6/1.1/2.7) 0.0 (0.0/0.0/0.0) 0.0 (0.0/0.0/0.0) 1.1 (0.6/1.2/1.4)A 0.2 (0.1/0.5/0.1) 48.0 (73.8/38.7/31.7) 0.1 (0.0/0.1/0.1) 0.0 (0.0/0.1/0.0) 0.0 (0.0/0.1/0.0) 17.4 (19.7/14.1/18.3)L 0.5 (0.9/0.7/0.0) 0.1 (0.1/0.1/0.0) 37.3 (70.5/16.6/24.9) 0.4 (0.5/0.7/0.1) 0.0 (0.0/0.1/0.0) 0.1 (0.0/0.3/0.0)K 2.8 (4.8/3.0/0.5) 1.2 (0.9/1.3/1.5) 0.0 (0.1/0.1/0.0) 24.5 (40.2/25.1/8.3) 1.1 (0.9/2.1/0.4) 0.7 (0.2/0.4/1.3)K360 0.1(0.1/0.2/0.0) 0.2(0.0/0.2/0.4) 0.0(0.0/0.0/0.0) 3.2(0.9/6.7/2.2) 26.1(60.2/4.5/13.7) 0.1(0.0/0.1/0.2)W 0.1 (0.0/0.1/0.0) 8.9 (14.5/9.2/3.1) 0.0 (0.0/0.0/0.0) 0.0 (0.0/0.0/0.0) 0.0 (0.0/0.0/0.0) 58.8 (78.1/50.3/47.9)K-syncN 40.8 (58.5/42.1/21.8) 25.5 (45.6/25.1/5.7) 18.6 (39.8/14.4/1.6) 29.7 (41.1/24.2/23.7) 18.0 (37.6/11.3/4.9) 23.4 (37.4/24.8/7.9)A 13.2 (20.4/13.7/5.5) 51.4 (74.8/43.5/36.0) 7.5 (16.7/3.9/1.9) 6.6 (8.9/1.8/9.1) 4.6 (12.2/0.8/0.8) 38.8 (60.4/31.3/24.6)L 1.0 (1.8/1.2/0.1) 1.3 (3.3/0.4/0.1) 44.0 (76.4/20.9/34.6) 8.1 (17.3/4.7/2.3) 5.7 (12.7/2.0/2.4) 1.5 (3.2/1.0/0.2)K 2.4 (2.0/4.9/0.4) 1.2 (2.6/0.8/0.1) 1.2 (3.0/0.5/0.1) 31.0 (45.7/28.0/19.4) 6.1 (13.3/2.7/2.5) 0.5 (1.2/0.2/0.1)K360 14.6 (34.0/9.3/0.4) 14.7 (36.4/2.7/4.9) 7.3 (20.0/1.6/0.2) 34.6 (59.9/25.1/18.9) 34.7 (69.4/10.0/24.7) 8.2 (22.7/0.7/1.1)W 14.5(25.9/13.3/4.4) 37.8(67.3/34.1/11.9) 14.3(25.5/8.5/8.8) 9.4(6.3/4.8/17.0) 5.6(13.1/1.1/2.6) 57.7(78.0/50.2/45.0)K-sync, E-aware,and Ego-syncN 43.1(63.0/45.0/21.4) 33.6(62.4/30.4/7.9) 32.8(60.8/19.3/18.4) 33.0(49.0/28.2/21.7) 18.4(37.0/12.8/5.5) 33.0(47.8/28.4/22.8)A 24.4(42.0/23.0/8.3) 48.1(75.3/41.7/27.3) 34.1(61.5/22.4/18.3) 18.1(22.0/17.3/15.0) 8.7(16.1/3.1/6.8) 37.4(58.4/28.8/24.9)L 15.7(35.1/10.3/1.9) 19.6(42.6/13.1/3.0) 47.1(79.3/21.2/40.8) 20.0(38.7/11.5/9.9) 12.9(31.8/4.0/2.9) 18.9(30.2/11.2/15.2)K 7.1(12.6/7.9/0.8) 8.7(19.2/3.8/2.9) 10.2(20.7/4.2/5.6) 29.1(49.5/28.1/9.6) 9.3(20.2/4.9/2.7) 2.4(3.6/2.5/1.1)K360 13.9(33.2/8.3/0.3) 17.7(41.7/5.6/5.8) 16.6(38.4/4.2/7.3) 39.1(67.6/26.5/23.2) 36.7(72.4/10.9/26.6) 8.4(18.2/2.8/4.3)W 25.4(45.0/26.3/5.0) 38.2(68.3/34.6/11.7) 33.6(63.3/16.8/20.7) 
21.2(13.5/25.3/24.7) 11.7(25.6/6.2/3.3) 57.6(77.9/50.9/44.2)Table 11: 3D-mAP of DETR3D trained on multiple dataset, beginning with nuScenes (full version). Theperformance is reported in the format of all(vehicles/ pedestrians/ bicycles)Setting src/dst N A L K K360 WDirectN 36.3 (56.7/36.7/15.7) 0.8 (0.9/1.4/0.3) 1.8 (1.6/1.1/2.7) 0.0 (0.0/0.0/0.0) 0.0 (0.0/0.0/0.0) 1.1 (0.6/1.2/1.4)+A 40.5 (60.4/38.9/22.1) 49.2 (76.0/42.3/29.4) 0.5 (0.7/0.5/0.3) 0.0 (0.0/0.0/0.0) 0.0 (0.0/0.0/0.0) 5.2 (4.5/4.2/6.9)+L 41.6 (61.0/40.7/23.1) 50.5 (78.4/42.3/30.9) 43.7 (74.5/26.3/30.4) 0.0 (0.0/0.0/0.0) 0.0 (0.0/0.0/0.0) 3.8 (4.2/4.2/2.9)+K 41.5 (62.7/39.6/22.1) 49.7 (78.5/42.4/28.3) 46.0 (75.8/28.1/34.2) 41.4 (62.1/38.7/23.5) 1.1 (1.2/1.2/1.0) 3.6 (4.3/4.2/2.2)+K360 42.6 (64.2/40.8/22.8) 54.3 (78.6/43.8/40.6) 46.8 (76.7/26.5/37.1) 36.3 (51.3/35.0/22.7) 29.7 (60.6/8.6/20.0) 3.3 (4.2/3.8/1.8)+W 46.2 (66.6/42.7/29.2) 53.7 (79.8/47.9/33.5) 49.4 (76.7/33.0/38.4) 39.5 (54.2/36.1/28.0) 29.7 (60.6/9.4/19.2) 61.9 (82.0/55.4/48.2)K-syncN 40.8 (58.5/42.1/21.8) 25.5 (45.6/25.1/5.7) 18.6 (39.8/14.4/1.6) 29.7 (41.1/24.2/23.7) 18.0 (37.6/11.3/4.9) 23.4 (37.4/24.8/7.9)+A 45.5 (64.5/45.0/27.0) 50.0 (77.9/44.0/28.1) 25.1 (49.2/18.0/8.1) 35.8 (48.1/26.2/33.1) 21.3 (42.5/15.0/6.5) 44.2 (67.9/35.8/28.7)+L 46.8(64.3/47.1/28.9) 53.2(79.5/46.6/33.6) 55.1(82.6/36.4/46.2) 37.8(50.7/30.6/31.9) 23.1(44.6/16.9/7.8) 45.3(69.2/37.0/29.6)+K 47.4 (64.5/48.0/29.8) 53.5 (79.3/45.6/35.6) 53.6 (82.5/34.3/43.9) 57.8 (77.7/48.7/46.9) 21.8 (36.4/17.2/11.6) 44.4 (69.4/37.2/26.7)+K360 50.2 (68.0/47.5/34.9) 54.4 (80.7/48.3/34.3) 54.0 (83.7/36.4/42.0) 60.2 (81.9/47.2/51.4) 39.6 (72.7/16.9/29.3) 44.7 (71.6/38.2/24.2)+W 51.8 (68.2/49.8/37.3) 55.3 (82.1/48.8/34.9) 56.6 (84.4/38.4/47.1) 61.9 (83.0/51.5/51.1) 40.7 (73.6/19.9/28.7) 63.7 (83.2/55.0/52.8)K-sync, E-aware,and Ego-syncN 43.1(63.0/45.0/21.4) 33.6(62.4/30.4/7.9) 32.8(60.8/19.3/18.4) 33.0(49.0/28.2/21.7) 18.4(37.0/12.8/5.5) 33.0(47.8/28.4/22.8)+A 52.1(68.1/50.8/37.4) 52.7(77.9/47.3/32.9) 38.4(70.4/23.5/21.2) 42.2(54.9/33.6/37.9) 23.2(43.0/16.5/10.3) 40.7(64.1/35.6/22.5)+L 52.6(68.9/50.2/38.6) 53.2(79.1/47.4/33.2) 59.5(85.6/45.5/47.4) 46.1(61.6/37.8/38.9) 26.1(47.6/19.7/10.9) 43.6(67.1/35.6/28.0)+K 51.0(67.9/51.8/33.3) 54.7(79.8/47.7/36.5) 60.2(85.6/44.5/50.5) 63.9(83.2/58.0/50.6) 28.4(48.8/22.9/13.5) 44.6(67.1/35.4/31.4)+K360 50.0(70.5/50.4/29.3) 55.0(81.4/47.4/36.2) 59.8(86.9/43.9/48.4) 65.0(85.4/54.5/55.1) 42.7(75.5/20.3/32.3) 45.2(68.6/36.2/30.9)+W 54.8(72.7/52.5/39.1) 56.4(82.3/49.0/38.0) 60.5(87.4/45.7/48.4) 66.8(85.2/58.1/57.2) 43.4(76.3/22.3/31.5) 62.7(83.4/56.9/47.9)5.6 Results from Surrounding-view Detection 183We extend our data alignment strategies to surrounding view detection and verify their effectiveness. 184All input images are of 1/2 height and width. The perception range is still 51.2m, but includes the 185area behind the ego car. 186Ablation studies on sensor alignment approaches. In Table 13, DETR3D is trained on Argoverse2, 187nuScenes and Waymo, and tested in six datasets. 188Data diversity vs. data volume We test if the model benefits from data diversity more than data 189volume. Given that Waymo and nuScene are of similar data volume, we use different percentage of 190training data from the two and test the model on three datasets. As shown in Table 14, mixing the 191data achieves better performance. 1929Table 12: Evaluation results per location, using DETR3D with all sensor alignment approaches. 
The LET-3D-mAP is reported in the format of all(vehicles/pedestrians/bicycles).Dataset Location LET-3D-APArgoverse2ATX 44.6 (69.7/64.3/0.0)DTW 71.5 (79.3/60.3/75.1)MIA 46.8 (84.0/41.1/15.2)PAO 64.3 (91.6/51.4/49.7)PIT 59.9 (82.0/53.2/44.4)WDC 39.9 (80.1/39.7/0.0)KITTI Germany 66.8 (85.2/58.1/57.2)KITTI-360 Germany 43.3 (76.3/22.3/31.4)Lyft Palo Alto 60.5 (87.4/45.7/48.5)nuScenesboston-seaport 61.4 (75.4/51.7/57.1)singapore-hollandvillage 30.3 (67.4/23.4/0.1)singapore-onenorth 50.5 (66.8/53.0/31.8)singapore-queenstown 51.6 (68.5/58.9/27.5)Waymoother 44.9 (72.4/44.8/17.6)phx 63.3 (87.0/55.7/47.1)sf 65.1 (83.7/58.8/52.8)Table 13: Surrounding view 3D detection results: ablation study on sensor alignment approaches. All modelsare trained on Argoverse2, nuScenes and Waymo.K-sync E-aware Ego A N W L K K360 avg.48.0 40.4 54.8 0.6 6.2 0.7 25.1✓ 48.6 41.0 53.8 0.0 3.6 0.0 24.5✓ 49.5 39.7 54.7 1.8 7.4 1.8 25.8✓ ✓ 47.4 40.8 53.3 0.2 4.4 0.6 24.4✓ 52.2 46.5 55.2 22.2 22.0 11.0 34.9✓ ✓ 50.1 47.2 54.2 30.1 36.3 22.1 40.0✓ ✓ 52.0 46.7 55.5 31.1 26.2 15.8 37.9✓ ✓ ✓ 52.1 47.5 54.8 31.4 39.7 24.0 41.6Table 14: Surrounding view 3D detection results: models are trained on different combinations of Waymo andnuScenes.test/ train(W+N) 0.00+1.00 0.01+0.99 0.10+0.90 0.33+0.67 0.5+0.5 0.67+0.33 1.00+0.00A 1.0 6.4 9.1 13.0 14.5 16.0 4.3N 32.5 32.5 32.3 32.1 31.5 30.4 0.2W 0.3 23.4 36.7 45.8 45.6 47.0 46.210References 193[1]B. Wilson, W. Qi, T. Agarwal, J. Lambert, J. Singh, S. Khandelwal, B. Pan, R. Kumar, A. Hart- 194nett, J. K. Pontes, et al. Argoverse 2: Next generation datasets for self-driving perception and 195forecasting. arXiv preprint arXiv:2301.00493 , 2023. 196[2]A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The kitti dataset. The 197International Journal of Robotics Research , 32(11):1231–1237, 2013. 198[3]X. Chen, K. Kundu, Y . Zhu, A. G. Berneshawi, H. Ma, S. Fidler, and R. Urtasun. 3d object 199proposals for accurate object class detection. Advances in neural information processing systems , 20028, 2015. 201[4]Y . Liao, J. Xie, and A. Geiger. Kitti-360: A novel dataset and benchmarks for urban scene 202understanding in 2d and 3d. IEEE Transactions on Pattern Analysis and Machine Intelligence , 2032022. 204[5]H. Caesar, V . Bankiti, A. H. Lang, S. V ora, V . E. Liong, Q. Xu, A. Krishnan, Y . Pan, G. Baldan, 205and O. Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the 206IEEE/CVF conference on computer vision and pattern recognition , pages 11621–11631, 2020. 207[6]R. Kesten, M. Usman, J. Houston, T. Pandya, K. Nadhamuni, A. Ferreira, M. Yuan, B. Low, 208A. Jain, P. Ondruska, et al. Lyft level 5 av dataset 2019. urlhttps://level5. lyft. com/dataset , 1:3, 2092019. 210[7]M. Contributors. MMDetection3D: OpenMMLab next-generation platform for general 3D 211object detection. https://github.com/open-mmlab/mmdetection3d , 2020. 212[8]P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V . Patnaik, P. Tsui, J. Guo, Y . Zhou, Y . Chai, 213B. Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In 214Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 2152446–2454, 2020. 216[9]K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 217Proceedings of the IEEE conference on computer vision and pattern recognition , pages 770– 218778, 2016. 219[10] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 
Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
[11] Y. Wang, V. C. Guizilini, T. Zhang, Y. Wang, H. Zhao, and J. Solomon. Detr3d: 3d object detection from multi-view images via 3d-to-2d queries. In Conference on Robot Learning, pages 180–191. PMLR, 2022.
[12] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[13] J. Huang, G. Huang, Z. Zhu, Y. Ye, and D. Du. Bevdet: High-performance multi-camera 3d object detection in bird-eye-view. arXiv preprint arXiv:2112.11790, 2021.
[14] S. Wang, X. Zhao, H.-M. Xu, Z. Chen, D. Yu, J. Chang, Z. Yang, and F. Zhao. Towards domain generalization for multi-view 3d object detection in bird-eye-view. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13333–13342, 2023.
[15] R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V. Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3):1623–1637, 2020.
[16] D. Park, R. Ambrus, V. Guizilini, J. Li, and A. Gaidon. Is pseudo-lidar needed for monocular 3d object detection? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3142–3152, 2021.
[17] J. Philion and S. Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV, pages 194–210. Springer, 2020.
[18] T. van Dijk and G. de Croon. How do neural networks see depth in single images? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2183–2191, 2019.
Derivation of Eq. 1 and Eq. 3 in the main paper
In this page, we derive Eq. 1 and Eq. 3 from the main paper step by step. We first analyze Eq. 3, since Eq. 1 can be easily derived from Eq. 3.
Derivation of Eq. 3
Given an origin at a fixed location of our ego car, with the x-axis along the direction of travel and the z-axis perpendicular to the ground pointing upwards, we define a right-handed Euclidean coordinate system as the ego frame. Assume we have a forward-facing camera along the x-axis, centered at (t_x, t_y, t_z). According to the pinhole camera model, we have a camera intrinsic matrix K and an extrinsic matrix [R | t], where R is a 3 \times 3 rotation matrix. Now R, K, and t can be represented as follows:
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \quad R = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 1 & 0 & 0 \end{bmatrix}, \quad t = \begin{bmatrix} t_y \\ t_z \\ -t_x \end{bmatrix},
where f_x, f_y are the focal lengths along the x- and y-axis of the camera coordinate system, and c_x, c_y are the pixel offsets on the image. For simplicity we assume f_x = f_y and denote them as f, so:
K = \begin{bmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{bmatrix}.
Now, for a point p_0 = (x_0, y_0, z_0) in the ego frame, we want to find the relationship between p_0 and its projection on the image plane. Let us assume its image position is (u, v).
For the first step, we transform p_0 into the camera coordinate system:
p_{cam} = R p_0 + t = \begin{bmatrix} -y_0 + t_y \\ -z_0 + t_z \\ x_0 - t_x \end{bmatrix}.
Here, we assign d = x_0 - t_x as the "depth" of p_0 in the camera, and use X = -y_0 + t_y and Y = -z_0 + t_z as shorthand.
For the second step, we transform p_{cam} into the image coordinate system:
p_{img} = K p_{cam} = \begin{bmatrix} f X + c_x d \\ f Y + c_y d \\ d \end{bmatrix}.
For the last step, we collapse the depth and project onto the imaging plane to obtain the values of (u, v):
p_{2D} = p_{img} / d = \begin{bmatrix} f X / d + c_x \\ f Y / d + c_y \\ 1 \end{bmatrix} = \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}.
In summary, p_{img} = d (u, v, 1)^T, so
d (u, v, 1)^T = K p_{cam} = K (R p_0 + t).
Assuming that the points are represented in homogeneous coordinates, then p_0 = (x_0, y_0, z_0, 1)^T, and we can simplify the relationship:
d (u, v, 1)^T = K (R p_0 + t) = K \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} p_0.
Now
d (u, v, 1)^T = K T p_0, \quad \text{where} \quad T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}, \quad d = x_0 - t_x.
Derivation of Eq. 1
With the derivation and notations of Eq. 3, we can directly derive Eq. 1. Assume that a 3D box (e.g., a car) is centered at p_0, and the origin of the ego frame is on the ground; then 2 z_0 is the height dimension of the box. We project p_1 = (x_0, y_0, 0) and p_2 = (x_0, y_0, 2 z_0) onto the image to get the upper and lower bounds of the box, and the difference between them should be the pixel height of this box. From the derivation of Eq. 3, we know
v = f Y / d + c_y = f (-z + t_z) / d + c_y.
Similarly, for p_1 and p_2 we have
v_1 = f t_z / d + c_y, \quad v_2 = f (-2 z_0 + t_z) / d + c_y.
So the pixel height of the box would be
height_{2D} = |v_1 - v_2| = f \cdot \frac{2 z_0}{d} = f \cdot \frac{2 z_0}{x_0 - t_x}.
Let the 3D size S = 2 z_0 and the 2D pixel size s_{pixel} = height_{2D}; then we have Eq. 1 in the main paper:
s_{pixel} = f \cdot \frac{S}{x_0 - t_x}.
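A small NumPy sketch of the two relations just derived may make them easier to check. It is illustrative only: the function names, principal-point values, and example numbers are ours, and the code simply evaluates Eq. 3 and Eq. 1 under the K, R, t conventions defined above.

import numpy as np

def ego_to_pixel(p0, f, cx, cy, t_cam):
    """Project an ego-frame point p0 = (x0, y0, z0) into the image (Eq. 3).

    t_cam = (tx, ty, tz) is the camera center in the ego frame; the camera
    looks along the ego +x axis. Returns (u, v, d) with depth d = x0 - tx.
    """
    x0, y0, z0 = p0
    tx, ty, tz = t_cam
    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    R = np.array([[0.0, -1.0, 0.0], [0.0, 0.0, -1.0], [1.0, 0.0, 0.0]])
    t = np.array([ty, tz, -tx])
    p_cam = R @ np.array([x0, y0, z0]) + t   # (X, Y, d)
    p_img = K @ p_cam                        # (f*X + cx*d, f*Y + cy*d, d)
    d = p_img[2]
    return p_img[0] / d, p_img[1] / d, d     # collapse the depth

def pixel_height(S, x0, f, tx=0.0):
    """Eq. 1: apparent pixel height of a box of 3D height S at forward range x0 - tx."""
    return f * S / (x0 - tx)

# A box of 3D height S = 1.6 m, 20 m ahead, seen with f = 1260 px (the original
# nuScenes focal length from Table 7), is roughly 100 px tall in the image.
print(pixel_height(1.6, 20.0, 1260.0))
print(ego_to_pixel((20.0, 2.0, 1.0), 1260.0, 800.0, 450.0, (0.0, 0.0, 1.5)))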
69y5fzvaAT | RoboCook: Long-Horizon Elasto-Plastic ObjectManipulation with Diverse ToolsHaochen Shi1*Huazhe Xu1*†Samuel Clarke1Yunzhu Li1,2Jiajun Wu11Stanford University2UIUC *Equal contributionhttps://hshi74.github.io/robocookAbstract: Humans excel in complex long-horizon soft body manipulation tasksvia flexible tool use: bread baking requires a knife to slice the dough and arolling pin to flatten it. Often regarded as a hallmark of human cognition, tooluse in autonomous robots remains limited due to challenges in understandingtool-object interactions. Here we develop an intelligent robotic system, Robo-Cook, which perceives, models, and manipulates elasto-plastic objects with vari-ous tools. RoboCook uses point cloud scene representations, models tool-objectinteractions with Graph Neural Networks (GNNs), and combines tool classifi-cation with self-supervised policy learning to devise manipulation plans. Wedemonstrate that from just 20 minutes of real-world interaction data per tool, ageneral-purpose robot arm can learn complex long-horizon soft object manipula-tion tasks, such as making dumplings and alphabet letter cookies. Extensive evalu-ations show that RoboCook substantially outperforms state-of-the-art approaches,exhibits robustness against severe external disturbances, and demonstrates adapt-ability to different materials.Keywords: Deformable Object Manipulation, Long-horizon Planning, ModelLearning, Tool Usage1 IntroductionThink about all the steps and tools a robot would need to use to make a dumpling from a lump ofdough. This scenario contains three fundamental research problems in robotics: deformable objectmanipulation [1, 2, 3, 4], long-horizon planning [5, 6, 7, 8], and tool usage [9, 10, 11, 12]. The taskposes significant challenges to the robot, because it involves decisions at both discrete (e.g., whichtool to use) and continuous levels (e.g., motion planning conditioned on the selected tool).To address these challenges, we propose RoboCook, a framework that perceives, models, and ma-nipulates elasto-plastic objects for long-horizon tasks like making dumplings and alphabet lettercookies. RoboCook introduces three technical innovations. First, we apply a data-driven approachwith a Graph Neural Network (GNN) [13, 14, 15] to learn highly complex interactions between thesoft object and various tools purely from visual observations. Second, we combine a PointNet-based[16, 17] tool classification module with learned dynamics models to determine the most appropriatetool to use at the current task stage. Third, we use a self-supervised policy trained with synthetic datagenerated by our learned dynamics model for gripping, rolling, and pressing to improve performanceand speed, and hand-coded policies for other skills.We carry out comprehensive evaluations to show RoboCook’s effectiveness, robustness, and gener-alizability. Figure 1 shows a typical successful trial of making a dumpling. To showcase robustness,we apply external perturbations during real-time execution, and RoboCook still succeeds in makinga dumpling. To demonstrate generalizability, we test RoboCook to make alphabet letter cookies andRoboCook outperforms four strong baselines by a significant margin. 
RoboCook can also generalizeto various materials by making a particular shape on different materials without retraining.† Now at Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.234105678109Initial stateKnifeTwo-plane symmetric gripperSquare pressLarge rollerCircle cutterPusherSkin spatulaFilling spatulaHookFinal stateFigure 1: Making dumplings. RoboCook makes a dumpling from a piece of dough in nine steps:The robot (1) cuts the dough to an appropriate volume, (2) pinches the dough and regularizes theshape, (3) presses to flatten the dough, (4) rolls to flatten the dough further, (5) cuts a circulardumpling skin, (6) removes the excess dough, (7) picks and places the skin onto the mold, (8) addsthe filling, and (9) closes and opens the mold. The black arrows denote the moving direction.2 Related WorkLong-horizon deformable object manipulation. Real-world deformable object manipulation is achallenging task [18, 19, 20]. The innate high DoFs, partial observability, and non-linear local in-teractions make deformable objects hard to represent, model, and manipulate [21, 22, 23]. Conven-tional approaches in deformable object manipulation choose model-based methods [24, 25, 26, 27]or adaptive methods [28, 29, 30], yielding complex system identification problems. For example, Liet al. [27] proposes a differentiable simulator as the surrogate of real-world elasto-plastic objects.However, these simulators are based on approximate modeling techniques, resulting in a noticeablesim-to-real gap that hinders their application to real-world scenarios . There have been efforts towardlearning to manipulate from expert demonstrations [31, 32, 33]. This paradigm is used in manipu-lating liquid [11], sand [29], and dough [34]. Despite these successes, obtaining the demonstrationis expensive and sometimes prohibitive. Another recent trend is to learn a dynamics model fromhigh-dimensional sensory data directly for downstream manipulation tasks [18, 35, 36, 37, 15, 38].While these approaches show impressive results, they only consider short-horizon tasks using justone tool. In contrast, long-horizon tasks like dumpling-making require a reactive planner to under-stand the long-term physical effect of different tools to make the most effective discrete (e.g., whichtool to use) and continuous (e.g., the action parameters) action decisions.Tool usage. Tool usage is widely studied in cognitive science and robotics research [39, 40]. Tostudy the process of human evolution, many researchers endorse tool usage as a main benchmarkto evaluate the intelligence of living primates with respect to that of extinct hominids [41, 42, 43].To equip robots with the same capability of tool usage as humans, prior works focus on teachingthe robot the representation and semantics of tools or objects that potentially function like toolsfor downstream policy learning [11, 12]. To perform complicated long-horizon tasks composedof several subtasks, prior works also use human demonstrations to learn a hierarchical policy net-work [44]. In this work, we will use real-world robot random-play data, which is cheaper to acquirethan human demonstration data. We train the robot on these self-exploratory trials to understand thephysical interactions between different tools and deformable objects.3 MethodThe RoboCook framework has three major components: perception, dynamics, and closed-loopcontrol. 
We first use an effective sampling scheme and intuitive tool representations for particle-based scene representation. Second, we train Graph Neural Networks (GNNs) as the dynamics model from the processed video dataset to accurately predict dough states during manipulation. Last, a tool classifier selects the best tool for each substage of a long-horizon task, and a self-supervised policy network performs closed-loop control.
Figure 2: Perception and dynamics of RoboCook. (A) The input to the perception module is a point cloud of the robot's workspace captured by four RGB-D cameras. From the raw point cloud, we (a) crop the region of interest, (b) extract the dough point cloud, (c) reconstruct a watertight mesh, (d) use the Signed Distance Function (SDF) to sample points inside the mesh, (e) remove points within the tools' SDF, and (f) sample 300 surface points. (B) We process videos of each tool manipulating the dough into a particle trajectory dataset to train our GNN-based dynamics model. The model can accurately predict long-horizon state changes of the dough in gripping, pressing, and rolling tasks. On the left are the initial states' perspective, top, and side views, and on the right is a comparison of model predictions and ground-truth states.
3.1 Perception
The perception module aims to sample particles sparsely and uniformly for the downstream dynamics model. This task is challenging because of dough occlusions from the robot and tool and self-occlusions from the irregular and concave shape of the dough.
We merge point clouds from four calibrated RGB-D cameras and perform color segmentation to extract the dough point cloud. Then we apply either Poisson surface reconstruction [45] or alpha-shape surface reconstruction [46] to reconstruct a watertight surface of the dough, depending on the occlusion level. In heavy occlusion cases, alpha-shape surface reconstruction is usually worse at capturing concavities than Poisson surface reconstruction; however, we use it to secure a complete and singular mesh. With few or no occlusions, we combine Poisson surface reconstruction and the MeshFix algorithm [47] to generate a watertight mesh. We use the watertight mesh's Signed Distance Function (SDF) to sample points inside it randomly. To compensate for details lost during surface reconstruction, we apply a voxel-grid filter to reconstruct concavities by removing the points above the original dough point cloud. We also remove the noisy points penetrating the tool using the mesh's SDF. We compute the tool mesh's SDF from the robot's end-effector pose (recorded during data collection) and the ground-truth tool mesh. We then perform alpha-shape surface reconstruction and use Poisson disk sampling [48] to sample 300 points uniformly on the surface, capturing more details of the dough with a fixed particle number.
For different tools, we uniformly sample particles on the surface of their ground-truth mesh to reflect their geometric features. The complete scene representation for the downstream dynamics model concatenates dough and tool particles. Sections 6.2.1 and 6.2.2 of the supplementary materials describe more data collection and preprocessing details.
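As a rough illustration of the surface resampling step above, the following Open3D-based sketch reconstructs a mesh from a segmented dough point cloud and resamples a fixed number of surface particles. It is a simplified sketch, not the authors' implementation: it covers only the Poisson reconstruction and Poisson-disk resampling steps, and omits the SDF-based interior sampling, the voxel-grid concavity filter, MeshFix, and the tool-penetration removal; all names are illustrative.

import numpy as np
import open3d as o3d

def resample_dough_surface(dough_xyz: np.ndarray, n_points: int = 300) -> np.ndarray:
    """Reconstruct a surface from a segmented dough point cloud and resample a
    fixed number of roughly uniform surface particles."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(dough_xyz)
    # Poisson reconstruction needs consistently oriented normals.
    pcd.estimate_normals()
    pcd.orient_normals_consistent_tangent_plane(30)
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    # Drop poorly supported vertices that Poisson tends to hallucinate far from the data.
    densities = np.asarray(densities)
    mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
    # Fixed-size, roughly uniform surface sample for the downstream GNN.
    sampled = mesh.sample_points_poisson_disk(number_of_points=n_points)
    return np.asarray(sampled.points)

# Usage: dough_xyz would come from color-segmenting the merged RGB-D point clouds;
# the returned 300 x 3 array is the dough part of the particle-based scene state.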
3.2 Dynamics Model
Our GNN-based dynamics model predicts future states of the dough based on the current state and the actions of the tools, with only 20 minutes of real-world data per tool. The rigid and non-rigid motions are each predicted by a multi-layer perceptron (MLP) and added together to form the GNN's predicted motion.
The graph of sampled particles at each time step is represented as s_t = (O_t, E_t), with O_t as vertices and E_t as edges. For each particle, o_{i,t} = \langle x_{i,t}, c^o_{i,t} \rangle, where x_{i,t} is the position of particle i at time t, and c^o_{i,t} is the particle's attributes at time t, including the particle normal and group information (i.e., whether it belongs to the dough or the tool). The edge between a pair of particles is denoted as e_k = (u_k, v_k), where 1 \le u_k, v_k \le |O_t| are the receiver and sender particle indices, respectively, and k is the edge index. Section 6.3.1 of the supplementary materials provides more details on graph building.
Following previous work [18], we use a weighted combination of the Chamfer Distance (CD) [49] and the Earth Mover's Distance (EMD) [50] as the loss function to train the dynamics model:
\mathcal{L}(O_t, \hat{O}_t) = w_1 \cdot \mathcal{L}_{CD}(O_t, \hat{O}_t) + w_2 \cdot \mathcal{L}_{EMD}(O_t, \hat{O}_t),   (1)
where O_t is the real observation and \hat{O}_t is the predicted state. We select w_1 = 0.5 and w_2 = 0.5 based on the experiment results. To be more specific, the CD between O_t, \hat{O}_t \subseteq \mathbb{R}^3 is calculated by
\mathcal{L}_{CD}(O_t, \hat{O}_t) = \sum_{x \in O_t} \min_{y \in \hat{O}_t} \|x - y\|_2^2 + \sum_{y \in \hat{O}_t} \min_{x \in O_t} \|x - y\|_2^2.   (2)
The EMD matches distributions of point clouds by finding a bijection \mu: O_t \to \hat{O}_t such that
\mathcal{L}_{EMD}(O_t, \hat{O}_t) = \min_{\mu: O_t \to \hat{O}_t} \sum_{x \in O_t} \|x - \mu(x)\|_2.   (3)
We train the model to predict multiple time steps forward to regularize training and stabilize long-horizon future predictions. The training loss is calculated as a sum of distances between predictions and ground-truth states:
\mathcal{L}_{train} = \sum_{i=0}^{s} \mathcal{L}(O_{t+i}, \hat{O}_{t+i}),   (4)
where the dynamics model takes \hat{O}_{t+i-1} as the input to predict \hat{O}_{t+i} when i > 0. The model performs better at inference time when it predicts a slightly longer horizon during training. Empirically, we find that s = 2 is sufficient for inference-time prediction accuracy, and a larger s does not yield much additional gain. Note that although we use s = 2 during training, we predict 15 steps at inference time, which is a long-horizon prediction. This highlights the inductive bias of our GNN-based dynamics model: we only need to train on short-horizon predictions to generalize to much longer-horizon predictions during inference. Another motivation to keep s small is to increase the training speed. More implementation details on model training can be found in Section 6.3.2 of the supplementary materials.
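To make the training objective concrete, below is a minimal NumPy/SciPy sketch of Eqs. 1–3 for equal-sized particle sets. It is illustrative only: the function names are ours, the EMD is computed exactly via a Hungarian assignment (matching the bijection in Eq. 3; a differentiable approximation would be used inside an actual training loop), and the weights default to w_1 = w_2 = 0.5 as in the text.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def chamfer_distance(obs: np.ndarray, pred: np.ndarray) -> float:
    """Eq. 2: symmetric sum of squared nearest-neighbor distances."""
    d2 = cdist(obs, pred, metric="sqeuclidean")          # |O_t| x |O_hat_t|
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()

def earth_movers_distance(obs: np.ndarray, pred: np.ndarray) -> float:
    """Eq. 3: best bijection between equal-sized point sets (Hungarian assignment)."""
    d = cdist(obs, pred, metric="euclidean")
    row, col = linear_sum_assignment(d)                  # optimal one-to-one matching
    return d[row, col].sum()

def dynamics_loss(obs, pred, w1=0.5, w2=0.5) -> float:
    """Eq. 1: weighted combination used to train the GNN dynamics model."""
    return w1 * chamfer_distance(obs, pred) + w2 * earth_movers_distance(obs, pred)

# Usage sketch with two random 300-particle dough states.
rng = np.random.default_rng(0)
obs, pred = rng.random((300, 3)), rng.random((300, 3))
print(dynamics_loss(obs, pred))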
3.3 Closed-Loop Control
To solve long-horizon manipulation tasks involving multiple tools, the robot must determine (1) the best tool to use at each stage and (2) the optimal trajectory for the selected tool.
To answer the first question, we implement a tool classification module based on PointNet++ [17] that takes a concatenation of the input and output states of the dough as input and outputs the probabilities of the 15 tools achieving the target state. We extract the features of the input and output point clouds separately with a PointNet++ classification layer. Then we feed the concatenated features to an MLP with a softmax activation layer to output probabilities.
Figure 3: Planning of RoboCook. (A) A PointNet-based classifier network identifies appropriate tools based on the current observation and the target dough configuration. The self-supervised policy network takes the tool class, the current observation, and the target dough configuration as inputs and outputs the manipulation actions. The framework closes the control loop with visual feedback. (B) We show the policy network architecture, the parameterized action spaces of gripping, pressing, and rolling, and how we generate the synthetic datasets to train the policy network.
The three tools with the highest probabilities are selected as candidates, and their optimal actions are planned. The robot selects the tool yielding the final state closest to the target and executes the planned actions. We let the module output three tool candidates instead of only one to increase our pipeline's error tolerance to the tool classifier.
To address the second question, we specify and parameterize the action space of the 15 tools based on human priors. Then we classify the tools into a few categories based on |A|, the dimension of their action space. Section 6.4.1 of the supplementary materials provides more details on the specification and justification of the tool action spaces.
For grippers, rollers, presses, and punches, we collect data in the real world by random exploration in the bounded action space and learn their interaction model with the dough using a GNN. Then we generate a synthetic dataset with the dynamics model, train a self-supervised policy network to obtain the optimal policy, and execute closed-loop control with visual feedback. We use straightforward planners for the other tools based on human priors. More implementation details can be found in Section 6.4.1 of the supplementary materials.
To generate the synthetic dataset, we reuse the initial dough states acquired during data collection and randomly sample actions in the parameterized action space of each tool. As shown in Figure 3(A), the dough states before and after applying actions are the input, and the action parameters are the labels. We design a self-supervised policy network based on PointNet++. Given the continuous solution space of the action parameters, we formulate a multi-bin classification problem inspired by previous works on 3D bounding box estimation [51, 52]. First, we concatenate the input and output point clouds, labeling them 0 or 1 for distinction. Next, we extract features using a PointNet++ layer. Finally, we feed the feature vector into separate classification and regression heads, both MLPs with identical structures. The learned policy drastically improves the planning efficiency, and the complete pipeline only takes around 10 seconds of planning time to make a dumpling. More implementation details are described in Section 6.4.2 of the supplementary materials.
During closed-loop control, cameras capture the workspace point cloud, which the perception module processes to obtain a clean and sparse point cloud of the dough. Our method takes the current observation and the final target as input and returns the best tool and optimal actions, as shown in Figure 3(B). Then the robot picks up the selected tool, manipulates the dough, lifts its hand to avoid occluding the camera views, and plans the next move with visual feedback.
Figure 4: RoboCook hardware and setup. Left: Robot's tabletop workspace with xyz coordinates at top-left. Dashed white circles: four RGB-D cameras mounted at four corners of the table.
Redsquare: dough location and manipulation area. Dashed white square: tool racks. Right: 15 tools:(1) large roller, (2) circle press, (3) circle punch, (4) square press, (5) square punch, (6) smallroller, (7) knife/pusher, (8) circle cutter, (9) hook, (10) dumpling mold, (11) two-rod symmetricgripper, (12) asymmetric gripper, (13) two-plane symmetric gripper, (14) skin spatula, (15) fillingspatula. Tools are 3D-printed, representing common dough manipulation tools. Section 6.5.2 ofsupplementary materials discusses the design principles of these 3D-printed tools.4 ExperimentsIn this study, we design and implement a hardware setup for soft body manipulation tasks, allowingeasy selection and interchange of 15 tools, as shown in Figure 4. We demonstrate the RoboCookframework’s effectiveness in long-horizon soft object manipulation tasks that require the use of mul-tiple tools, such as making a dumpling from a randomly shaped dough and shaping alphabet lettercookies to compose the word ‘RoboCook.’ Additionally, RoboCook can complete these tasks underextensive human interference, including significant changes to the shape and volume, as shown inthe second supplementary video, demonstrating its robustness to external perturbations. We discussthe experiment setup in Section 6.5 of supplementary materials.4.1 Making DumplingsThe dumpling-making task is to manipulate a piece of dough and the given filling into a dumplingshape. The main challenge lies in choosing appropriate tools and planning effective action trajecto-ries. We consider the task successful if the dumpling skin is thin enough and completely covers thefilling. Dumpling-making is a highly challenging task—even a single error might break the entirepipeline. Our method reliably accomplishes the task as shown in Figure 1. We show a comparisonwith manipulation results of human subjects in Section 6.6 of supplementary materials.RoboCook also demonstrates robustness against external perturbations during real-time execution.At each stage of dumpling making, a human disturbs the robot by deforming the dough, addingadditional dough, or even replacing the dough with a completely different piece to deviate from thetrajectory of the robot’s original plan. For example, at the rolling stage, the human folds the flatteneddough into a bulky shape. The robot decides to roll again to get a flatter dumpling skin. In a morechallenging case, the human replaces the round dumpling skin with a completely different dough ina highly irregular shape. After this perturbation, the robot puts down the roller, picks up the knife tostart again from the beginning, and successfully makes a dumpling. The complete process is shownin the second supplementary video. We also show a quantitative evaluation of the tool classificationnetwork in Section 6.7 of supplementary materials.6ROBCKRoboCraftInitial stateCEM+GNNStep 1Step 2OursOutlineCEM+MPMtRL+GNNFigure 5: Making alphabetical letter cookies. We list R, O, B, C, and K shaping steps in Columns1 to 4. Column 5 manually highlights the contour of the alphabetical letters. Columns 6 through 9compare our self-supervised learned policy with three model-based planning baselines and one RLbaseline. 
Our method can shape the dough closer to the target than all four baseline methods.Methods CD ↓ EMD↓ CD of normal ↓ Human evaluation ↑Planning time (s) ↓Ours 0.0062 ±0.0007 0.0042 ±0.0006 0.1933 ±0.0345 0.90 ±0.11 9.3 ±1.5RoboCraft 0.0066 ±0.0005 0.0044 ±0.0006 0.2011 ±0.0329 0.54 ±0.43 613.7 ±202.7CEM+GNN 0.0066 ±0.0007 0.0045 ±0.0008 0.2043 ±0.0431 0.52 ±0.41 756.0 ±234.5CEM+MPM 0.0070 ±0.0007 0.0046 ±0.0006 0.1965 ±0.0265 0.48 ±0.35 1486.7 ±512.8RL+GNN 0.0077 ±0.0007 0.0064 ±0.0009 0.2041 ±0.0414 0.17 ±0.09 1867.9 ±190.3Table 1: Quantitative evaluations. We use CD and EMD between the point clouds and the CDbetween the surface normals to evaluate the results. We further profile how long these methods taketo plan actions. Our method outperforms all baseline methods in these metrics by a large margin.4.2 Making Alphabet Letter CookiesThe RoboCook framework demonstrates effectiveness and robustness in highly complicated ma-nipulation tasks such as dumpling-making. This section explores its generalization ability in tasksrequiring precise actions such as shaping alphabet letter cookies [18] without additional training.Figure 5 shows that the RoboCook framework can accurately shape letters R, O, B, C, and K tocompose the word ‘RoboCook.’ For R, the robot uses the two-rod symmetric gripper for cavitiesand the circle punch for the hole. For O, the tool classifier selects a two-plane symmetric gripper topinch the square into a circle and the circle punch for the hole. Our method identifies the asymmetricgripper as suitable for B and locates accurate positions for the circle punch to press twice. ShapingC is more challenging due to the large distance between the initial and target shapes, but our methodsuccessfully plans gripping positions using closed-loop visual feedback. The robot combines thetwo-rod symmetric and asymmetric gripper for K to create cavities.We compare our method with four strong baselines: (1) limited-memory BFGS [53] with GNN-based dynamics model (RoboCraft) [18]; (2) cross-entropy method (CEM) [54] with GNN-baseddynamics model; (3) CEM with a Material Point Method (MPM) simulator [55]; and (4) Reinforce-ment Learning with GNN-based dynamics model. Qualitative results are shown in Figure 5. Weinclude a detailed analysis of our results in Section 6.10.7Initial stateManipulation StepsFinal State (K)tMaterialReal DoughFlour + water + saltModel FoamAirDry ClayPlay-DohFigure 6: Generalizing to different materials . We showcase the dynamics model’s capability togeneralize to various materials by shaping a K without retraining.Table 1 evaluates the results using Chamfer Distance (CD) [49], Earth Mover’s Distance (EMD) [50]and CD between top surface normals. However, we recognize a discrepancy between how thesemetrics measure the results and how humans perceive them - these metrics are prone to local noiseswhile humans are good at capturing the holistic features of the dough. Therefore, we show the pre-diction accuracy of 100 human subjects on recognizing the letters for these five methods in Table 1.Another highlight of our method is its speed. Since the policy network only needs one forwardprediction to output the actions for each tool, it is significantly faster than other methods that relyon forwarding the dynamics models many times. Section 6.8 and 6.9 of supplementary materialsdiscuss more details about the human study and the baseline implementations.We show that the dynamics model can generalize to various materials by shaping a K without re-training in Figure 6. 
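For reference, the CEM+GNN baseline compared in Table 1 follows the generic cross-entropy-method recipe: sample candidate action sequences, roll them through the learned dynamics model, and refit a Gaussian around the lowest-cost samples. The sketch below is a generic illustration of that recipe, not the authors' implementation; all names are ours, and the dynamics and cost functions are placeholders (in the setting above, the cost would be the Chamfer distance of Eq. 2 to the target dough shape).

import numpy as np

def cem_plan(dynamics, state, target, cost_fn, action_dim,
             horizon=3, pop=64, elites=8, iters=5, seed=0):
    """Generic CEM planner over a learned dynamics model."""
    rng = np.random.default_rng(seed)
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(iters):
        # Sample a population of action sequences from the current Gaussian.
        seqs = mean + std * rng.standard_normal((pop, horizon, action_dim))
        costs = []
        for seq in seqs:
            s = state
            for a in seq:                      # roll out the learned model
                s = dynamics(s, a)
            costs.append(cost_fn(s, target))   # e.g., Chamfer distance to the target
        elite = seqs[np.argsort(costs)[:elites]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mean                                # planned action sequence

# Toy check with a dummy dynamics that translates all particles by the action.
target = np.full((300, 3), 0.3)
plan = cem_plan(lambda s, a: s + a, np.zeros((300, 3)), target,
                lambda s, t: float(((s - t) ** 2).sum()), action_dim=3, horizon=1)
print(plan)  # should approach [[0.3, 0.3, 0.3]]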
We test on Play-Doh, Air Dry Clay, and Model Foam, each displaying notablydifferent dynamics compared to our original dough made of flour, water, and salt. The dynamicsmodel is trained solely on interaction data with the real dough.5 Conclusion and LimitationsRoboCook demonstrates its effectiveness, robustness, and generalizability in elasto-plastic objectmanipulation with a general-purpose robotic arm and everyday tools. The main contributions ofRoboCook include (1) tool-aware GNNs to model long-horizon soft body dynamics accuratelyand efficiently, (2) a tool selection module combined with dynamics models to learn tool functionsthrough self-exploratory trials, and (3) a self-supervised policy learning framework to improve theperformance and speed significantly. RoboCook pioneers solutions for tool usage and long-horizonelasto-plastic object manipulation in building a generic cooking robot.One limitation of RoboCook is the occasional failure of dough sticking to the tool. A solution is todesign an automatic error correction system. RoboCook also relies on human priors of tool actionspaces to simplify planning. But these simplifications do not constrain generalization as they can beeasily specified for new tools. Section 6.4.1 provides more justifications for this. Another limitationis that humans define the subgoals. Higher-level temporal abstraction and task-level planning arerequired to get rid of them. Finally, RoboCook requires additional topology estimation to apply tocables and cloths [56], which is beyond the focus of this work.8AcknowledgmentsThis work was in part supported by AFOSR YIP FA9550-23-1-0127, ONR MURI N00014-22-1-2740, the Toyota Research Institute (TRI), the Stanford Institute for Human-Centered AI (HAI),JPMC, and Analog Devices.References[1] J. Matas, S. James, and A. J. Davison. Sim-to-real reinforcement learning for deformableobject manipulation. In Conference on Robot Learning , pages 734–743. PMLR, 2018.[2] X. Lin, Y . Wang, J. Olkin, and D. Held. Softgym: Benchmarking deep reinforcement learningfor deformable object manipulation. In Conference on Robot Learning , pages 432–448. PMLR,2021.[3] J. Zhu, A. Cherubini, C. Dune, D. Navarro-Alarcon, F. Alambeigi, D. Berenson, F. Ficuciello,K. Harada, J. Kober, X. Li, et al. Challenges and outlook in robotic manipulation of deformableobjects. IEEE Robotics & Automation Magazine , 29(3):67–77, 2022.[4] H. Yin, A. Varava, and D. Kragic. Modeling, learning, perception, and control methods fordeformable object manipulation. Science Robotics , 6(54):eabd8803, 2021.[5] V . N. Hartmann, A. Orthey, D. Driess, O. S. Oguz, and M. Toussaint. Long-horizon multi-robotrearrangement planning for construction assembly. IEEE Transactions on Robotics , 2022.[6] S. Nair and C. Finn. Hierarchical foresight: Self-supervised learning of long-horizon tasks viavisual subgoal generation. In International Conference on Learning Representations , 2020.[7] S. Pirk, K. Hausman, A. Toshev, and M. Khansari. Modeling long-horizon tasks as sequentialinteraction landscapes. In Conference on Robot Learning , pages 471–484. PMLR, 2021.[8] A. Simeonov, Y . Du, B. Kim, F. Hogan, J. Tenenbaum, P. Agrawal, and A. Rodriguez. Along horizon planning framework for manipulating rigid pointcloud objects. In Conference onRobot Learning , pages 1582–1601. PMLR, 2021.[9] A. Billard and D. Kragic. Trends and challenges in robot manipulation. Science , 364(6446):eaat8414, 2019.[10] N. Yamanobe, W. Wan, I. G. Ramirez-Alpizar, D. Petit, T. Tsuji, S. Akizuki, M. Hashimoto,K. 
Nagata, and K. Harada. A brief review of affordance in robotic manipulation research.Advanced Robotics , 31(19-20):1086–1101, 2017.[11] D. Seita, Y . Wang, S. J. Shetty, E. Y . Li, Z. Erickson, and D. Held. Toolflownet: Roboticmanipulation with tools via predicting tool flow from point clouds. In 6th Annual Conferenceon Robot Learning , 2022.[12] A. Xie, F. Ebert, S. Levine, and C. Finn. Improvisation through physical understanding: Usingnovel objects as tools with visual foresight. In Proceedings of Robotics: Science and Systems ,FreiburgimBreisgau, Germany, June 2019. doi:10.15607/RSS.2019.XV .001.[13] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neuralnetwork model. IEEE transactions on neural networks , 20(1):61–80, 2008.[14] Y . Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba. Learning particle dynamics formanipulating rigid bodies, deformable objects, and fluids. In International Conference onLearning Representations , 2018.[15] Y . Li, J. Wu, J.-Y . Zhu, J. B. Tenenbaum, A. Torralba, and R. Tedrake. Propagation networks formodel-based control under partial observation. In 2019 International Conference on Roboticsand Automation (ICRA) , pages 1205–1211. IEEE, 2019.9[16] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3dclassification and segmentation. In Proceedings of the IEEE conference on computer visionand pattern recognition , pages 652–660, 2017.[17] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning onpoint sets in a metric space. Advances in neural information processing systems , 30, 2017.[18] H. Shi, H. Xu, Z. Huang, Y . Li, and J. Wu. RoboCraft: Learning to See, Simulate, and ShapeElasto-Plastic Objects with Graph Networks. In Proceedings of Robotics: Science and Systems ,New York City, NY , USA, June 2022. doi:10.15607/RSS.2022.XVIII.008.[19] C. Matl and R. Bajcsy. Deformable elasto-plastic object shaping using an elastic hand andmodel-based reinforcement learning. In 2021 IEEE/RSJ International Conference on Intelli-gent Robots and Systems (IROS) , pages 3955–3962. IEEE, 2021.[20] X. Lin, C. Qi, Y . Zhang, Z. Huang, K. Fragkiadaki, Y . Li, C. Gan, and D. Held. Planningwith spatial-temporal abstraction from point clouds for deformable object manipulation. In 6thAnnual Conference on Robot Learning , 2022.[21] D. Driess, Z. Huang, Y . Li, R. Tedrake, and M. Toussaint. Learning multi-object dynamicswith compositional neural radiance fields. In Conference on Robot Learning , pages 1755–1768. PMLR, 2023.[22] C. Chi, B. Burchfiel, E. Cousineau, S. Feng, and S. Song. Iterative Residual Policy for Goal-Conditioned Dynamic Manipulation of Deformable Objects. In Proceedings of Robotics: Sci-ence and Systems , New York City, NY , USA, June 2022. doi:10.15607/RSS.2022.XVIII.016.[23] H. Ha and S. Song. Flingbot: The unreasonable effectiveness of dynamic manipulation forcloth unfolding. In Conference on Robot Learning , pages 24–33. PMLR, 2022.[24] Z. Huang, Y . Hu, T. Du, S. Zhou, H. Su, J. B. Tenenbaum, and C. Gan. Plasticinelab: Asoft-body manipulation benchmark with differentiable physics. In International Conferenceon Learning Representations , 2020.[25] X. Lin, Z. Huang, Y . Li, J. B. Tenenbaum, D. Held, and C. Gan. Diffskill: Skill abstractionfrom differentiable physics for deformable object manipulations with tools. In InternationalConference on Learning Representations , 2021.[26] A.-M. Cretu, P. Payeur, and E. M. Petriu. 
Soft object deformation monitoring and learning formodel-based robotic hand manipulation. IEEE Transactions on Systems, Man, and Cybernet-ics, Part B (Cybernetics) , 42(3):740–753, 2011.[27] S. Li, Z. Huang, T. Du, H. Su, J. B. Tenenbaum, and C. Gan. Contact points discovery forsoft-body manipulations with differentiable physics. In International Conference on LearningRepresentations , 2021.[28] D. Navarro-Alarcon, H. M. Yip, Z. Wang, Y .-H. Liu, F. Zhong, T. Zhang, and P. Li. Automatic3-d manipulation of soft objects by robotic arms with an adaptive deformation model. IEEETransactions on Robotics , 32(2):429–441, 2016.[29] A. Cherubini, V . Ortenzi, A. Cosgun, R. Lee, and P. Corke. Model-free vision-based shapingof deformable plastic materials. The International Journal of Robotics Research , 39(14):1739–1759, 2020.[30] K. Yoshimoto, M. Higashimori, K. Tadakuma, and M. Kaneko. Active outline shaping of arheological object based on plastic deformation distribution. In 2011 IEEE/RSJ InternationalConference on Intelligent Robots and Systems , pages 1386–1391. IEEE, 2011.10[31] B. Balaguer and S. Carpin. Combining imitation and reinforcement learning to fold deformableplanar objects. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems ,pages 1405–1412. IEEE, 2011.[32] F. Nadon, A. J. Valencia, and P. Payeur. Multi-modal sensing and robotic manipulation ofnon-rigid objects: A survey. Robotics , 7(4):74, 2018.[33] B. Jia, Z. Pan, Z. Hu, J. Pan, and D. Manocha. Cloth manipulation using random-forest-basedimitation learning. IEEE Robotics and Automation Letters , 4(2):2086–2093, 2019.[34] N. Figueroa, A. L. P. Ureche, and A. Billard. Learning complex sequential tasks from demon-stration: A pizza dough rolling case study. In 2016 11th ACM/IEEE International Conferenceon Human-Robot Interaction (HRI) , pages 611–612. Ieee, 2016.[35] P. Battaglia, R. Pascanu, M. Lai, D. Jimenez Rezende, et al. Interaction networks for learningabout objects, relations and physics. Advances in neural information processing systems , 29,2016.[36] M. B. Chang, T. Ullman, A. Torralba, and J. B. Tenenbaum. A compositional object-basedapproach to learning physical dynamics. arXiv preprint arXiv:1612.00341 , 2016.[37] A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell, andP. Battaglia. Graph networks as learnable physics engines for inference and control. In Inter-national Conference on Machine Learning , pages 4470–4479. PMLR, 2018.[38] T. Silver, R. Chitnis, A. Curtis, J. B. Tenenbaum, T. Lozano-P ́erez, and L. P. Kaelbling. Plan-ning with learned object importance in large problem instances using graph neural networks. InProceedings of the AAAI conference on artificial intelligence , volume 35, pages 11962–11971,2021.[39] C. P. Van Schaik and G. R. Pradhan. A model for tool-use traditions in primates: implicationsfor the coevolution of culture and cognition. Journal of Human Evolution , 44(6):645–664,2003.[40] R. Holladay, T. Lozano-P ́erez, and A. Rodriguez. Force-and-motion constrained planningfor tool use. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 7409–7416. IEEE, 2019.[41] S. T. Parker and K. R. Gibson. Object manipulation, tool use and sensorimotor intelligenceas feeding adaptations in cebus monkeys and great apes. Journal of Human Evolution , 6(7):623–641, 1977.[42] T. Ingold. Tool-use, sociality and intelligence. Tools, language and cognition in human evolu-tion, 429(45):449–72, 1993.[43] T. Matsuzawa. 
Primate foundations of human intelligence: a view of tool use in nonhumanprimates and fossil hominids. In Primate origins of human cognition and behavior , pages3–25. Springer, 2008.[44] J. Liang, B. Wen, K. Bekris, and A. Boularias. Learning sensorimotor primitives of sequentialmanipulation tasks from visual demonstrations. In 2022 International Conference on Roboticsand Automation (ICRA) , pages 8591–8597. IEEE, 2022.[45] M. Kazhdan, M. Bolitho, and H. Hoppe. Poisson surface reconstruction. In Proceedings of thefourth Eurographics symposium on Geometry processing , volume 7, 2006.[46] H. Edelsbrunner and E. P. M ̈ucke. Three-dimensional alpha shapes. ACM Transactions onGraphics (TOG) , 13(1):43–72, 1994.11[47] M. Attene. A lightweight approach to repairing digitized polygon meshes. The visual com-puter , 26(11):1393–1406, 2010.[48] C. Yuksel. Sample elimination for generating poisson disk sample sets. Computer GraphicsForum , 34(2):25–32, 2015.[49] H. Fan, H. Su, and L. J. Guibas. A point set generation network for 3d object reconstructionfrom a single image. In Proceedings of the IEEE conference on computer vision and patternrecognition , pages 605–613, 2017.[50] Y . Rubner, C. Tomasi, and L. J. Guibas. The earth mover’s distance as a metric for imageretrieval. International journal of computer vision , 40(2):99–121, 2000.[51] A. Mousavian, D. Anguelov, J. Flynn, and J. Kosecka. 3d bounding box estimation using deeplearning and geometry. In Proceedings of the IEEE conference on Computer Vision and PatternRecognition , pages 7074–7082, 2017.[52] A. Kundu, Y . Li, and J. M. Rehg. 3d-rcnn: Instance-level 3d object reconstruction via render-and-compare. In Proceedings of the IEEE conference on computer vision and pattern recogni-tion, pages 3559–3568, 2018.[53] R. Fletcher. Practical methods of optimization . John Wiley & Sons, 2013.[54] R. Y . Rubinstein and D. P. Kroese. The cross-entropy method: a unified approach to combi-natorial optimization, Monte-Carlo simulation, and machine learning , volume 133. Springer,2004.[55] S. G. Bardenhagen and E. M. Kober. The generalized interpolation material point method.Computer Modeling in Engineering and Sciences , 5(6):477–496, 2004.[56] Z. Huang, X. Lin, and D. Held. Mesh-based Dynamics with Occlusion Reasoning for ClothManipulation. In Proceedings of Robotics: Science and Systems , New York City, NY , USA,June 2022. doi:10.15607/RSS.2022.XVIII.011.126 Supplementary Materials6.1 Inputs and Outputs of RoboCook’s Respective Modules . . . . . . . . . . . . . . . 136.2 Perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146.2.1 Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146.2.2 Data Preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146.3 Dynamics Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146.3.1 Graph Building . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146.3.2 Model Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146.3.3 Building Synthetic Datasets . . . . . . . . . . . . . . . . . . . . . . . . . 146.4 Closed-loop Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156.4.1 Action Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156.4.2 Multi-bin Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . 166.4.3 Subgoal Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166.5 Experiment Setup . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166.5.1 Robot and Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166.5.2 Tool Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166.5.3 Tool-Switching Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176.6 Comparison with Human Subjects on Dumpling-making . . . . . . . . . . . . . . 176.7 Tool Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186.8 Human Evaluation of Alphabet Letters . . . . . . . . . . . . . . . . . . . . . . . . 196.9 Baseline Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . 196.10 Analysis of Comparison with Baselines . . . . . . . . . . . . . . . . . . . . . . . 196.1 Inputs and Outputs of RoboCook’s Respective ModulesPerception module: At each time step, the input is a raw observation point cloud merged from fourRGBD cameras and the franka panda end-effector pose. The output is a clean and sparse surfacepoint cloud representing the dough concatenated with a point cloud sampled on the tool mesh surfacerepresenting the action. We compute the pose of the tool point cloud from the end effector pose andthe tool geometry.GNN dynamics: Our GNN predicts St+1fromStandat+1at each prediction step. The inference-time input is the initial state S0and{at, t= 1,2, . . . N}. The output is SN. We select N= 15 forthe gripping, pressing, and rolling tasks.Tool predictor: The input is a point cloud of the current dough configuration and a point cloud ofthe final target (e.g., the dumpling point cloud). The output is the next tool to achieve the final targetconfiguration.Policy network: The input is a point cloud of the current dough configuration and the point cloudof the subgoal associated with the selected tool. We have a one-to-one mapping between subgoalsand tools. For example, a rolling pin corresponds to a dough flattened by the rolling pin. The outputis the action in the defined action space of the selected tool.136.2 Perception6.2.1 Data CollectionTo train the GNNs, we collect around 20 minutes of data for each tool using a behavior policy thatrandomly samples parameters within a predefined action space. In each episode, the robot appliesmultiple action sequences to the soft object, and after the current episode, a human manually resetsthe environment. Humans reshape the dough into a bulky but not necessarily cubic shape after everyfive grips, three rolls, and three presses.The data collected for each tool are as follows: (1) Asymmetric gripper / two-rod symmetricgripper / two-plane symmetric gripper: 60 episodes with five sequences per episode; (2) Circlepress / square press / circle punch / square punch: 90 episodes with three sequences per episode; (3)Large roller / small roller: 80 episodes with three sequences per episode.We collect point clouds before and after executing each sequence to train the tool classifier. How-ever, we augment the training data by including any pair of these point clouds, not just consecutivepairs. For tools that don’t require a GNN-based dynamics model, we execute a pre-coded dumpling-making pipeline ten times and add point clouds captured before and after using each tool in thepipeline to our dataset. Note that most tool selection data is a direct reuse of the dynamics datacollected during the training of the dynamics model. This approach efficiently collects real-worldtool selection data without needing extra exploration. 
We record the data collection process in thefourth supplementary video.6.2.2 Data PreprocessingWhen building the dataset to train the dynamics model, aside from the sampling process of theperception module at each time frame, we also want to leverage the continuity of the video data.Therefore, we introduce simple geometric heuristics into the physical environment for better frameconsistency. First, if the operating tool is not in contact with the convex hull of the object pointcloud, we use the same sampled particles from the previous frame. This also applies when the toolmoves away from the object. Additionally, we subsample the original videos to ensure that eachvideo in the dataset has the same number of frames (16 frames in practice).6.3 Dynamics Model6.3.1 Graph BuildingWhen building the graph, the edges between the dough particles are constructed by finding thenearest neighbors of each particle within a fixed radius (in practice, 0.1 cm). The edges betweenthe tool and dough particles are computed slightly differently. Instead of simply connecting to allthe neighbors within the threshold, we limit the number of undirected edges between tool particlesand the dough particles to at most four per tool particle to cut off the redundant edges in the graphneural network. Since all the node and edge features in the GNN are encoded in each particle’s localneighborhood, our GNN is naturally translation-invariant and therefore can accurately predict themovement of the dough regardless of its absolute location in the world frame.6.3.2 Model TrainingWe train the model with temporal abstraction to enhance performance and inference speed. Forexample, when t= 0, we train the model to predict the state of the dough at t= 3directly insteadoft= 1. This shortens the horizon, eases the task, and improves our model’s inference speed bydecreasing the number of forward passes needed for a full action sequence.6.3.3 Building Synthetic DatasetsSince the synthetic dataset generated by the dynamics model is cheap, we can collect as muchsynthetic data as desired. Empirically, for example, we sample 128 random actions in the action14space of gripping for each of the 300 different initial states, so there are 128∗300 = 38 ,400pairs ofinput and output point clouds to train the policy network. The random walk to generate the syntheticdataset starts from initial dough states acquired during dynamics data collection, which are not allblock-shaped states and cover various shapes.6.4 Closed-loop Control6.4.1 Action SpaceWe classify the tools into a few categories based on |A|, the dimension of their corresponding actionspace. We visualize action spaces for gripping, pressing, and rolling in Figure 3 (B).A) Nine tools that have an action space with |A| ≥ 3:1) Asymmetric gripper / two-rod symmetric gripper / two-plane symmetric gripper:{r, θ, d}, where ris the distance between the midpoint of the line segment connectingthe centers of mass of the gripper’s two fingers and the center of the target object, θis the robot gripper’s rotation about the (vertical) axis, and dis the minimal distancebetween the gripper’s two fingers during this pinch.2) Large roller / small roller / square press / square punch: {x, y, z, θ }, where {x, y, z}is the bounded location indicating the center of the action, and θis the robot gripper’srotation about the vertical axis. 
In the case of rollers, the rolling distance is fixed andtherefore not included in the action space.3) Circle press / circle punch: {x, y, z}, where {x, y, z}is the bounded location indicat-ing the center of the action. The robot gripper’s rotation is unnecessary because thetool’s bottom surface is a circle.B) Five tools that have an action space with |A|= 2:1) Knife / circle cutter / pusher / skin spatula / filling spatula: {x, y}, where {x, y}is thebounded location indicating the center of the action on the plane. θandzare fixed forthese tools to simplify the action space.C) The action of the hook is precoded.In category B, for all tools except the knife, we leverage the prior that the center of the dough isalways the optimal solution in the action space and directly compute the center from the processedpoint cloud. In the case of the knife, we use the ycoordinate of the center of the dough as thesolution for y(thexyz coordinate system is illustrated in Figure 4). For x, we first compute thevolume of the target dough and then perform a binary search with the center of the dough as thestarting point to find the cut position that results in the closest volume to the target volume.In category C, the hook is precoded first to hook the handle of the dumpling mold, then close themold, press the mold handle to turn the dough into a dumpling shape, and finally open the mold byhooking and lifting the handle.The guiding principle in designing action spaces involves starting with the end-effector’s 6-DoFaction space and eliminating redundant DoFs. For instance, rotations along the xandyaxes aretypically not required to generate a meaningful action. Hence, we opt to exclude them from theaction space of the 14 tools. For grippers, we transform the Cartesian coordinate system into a polarcoordinate system to simplify the search process for action parameters since corner cases in thebounded Cartesian space are usually suboptimal. Following this, we introduce tool-specific DoFs,which are determined by the tool’s geometric properties. For example, in the case of grippers, weincorporate an additional parameter, d, to represent the width between the gripper’s two fingers.Our method can potentially generalize to various challenging dough manipulation tasks besidesdumpling-making, such as making alphabet letter cookies (as shown in the paper), pizza, and noo-dles. A successful transfer requires the ground truth meshes of new tools and data from interactingwith them. We only need 20 minutes of real-world interaction data per tool, demonstrating the ease15of retraining for new tasks and tools. Although we incorporate human prior knowledge to simplifythe action space for tools, it does not constrain the generalization capability since we can easily spec-ify the action space for new tools. One limitation is that hand-designed action spaces may not bedesirable in general manipulation settings. Our insight is that for most tasks, especially tasks wherethe robot uses a tool, there is much redundant and undesired space in the full 6-DoF action space. Inmost cases, humans follow certain rules and a constrained action space when using a specific tool.A future direction is to design a planner to automatically prune the action space conditioned on thetool and the task.6.4.2 Multi-bin ClassificationWe formulate the self-supervised policy training as a multi-bin classification problem inspired byprevious works on 3D bounding box estimation [51, 52]. 
The total loss for the multi-bin classifica-tion isL=|A|Xi=1LAiconf+w· LAiloc, (5)where the confidence loss LAiconfis the softmax loss of the confidences of each bin for each actionparameter Ai, and the localization loss LAilocis the loss that tries to minimize the difference betweenthe estimated parameter and the ground truth parameter. For orientation estimation, we use negativecosine loss as the localization loss and force it to minimize the difference between the ground truthand all the bins that cover that value. We use the smooth L1 loss as the localization loss for actionparameters not representing an orientation. During inference time, for each parameter, the bin withmaximum confidence is selected, and the final output is computed by adding the estimated delta ofthat bin to the center of the same bin.To establish the bins, we first bound our action space into a reasonable range (Amin, Amax). Second,we define the number of bins Nas a hyperparameter and select N= 8for translations and N= 32for rotations. Third, we divide the action space into N+ 1bins with size 2∗(Amax−Amin)/N.The center of the first bin is at Amin, the center of the last bin is at Amax, and each bin overlapswith neighboring bins. This approach is similar to [51].6.4.3 Subgoal DefinitionsAs mentioned in 6.2.1, during data collection, we execute a hand-coded dumpling-making pipelineten times and add point clouds captured before and after using each tool in the pipeline to our datasetas expert demonstrations. The point clouds recorded after using each tool in one of these trajectoriesare selected as subgoals.6.5 Experiment Setup6.5.1 Robot and SensorsWe use the 7-DoF Franka Emika Panda robot arm and its parallel jaw gripper as the base robot. Fourcalibrated Intel RealSense D415 RGB-D cameras are fixed on vertical metal bars around the robottabletop, as shown in Figure 4. The cameras capture 1280 ×720 RGB-D images at 30 Hz. We alsodesign a set of 3D-printed tools based on real-world dough manipulation tools.6.5.2 Tool DesignWe design and 3D-print 14 tools: large roller, small roller, circle press, circle punch, square press,square punch, knife / pusher, circle cutter, two-rod symmetric gripper, asymmetric gripper, two-plane symmetric gripper, skin spatula, filling spatula, and hook. The dumpling mold is the sameas real-world ones. In Figure 7, we compare our 3D-printed tools and their real-world prototypes,which are common kitchen tools for dough manipulation. The design principle of these 3D-printedtools is to mimic real-world ones as closely as possible.16Prototype3D-printed toolsRollersGrippersPresses/punchesKnifeCircle cutterSpatulasHook+Figure 7: Prototypes of 3D-printed tools . We show a comparison between our 3D-printed toolsand their real-world prototypes which are common kitchen tools for dough manipulation. The designprinciple of these 3D-printed tools is to mimic real-world ones as closely as possible. We use 3D-printed tools instead of real-world ones to allow the robot arm to acquire and manipulate the toolsmore easily.The roller is composed of a holder and a rolling pin so that the rolling pin can rotate freely whilethe holder remains static. We designed both large and small rollers to accommodate different needs.We also have a set of punches and presses with square and circle shapes. The knife is a thin planartool that can cut through objects. 
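As noted in Section 6.4.1, the knife's cut position along x is found by a binary search that matches a target piece volume. A minimal sketch of that heuristic is given below, assuming a crude voxel-counting volume estimate over the dough point cloud (the helper names and voxel size are illustrative, not the exact implementation):

import numpy as np

def piece_volume(points, x_cut, voxel=0.005):
    # Rough volume of the dough left of the cutting plane x = x_cut,
    # estimated by counting occupied voxels of size `voxel` (in meters).
    left = points[points[:, 0] < x_cut]
    if len(left) == 0:
        return 0.0
    occupied = np.unique(np.floor(left / voxel).astype(int), axis=0)
    return len(occupied) * voxel ** 3

def find_cut_x(points, target_volume, n_iters=20):
    # Binary search over the cut position so that the left piece's volume
    # is as close as possible to the target volume.
    lo, hi = points[:, 0].min(), points[:, 0].max()
    for _ in range(n_iters):
        mid = 0.5 * (lo + hi)
        if piece_volume(points, mid) < target_volume:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)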
Similarly, the circle cutter can cut an object into a circular shape.Among the grippers, the two-rod symmetric gripper consists of two cylindrical extrusions, the asym-metric gripper consists of a cylindrical and planar part, and the two-plane symmetric gripper consistsof two planar parts. The two extruding rods on each gripper insert into the corresponding holes ofthe two fingers of Franka’s gripper, allowing them to adhere to and move along with the fingers.A linear shaft connects the two parts of each gripper, constraining their movement to a single axis.The skin and filling spatulas have a similar design, except that their two extrusions are each spatula,so they can pick up and put down the soft object without deforming it. The hook and the dumplingmold are tools used together to mold the dough into a dumpling shape.6.5.3 Tool-Switching SetupThe tool-switching setup is an engineering innovation we implement in this project. We adopt twodesigns so that the robot can pick up, use, and put down the tools without any help from humans:(1) The connector on the top of each tool attaches firmly to the Franka’s gripper when it closes itsfingers and also unlocks easily when the gripper reopens. (2) The tool racks on the bottom andright side of the robot table hold all the 3D-printed tools in their upright poses so that the robot caneasily pick up and put down the tools. Additionally, we calibrate the tools’ world coordinates sothat the robot knows where to find each tool. The supplementary videos of making dumplings showexamples of how the robot switches tools.6.6 Comparison with Human Subjects on Dumpling-makingWe invited five human subjects to make dumplings with the same tools to highlight the complexityof dumpling-making. Each subject participated in two experiments: choosing tools independentlyand following a given tool sequence and subgoals. For a fair comparison, human subjects werenot allowed to directly touch the dough with their hands or apply each tool more than five times.Before the experiments, we introduced each tool and gave them sufficient time to get familiar withthe dough’s dynamics and devise their plan. We compare their best attempt among three trials toour method for each experiment. Figure 9 shows that human performance is notably worse than ourmethod without subgoals. Performance improves with the tool sequence and subgoals but remainscomparable to or worse than our method. The fifth supplementary video records the entire process.17KnifeCircle cutterSmall rollerHookAsymmetric gripperSkin spatulaFilling spatulaTwo-plane gripperTwo-rod gripperSquare pressLarge rollerSquare punchCircle punchCircle pressPusherKnifeCircle cutterSmall rollerHookAsymmetric gripperSkin spatulaFilling spatulaTwo-plane gripperTwo-rod gripperSquare pressLarge rollerSquare punchCircle punchCircle pressPusherLabelPrediction020406080100Figure 8: Confusion matrix of the tool classifier predictions . We show the confusion matrix ofthe tool classifier predictions on the test set, which is split from the training data. The tool classifierachieves an accuracy very close to 1.W/o subgoalsW/ subgoalsSubject 3Subject 4Subject 5RoboCookSubject 1Subject 2Figure 9: Comparison with human subjects . We show a comparison with the manipulation resultsof human subjects. In the first row, Human subjects devise their manipulation plan and choose toolsindependently. 
In the second row, human subjects follow a given tool sequence and subgoals.6.7 Tool ClassificationThe training of our tool classifier is supervised learning with a cross-entropy loss as in standardclassification architectures. We split a test set from the training data of the tool classifier and show18the confusion matrix of the tool classifier predictions in Figure 8. The instance accuracy is 0.996.We compared PointNet-based and ResNet-based architectures for the tool classification network.PointNet-base architecture generalizes better due to its ability to encode depth information. Empiri-cally, it demonstrates greater robustness to changes in lighting, dough color, and dough transforma-tions.6.8 Human Evaluation of Alphabet LettersWe recognize a discrepancy between how metrics such as Chamfer Distance measure the resultsand how humans perceive them - these metrics are prone to local noises while humans are good atcapturing the holistic features of the dough. Therefore, we invite 100 human subjects to evaluate theresults. The human survey asks the question: “What alphabet letter is the robot trying to shape in thegiven image?” If we put all 20 images (four methods ×five letters) in Question 1, there could be apredictive bias from seeing more than one image of the same letter. Therefore, we shuffle the orderof 20 images and split them into four groups. Each group contains one image for each letter butfrom different methods. After averaging over five letters, we show the human perceived accuracyand human ranking of the performance of these four methods in Table 1.6.9 Baseline Implementation DetailsBaselines RoboCraft, CEM+GNN, and RL+GNN use the same framework as RoboCook but replacethe policy network with gradient descent (RoboCraft), CEM, and RL. In other words, GNNs in thesebaselines are the same as GNNs we trained in RoboCook. They also use the same tool predictor asin RoboCook to handle this multi-tool task. They are meant to compare the policy network inRoboCook with alternatives.The RL+GNN baseline utilizes a model-based Soft Actor-Critic (SAC) with a learned GNN-baseddynamics model as the world model. The action space aligns with other planning methods, and thestate space comprises the point cloud position. The reward function is derived from the change inChamfer Distance after each grip. Training involves a discount factor of 0.99, a learning rate of0.0003 with the Adam optimizer, 2-layer MLPs with 256 hidden units, and ReLU activation for bothpolicy and critic models. We initially collect 250 steps of warm-up data. The replay buffer size is1e6, and the target smooth coefficient is 0.005. We trained the RL baseline for 2500 steps onlinefor each action prediction to fit into our closed-loop control module. The inputs and outputs of theRL baseline are the same as our policy network. The input is a point cloud of the current doughconfiguration and the point cloud of the subgoal. The output is the action in the defined action spaceof the selected tool. The results shown in Figure 5 and Table 1 indicate that the RL baseline isnoticeably worse than our method.6.10 Analysis of Comparison with BaselinesOur methods outperform baselines RoboCraft, CEM+GNN, and RL+GNN by a large margin, usingthe same GNNs and tool predictors but different planning methods. One reason is that our policynetwork sees a complete coverage of the entire action space from the large synthetic datasets gen-erated by our dynamics model offline. 
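The offline dataset-generation loop referred to here (Section 6.3.3) can be sketched as follows; dynamics_model and sample_random_action stand in for the learned GNN and the per-tool action sampler, and the chained random-walk variant is omitted for brevity:

def build_synthetic_dataset(initial_states, dynamics_model, sample_random_action,
                            actions_per_state=128):
    # For every initial dough state, sample random actions in the tool's
    # action space and roll the learned GNN dynamics forward to obtain
    # input/output point-cloud pairs for policy training.
    dataset = []
    for state in initial_states:
        for _ in range(actions_per_state):
            action = sample_random_action()
            next_state = dynamics_model(state, action)
            dataset.append((state, action, next_state))
    return dataset

# e.g., 300 initial states x 128 actions = 38,400 training pairs for gripping.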
Other planning methods explore the action space online andtherefore have a smaller coverage than ours. By training a goal-conditioned policy network offline,we also make the online action synthesis much more efficient than baseline methods.Another insight is that our PointNet-based policy network is better at capturing higher-level shapechanges of the point cloud, such as concavities. For example, by comparing the current and targetdough configuration, the policy network knows that the gripper should probably pinch the concavelocations in the target dough configuration to deform the current dough into the target shape.Our method also outperforms the baseline CEM+MPM since an MPM simulator suffers from asim2real gap and relies on careful system identification to bridge the gap. Thus, the MPM simulatorunderperforms the GNN, which is directly trained on real-world data.19 |
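For reference, the Chamfer Distance used above both as the evaluation metric and as the RL baseline's reward can be computed from two point clouds as in the minimal numpy sketch below (the exact normalization in our implementation may differ):

import numpy as np

def chamfer_distance(pc_a, pc_b):
    # Symmetric Chamfer Distance between point clouds of shape (N, 3) and (M, 3):
    # mean nearest-neighbor distance in both directions.
    dist = np.linalg.norm(pc_a[:, None, :] - pc_b[None, :, :], axis=-1)  # (N, M)
    return dist.min(axis=1).mean() + dist.min(axis=0).mean()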
b1tl3aOt2R2 | GNFactor: Multi-Task Real Robot Learning withGeneralizable Neural Feature FieldsYanjie Ze1∗Ge Yan2∗Yueh-Hua Wu2∗Annabella Macaluso2Yuying Ge3Jianglong Ye2Nicklas Hansen2Li Erran Li4Xiaolong Wang21Shanghai Jiao Tong University2UC San Diego3University of Hong Kong4AWS AI, Amazon∗Equal Contributionyanjieze.com/GNFactorAbstract: It is a long-standing problem in robotics to develop agents capableof executing diverse manipulation tasks from visual observations in unstructuredreal-world environments. To achieve this goal, the robot needs to have a compre-hensive understanding of the 3D structure and semantics of the scene. In this work,we present GNFactor , a visual behavior cloning agent for multi-task robotic ma-nipulation with Generalizable Neural feature Fields. GNFactor jointly optimizesa generalizable neural field (GNF) as a reconstruction module and a PerceiverTransformer as a decision-making module, leveraging a shared deep 3D voxelrepresentation. To incorporate semantics in 3D, the reconstruction module uti-lizes a vision-language foundation model ( e.g., Stable Diffusion) to distill richsemantic information into the deep 3D voxel. We evaluate GNFactor on 3 realrobot tasks and perform detailed ablations on 10 RLBench tasks with a limitednumber of demonstrations. We observe a substantial improvement of GNFactorover current state-of-the-art methods in seen and unseen tasks, demonstrating thestrong generalization ability of GNFactor.Keywords: Robotic Manipulation, Neural Radiance Field, Behavior Cloning1 IntroductionOne major goal of introducing learning into robotic manipulation is to enable the robot to effectivelyhandle unseen objects and successfully tackle various tasks in new environments. In this paper, wefocus on using imitation learning with a few demonstrations for multi-task manipulation. Using imi-tation learning helps avoid complex reward design and training can be directly conducted on the realrobot without creating its digital twin in simulation [1, 2, 3, 4]. This enables policy learning on di-verse tasks in complex environments, based on users’ instructions (see Figure 1). However, workingwith a limited number of demonstrations presents great challenges in terms of generalization. Mostof these challenges arise from the need to comprehend the 3D structure of the scene, understand thesemantics and functionality of objects, and effectively follow task instructions based on visual cues.Therefore, a comprehensive and informative visual representation of the robot’s observations servesas a crucial foundation for generalization.The development of visual representation for robot learning has mainly focused on learning withina 2D plane. Self-supervised objectives are leveraged to pre-train the representation from the 2Dimage observation [6, 7, 8] or jointly optimized with the policy gradients [9, 10, 11]. While theseapproaches improve sample efficiency and lead to more robust policies, they are mostly applied torelatively simple manipulation tasks. To tackle more complex tasks requiring geometric understand-ing ( e.g., object shape and pose) and with occlusions, 3D visual representation learning has beenrecently adopted with robot learning [11, 12]. For example, Driess et al. [12] train the 3D scenerepresentation by using NeRF and view synthesis to provide supervision. 
While it shows effective-ness over tasks requiring geometric reasoning such as hanging a cup, it only handles the simple7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.“TurntheFaucet”“OpentheTopMicrowaveDoor”“PlacetheTeaPotontheStove”Kitchen1Kitchen2FrontLeftDiffusionFeatureFieldFigure 1: Left: Three camera views used in the real robot setup to reconstruct the feature field generatedby Stable Diffusion [5]. We segment the foreground feature for better illustration. Right: Three language-conditioned real robot tasks across two different kitchens.scene structure with heavy masking in a single-task setting. More importantly, without a semanticunderstanding of the scene, it would be very challenging for the robot to follow the user’s languageinstructions.In this paper, we introduce learning a language-conditioned policy using a novel representationleveraging both 3D and semantic information for multi-task manipulation. We train GeneralizableNeural Feature Fields ( GNF ) which distills pre-trained semantic features from 2D foundation mod-els into the Neural Radiance Fields (NeRFs). We conduct policy learning upon this representation,leading to our model GNFactor . It is important to note that GNFactor learns an encoder to extractscene features in a feed-forward manner, instead of performing per-scene optimization in NeRF.Given a single RGB-D image observation, our model encodes it into a 3D semantic volumetric fea-ture, which is then processed by a Perceiver Transformer [13] architecture for action prediction.To conduct multi-task learning, the Perceiver Transformer takes in language instructions to get taskembedding, and reason the relations between the language and visual semantics for manipulation.There are two branches of training in our framework (see Figure 3): (i) GNF training . Given thecollected demonstrations, we train the Generalizable Neural Feature Fields using view synthesiswith volumetric rendering. Besides rendering the RGB pixels, we also render the features of thefoundation models in 2D space. The GNF learns from both pixel and feature reconstruction atthe same time. To provide supervision for feature reconstruction, we apply a vision foundationmodel ( e.g., pre-trained Stable Diffusion model [5]) to extract the 2D feature from the input viewas the ground truth. In this way, we can distill the semantic features into the 3D space in GNF. (ii)GNFactor joint training. Building on the 3D volumetric feature jointly optimized by the learningobjectives of GNF, we conduct behavior cloning to train the whole model end-to-end.For evaluation, we conduct real-robot experiments on three distinct tasks across two differentkitchens (see Figure 1). We successfully train a single policy that effectively addresses these tasksin different scenes, yielding significant improvements over the baseline method PerAct [3]. We alsoconduct comprehensive evaluations using 10 RLBench simulated tasks [14] and 6 designed general-ization tasks. We observe that GNFactor outperforms PerAct with an average improvement of 1.55xand1.57x, consistent with the significant margin observed in the real-robot experiments.2 Related WorkMulti-Task Robotic Manipulation. Recent works in multi-task robotic manipulation have led tosignificant progress in the execution of complex tasks and the ability to generalize to new scenar-ios [15, 2, 1, 16, 17, 3, 18, 19]. Notable methods often involve the use of extensive interaction datato train multi-task models [2, 1, 16, 17]. 
For example, RT-1 [1] underscores the benefits of task-agnostic training, demonstrating superior performance in real-world robotic tasks across a variety ofdatasets. To reduce the need for extensive demonstrations, methods that utilize keyframes – whichencode the initiation of movement – have proven to be effective [20, 21, 22, 23, 24]. PerAct [3]employs the Perceiver Transformer [13] to encode language goals and voxel observations and showsits effectiveness in real robot experiments. In this work, we utilize the same action prediction frame-2(a)RGBobservationsfor10RLBenchtasks.(c)Realrobotsetup.(b)SampledviewsforGNFtraininginsimulation.Figure 2: Simulation environments and the real robot setup. We show the RGB observations for our 10RLBench tasks in Figure (a), the sampled views for GNF in Figure (b), and the real robot setup in Figure (c).work as PerAct while we focus on improving the generalization ability of this framework by learninga generalizable volumetric representation under limited data.3D Representations for Reinforcement/Imitation Learning (RL/IL). To improve manipulationpolicies by leveraging visual information, numerous studies have concentrated on enhancing 2D vi-sual representations [8, 7, 6, 25], while for addressing more complex tasks, the utilization of 3Drepresentations becomes crucial. Ze et al. [11] incorporates a deep voxel-based 3D autoencoder inmotor control, demonstrating improved sample efficiency compared to 2D representation learningmethods. Driess et al. [12] proposes to first learn a state representation by NeRF and then use thefrozen state for downstream RL tasks. While this work shows the initial success of utilizing NeRFin RL, its applicability in real-world scenarios is constrained due to various limitations: e.g., therequirement of object masks, the absence of a robot arm, and the lack of scene structure. The workclosest to ours is SNeRL [26], which also utilizes a vision foundation model in NeRF. However, sim-ilar to NeRF-RL [12], SNeRL masks the scene structure to ensure functionality and the requirementfor object masks persists, posing challenges for its application in real robot scenarios. Our proposedGNFactor, instead, handles challenging muti-task real-world scenarios, demonstrating the potentialfor real robot applications.Neural Radiance Fields (NeRFs). Neural fields have achieved great success in novel view synthesisand scene representation learning these years [27, 28, 29, 30, 31, 32], and recent works also start toincorporate neural fields into robotics [33, 34, 35, 12, 26]. NeRF [29] stands out for achieving pho-torealistic view synthesis by learning an implicit function of the scene, while it requires per-sceneoptimization and is thus hard to generalize. Many following methods [36, 37, 38, 39, 40, 41, 42]propose more generalizable NeRFs. PixelNeRF [43] and CodeNeRF [37] encode 2D images as theinput of NeRFs, while TransINR [36] leverages a vision transformer to directly infer NeRF parame-ters. A line of recent works [44, 45, 46, 47, 48, 49] utilize pre-trained vision foundation models suchas DINO [50] and CLIP [51] as supervision besides the RGB image, which thus enables the NeRFto learn generalizable features. In this work, we incorporate generalizable NeRF to reconstructdifferent views in RGB and embeddings from a pretrained Stable Diffusion model [5].3 MethodIn this section, we detail the proposed GNFactor, a multi-task agent with a 3D volumetric represen-tation for real-world robotic manipulation. 
GNFactor is composed of a volumetric rendering moduleand a 3D policy module, sharing the same deep volumetric representation. The volumetric render-ing module learns a Generalizable Neural Feature Field (GNF), to reconstruct the RGB image fromcameras and the embedding from a vision-language foundation model, e.g., Stable Diffusion [5].The task-agnostic nature of the vision-language embedding enables the volumetric representation tolearn generalizable features via neural rendering and thus helps the 3D policy module better handlemulti-task robotic manipulation. The task description is encoded with CLIP [51] to obtain the taskembedding T. An overview of GNFactor is shown in Figure 3.3Deep 3D VolumeRGB-D(Front View)VoxelEncoderVoxelize“Place the Teapoton the Stove”LanguageEncoderTokenizePerceiver...PatchifyRobot StateQtransQrotQcollideQopenRendererFigure 3: Overview of GNFactor. GNFactor takes an RGB-D image as input and encodes it using a voxelencoder to transform it into a feature in deep 3D volume. This volume is then shared by two modules: vol-umetric rendering (Renderer) and robot action prediction (Perceiver). These two modules are jointly trained,which optimizes the shared features to not only reconstruct vision-language embeddings (Diffusion Feature)and other views (RGB), but also to estimate accurate Q-values ( Qtrans,Qrot,Qcollide,Qopen).3.1 Problem DefinitionTo effectively address complex real-world robotic problems, we structure the observation space asa 3D voxel space O ∈R1003×3, as opposed to the commonly used 2D images [1, 2, 7, 8]. The 3Dvoxel observation originates from an RGB-D image captured by a single front camera with knownextrinsic and intrinsic parameters, ensuring our method’s practical applicability in the real world.In addition to the front camera view used for policy training, we also gather additional kviews fortraining the GNF. We collect only RGB images for these additional views instead of RGB-D images.In real-world scenarios, we use k= 2, while in simulated environments, we set k= 19 .The action of the robot arm with a gripper is represented by translation atrans∈R3, rotation arot∈R(360/5)×3, gripper openness aopen∈[0,1], and collision avoidance acollision ∈[0,1]. For the rotationarot, each rotation axis is discretized into R= 5 bins. The collision avoidance parameter acollisioninstructs the motion planner regarding the necessity to avoid collisions, which is crucial as our tasksencompasses both contact-based and non-contact-based motions.Due to the inefficiency of continuous action prediction and the extensive data requirements that comewith it, we reformulate the behavior cloning problem as a keyframe-prediction problem [3, 52]. Wefirst extract keyframes from expert demonstrations using the following metric: a frame in the tra-jectory is a keyframe when joint velocities approach zero and the gripper’s open state remains con-stant. The model is then trained to predict the subsequent keyframe based on current observations.This formulation effectively transforms the continuous control problem into a discretized keyframe-prediction problem, delegating the internal procedures to the RRT-connect motion planner [53] insimulation and Linear motion planner in real-world xArm7 robot.3.2 Learning Volumetric Representations with Generalizable Neural Feature FieldsIn our initial step, we transform the RGB-D image into a 1003voxel. Then the 3D voxel encoderencodes this 3D voxel and outputs our volumetric representation v∈R1003×128. 
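The RGB-D-to-voxel step can be sketched as follows: the depth map is back-projected with the known camera intrinsics and the resulting colored points are scattered into a 100^3 grid over the workspace bounds. This is a minimal sketch under our own assumptions (camera-frame points, RGB plus occupancy channels only); the full input voxel additionally stores coordinates and indices, and points would first be transformed to the world frame with the known extrinsics.

import numpy as np

def rgbd_to_voxel(rgb, depth, intrinsics, bounds, grid=100):
    # rgb: (H, W, 3), depth: (H, W) in meters, intrinsics: (fx, fy, cx, cy),
    # bounds: (lo, hi) workspace limits given as two (3,) arrays.
    h, w = depth.shape
    fx, fy, cx, cy = intrinsics
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], -1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    lo, hi = bounds
    idx = np.floor((pts - lo) / (hi - lo) * grid).astype(int)
    valid = np.all((idx >= 0) & (idx < grid), axis=1)
    voxel = np.zeros((grid, grid, grid, 4), dtype=np.float32)  # RGB + occupancy
    voxel[idx[valid, 0], idx[valid, 1], idx[valid, 2], :3] = colors[valid]
    voxel[idx[valid, 0], idx[valid, 1], idx[valid, 2], 3] = 1.0
    return voxel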
To enhance thevolumetric representation vwith structural knowledge and language semantics, we learn a General-izable Neural Feature Field (GNF) that takes the deep volume vas the scene representation and themodel is learned by reconstructing the additional views and the features predicted by a 2D vision-language foundation model [5]. The entire neural rendering process is described as follows.We denote vx∈R128as the sampled 3D feature for the 3D point xusing the volumetric represen-tation v.vxis formed with trilinear interpolation due to the discretized nature of the volume v. OurGNF primarily consists of three functions: (i) one density function σ(x, vx) :R3+1287→R+thatmaps the 3D point xand the 3D feature vxto the density σ, (ii) one RGB function c(x,d, vx) :R3+3+1287→R3that maps the 3D point x, the view direction d, and the 3D feature vxto color,and (iii) one vision-language embedding function f(x,d, vx) :R3+3+1287→R512that maps the 3D4pointx, the view direction d, and the 3D feature vxto the vision-language embedding. In Figure 3,the corresponding components of these three functions are illustrated. Given a pixel’s camera rayr(t) =o+td, which is defined by the camera origin o∈R3, view direction dand depth twithbounds [tn, tf], the estimated color and embedding of the ray can be calculated by:ˆC(r, v) =ZtftnT(t)σ(r(t), vx(t))c(r(t),d, vx(t))dt ,ˆF(r, v) =ZtftnT(t)σ(r(t), vx(t))f(r(t),d, vx(t))dt ,(1)where T(t) = exp−Rttnσ(s)ds. The integral is approximated with numerical quadrature in theimplementation. Our GNF is then optimized to reconstruct the RGB image and the vision-languageembedding from multiple views and diverse scenes by minimizing the following loss:Lrecon=Xr∈R∥C(r)−ˆC(r)∥22+λfeat∥F(r)−ˆF(r)∥22, (2)where C(r)is the ground truth color, F(r)is the ground truth vision-language embedding generatedby Stable Diffusion, Ris the set of rays generated from camera poses, and λfeatis the weight for theembedding reconstruction loss. For efficiency, we sample brayrays given one target view, insteadof reconstructing the entire image. To help the GNF training, we use a coarse-to-fine hierarchicalstructure as the original NeRF [29] and apply depth-guided sampling [54] in the “fine” network.3.3 Action Prediction with Volumetric RepresentationsThe volumetric representation vis optimized not only to achieve reconstruction of the GNF module,but also to predict the desired action for accomplishing manipulation tasks within the 3D policy. Assuch, we jointly train the representation vto satisfy the objectives of both the GNF and the 3D policymodule. In this section, we elaborate the training objective and the architecture of the 3D policy.We employ a Perceiver Transformer [3] to handle the high-dimensional multi-modal input, i.e., the3D volume, the robot’s proprioception, and the language feature. We first condense the sharedvolumetric representation vinto a volume of size 203×128using a 3D convolution layer with akernel size and stride of 5, followed by a ReLU function, and flatten the 3D volume into a sequenceof small cubes of size 8000×128. The robot’s proprioception is projected into a 128-dimensionalspace and concatenated with the volume sequence for each cube, resulting in a sequence of size8000×256. We then project the language token features from CLIP into the same dimensions ( 77×256) and concatenate these features with a combination of the 3D volume, the robot’s proprioceptionstate, and the CLIP token embedding. 
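The shape bookkeeping described here can be written compactly in PyTorch-style code (a minimal sketch; the convolution, projection layers, and the learnable positional encoding are assumed to be defined elsewhere):

import torch

def build_perceiver_input(v, proprio, lang_tokens, conv3d, proj_proprio, proj_lang):
    # v: (B, 128, 100, 100, 100) shared volumetric representation
    # proprio: (B, P) robot state, lang_tokens: (B, 77, 512) CLIP token features
    x = torch.relu(conv3d(v))                      # (B, 128, 20, 20, 20)
    x = x.flatten(2).transpose(1, 2)               # (B, 8000, 128)
    p = proj_proprio(proprio)                      # (B, 128)
    p = p.unsqueeze(1).expand(-1, x.shape[1], -1)  # (B, 8000, 128)
    x = torch.cat([x, p], dim=-1)                  # (B, 8000, 256)
    lang = proj_lang(lang_tokens)                  # (B, 77, 256)
    return torch.cat([x, lang], dim=1)             # (B, 8077, 256)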
The result is a sequence with dimensions of 8077×256.This sequence is combined with a learnable positional encoding and passed through the PerceiverTransformer, which outputs a sequence of the same size. We remove the last 77features for the easeof voxelization [3] and reshape the sequence back to a voxel of size 203×256. This voxel is thenupscaled to 1003×128with trilinear interpolation and referred to as vPT.vPTis shared across threeaction prediction heads ( Qopen,Qtrans,Qrot,Qcollide in Figure 3) to determine the final robot actionsat the same scale as the observation space. To retain the learned features from GNF training, wecreate a skip connection between our volumetric representation vandvPT. The combined volumefeature (v, v PT)is used to predict a 3D Q-function Qtransfor translation, as well as Q-functionsfor other robot operations like gripper openness ( Qopen), rotation ( Qrot), and collision avoidance(Qcollide ). The Q-function here represents the action values of one timestep, differing from thetraditional Q-function in RL that is for multiple timesteps. For example, in each timestep, the 3DQtrans-value would be equal to 1for the most possible next voxel and 0for other voxels. The modelthen optimizes the cross-entropy loss like a classifier,Laction=−EYtrans[logVtrans]−EYrot[logVrot]−EYopen[logVopen]−EYcollide[logVcollide ],(3)where Vi= softmax( Qi)forQi∈[Qtrans,Qopen,Qrot,Qcollide]andYi∈[Ytrans, Yrot, Yopen, Ycollide]is the ground truth one-hot encoding. The overall learning objective for GNFactor is as follows:LGNFactor =Laction+λreconLrecon, (4)502040Success Rates20.431.7Multi-Task (10)0204018.028.3Generalization (6)020406028.354.8Multi-Task (6)0204016.733.3Generalization (6)PerAct GNFactor (ours)RLBench RealRobotFigure 4: Main experiment results. We present the average success rates in both the multi-task and gener-alization settings across RLBench tasks and real robot tasks. The error bar represents one standard deviation.The number in the bracket denotes the number of tasks.where λreconis the weight for the reconstruction loss to balance the scale of different objectives. Totrain the GNFactor, we employ a joint training approach in which the GNF and 3D policy module areoptimized jointly, without any pre-training. From our empirical observation, this approach allowsfor better fusion of information from the two modules when learning the shared features.4 ExperimentsIn this section, we conduct experiments to answer the following questions: (i) Can GNFactor surpassthe baseline model in simulated environments? (ii) Can GNFactor generalize to novel scenes insimulation? (iii) Does GNFactor learn a superior policy that handles real robot tasks in two differentkitchens with noisy and limited real-world data? (iv) What are the crucial factors in GNFactor toensure the functionality of the entire system? Our concluded results are given in Figure 4.4.1 Experiment SetupFor the sake of reproducibility and benchmarking, we conduct our primary experiments in RLBenchsimulated tasks. Furthermore, to show the potential of GNFactor in the real world, we design a setof real robot experiments across two kitchens. We compare our GNFactor with the strong language-conditioned multi-task agent PerAct [3] in both simulation and the real world, emphasizing theuniversal functionality of GNFactor. Both GNFactor and PerAct use the single RGB-D image fromthe front camera as input to construct the voxel grid. 
In the multi-task simulation experiments, wealso create a stronger version of PerAct by adding more camera views as input to fully cover thescene (visualized in Figure 10). Figure 2 shows our simulation tasks and the real robot setup. Webriefly describe the tasks and details are left in Appendix B and Appendix C.Simulation. We select 10challenging language-conditioned manipulation tasks from the RLBenchtasksuites [14]. Each task has at least two variations, totaling 166 variations. These variationsencompass several types, such as variations in shape and color. Therefore, to achieve high successrates with very limited demonstrations, the agent needs to learn generalizable knowledge aboutmanipulation instead of merely overfitting to the given demonstrations. We use the RGB-D imageof size 128×128×3from the single front camera as the observation. To train the GNF, we alsoadd additional 19camera views to provide RGB images as supervision.Real robot. We use the xArm7 robot with a parallel gripper in real robot experiments. We set uptwo toy kitchen environments to make the agent generalize manipulation skills across the scenes anddesigned three manipulation tasks, including open the microwave door ,turn the faucet , and relocatethe teapot , as shown in Figure 1. We set up three RealSense cameras around the robot. Amongthe three cameras, the front one captures the RGB-D observations for the policy training and theleft/right one provides the RGB supervision for the GNF training.Expert Demonstrations. We collect 20demonstrations for each RLBench task with the motionplanner. The task variation is uniformly sampled. We collect 5demonstrations for each real robottask using a VR controller. Details for collection remain in Appendix D.Generalization tasks. To further show the generalization ability of GNFactor, we design additional6simulated tasks and 3real robot tasks based on the original training tasks and add task distractors.Training details. One agent is trained with two NVIDIA RTX3090 GPU for 2days ( 100k iterations)with a batch size of 2. The shared voxel encoder of GNFactor is implemented as a lightweight 3DUNet with only 0.3M parameters. The Perceiver Transformer keeps the same number of parametersas PerAct [3] ( 25.2M parameters), making our comparison with PerAct fair.6Table 1: Multi-task test results on RLBench. We evaluate 25episodes for each checkpoint on 10tasksacross 3seeds and report the success rates (%) of the final checkpoints. Our method outperforms the mostcompetitive baseline PerAct [3] with an average improvement of 1.55x and even still largely surpasses PerActwith 4 cameras as input. The additional camera views are visualized in Figure 10.Method / Task close jar open drawer sweep to dustpan meat off grill turn tap AveragePerAct 18.7±8.2 54.7±18.6 0.0±0.0 40.0±17.0 38.7±6.8PerAct (4 Cameras) 21.3±7.5 44.0±11.3 0.0±0.0 65.3±13.2 46.7±3.8GNFactor 25.3±6.8 76.0±5.7 28.0±15.0 57.3±18.9 50.7±8.2Method / Task slide block put in drawer drag stick push buttons stack blocksPerAct 18.7±13.6 2.7±3.3 5.3±5.0 18.7±12.4 6.7±1.9 20.4PerAct (4 Cameras) 16.0±14.2 6.7±6.8 12.0±3.3 9.3±1.9 5.3±1.9 22.7GNFactor 20.0±15.0 0.0±0.0 37.3±13.2 18.7±10.0 4.0±3.3 31.7Table 2: Generalization to unseen tasks on RLBench. We evaluate 20episodes for each task with the finalcheckpoint across 3seeds. We denote “L” as a larger object, “S” as a smaller object, “N” as a new position, and“D” as adding a distractor. 
Our method outperforms PerAct with an average improvement of 1.57x.Method / Task drag (D) slide (L) slide (S) open (n) turn (N) push (D) AveragePerAct 6.6±4.7 33.3±4.7 5.0±4.1 25.0±10.8 18.3±6.2 20.0±7.1 18.0GNFactor 46.7±30.6 25.0±4.1 6.7±6.2 31.7±6.228.3±2.431.7±2.4 28.34.2 Simulation ResultsWe report the success rates for multi-task tests on RLBench in Table 1 and for generalization to newenvironments in Table 2. We conclude our observations as follows:Dominance of GNFactor over PerAct for multi-task learning. As shown by Table 1 and Fig-ure 4, GNFactor achieves higher success rates across various tasks compared to PerAct, particularlyexcelling in challenging long-horizon tasks. For example, in sweep to dustpan task, the robotneeds to first pick up the broom and use the broom to sweep the dust into the dustpan. We find thatGNFactor achieves a success rate of 28.0%, while PerAct could not succeed at all. In simpler taskslikeopen drawer where the robot only pulls the drawer out, both GNFactor and PerAct performreasonably well, with success rates of 76.0%and54.7%respectively. Furthermore, we observethat enhancing PerAct with extra camera views does not result in significant improvements. Thisunderscores the importance of efficiently utilizing the available camera views.Generalization ability of GNFactor to new tasks. In Table 2, we observe that the change made onthe environments such as distractors impacts all the agents negatively, while GNFactor shows bettercapability of generalization on 5 out of 6 tasks compared to PerAct. We also find that for somechallenging variations such as the smaller block in the task slide (S) , both GNFactor and PerActstruggle to handle. This further emphasizes the importance of robust generalization skills.Ablations. We summarize the key components in GNFactor that contribute to the success ofthe volumetric representation in Table 4. From the ablation study, we gained several insights:Table 4: Ablations. We report theaveraged success rates on 10RL-Bench tasks. “DGS” is short fordepth-guided sampling. “ →” meansreplacing.Ablation Success Rate (%)GNFactor 36.8w/o. GNF objective 24.2w/o. RGB objective 27.2w/o. Diffusion 30.0Diffusion →DINO 30.4Diffusion →CLIP 32.0w/o. DGS 29.2w/o. skip connection 27.6k= 19→9 33 .2λfeat= 0.01→1.0 35 .2λrecon= 0.01→1.0 35 .2(i) Our GNF reconstruction module plays a crucial role in multi-task robot learning. Moreover, the RGB loss is essential forlearning a consistent 3D feature in addition to the feature loss,especially since the features derived from foundation models arenot inherently 3D consistent.(ii) The volumetric representation benefits from Diffusion fea-tures and depth-guided sampling, where the depth prior is uti-lized to enhance the sampling quality in neural rendering. Anintuitive explanation is that GNF, when combined with DGS,becomes more adept at learning depth and 3D structure infor-mation. This enhanced understanding allows the 3D representa-tion to better concentrate on the surfaces of objects rather thanthe entire volume. Moreover, replacing Stable Diffusion withDINO [50] or CLIP [51] would not result in similar improvements easily, indicating the importanceof our vision-language feature.7Table 3: Multi-task test results on real robot. We evaluate 10episodes for each task and report the resultingsuccess rate (%). 
We denote “door” as “open door”, “faucet” as “turn faucet”, and “teapot” as “relocate teapot”.The number in the parenthesis suggests the kitchen ID and “d” suggests testing with distractors.Method / Task door (1) faucet (1) teapot (1) door (1,d) faucet (1,d) teapot (1,d) AveragePerAct 30 80 0 10 50 0GNFactor 40 80 40 30 50 30Method / Task door (2) faucet (2) teapot (2) door (2,d) faucet (2,d) teapot (2,d)PerAct 10 50 0 10 30 0 22.5GNFactor 50 70 40 20 40 30 43.3(iii) While the use of skip connection is not a new story and we merely followed the structure ofPerAct, the result of removing the skip connection suggests that our voxel representation, whichdistills features from the foundation model, plays a critical role in predicting the final action.(iv) Striking a careful balance between the neural rendering loss and the action prediction loss iscritical for optimal performance and utilizing information from multiple views by our GNF moduleproves to be beneficial for the single-view decision module.Furthermore, we provide the view synthesis in the real world, generated by GNFactor in Figure 5and Figure 6. We also give the quantitative evaluation measured by PSNR [29]. We observe thatthe rendered views are somewhat blurred since the volumetric presentation learned by GNFactor isoptimized to minimize both the neural rendering loss as well as the action prediction loss, and therendering quality is largely improved when the behavior cloning loss is removed and only the GNF istrained. Notably, for the view synthesis in the real world, we do not have access to ground-truth pointclouds for either training or testing. Instead, the point clouds are sourced from RealSense camerasand are therefore imperfect. Despite the limitations in achieving accurate pixel-level reconstructionresults, we focus on learning semantic understanding of the whole scene from distilling Diffusionfeatures, which is more important for policy learning.4.3 Real Robot ExperimentsWe summarize the results of our real robot experiment in Table 3. From the experiments, GNFactoroutperforms the PerAct baseline on almost all tasks. Notably, in the teapot task where the agentis required to accurately determine the grasp location and handle the teapot from a correct angle,PerAct fails to accomplish the task and obtains a zero success rate across two kitchens. We observedthat it is indeed challenging to learn a delicate policy from only 5demonstrations. However, byincorporating the representation from the embedding of a vision-language model, GNFactor gainsan understanding of objects. As such, GNFactor does not simply overfit to the given demonstrations.The second kitchen (Figure 1) presents more challenges due to its smaller size compared to thefirst kitchen. This requires higher accuracy to manipulate the objects effectively. The performancegap between GNFactor and the baseline PerAct becomes more significant in the second kitchen.Importantly, our method does not suffer the same performance drop transitioning from the firstkitchen to the second, unlike the baseline.We also visualize our 3D policy module by Grad-CAM [55], as shown in Figure 7. We use thegradients and the 3D feature map from the 3D convolution layer after the Perceiver Transformer tocompute Grad-CAM. 
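A minimal sketch of this 3D extension of Grad-CAM [55], assuming the selected Q-value has already been backpropagated to obtain the gradients of the chosen 3D convolution layer:

import torch

def grad_cam_3d(feature_map, gradients):
    # feature_map, gradients: (C, D, H, W) activations of the chosen 3D conv
    # layer and the gradients of the selected Q-value with respect to them.
    weights = gradients.mean(dim=(1, 2, 3))                              # (C,)
    cam = torch.relu((weights[:, None, None, None] * feature_map).sum(0))
    return cam / (cam.max() + 1e-8)                                      # (D, H, W)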
We observe that the target objects are clearly attended by our policy, thoughthe training signal is only the Q-value for a single voxel.5 Conclusion and LimitationsIn this work, we propose GNFactor, a visual behavior cloning agent for real-world multi-task roboticmanipulation. GNFactor utilizes a Generalizable Neural Feature Field (GNF) to learn a 3D volu-metric representation, which is also used by the action prediction module. We employ the vision-language feature from the foundation model Stable Diffusion besides the RGB feature to supervisethe GNF training and observe that the volumetric representation enhanced by the GNF is helpful fordecision-making. GNFactor achieves strong results in both simulation and the real world, across 10RLBench tasks and 3real robot tasks, showcasing the potential of GNFactor in real-world scenarios.One major limitation of GNFactor is the requirement of multiple views for the GNF training, whichcan be challenging to scale up in the real world. Currently, we use three fixed cameras for GNFactor,but it would be interesting to explore using a cell phone to randomly collect camera views, wherethe estimation of the camera poses would be a challenge.8Acknowledgment. This work was supported, in part, by the Amazon Research Award, Cisco Fac-ulty Award and gifts from Qualcomm.References[1] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv ,2022.[2] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z:Zero-shot task generalization with robotic imitation learning. In CoRL , 2022.[3] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In CoRL , 2023.[4] M. Dalal, A. Mandlekar, C. Garrett, A. Handa, R. Salakhutdinov, and D. Fox. Imitating taskand motion planning with visuomotor transformers. arXiv , 2023.[5] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image syn-thesis with latent diffusion models. In CVPR , 2022.[6] S. Parisi, A. Rajeswaran, S. Purushwalkam, and A. Gupta. The unsurprising effectiveness ofpre-trained vision models for control. In ICML , 2022.[7] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. arXiv , 2022.[8] I. Radosavovic, T. Xiao, S. James, P. Abbeel, J. Malik, and T. Darrell. Real-world robotlearning with masked visual pre-training. In CoRL , 2023.[9] M. Laskin, A. Srinivas, and P. Abbeel. Curl: Contrastive unsupervised representations forreinforcement learning. In ICML , 2020.[10] N. Hansen and X. Wang. Generalization in reinforcement learning by soft data augmentation.InICRA , 2021.[11] Y . Ze, N. Hansen, Y . Chen, M. Jain, and X. Wang. Visual reinforcement learning with self-supervised 3d representations. RA-L , 2023.[12] D. Driess, I. Schubert, P. Florence, Y . Li, and M. Toussaint. Reinforcement learning withneural radiance fields. NeurIPS , 2022.[13] A. Jaegle, F. Gimeno, A. Brock, O. Vinyals, A. Zisserman, and J. Carreira. Perceiver: Generalperception with iterative attention. In ICML , 2021.[14] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. Rlbench: The robot learning benchmark &learning environment. RA-L , 2020.[15] R. Rahmatizadeh, P. Abolghasemi, L. B ̈ol ̈oni, and S. Levine. Vision-based multi-task manip-ulation for inexpensive robots using end-to-end learning from demonstration. 
In 2018 IEEEinternational conference on robotics and automation (ICRA) , pages 3758–3765. IEEE, 2018.[16] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipu-lation. In CoRL , 2022.[17] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly,M. Kalakrishnan, V . Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv , 2018.9[18] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: Abenchmark and evaluation for multi-task and meta reinforcement learning. In Conference onrobot learning , pages 1094–1100. PMLR, 2020.[19] R. Yang, H. Xu, Y . Wu, and X. Wang. Multi-task reinforcement learning with soft modular-ization. Advances in Neural Information Processing Systems , 33:4767–4777, 2020.[20] S. Song, A. Zeng, J. Lee, and T. Funkhouser. Grasping in the wild: Learning 6dof closed-loop grasping from low-cost demonstrations. IEEE Robotics and Automation Letters , 5(3):4978–4985, 2020.[21] A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox. 6-dof grasping for target-drivenobject manipulation in clutter. In 2020 IEEE International Conference on Robotics and Au-tomation (ICRA) , pages 6232–6238. IEEE, 2020.[22] A. Mousavian, C. Eppner, and D. Fox. 6-dof graspnet: Variational grasp generation for objectmanipulation. In ICCV , 2019.[23] Z. Xu, Z. He, and S. Song. Universal manipulation policy network for articulated objects.RA-L , 2022.[24] Y . Li, S. Agrawal, J.-S. Liu, S. K. Feiner, and S. Song. Scene editing as teleoperation: A casestudy in 6dof kit assembly. In IROS , 2022.[25] N. Hansen, Z. Yuan, Y . Ze, T. Mu, A. Rajeswaran, H. Su, H. Xu, and X. Wang. On pre-trainingfor visuo-motor control: Revisiting a learning-from-scratch baseline. In ICML , 2022.[26] D. Shim, S. Lee, and H. J. Kim. Snerl: Semantic-aware neural radiance fields for reinforcementlearning. ICML , 2023.[27] Y . Chen, S. Liu, and X. Wang. Learning continuous image representation with local implicitimage function. In Proceedings of the IEEE/CVF conference on computer vision and patternrecognition , pages 8628–8638, 2021.[28] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger. Occupancy networks:Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF conference oncomputer vision and pattern recognition , pages 4460–4470, 2019.[29] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. Nerf:Representing scenes as neural radiance fields for view synthesis. Communications of the ACM ,65(1):99–106, 2021.[30] M. Niemeyer, L. Mescheder, M. Oechsle, and A. Geiger. Occupancy flow: 4d reconstructionby learning particle dynamics. In Proceedings of the IEEE/CVF international conference oncomputer vision , pages 5379–5389, 2019.[31] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove. Deepsdf: Learning con-tinuous signed distance functions for shape representation. In Proceedings of the IEEE/CVFconference on computer vision and pattern recognition , pages 165–174, 2019.[32] V . Sitzmann, M. Zollh ̈ofer, and G. Wetzstein. Scene representation networks: Continuous3d-structure-aware neural scene representations. Advances in Neural Information ProcessingSystems , 32, 2019.[33] Z. Jiang, Y . Zhu, M. Svetlik, K. Fang, and Y . Zhu. Synergies between affordance and geometry:6-dof grasp detection via implicit representations. arXiv preprint arXiv:2104.01542 , 2021.[34] Y .-C. Lin, P. Florence, A. Zeng, J. T. 
Barron, Y . Du, W.-C. Ma, A. Simeonov, A. R. Garcia,and P. Isola. Mira: Mental imagery for robotic affordances. In Conference on Robot Learning ,pages 1916–1927. PMLR, 2023.10[35] Y . Li, S. Li, V . Sitzmann, P. Agrawal, and A. Torralba. 3d neural scene representations forvisuomotor control. In Conference on Robot Learning , pages 112–123. PMLR, 2022.[36] Y . Chen and X. Wang. Transformers as meta-learners for implicit neural representations. InComputer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27,2022, Proceedings, Part XVII , pages 170–187. Springer, 2022.[37] W. Jang and L. Agapito. Codenerf: Disentangled neural radiance fields for object categories.InProceedings of the IEEE/CVF International Conference on Computer Vision , pages 12949–12958, 2021.[38] K.-E. Lin, Y .-C. Lin, W.-S. Lai, T.-Y . Lin, Y .-C. Shih, and R. Ramamoorthi. Vision transformerfor nerf-based view synthesis from a single input image. In Proceedings of the IEEE/CVFWinter Conference on Applications of Computer Vision , pages 806–815, 2023.[39] J. Reizenstein, R. Shapovalov, P. Henzler, L. Sbordone, P. Labatut, and D. Novotny. Commonobjects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction. InProceedings of the IEEE/CVF International Conference on Computer Vision , pages 10901–10911, 2021.[40] K. Rematas, R. Martin-Brualla, and V . Ferrari. Sharf: Shape-conditioned radiance fields froma single view. arXiv preprint arXiv:2102.08860 , 2021.[41] A. Trevithick and B. Yang. Grf: Learning a general radiance field for 3d representation andrendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision ,pages 15182–15192, 2021.[42] Q. Wang, Z. Wang, K. Genova, P. P. Srinivasan, H. Zhou, J. T. Barron, R. Martin-Brualla,N. Snavely, and T. Funkhouser. Ibrnet: Learning multi-view image-based rendering. In Pro-ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages4690–4699, 2021.[43] A. Yu, V . Ye, M. Tancik, and A. Kanazawa. pixelnerf: Neural radiance fields from one orfew images. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 4578–4587, 2021.[44] V . Tschernezki, I. Laina, D. Larlus, and A. Vedaldi. Neural feature fusion fields: 3d distillationof self-supervised 2d image representations. arXiv , 2022.[45] S. Kobayashi, E. Matsumoto, and V . Sitzmann. Decomposing nerf for editing via feature fielddistillation. NeurIPS , 2022.[46] J. Ye, N. Wang, and X. Wang. Featurenerf: Learning generalizable nerfs by distilling founda-tion models. arXiv , 2023.[47] J. Kerr, C. M. Kim, K. Goldberg, A. Kanazawa, and M. Tancik. Lerf: Language embeddedradiance fields. arXiv preprint arXiv:2303.09553 , 2023.[48] K. M. Jatavallabhula, A. Kuwajerwala, Q. Gu, M. Omama, T. Chen, S. Li, G. Iyer, S. Saryazdi,N. Keetha, A. Tewari, et al. Conceptfusion: Open-set multimodal 3d mapping. arXiv preprintarXiv:2302.07241 , 2023.[49] K. Blomqvist, F. Milano, J. J. Chung, L. Ott, and R. Siegwart. Neural implicit vision-languagefeature fields. arXiv preprint arXiv:2303.10962 , 2023.[50] M. Caron, H. Touvron, I. Misra, H. J ́egou, J. Mairal, P. Bojanowski, and A. Joulin. Emergingproperties in self-supervised vision transformers. In ICCV , 2021.11[51] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In ICML , 2021.[52] S. James, K. Wada, T. Laidlow, and A. J. Davison. 
Coarse-to-fine q-attention: Efficient learningfor visual robotic manipulation via discretisation. In CVPR , 2022.[53] S. Klemm, J. Oberl ̈ander, A. Hermann, A. Roennau, T. Schamm, J. M. Zollner, and R. Dill-mann. Rrt*-connect: Faster, asymptotically optimal motion planning. In 2015 IEEE interna-tional conference on robotics and biomimetics (ROBIO) , pages 1670–1677. IEEE, 2015.[54] H. Lin, S. Peng, Z. Xu, Y . Yan, Q. Shuai, H. Bao, and X. Zhou. Efficient neural radiance fieldsfor interactive free-viewpoint video. In SIGGRAPH Asia , 2022.[55] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-cam:Visual explanations from deep networks via gradient-based localization. In Proceedings of theIEEE international conference on computer vision , pages 618–626, 2017.[56] Y . You, J. Li, S. Reddi, J. Hseu, S. Kumar, S. Bhojanapalli, X. Song, J. Demmel, K. Keutzer,and C.-J. Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes.arXiv , 2019.12A VisualizationsPSNR=15.01PSNR=13.88PSNR=14.67PSNR=21.51PSNR=21.28PSNR=19.56Ground Truth RGB With Action LossWithout Action LossRGBFeatureRGBFeatureLeftFrontRightFigure 5: View synthesis of GNFactor in the real world. PSNR is computed for quantitative evaluation.The visualization with the action loss is relatively blurred compared to that without the action loss. The noisyrendering is mainly because, in inference, we do not optimize per-step for rendering but just perform onefeedforward to obtain the feature.FeatureRGBFigure 6: More novel view synthesis results. Both RGB and features are synthesized. We remove the actionloss here for a better rendering quality. Videos are available on yanjieze.com/GNFactor .Input RGB-D ImageInput RGB-D Image3D Grad-CAM3D Grad-CAMFigure 7: Visualize the 3D policy module by Grad-CAM [55]. Though the supervision signal is only theQ-value for a single voxel during the training process, we observe in visualizations that the target objects areclearly attended by our policy. Videos are available on yanjieze.com/GNFactor .13B Task DescriptionsSimulated tasks. We select 10language-conditioned tasks from RLBench [14], all of which involveat least 2variations. See Table 5 for an overview. Our task variations include randomly sampledcolors, sizes, counts, placements, and categories of objects, totaling 166different variations. The setof colors have 20 instances: red, maroon, lime, green, blue, navy, yellow, cyan, magenta, silver, gray,orange, olive, purple, teal, azure, violet, rose, black, and white. The set of sizes includes 2 types:short and tall. The set of counts has 3 instances: 1, 2, 3. The placements and object categoriesare specific to each task. For example, open drawer has 3 placement locations: top, middle andbottom. In addition to these semantic variations, objects are placed on the tabletop at random poseswithin a limited range.Table 5: Language-conditioned tasks in RLBench [14].Task Variation Type # of Variations Avg. 
Keyframs Language Templateclose jar color 20 6.0 “close the — jar”open drawer placement 3 3.0 “open the — drawersweep to dustpan size 2 4.6 “sweep dirt to the — dustpan”meat off grill category 2 5.0 “take the — off the grill”turn tap placement 2 2.0 “turn — tap”slide block color 4 4.7 “slide the block to — target”put in drawer placement 3 12.0 “put the item in the — drawer”drag stick color 20 6.0 “use the stick to drag the cube onto the — — target”push buttons color 50 3.8 “push the — button, [then the — button]”stack blocks color, count 60 14.6 “stack — — blocks”Generalization tasks in simulation. We design 6additional tasks where the scene is changed basedon the original training environment, to test the generalization ability of GNFactor. Table 6 gives anoverview of these tasks. Videos are also available on yanjieze.com/GNFactor .Table 6: Generalization tasks based on RLBench.Task Base Changedrag (D) drag stick add two colorful buttons on the tableslide (L) slide block change the block size to a larger oneslide (S) slide block change the block size to a smaller oneopen (n) open drawer change the position of the drawerturn (N) turn tap change the position of the tappush (D) push buttons add two colorful jar on the tableReal robot tasks. In the experiments, we perform three tasks along with three additional tasks wheredistracting objects are present. The door task requires the agent to open the door on an mircowave,a task which poses challenges due to the precise coordination required. The faucet task requiresthe agent to rotate the faucet back to center position, which involves intricate motor control. Lastly,theteapot task requires the agent to locate the randomly placed teapot in the kitchen and move iton top of the stove with the correct pose. Among the three, the teapot task is considered the mostchallenging due to the random placement and the need for accurate location and rotation of thegripper. All 6tasks are set up in two different kitchens, as visualized in Figure 8. The keyframesused in real robot tasks are given in Figure 9.C Implementation DetailsVoxel encoder. We use a lightweight 3D UNet (only 0.3M parameters) to encode the input voxel1003×10(RGB features, coordinates, indices, and occupancy) into our deep 3D volumetric rep-resentation of size 1003×128. Due to the cluttered output from directly printing the network, we14(a) Kitchen 1. (b) Kitchen 2.Figure 8: Kitchens. We give a closer view of our two kitchens for real robot experiments. Thefigures are captured in almost the same position to display the size difference between the two.timeTurnFaucetOpenOvenRelocateTeapotFigure 9: Keyframes for real robot tasks. We give the keyframes used in our 3real robot tasksacross 2kitchens.15provide the PyTorch-Style pseudo-code for the forward process as follows. For each block, we usea cascading of one Convolutional Layer, one BatchNorm Layer, and one LeakyReLU layer, whichis common practice in the vision community.def forward(self, x):conv0 = self.conv0(x) # 100^3x8conv2 = self.conv2(self.conv1(conv0)) # 50^3x16conv4 = self.conv4(self.conv3(conv2)) # 25^3x32x = self.conv6(self.conv5(conv4)) # 13^3x64x = conv4 + self.conv7(x) # 25^3x32x = conv2 + self.conv9(x) # 50^3x16x = self.conv_out(conv0 + self.conv11(x)) # 100^3x128return xGeneralizable Neural Field (GNF). The overall network architecture of our GNF is close to theoriginal NeRF [29] implementation. We use the same positional encoding as NeRF and the encodingfunction is formallyγ(p) =sin20πp,cos20πp,···,sin2L−1πp,cos2L−1πp. 
(5)This function is applied to each of the three coordinate values and we set L= 6in our experiments.The overall position encoding is then 36-dimensional. The input of GNF is thus a concatenation ofthe original coordinates ( R3), the position encoding ( R36), the view directions ( R3), and the voxelfeature ( R128), totaling 170dimensions. Our GNF mainly consists of 5ResnetFCBlocks , in whicha skip connection is used. The input feature is first projected to 512with a linear layer and fedinto these blocks, and then projected to the output dimension 516(RGB, density, and Diffusionfeature) with a cascading of one ReLU function and one linear layer. We provide the PyTorch-Stylepseudo-code for the networks as follows.GNF(Linear(in_features=170, out_features=512, bias=True),(0-4): 5 x ResnetFCBlocks((fc_0): Linear(in_features=512, out_features=512, bias=True)(fc_1): Linear(in_features=512, out_features=512, bias=True)(activation): ReLU()),ReLU(),Linear(in_features=512, out_features=516, bias=True))Percevier Transformer. Our usage of Percevier Transformer is close to PerAct [3]. We use 6attention blocks to process the sequence from multi-modalities (3D volume, language token, androbot proprioception) and output a sequence also. The usage of Perceiver Transformer enablesus to process the long sequence with computational efficiency, by only utilizing a small set oflatents to attend the input. The output sequence is then reshaped back to a voxel to predict therobot action. The Q-function for translation is predicted by a 3D convolutional layer, and for theprediction of openness, collision avoidance, and rotation, we use global max pooling and spatialsoftmax operation to aggregate 3D volume features and project the resulting feature to the outputdimension with a multi-layer perception. We could clarify that the design for the policy module isnot our contribution; for more details please refer to PerAct [3] and its official implementation onhttps://github.com/peract/peract .D Demonstration Collection for Real Robot TasksFor the collection of real robot demonstrations, we utilize the HTC VIVE controller and bases-tation to track the 6-DOF poses of human hand movements. We then use triad-openvr package(https://github.com/TriadSemi/triad_openvr ) to employ SteamVR and accurately maphuman operations onto the xArm robot, enabling it to interact with objects in the real kitchen.16We record the real-time pose of xArm and 640×480RGB-D observations with the pyrealsense2(https://pypi.org/project/pyrealsense2/ ). Though the image size is different from oursimulation setup, we use the same shape of the input voxel, thus ensuring the same algorithm isused across the simulation and the real world. The downscaled images ( 80×60) are used for neuralrendering.E Detailed DataBesides reporting the final success rates in our main paper, we give the success rates for the best sin-gle checkpoint ( i.e., evaluating all saved checkpoints and selecting the one with the highest successrates), as shown in Table 7. Under this setting GNFactor outperforms PerAct with a larger margin.However, we do not use the best checkpoint in the main results for fairness.We also give the detailed number of success in Table 8 for reference in addition to the success ratescomputed in Table 2.Table 7: Multi-task test results on RLBench. We report the success rates for the best single checkpoint forreference. 
We could observe GNFactor surpasses PerAct by a large margin.Method / Task close jar open drawer sweep to dustpan meat off grill turn tap AveragePerAct 22.7±5.0 62.7±13.2 0.0±0.0 46.7±14.7 36.0±9.8GNFactor 40.0±5.7 77.3±7.5 40.0±11.8 66.7±8.2 45.3±3.8Method / Task slide block put in drawer drag stick push buttons stack blocksPerAct 22.7±6.8 9.3±5.0 12.0±6.5 18.7±6.8 5.3±1.9 23.6GNFactor 18.7±10.5 10.7±12.4 73.3±13.6 20.0±3.3 8.0±0.0 40.0Table 8: Detailed data for generalization to novel tasks. We evaluate 20episodes, each across 3seeds, for the final checkpoint and report the number of successful trajectories here.Generalization PerAct GNFactor w/o. Diffusion GNFactordrag (D) 2,0,2 15 ,2,5 18 ,5,5slide (L) 6,6,8 1 ,10,10 6 ,5,4slide (S) 0,2,1 6 ,1,5 0 ,3,1push (D) 6,3,3 4 ,4,5 7 ,6,6open (N) 6,2,7 5 ,2,9 8 ,5,6turn (N) 4,5,2 2 ,7,2 6 ,6,5F Stronger BaselineTo make the comparison between our GNFactor and PerAct fairer, we enhance Peract’s input byusing 4 camera views, as visualized in Figure 10. These views ensure that the scene is fully covered.It is observed in our experiment results (Table 1) that GNFactor which takes the single view as inputstill outperforms PerAct with more views.G HyperparametersWe give the hyperparameters used in GNFactor as shown in Table 9. For the GNF training, we usea ray batch size bray= 512 , corresponding to 512pixels to reconstruct, and use λfeat= 0.01andλrecon= 0.01to maintain major focus on the action prediction. For real-world experiment, we setthe weight of the reconstruction loss to 1.0 and the weight of action loss to 0.1. This choice wasbased on our observation that reducing the weight of the action loss and increasing the weight ofthe reconstruction loss did not significantly affect convergence but did help prevent overfitting to17FrontWristLeftRightFigure 10: Visualization of 4 cameras used for the stronger PerAct baseline. To enhance thePerAct baseline, we add more views as the input of PerAct. These views are pre-defined in RLBench,making sure the observation covers the entire scene.a limited number of real-world demonstrations. We uniformly sample 64points along the ray forthe “coarse” network and sample 32points with depth-guided sampling and 32points with uniformsampling for the “fine” network.Table 9: Hyperparameters used in GNFactor.Variable Name Valuetraining iteration 100kimage size 128×128×3input voxel size 100×100×100batch size 2optimizer LAMB [56]learning rate 0.0005ray batch size bray 512weight for reconstruction loss λrecon 0.01weight for embedding loss λfeat 0.01number of transformer blocks 6number of sampled points for GNF 64number of latents in Perceiver Transformer 2048dimension of Stable Diffusion features 512dimension of CLIP language features 512hidden dimension of NeRF blocks 51218 |
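As a supplement to the hyperparameters in Table 9 and the GNF architecture described in Appendix C, the following is a minimal PyTorch-style sketch of the positional encoding in Eq. (5) and of how the 170-dimensional GNF input could be assembled. This is an illustrative approximation rather than the released implementation; the function and tensor names are assumptions, and the exact interleaving order of the sine and cosine terms may differ from the official code.

import math
import torch

def positional_encoding(p: torch.Tensor, L: int = 6) -> torch.Tensor:
    # NeRF-style encoding of Eq. (5), applied to each of the 3 coordinates.
    # p: (..., 3) xyz coordinates -> (..., 3 * 2 * L) = (..., 36) for L = 6.
    freqs = (2.0 ** torch.arange(L, dtype=p.dtype)) * math.pi        # 2^0 * pi, ..., 2^(L-1) * pi
    scaled = p.unsqueeze(-1) * freqs                                 # (..., 3, L)
    enc = torch.cat([torch.sin(scaled), torch.cos(scaled)], dim=-1)  # (..., 3, 2L)
    return enc.flatten(start_dim=-2)                                 # (..., 36)

def build_gnf_input(xyz, view_dir, voxel_feat):
    # Concatenate raw coords (3) + encoding (36) + view direction (3) + voxel feature (128) = 170.
    return torch.cat([xyz, positional_encoding(xyz), view_dir, voxel_feat], dim=-1)

# Toy usage with 512 sampled ray points (matching the ray batch size b_ray in Table 9).
xyz = torch.rand(512, 3)
view_dir = torch.rand(512, 3)
voxel_feat = torch.rand(512, 128)
assert build_gnf_input(xyz, view_dir, voxel_feat).shape == (512, 170)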
bIvIUNH9VQ | Hijacking Robot Teams Through AdversarialCommunicationZixuan Wu Sean Ye Byeolyi Han Matthew GombolayGeorgia Institute of Technology, Atlanta, GA, USA{zwu380, seancye, bhan67, mgombolay3 }@gatech.eduAbstract:Communication is often necessary for robot teams to collaborate and completea decentralized task. Multi-agent reinforcement learning (MARL) systems allowagents to learn how to collaborate and communicate to complete a task. Thesedomains are ubiquitous and include safety-critical domains such as wildfire fight-ing, traffic control, or search and rescue missions. However, critical vulnerabil-ities may arise in communication systems as jamming the signals can interruptthe robot team. This work presents a framework for applying black-box adver-sarial attacks to learned MARL policies by manipulating only the communicationsignals between agents. Our system only requires observations of MARL poli-cies after training is complete, as this is more realistic than attacking the trainingprocess. To this end, we imitate a learned policy of the targeted agents withoutdirect interaction with the environment or ground truth rewards. Instead, we inferthe rewards by only observing the behavior of the targeted agents. Our frame-work reduces reward by 201% compared to an equivalent baseline method andalso shows favorable results when deployed in real swarm robots. Our novel at-tack methodology within MARL systems contributes to the field by enhancing ourunderstanding on the reliability of multi-agent systems.Keywords: Adversarial Attacks, Multi-Agent Reinforcement Learning1 IntroductionEffective communication among robots is essential for information exchange, collaboration, andcollective decision-making. It plays a vital role in various robotics domains, including collaborativemanipulation [1] and multi-robot navigation [2]. Ensuring reliable and secure communication iscrucial for maintaining the overall system’s performance, safety, and integrity.Multi-agent reinforcement learning (MARL) has been a powerful tool to train agents in complexdomains but the literature is lacking in studies about the vulnerabilities and defenses of these sys-tems. MARL techniques that utilize communication have been widely applied to scenarios wheremultiple agents need to collaborate for a shared goal in robotics tasks such as autonomous driving[3, 4, 5] and path planning [2, 6, 7]. Researchers examined various aspects of communication inMARL, including when to communicate [8], who to communicate with [9], and different types ofgraph-structured communication [10, 11]. Some of these frameworks use binarized communicationto pursue Low-Size, -Weight, and -Power (Low-SWAP) systems [11] which causes agents to com-municate in a highly efficient manner. 
Malicious actors can compromise these systems and endangerthe lives of many [12].There are only a few works that assume the communication channel can be imperfect [13] or canbe attacked [14] paralleling with its computer vision counterpart [15, 16, 17, 18, 19], which makesit crucial to understand these weaknesses such that we can create appropriate defense mechanisms.In this work, we learn to attack the communication signals within a multi-robot team discretely7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Adversarial Communication Pipeline: Multi-agent team (left) communicates informationfor decentralized coordination; an adversarial system (middle) learns a model of the teams’ activitiesand communication patterns and (right) broadcasts counterfeit team messages to trick team memberstowards pursuing low-priority activities.without any trace of the training process on the target robots. First, we learn surrogate policies fromthe observation, messages, and actions of the target robots which are accessible from malware orinsecure networks [20, 21, 22, 23, 24]. Second, we estimate the agent rewards from their behaviorsinstead of using rewards from the environment. Finally, we use an actor-critic framework completelyoffline to learn how to hijack the targeted system without environment interactions. Our methodrequires the least prior knowledge as compared to prior work and results in robots traversing to thewrong location and drastically hindering team performance.Contributions:1. We propose an actor-critic framework that enables our adversarial policy to learn withoutdirect interaction with the target agents or the environment nor the ground truth agent re-ward. Our framework manipulates the behaviors of target robots with the communicationattacking strategy learned through surrogate target policies and transferable to real ones.Additionally, we introduce a differentiable framework for training adversarial communica-tion policies that can modify digital communication signals [10, 11].2. We demonstrate the effectiveness of our algorithm in three distinct multi-agent domains:predator-capture-prey, partially observable predator-prey, and speaker-listener. Acrossthese domains, our method surpasses the baseline approach by reducing the reward of thetarget agents by 465% compared to a baseline approach.3. We validate the applicability of our algorithm on physical swarm robots in the Robotarium[25]. By acting as a strong adversary, our method reduces the reward achieved by thetarget agents by an average of 201% across all three environments compared to a baselinestrategy, which employs an equivalent random flipping approach.2 Related WorkIn this section, we describe how multi-agent reinforcement learning can be used to control robots in aDec-POMDP setting. We also provide an overview of the role of communication in a MARL frame-work and highlight the vulnerability of communication to adversarial attacks, leading to potentialsystem failures and safety risks.2.1 Multi-Agent Reinforcement LearningIn recent years, communication has played a crucial role in enhancing coordination and collaborationamong robots in multi-agent reinforcement learning (MARL) frameworks [8, 9, 10, 26, 11, 27, 28].Compared to previous works in learning policies [29, 30], recent MARL frameworks have enabledlearning in more complex environments and train multiple agents. 
These MARL communicationframeworks have been used in several robotics applications such as multi-robot path planning [2]and cooperative driving [31]. Various approaches have been proposed, including trainable differen-tiable communication channels [26, 27], partially observable environments [28], and soft-attentionnetworks for selective communication [8, 9]. However, the effectiveness of communication is threat-ened by adversarial attacks, which can lead to system failures and safety risks, particularly in do-mains like self-driving vehicles [32]. This paper aims to evaluate the robustness of communication2in MARL systems, building upon prior advancements in communication techniques of the binarizedcommunication approach [11] that improves bandwidth efficiency. Key to note is that our adversar-ial policy does not make any assumptions on targets other than inter-agent binarized communication.Our adversarial attack can hypothetically be utilized in any framework as long as we can learn goodsurrogate policies of target agents. We aim to show the generality of our approach in future work.2.2 Adversarial Attacks in MARL and CommunicationAdversarial attacks were first studied within the context of computer vision, where small perturba-tions to the input could induce faulty outputs [18]. Adversarial attacks aim to deteriorate modelperformance in tasks like classification [16, 17, 19], segmentation [33, 34, 35], or object detec-tion [36, 37]. These ideas were later extended to reinforcement learning [38, 39], altering agentactions through perturbations in environment observations [40].In adversarial attacks, two common categories are white-box attacks, which assume knowledge ofneural network weights, and black-box attacks, which assume limited model parameter information.White-box attacks typically optimize objectives using methods such as Fast Gradient Sign Method(FGSM) [16, 17] or Projected Gradient Descent (PGD) [19]. Meanwhile, black-box attacks rely onsurrogate models that approximate decision boundaries such that the adversarial attacking targetingon it could be transferred to the original models. Input-output pairs in black-box scenarios canalso be augmented by FGSM [18] or PGD [19] to generate a synthetic dataset that induces similardecision boundaries. We utilize black box attacks with augmentation similar to FGSM for first-orderapproximation of our surrogate models.Prior work [41] has trained an adversarial communication protocol in a multi-agent setting by usinga reinforcement learning agent to optimize the adversarial policy. However, this approach requiresdirect interaction with the environment and is impractical as the training process would be easilydetected by observers of the system. To address this limitation, we propose a black box setting andemploy transfer attacks [42], training a surrogate model to mimic the target model. Furthermore,our work distinguishes itself by assuming a “man-in-the-middle” attack, where an interceptor subtlyflips the aggregated binarized communication vectors. This is in contrast to previous assumptions ofa single target victim agent with a limited number ( ≤N−12) of potentially malicious messages [14].3 Problem FormulationWe ground our problem in a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) formalism which is a 10-tuple of ⟨S, M, A, P, R, Υ, O,Π, N, γ⟩.Sis the state set of theenvironment and Mis the message state set. For each agent i∈N:= 1, ..., N , the agent chooses anaction ai∈Aat state si∈S. 
The transition function is denoted by P(S′|S, A). Each agent has itsown reward based on global state-actions ri=Ri(S, A)andγ∈[0,1)as the discount factor. Sincethe environment is partially observable, each agent also has its own individual partial observationvi∈Υwhich is produced by the observation function Υ =O(S). Agents can have two policies: anaction policy πai(ai|τi)and a message policy πci(mouti|τi). These are both conditioned on their ownpartial observations and the messages received from other communicative agents τi={vi, M\mi}.Figure 2: Adversarial Policy at Test-Time: Our adversarial attacking policy πadvchanges the mes-sages from the message policy πcsuch that the receiving agents are maximally disrupted.3We model a single adversarial policy πadv(δ|Υ, Mout)that has access to the partial observationsΥ, message outputs Moutof all agents and produces adversarial signal δ. The partial observationscould be gathered by the adversary by shadowing the relevant targeted agents. We generate a mali-cious revised communication, denoted as Madv, by combining the perturbation δwith the originalmessages Moutsent to each agent, formulated as Madv=δ⊙Mout. Figure 2 shows how theadversarial policy πadvinfluences the target policies. Here we consider the transition and policiesas deterministic ones: S′=P(S, A),ai=πai(τi),mouti=πci(τi)andδ=πadv(Υ, Mout).4 MethodologyIn this section, we define the design choices for training our adversarial communication policy. Thegoal of the policy is to minimize the reward of the victim agents while minimizing the differencebetween the original and tampered communication vector. Algorithm 1 provides an overview of ouroverall training procedure: 1) Learning a surrogate policy, 2) Learning an actor-critic for the ad-versarial policy, and 3) Updating the actor with differentiable binary communication. We presumeaccess to the observations, messages, and actions of the target agents, assuming that we have suc-cessfully intercepted the communication protocol of the multi-agent system. However, we do notassume access to ground truth rewards. We also assume a binary communication channel of 16-bitsbut further studies could extend our work to remove this assumption.Algorithm 1: Adversarial Communication PseudocodeInput: D(Υ(Observations ), M(Messages ), A(Actions ))fori= 0, 1, 2 ... doforagent j=1 to N doSample batch of observation, message, actions: (υj, υ′j)∼Υ,(mj, m′j)∼M, a j∼AUpdate the surrogate policy πsurrj(aj|υj, mj)endCompute targetsRadv=−1NXjlog(πsurrj(aj|υj, mj)) (1)y=Radv+γQφ,trgt (Υ′, A′)|A′=Πatrgt,surr (τ′i)(2)Update the adversarial critic with one step of gradient descentφ←φ− ∇ φ(Qφ(Υ, A)−y)2) (3)Update the adversarial policy with one step of gradient descentθ←θ+∇θ(Qφ(Υ, Aadv)− Cflip)|Aadv=Πasurr(υj,madvj),Madv=δ⊙Mout,δ=πadvθ(Υ,Mout)(4)endFirst, we train surrogate policies to imitate the real policies (Section 4.1). We can then learn a Q-function for the adversarial policy by assigning rewards based upon the surrogate policies (Eq. 1)and using the Bellman equation to update the critic (Eq. 2, 3). Details are described in Section 4.3.Finally, the adversarial policy is updated with the differentiable binary flipping mechanism (Eq. 4)and described in Section 4.2. We include hyperparameters in Appendix B.2 for more details.4.1 Learning a Surrogate PolicyAs our method is a black-box attack, we assume we do not have access to the ground truth policiesof the agents we are attacking. 
To this end, we learn a surrogate policy ( πsurr)for each agentwe are attacking, where we assume access to a dataset D(Υ, M, A )consisting of observations,4communication, action pairs from victim agents. The surrogate policies are used in two ways asdescribed in the next section: 1) as a reward signal to learn a critic and 2) as a mechanism tosimulate the adversarial output into real agent actions. To obtain a similar first-order approximationof our surrogate policies to the real policies, we augment our dataset with neighboring data byadding small Gaussian noise to each input to produce augmented input-output pairs. We find that thisaugmentation is enough to obtain a useful surrogate policy and thus there is no need to achieve higherprecision using much more complicated methods like FSGM augmenting [18]. We use behavioralcloning methods to minimize the log-likelihood between actions given the agent observations andmessages from other agents (log p(a|υ, m)).4.2 Differentiable Targeting of Binary Communication ChannelsIn binary communication, each bit has two states: 0 and 1. Modifying a bit involves flipping itto the other state. To make this process differentiable, we need to parameterize it similarly to itscontinuous counterpart. We define the adversarial modified message Madvas the composition ofthe original communication vector Moutand the parameterized modification δ[41].Two methods are available to parameterize the adversarial policy. One method is to directly outputthe adversarially revised communication vector as Madv=δ=πadv(Mout, υ), which we call the“direct” form. The second method is called “flipping” whose adversarial policy output δis used toindicate which digit to flip such that we can write it in an XOR form using boolean algebra as inEquation 5 where ·means pointwise multiplication.Madv=δ·Mout+δ·Mout=δ·(1−Mout) + (1 −δ)·Mout(5)4.3 Learning an Actor-Critic for Adversarial CommunicationOur goal for the adversarial policy is to secretly train itself without any interaction with the en-vironment in a black-box setting such that the adversarial training process does not induce anyabnormalities and cannot be detected. This requirement raises a higher standard than recent work[41], where the adversarial policy is trained with reinforcement learning and requires environmentinteractions of the adversarial policy’s actions. We adapt the actor-critic framework to learn 1) acritic ( Qφ(Υ, A)) that is learned within the observation-action (Υ, A)space of the target agents and2) an actor policy that distorts the communication messages. We use the actor-critic framework as itallows us to utilize the original Dec-POMDP of the target agents for the critic rather than buildinga new separate MDP within the space of the observations and messages as actions. We train ourQ-function Qφ(Υ, A)to use the observation-actions of all agents in the environment based on theBellman equation and TD error as shown in Equation 6.L(φ) =EΥ,A,Radv,Υ′[(Qφ(Υ, A)−y)2], y=Radv+γQφ,targ (Υ′, A′)|A′=Πatrgt,surr (τ′i).(6)A question naturally arises: how do we get the reward Radvto train this Q-function? Because we donot assume access to ground-truth rewards as previous literature does, we cannot utilize the negativemean of all agent reward Radv=−1NPNi=1rito optimally degrade performance on their ownmetrics. Instead, we assign the reward for a certain state-action pair of all agents to be the inverse ofthe log probability of the optimal action from the surrogate policy (Eq. 
7).Radv=−1NXjlog(πsurrj(aj|υj, mj)) (7)Intuitively, we are driving the critic to punish state-action pairs visited by the target policies. Anotherinterpretation is that the probability of the samples’ appearance is proportional to the exponential ofthe reward in inverse reinforcement learning (IRL) theory [43, 44].Given a well-trained critic, we can learn the adversarial policy πadv(δ|Υ, Mout)to modify the com-munication messages to maximize the Q-function minus a bit flipping penalty Cflip=L1(δ). Weutilize the surrogate policies again to produce hypothetical actions given the modified communica-tion vectors.δ=πadv(Υ, Mout), Madv=δ·(1−Mout) + (1 −δ)·Mout, Aadv= Πasurr(υj, madvj)(8)5The adversarial policy can then be updated through automatic differentiation to produce messagesthat disturb the surrogate policies. With our method, we can train the adversarial communicationpolicy completely offline with the only assumption being that we have intercepted some observation,message, and action pairs from the target policies.5 Results and Discussion5.1 EnvironmentsWe utilize three domains originally proposed by the MADDPG [28] paper and modified for our use:Predator-Capture-Prey, Partially Observable Predator-Prey, and Speaker-Listener. In all environ-ments, the goal is to maximize the number of collisions between the agents and the target. Furtherdetails of these environments are included in Appendix B.1.Predator-Capture-Prey (PCP) In this environment, a team of agents must capture an adversaryprey opponent. To emphasize the role of communication in this domain, capture agents cannot seeany other agents and have to make decisions based on the received messages from all observingagents.Partially Observable Predator-Prey (PO-PP) We modify the predator-prey environment such thatall predator agents can only receive the location of the prey when they are within a distance dof theprey. The agents must communicate with each other to locate the position of to the prey. We removethe capture agents from this environment.Speaker-Listener (SL) In speaker-listener, a team of two agents, consisting of a speaker and alistener must work together for the listener to reach a target color destination. The speaker mustcommunicate the target color to the listener and the listener must then go to the color destination.5.2 Adversarial Communication ValidationIn this section, we validate our secret adversarial communication channel performance by com-paring it with a random flipping method where we flip the same number of bits as the adversarialcommunication. We ensure that these two methods are always flipping the same number of bits inthe communication and compare the results across various numbers of bits flipped. The randomflipping baseline has been used in prior work [14] and represents a non-adaptive black-box attacker.Therefore, we modify the adversarial policy loss function to L=−Q(v, a, m ) +α·1/Nc·L1(δ),which regularizes the policy loss by the average sum of bits flipped. The coefficient αis used tobalance the adversarial policy performance with the number of bits flipped and we control the regu-larization speed by annealing α=α0·max( ne−β,0)/ε(where neis the training episode number,βis the regularizer intercept, and εis the regularizer slope) to fine-tune the bits flipped. 
We evaluatethe reward and collision statistics by running 50 episodes for each adversarial policy checkpoint.Training curves with the regularized loss and hyperparameteres can be found in Appendix B.2.Figure 3 shows the rewards and collisions versus the episodes from which we get our adversarialpolicy in the three environments. Agents’ reward and collisions from the adversarial communicationincrease as the number of bits flipped decreases but are always less than those from random flipping.This result validates the adversary property of our method and distinguishes it from random noisewith the same magnitude. We also find that the gap between the two methods is lower when thenumber of bits flipped is lower, which is reasonable since it is difficult for adversarial policy toattack the critical combinations of the communication digits in such a limitation. Interestingly,the random flipping curve nearly remains horizontal in the PO-PP environment, which means thatrandom attack does not work at all regardless of the number of bits flipped. This is because thepredators do not only rely on communication but also on their own observations to take actions,therefore, irregularly changing the communication will not confuse agents much, compared to theadversarial policy which flips crucial bits and guides the agents to low-reward regions.6(a) Average Rewards in PCP, PO-PP and SL respectively(b) Average Collisions in PCP, PO-PP and SL respectivelyFigure 3: The figure illustrates the comparison between the adversarial policy (blue) and randomflipping (red) in terms of average reward and number of collisions. Consistently, the adversarialcommunication policy degrades team performance more effectively than the random policy acrossall bit flip counts, leading to lower rewards and a lower number of collisions for agents.5.3 Comparison of Adversarial Message ParameterizationIn this section, we compare our adversarial policy, between the “flipping” regularization strategy(as described in Section 4.2), and the “direct” strategy in terms of the normalized adversarial policyperformance score ( Sc) which denotes how much worse an agent performs per flipping a singledigit (Appendix B.3). The training and testing settings are the same as section 5.2 and we record themaximum normalized score of the flipping method (ours) to the direct method (Table 1).Table 1: Reward and Collision Normalized ScoresPCP PO-PP SLReward Sc Collision Sc Reward Sc Collision Sc Reward Sc Collision ScAdv[Ours] 0.11 ±0.05 4.58 ±2.23 0.04 ±0.02 1.45 ±0.55 0.13 ±0.06 31.38 ±6.51Adv[Direct] 0.01±0.01 1.07 ±0.14 0.02 ±0.01 0.80 ±0.07 0.11 ±0.05 30.72 ±6.19Random 0.01±0.01 0.96 ±0.29 0.00 ±0.01 0.30 ±0.88 0.04 ±0.02 13.68 ±3.19In the PCP and PO-PP environments, the direct strategy exhibits significantly worse reward andcollision scores, performing at 90.9% and 76.64% lower, respectively, compared to our approach.This discrepancy arises from the failure of the direct strategy to effectively balance lowering agentperformance and reducing flipped bits. In these environments, the direct strategy results in 38.7%and 65% bits flipped, whereas our approach achieves 7.5% and 28.3% of bits flipped. Interest-ingly, in the SL environment, where there is a single message sender, one receiver, and 16 bits ofcommunication, the direct strategy performs relatively better due to the simpler balancing of perfor-mance and regularization terms. 
Nevertheless, our approach still outperforms the direct strategy by18.18% and 2.15% in terms of reward and collision, with 8.15% and 8.29% of bits flipped, respec-tively. These findings highlight the critical importance of the flipping representation in facilitatingbackward gradient computation.5.4 Robotarium Physical Robot DemonstrationWe demonstrate our results on a physical swarm robotics system (details in Appendix A). We utilizestate-based position control to drive each robot according to their policies both with the adversarial7communication intervention and random flipping intervention. We show that our adversarial com-munication policy drastically reduces reward in all three environment settings (Table 2) with thesame number of bits flipped per episode and averaged over three episodes.Table 2: Attacked Agents Reward and CollisionPCP PO-PP SLReward Collision Reward Collision Reward CollisionAdv[Ours] -0.71 ±0.34 0.05 ±0.23 -0.71 ±0.34 0.05 ±0.23 -0.43 ±0.14 0.00 ±0.00Random -0.18±0.33 0.29 ±0.45 0.02 ±0.56 0.20 ±0.40 -0.21 ±0.15 0.34 ±0.47Figure 4: PCP Demonstration: Adversarial Communication (top) vs Random Flipping (bottom)6 Limitations and Future WorkOur work relies on the assumption that the surrogate policies can accurately model the behaviorsof the true policies and depends on using the agent policies to estimate the ground-truth rewardfunction. A limitation of this approach is the amount of training data required to imitate the surrogatepolicy, where the target agents may detect that information is being collected. Additionally, IRLmethods [44] could be used to learn a reward function that better reflects the ground truth reward.Our work has important ethical implications as it could be used to attack important systems but canimprove the community’s ability to help improve the robustness of these systems by characterizingthe vulnerabilities. This work does not assume any strategies for the target agents to defend againstadversarial attacks. A defender may include a parity bit indicating whether the total number of1-bits is even or odd to defend against our attacks on the communication. Additionally, defenderscould learn a response strategy by adjusting its communication scheme if they had access to previousexperiences with the attackers. Finally, if the adversarial attack is conducted on a multi-agent systemwith interpretable communication vectors, it may be easy to identify that messages have been altered.We leave adversarial attacks in this setting for future work.7 ConclusionWe introduce a practical adversarial communication policy that does not need direct environment in-teraction, enhancing the feasibility of adversarial attacks. Our method also utilizes a robust approachfor estimating agent rewards from observing behaviors only, without reward information. Lastly, wepioneer a differentiable method for adversarial communication in discrete binary channels, flippingbits for improved attack efficacy. Our algorithm is validated on real swarm robots in the Robotariumplatform. This showcases the versatility and real-world applicability of our approach. Our frame-work opens new avenues for enhanced security and robustness in multi-agent systems, with potentialimplications across various domains.8AcknowledgmentsWe wish to thank our reviewers for their valuable feedback in revising our manuscript. We alsothank Letian Chen for his expertise in inverse reinforcement learning, which inspired many of ourideas. 
Additionally, both Rohan Paleja and Esi Seraj provided invaluable insights on multi-agentreinforcement learning. This work was sponsored by the Naval Research Laboratory under grantnumber N00173-21-1-G009.References[1] L. E. Parker, D. Rus, and G. S. Sukhatme. Multiple mobile robot systems. Springer handbookof robotics , pages 1335–1384, 2016.[2] Q. Li, W. Lin, Z. Liu, and A. Prorok. Message-aware graph attention networks for large-scalemulti-robot path planning. IEEE Robotics and Automation Letters , 6(3):5533–5540, 2021.doi:10.1109/LRA.2021.3077863.[3] Q. Jiang, M. Qin, S. Shi, W. Sun, and B. Zheng. Multi-agent reinforcement learning for trafficsignal control through universal communication method. In L. D. Raedt, editor, Proceedings ofthe Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna,Austria, 23-29 July 2022 , pages 3854–3860. ijcai.org, 2022. doi:10.24963/ijcai.2022/535.[4] E. Van der Pol and F. A. Oliehoek. Coordinated deep reinforcement learners for traffic lightcontrol. Proceedings of learning, inference and control of multi-agent systems (at NIPS 2016) ,8:21–38, 2016.[5] B. Liu and Z. Ding. A distributed deep reinforcement learning method for traffic light control.Neurocomputing , 490:390–399, 2022. ISSN 0925-2312. doi:https://doi.org/10.1016/j.neucom.2021.11.106.[6] H. Qie, D. Shi, T. Shen, X. Xu, Y . Li, and L. Wang. Joint optimization of multi-uav targetassignment and path planning based on multi-agent reinforcement learning. IEEE access , 7:146264–146272, 2019.[7] M. Zolfpour-Arokhlo, A. Selamat, S. Z. Mohd Hashim, and H. Afkhami. Modeling of routeplanning system based on q value-based dynamic programming with multi-agent reinforce-ment learning algorithms. Engineering Applications of Artificial Intelligence , 29:163–177,2014. ISSN 0952-1976. doi:https://doi.org/10.1016/j.engappai.2014.01.001.[8] A. Singh, T. Jain, and S. Sukhbaatar. Learning when to communicate at scale in multiagentcooperative and competitive tasks. In Proceedings of International Conference on LearningRepresentations , 2019.[9] A. Das, T. Gervet, J. Romoff, D. Batra, D. Parikh, M. Rabbat, and J. Pineau. TarMAC: Targetedmulti-agent communication. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the36th International Conference on Machine Learning , volume 97 of Proceedings of MachineLearning Research , pages 1538–1546. PMLR, 09–15 Jun 2019.[10] Y . Niu, R. Paleja, and M. Gombolay. Multi-agent graph-attention communication and teaming.InProceedings of the 20th International Conference on Autonomous Agents and MultiAgentSystems , pages 964–973, 2021.[11] E. Seraj, Z. Wang, R. Paleja, D. Martin, M. Sklar, A. Patel, and M. Gombolay. Learningefficient diverse communication for cooperative heterogeneous teaming. In Proceedings ofthe 21st international conference on autonomous agents and multiagent systems , pages 1173–1182, 2022.[12] S. Strunsky. N.j. man fined $32k for illegal gps device that disrupted newark airport system,2013.9[13] S. G. Konan, E. Seraj, and M. Gombolay. Iterated reasoning with mutual information in coop-erative and byzantine decentralized teaming. In Proceedings of International Conference onLearning Representations , 2022.[14] Y . Sun, R. Zheng, P. Hassanzadeh, Y . Liang, S. Feizi, S. Ganesh, and F. Huang. Certifiablyrobust policy learning against adversarial multi-agent communication. In The Eleventh Inter-national Conference on Learning Representations , 2023.[15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. 
Ozair, A. Courville,and Y . Bengio. Generative adversarial networks. Communications of the ACM , 63(11):139–144, 2020.[16] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus.Intriguing properties of neural networks. In Y . Bengio and Y . LeCun, editors, 2nd InternationalConference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014,Conference Track Proceedings , 2014.[17] I. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. InProceedings of International Conference on Learning Representations , 2015.[18] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference oncomputer and communications security , pages 506–519, 2017.[19] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning mod-els resistant to adversarial attacks. In Proceedings of International Conference on LearningRepresentations , 2018.[20] S. Ornes. How to hack a self-driving car. Physics World , 33(8):37, 2020.[21] S. Jafarnejad, L. Codeca, W. Bronzi, R. Frank, and T. Engel. A car hacking experiment: Whenconnectivity meets vulnerability. In 2015 IEEE globecom workshops (GC Wkshps) , pages 1–6.IEEE, 2015.[22] C. Miller. Lessons learned from hacking a car. IEEE Design & Test , 36(6):7–9, 2019. doi:10.1109/MDAT.2018.2863106.[23] T. Roccia. Today’s connected cars vulnerable to hacking, malware, 2018.[24] J. Pisarov and G. Mester. The future of autonomous vehicles. FME Transactions , 49(1):29–35,2021.[25] S. Wilson, P. Glotfelter, L. Wang, S. Mayya, G. Notomista, M. Mote, and M. Egerstedt.The robotarium: Globally impactful opportunities, challenges, and lessons learned in remote-access, distributed control of multirobot systems. IEEE Control Systems Magazine , 40(1):26–44, 2020. doi:10.1109/MCS.2019.2949973.[26] J. Foerster, I. A. Assael, N. De Freitas, and S. Whiteson. Learning to communicate with deepmulti-agent reinforcement learning. Advances in Neural Information Processing Systems , 29,2016.[27] S. Sukhbaatar, A. Szlam, and R. Fergus. Learning multiagent communication with backpropa-gation. In Proceedings of the 30th International Conference on Neural Information ProcessingSystems , NIPS’16, page 2252–2260, Red Hook, NY , USA, 2016. Curran Associates Inc. ISBN9781510838819.[28] R. Lowe, Y . I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in neural information pro-cessing systems , 30, 2017.10[29] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observ-able stochastic domains. Artificial Intelligence , 101(1):99–134, 1998. ISSN 0004-3702.[30] C. Amato, D. S. Bernstein, and S. Zilberstein. Solving pomdps using quadratically constrainedlinear programs. In Proceedings of the Fifth International Joint Conference on AutonomousAgents and Multiagent Systems , AAMAS ’06, page 341–343, New York, NY , USA, 2006.Association for Computing Machinery. ISBN 1595933034. doi:10.1145/1160633.1160694.[31] N. Hyldmar, Y . He, and A. Prorok. A fleet of miniature cars for experiments in cooperativedriving. In 2019 International Conference on Robotics and Automation (ICRA) , pages 3238–3244, 2019. doi:10.1109/ICRA.2019.8794445.[32] A. Chowdhury, G. Karmakar, J. Kamruzzaman, A. Jolfaei, and R. Das. Attacks on self-drivingcars and their countermeasures: A survey. 
IEEE Access , 8:207308–207342, 2020.[33] A. Arnab, O. Miksik, and P. H. S. Torr. On the robustness of semantic segmentation models toadversarial attacks. In Proceedings of IEEE/CVF Conference on Computer Vision and PatternRecognition , 2018.[34] Z. Zhang, S. Huang, X. Liu, B. Zhang, and D. Dong. Adversarial attacks on yolact instancesegmentation. Computers & Security , 116:102682, 2022. ISSN 0167-4048.[35] J. Gu, H. Zhao, V . Tresp, and P. H. Torr. Segpgd: An effective and efficient adversarial attackfor evaluating and boosting segmentation robustness. In ECCV , pages 308–325. Springer,2022.[36] C. Xie, J. Wang, Z. Zhang, Y . Zhou, L. Xie, and A. Yuille. Adversarial examples for seman-tic segmentation and object detection. In 2017 IEEE International Conference on ComputerVision (ICCV) , pages 1378–1387, 2017. doi:10.1109/ICCV .2017.153.[37] M. Yin, S. Li, C. Song, M. S. Asif, A. K. Roy-Chowdhury, and S. V . Krishnamurthy. Adc:Adversarial attacks against object detection that evade context consistency checks. In 2022IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) , 2022.[38] T. Chen, J. Liu, Y . Xiang, W. Niu, E. Tong, and Z. Han. Adversarial attack and defense inreinforcement learning-from ai security view. Cybersecurity , 2:1–22, 2019.[39] H. Zhang, H. Chen, C. Xiao, B. Li, M. Liu, D. Boning, and C.-J. Hsieh. Robust deep rein-forcement learning against adversarial perturbations on state observations. Advances in NeuralInformation Processing Systems , 33:21024–21037, 2020.[40] R. Jiao, H. Liang, T. Sato, J. Shen, Q. A. Chen, and Q. Zhu. End-to-end uncertainty-basedmitigation of adversarial attacks to automated lane centering. In 2021 IEEE Intelligent VehiclesSymposium (IV) , pages 266–273, 2021. doi:10.1109/IV48863.2021.9575549.[41] W. Xue, W. Qiu, B. An, Z. Rabinovich, S. Obraztsova, and C. K. Yeo. Mis-spoke or mis-lead:Achieving robustness in multi-agent communicative reinforcement learning. In Proceedingsof the 21st International Conference on Autonomous Agents and Multiagent Systems , pages1418–1426, 2022.[42] A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, andF. Roli. Why do adversarial attacks transfer? explaining transferability of evasion and poi-soning attacks. In 28th USENIX Security Symposium (USENIX Security 19) , pages 321–338,Santa Clara, CA, Aug. 2019. USENIX Association. ISBN 978-1-939133-06-9.[43] B. D. Ziebart, A. L. Maas, J. A. Bagnell, A. K. Dey, et al. Maximum entropy inverse reinforce-ment learning. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence ,volume 8, pages 1433–1438, 2008.[44] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adverserial inverse reinforcementlearning. In International Conference on Learning Representations , 2018.11Appendix A Real-World Demonstrations: RobotariumWe use Robotarium [20], a free remotely accessible swarm robotics research platform, to do real-world demonstrations. It is equipped with a group of miniature differential drive robots ‘GRITSBots’on a testbed measuring 130×90×180 cm, with a projector and an automatic overhead tracking sys-tem. The GRITSBot’s main board has WiFienabled 160 MHz ESP8266 chip as the controller andcommunication (54 MBit/s WiFi) and the stepper motors droven by Atmega 168 microcontroller[20]. The global position is tracked using an overhead camera and then used down-stream for safetychecking and feedback control. 
Features of the robot environment are displayed by the projector forvisualization.First, we need to run our algorithm in the robotarium simulator before implementing it on the realplatform. However, physical collisions are strictly prohibited when using actual robots. To overcomethis limitation, we record the trajectories of each agent in the real environment and perform post-analysis to determine if there are any instances where two robots collide. This analysis is based onthe relative distance between the robots, following our predefined criteria. A collision between tworobots is defined as when the circles centered on each robot intersect. The radius of each circle isdefined according to the environment specifications [23]. The reward for the environments is definedas the L2 distance between the robot and its target destination. In PP and PCP, the target is the preyrobot. In SL, the target is the designated goal location. We show the trajectories we collected ineach environment Fig 5.We also include the average reward and collision numbers of each robot in Tables 3-8. It showsthat our adversarial method universally outperforms the random flipping one for each agent since itmakes the attacked agents receive less reward and has fewer collisions with their targets. Moreover,we find that our method is even more stable than the random flipping one, with standard deviationonly decreasing by 22.97%, 53.33%, and 40.89% on average in the three environments.Table 3: PCP RewardCapture Agent 1 Capture Agent 2 AverageAdv[Ours] -0.76±0.32 -0.66 ±0.36 -0.71±0.34Random -0.27±0.29 -0.090 ±0.36 -0.18±0.33Table 4: PCP CollisionsCapture Agent 1 Capture Agent 2 AverageAdv[Ours] 0.02±0.14 0.09 ±0.28 0.05±0.23Random 0.17±0.37 0.40 ±0.49 0.29±0.45Table 5: SL RewardListenerAdv[Ours] -0.43±0.14Random -0.21±0.15Table 6: SL CollisionsListenerAdv[Ours] 0.00±0.00Random 0.34±0.47Table 7: PO-PP RewardPO Agent 1 PO Agent 2 PO Agent 3 PO Agent 4 AverageAdv[Ours] -0.81±0.34 -0.80 ±0.27 -0.71 ±0.31 -0.80 ±0.30 -0.71±0.34Random 0.01±0.56 -0.06 ±0.54 -0.00 ±0.60 -0.03 ±0.54 0.02±0.56Table 8: PO-PP CollisionsPO Agent 1 PO Agent 2 PO Agent 3 PO Agent 4 AverageAdv[Ours] 0.00±0.06 0.00 ±0.00 0.00 ±0.00 0.00 ±0.05 0.05±0.23Random 0.26±0.44 0.06 ±0.33 0.33 ±0.47 0.15 ±0.35 0.20±0.4012(a) Predator Capture Prey(b) Partially Observable PP(c) Speaker ListenerFigure 5: Comparison of Environment Trajectories: All three environments are shown, where theleft images are the adversarial communication policy rollouts and the right images are the randomflipping rollouts.13(a) Predator-Capture-Prey Adversarial Communication(b) Predator-Capture-Prey Random Flipping(c) Partial Observability Predator-Prey Adversarial Communication(d) Partial Observability Predator-Prey Random Flipping(e) Speaker-Listener Adversarial Communication(f) Speaker-Listener Random FlippingFigure 6: These image series show the performance of agents when applying our adversarial com-munication and random flipping strategy in three environment: Predator-Capture-Prey (a, b), PartialObservability Predator-Prey (c, d) and Speaker-Listener (e, f).14Appendix B Simulation Experiment DetailsAppendix B.1 DomainsHere we show the hyperparameters used in each environment training (Table 9) and qualitativeresults (Figure 6).In the PCP environment (a, b), the predators (also called perception agents) are shown as red whichcan observe all other agents, however, the yellow capture agent (also called action agents) are blindand can only know where the prey (green) is by receiving the 
messages from the predators. Therefore, communication is the only useful information on which the capture agents can base their decisions. Each capture agent receives a 16-bit communication from each of the three predators, so we intercept 48 bits and modify them with our adversarial policy. Comparing Figure 6(a) and 6(b), we find that our adversarial policy successfully pushes the capture agents away from the prey, whereas random flipping cannot stop the capture agents from pursuing the prey with the same number of bits flipped.
We observe similar behaviors in PO-PP when comparing Figure 6(c) and 6(d), in which the predators and prey are shown in red and green. The difference between the PO-PP and PCP environments is that we remove the capture agents and make the predators partially observable agents that can only see the prey within a certain distance. Predators change color from red to grey when they observe the prey. If one predator observes the prey, it can broadcast this information to the others through its 16-bit communication so that the team can cooperate to achieve higher rewards. When we apply the adversarial policy (see Figure 6), we find that the predators simply ignore the prey even when they see it and never collaborate to collide with the prey, in contrast to the random flipping case in Figure 6.
In the speaker-listener environment (Figure 6(e, f)), the speaker knows which colored goal the listener should go to, but the listener does not. However, the listener knows the positions of the three colored goals. Therefore, the speaker must learn to communicate the correct color within its 16-bit message, and the listener must learn from the message which color it needs to go to. Our adversarial method (Figure 6) can make the listener go to a completely wrong colored destination, while random flipping cannot, because it does not attack the crucial bits of the communication.
Appendix B.2 Training Details
We show the hyperparameters for all environments in Table 9.
Table 9: Hyperparameters for training and testing PCP, PO-PP and SL
Hyperparameter | Environment | Value
Buffer Length | PCP, PO-PP, SL | 1048576
Episode Number | PCP, PO-PP, SL | 50001
Episode Length | PCP, PO-PP, SL | 100
Batch Size | PCP, PO-PP, SL | 1024
Discount Factor γ | PCP, PO-PP, SL | 0.9
Learning Rate | PCP, PO-PP, SL | 0.0001
Regularizer Coefficient α0 | PCP, SL | 0.1
Regularizer Coefficient α0 | PO-PP | 0.004
Regularizer Intercept β | PCP, PO-PP, SL | 3000
Regularizer Slope ε | PCP, PO-PP, SL | 20000
Perception Threshold η | PCP, PO-PP, SL | 3
In Figure 7, we show the reward and collision numbers over the training procedure. We start training without the bit-flipping regularizer term Cflip. Then, at episode β (3000), we begin to regularize the adversarial policy to flip fewer and fewer bits. As training proceeds, the regularization term dominates the training process and the agents' rewards increase as fewer bits are flipped. However, our adversarial policy outperforms the random policy at every training iteration.
Appendix B.3 Normalized Score
The normalized score is defined as:
S = (RC_noadv - RC_adv) / max(N_f, η)    (9)
where RC_adv and RC_noadv represent the reward or the collision number with and without applying the adversarial policy, and their difference represents how much the adversarial communication channel degrades agent performance. It is then normalized by the number of bits flipped, with a perception threshold η that improves numeric stability in case of an extremely small number of flipped bits N_f.
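For concreteness, a minimal sketch of how Eq. (9) could be computed is given below. The helper name and the example numbers are illustrative assumptions, not values taken from our experiments.

def normalized_score(rc_noadv: float, rc_adv: float, n_flipped: float, eta: float = 3.0) -> float:
    # Eq. (9): performance drop caused by the attack, normalized per flipped bit.
    # rc_noadv / rc_adv: reward (or collision count) without / with the adversarial policy.
    # eta: perception threshold guarding against very small flip counts (Table 9 uses 3).
    return (rc_noadv - rc_adv) / max(n_flipped, eta)

# Illustrative example: reward drops from -0.2 to -0.7 while roughly 5 bits are flipped per step.
print(normalized_score(rc_noadv=-0.2, rc_adv=-0.7, n_flipped=5.0))  # 0.1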
A higher score signifies that, for each bit flipped, the adversarial approach has a greaterdetrimental impact on the team’s performance.(a) Average Rewards in PCP, PO-PP and SL respectively(b) Average Collisions for PCP, PO-PP and SL respectivelyFigure 7: The average reward and number of collisions during training are displayed for the ad-versarial policy (blue) and random flipping (red), respectively (using left y-axis). The dotted-brownline represents the episode where the bit-flipping regularization term begins. The regularization termCflippushes the adversarial policy to flip fewer bits as training progresses such that adversarial ef-fect becomes weaker. The resulting number of bits flipped is represented as the black curve (usingright y-axis). Our adversarial communication policy consistently outperforms the random policy.16Appendix C Normalized Score Tables for Attacked AgentsWe show detailed tables that quantify the reward and number of collisions for each individual agenthere for our adversarial communication with flipping mode, direct mode and the random flipping.Our proposed method is uniformly better than all other strategies across all attacked agents.Table 10: PCP Reward ScoresCapture Agent 1 Capture Agent 2 AverageAdv[Ours] 0.10±0.06 0.12 ±0.05 0.11±0.05Adv[Direct] 0.01±0.01 0.01 ±0.01 0.01±0.01Random 0.01±0.01 0.01 ±0.01 0.01±0.01Table 11: PCP Collision ScoresCapture Agent 1 Capture Agent 2 AverageAdv[Ours] 4.53±2.23 4.63 ±2.23 4.58±2.23Adv[Direct] 1.07±0.14 1.07 ±0.14 1.07±0.14Random 0.93±0.29 0.99 ±0.29 0.96±0.29Table 12: SL Reward ScoresListenerAdv[Ours] 0.13±0.06Adv[Direct] 0.11±0.05Random 0.04±0.02Table 13: SL Collision ScoresListenerAdv[Ours] 31.38±6.51Adv[Direct] 30.72±6.19Random 13.68±3.19Table 14: PO-PP Reward ScoresPO Agent 1 PO Agent 2 PO Agent 3 PO Agent 4 AverageAdv[Ours] 0.04±0.02 0.04 ±0.02 0.04 ±0.02 0.04 ±0.02 0.04±0.02Adv[Direct] 0.02±0.01 0.02 ±0.01 0.02 ±0.01 0.02 ±0.00 0.02±0.01Random 0.00±0.01 0.00 ±0.01 0.00 ±0.01 0.01 ±0.01 0.00±0.01Table 15: PO-PP Collision ScoresPO Agent 1 PO Agent 2 PO Agent 3 PO Agent 4 AverageAdv[Ours] 1.41±0.55 1.44 ±0.51 1.52 ±0.51 1.42 ±0.65 1.45±0.55Adv[Direct] 0.76±0.07 0.78 ±0.07 0.82 ±0.08 0.84 ±0.08 0.80±0.07Random 0.10±0.87 0.34 ±0.86 0.33 ±0.92 0.43 ±0.88 0.30±0.88Appendix D Whitebox AnalysisIn this section, we compare our methods with a whitebox version of our algorithm (Table 16), wherewe do not utilize surrogate policies and instead use the true agent policies and reward to learn theadversarial policy. The results show that our adversarial policy achieves comparable collision andreward scores across all domains. The performance between the whitebox method and our methodis similar in PCP and PO-PP but has a larger difference in SL. This shows that our surrogate policycan successfully approximate the ground truth agent policies and aid training an adversarial policy.Table 16: Reward and Collision Normalized ScoresPCP PO-PP SLReward Sc Collision Sc Reward Sc Collision Sc Reward Sc Collision ScWhitebox 0.12±0.05 4.60 ±2.30 0.04 ±0.02 1.37 ±0.43 0.17 ±0.09 41.56 ±4.98Adv[Ours] 0.11±0.05 4.58 ±2.23 0.04 ±0.02 1.45 ±0.55 0.13 ±0.06 31.38 ±6.5117Appendix E Surrogate Policy LossesIn this section, we show the surrogate policy loss curves. We see that the surrogate policy lossconverges relatively quickly before episode 5000. 
This indicates that while we train for up to 60,000episodes, much less data could be used to train a stable surrogate policy that can be used for theadversarial communication policy.Figure 8: Surrogate Policy Loss in PCP, PO-PP and SL respectivelyAppendix F Robustness of Analysis Adversarial PolicyWe evaluate whether the adversarial policy maintains its performance under minor modifications tothe target agent policies. In our experiment, we extend the training of the target team agents by an ad-ditional 10,000 episodes, subtly changing their policies. Every 1,000 episodes during this extendedtraining, we gauge the effectiveness of the adversarial attacking policy, where the adversarial policyis frozen and acts without any further training. Our findings indicate that the adversarial policy’sperformance remains largely consistent, with only a slight decrease in effectiveness and increase invariance (see Figure 9).Figure 9: Robustness of Adversarial Policy: Reward Mean (left) and Standard Deviation (right)18 |
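To close the appendix, the differentiable flipping parameterization analyzed throughout the paper (Eq. 5 and Eq. 8, Section 4.2) can be sketched as follows. This is a minimal illustration; the tensor names, shapes, and toy usage are assumptions rather than the authors' implementation.

import torch

def flip_messages(m_out: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    # Differentiable relaxation of Eq. (5): M_adv = delta * (1 - M_out) + (1 - delta) * M_out.
    # m_out: original binary messages in {0, 1}; delta: per-bit flip indicators in [0, 1].
    # When delta is exactly 0 or 1, this reduces to the XOR-style bit flip used in the paper.
    return delta * (1.0 - m_out) + (1.0 - delta) * m_out

# Toy usage on a single 16-bit message.
m_out = torch.randint(0, 2, (1, 16)).float()
delta = torch.sigmoid(torch.randn(1, 16, requires_grad=True))  # stand-in for the adversarial policy output
m_adv = flip_messages(m_out, delta)                            # gradients flow back into delta
c_flip = delta.abs().sum(dim=-1).mean()                        # L1-style flip penalty, analogous to C_flip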
gVBvtRqU1_ | OVIR-3D: Open-Vocabulary 3D Instance RetrievalWithout Training on 3D DataShiyang Lu Haonan Chang Eric Pu Jing Abdeslam Boularias Kostas BekrisRutgers Universityhttps://github.com/shiyoung77/OVIR-3DAbstract: This work presents OVIR-3D, a straightforward yet effective methodfor open-vocabulary 3D object instance retrieval without using any 3D data fortraining. Given a language query, the proposed method is able to return a rankedset of 3D object instance segments based on the feature similarity of the instanceand the text query. This is achieved by a multi-view fusion of text-aligned 2D re-gion proposals into 3D space, where the 2D region proposal network could lever-age 2D datasets, which are more accessible and typically larger than 3D datasets.The proposed fusion process is efficient as it can be performed in real-time formost indoor 3D scenes and does not require additional training in 3D space. Ex-periments on public datasets and a real robot show the effectiveness of the methodand its potential for applications in robot navigation and manipulation.Keywords: Open V ocabulary, 3D Instance Retrieval1 IntroductionThere has been recent progress in open-vocabulary 2D detection and segmentation methods [1, 2, 3]that rely on pre-trained vision-language models [4, 5, 6]. However, their counterparts in the 3Ddomain have not been extensively explored. One reason is the lack of large 3D datasets with suf-ficient object diversity for training open-vocabulary models. Early approaches for dense semanticmapping [7, 8, 9, 10] project multi-view 2D detections to 3D using closed-set detectors but can-not handle arbitrary language queries. More recently, OpenScene [11] and Clip-fields [12] achieveopen-vocabulary 3D semantic segmentation by projecting text-aligned pixel features to 3D pointsand distilling 3D features from the aggregated 2D features. Given a text query during inference,OpenScene [11] generates a heatmap of the point cloud based on the similarity between point fea-tures and the query feature. Nevertheless, manual thresholding is required for object search to con-vert a heatmap to a binary mask and it lacks the ability to separate instances from the same category.This limits its use in robotic applications, such as autonomous robotic manipulation and naviga-tion. Clip-fields [12], on the other hand, requires training on additional ground truth annotation forinstance identification, which makes it less open-vocabulary at the instance level.This work focuses on open-vocabulary 3D instance retrieval, aiming to return a ranked set of 3Dinstance segments given a 3D point cloud reconstructed from an RGB-D video and a languagequery. Some examples are shown in Figure 1. While 2D segmentation from a single viewpoint isoften insufficient for robot grasping and navigation, 3D instance retrieval methods generate a morecomplete and accurate segmentation of objects in 3D space by multi-view fusion and smoothing. Inparticular, this work considers a scenario, where a mobile robot navigates in an indoor scene andautomatically reconstructs the 3D environment using its RGB-D sensor and an off-the-shelf SLAMmodule. 
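To make the intended query step concrete, below is a minimal sketch of ranking fused per-instance features against a text embedding. It assumes that a text-aligned (e.g., CLIP-style) feature has already been fused for each 3D instance, as described later in the method; the function and variable names are illustrative only and do not correspond to the released code.

import numpy as np

def retrieve_ranked_instances(instance_feats: np.ndarray, text_feat: np.ndarray, k: int = 5):
    # Rank fused 3D instance features against a text query by cosine similarity.
    # instance_feats: (M, D) array, one text-aligned feature per fused 3D instance segment.
    # text_feat: (D,) embedding of the language query (e.g., from a CLIP-style text encoder).
    # Returns the indices of the top-k instances, most similar first.
    inst = instance_feats / np.linalg.norm(instance_feats, axis=1, keepdims=True)
    txt = text_feat / np.linalg.norm(text_feat)
    similarities = inst @ txt                      # (M,) cosine similarity per instance
    return np.argsort(-similarities)[:k]

# Toy usage: 10 fused instances with 512-dimensional features and a single query embedding.
rng = np.random.default_rng(0)
feats, query = rng.normal(size=(10, 512)), rng.normal(size=512)
top_ids = retrieve_ranked_instances(feats, query, k=3)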
Instead of fusing pixel-level information and then either grouping them into instancesby thresholding at inference time [11] or training an object identification model with additionalground truth data [12], the proposed method directly fuses instance-level information into the 3Dscene without additional training, so that given a text query such as “lamp” or ”bed”, a robot canimmediately locate the top-related object instances and perform required tasks.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Examples of open-vocabulary 3D instance retrieval using the proposed system. (a-c)Given a 3D scan reconstructed from an RGB-D video (e.g., scene0645 from ScanNet [13]) and a textquery (e.g., bed, lamp), the proposed method retrieves a set of 3D instances ranked based on theirsemantic similarity to the text query. (d-e) Instances that are not even in the ground-truth annotationscan also be detected and queried by the proposed method, such as the cushions on the sofa.The proposed method addresses this problem by first generating 2D object region proposals and theircorresponding text-aligned features by querying a 2D open-vocabulary detector with an extensivevocabulary. It then performs data association and periodic filtering and merging of 3D instancesto improve instance masks and remove noisy detections. Finally, a post-processing step handlesisolated objects and filters small segments that are likely to be noise. Extensive experiments on realscans from both room-scale dataset ScanNet200 [14] and tabletop-scale dataset YCB-Video [15]demonstrate the effectiveness of the proposed method, which offers an efficient 2D-to-3D instancefusion module ( ∼30 fps for a scene in ScanNet [13] on an NVIDIA RTX 3090) and an open-vocabulary 3D instance retrieval method with near-instant inference time for a text query.The main contributions of this work are: (i) an efficient 2D-to-3D instance fusion module giventext-aligned region proposals, which results in (ii) an open-vocabulary 3D instance retrieval methodthat ranks 3D instances based on semantic similarity given a text query.2 Related Work2.1 2D Open-Vocabulary Detection and SegmentationWith the advent of large vision-language pre-trained models, such as CLIP [4], ALIGN [5] andLiT [16], a number of 2D open-vocabulary object detection and segmentation methods have beenproposed [17, 18, 19, 1, 2, 20, 21]. For 2D semantic segmentation, LSeg [17] encodes 2D imagesand aligns pixel features with segment label embeddings. OpenSeg [18] uses image-level supervi-sion, such as caption text, which allows scaling up training data. GroupViT [19] performs bottom-up hierarchical spatial grouping of semantically-related visual regions for semantic segmentation.For 2D object detection, ViLD [1] achieves open-vocabulary detection by aligning the features ofclass-agnostic region proposals with text label features. Detic [2] attempts to address the long-taildetection problem by utilizing data with bounding box annotations and image-level annotations.2OWL-ViT [20] proposes a pipeline for transferring image-text models to open-vocabulary objectdetection. Our proposed method adopts Detic [2] as a backbone detector to locate objects in 2D im-ages since it can provide pixel-level instance segmentation and text-aligned features. 
Furthermore,it can be queried with a large vocabulary without sacrificing much speed.2.2 3D Reconstruction and Closed-Vocabulary Semantic MappingEarly works have addressed the 3D reconstruction problem either through online SLAM meth-ods [22, 23, 24, 25, 26] or offline methods like structure-from-motion [27, 28] using a variety of3D representations, such as TSDF [29], Surfel [30], and more recently NeRF [31, 32, 33, 34]. Withthe advancement of learning-based 2D object detection and segmentation methods, recent effortshave focused on point-wise dense semantic mapping of 3D scenes [7, 8, 10, 9, 35]. Despite beingeffective, these methods have not yet been designed to fit open-vocabulary detectors. They eitherassume mutually exclusive instances [7, 10] or utilize category labels for data association [8, 9, 35].The proposed method in this paper adopts an off-the-shelf 3D reconstruction method and focuses onintegrating 2D information with point-cloud information to achieve open-vocabulary 3D instancesegmentation. A key contribution in this context is a method that associates open-vocabulary 2Dinstance detections and fuses them into a 3D point cloud while keeping them open-vocabulary.2.3 3D Open-Vocabulary Scene UnderstandingMore recently, research efforts aim for open-vocabulary 3D scene understanding [12, 11, 36, 37, 38,39, 40]. Given that existing 3D datasets tend to be significantly smaller than 2D image datasets,this is mainly accomplished by fusing pretrained 2D image features into 3D reconstructions. Open-Scene [11] projects pixel-wise features from 2D open-vocabulary segmentation models [17, 18] toa 3D reconstruction and distills 3D features for better semantic segmentation. ConceptFusion [38]fuses multi-modal features, such as sound, from off-the-shelf foundation models that can only pro-duce image-level embeddings. LeRF [39] fuses multi-scale CLIP features to a neural radiance fieldfor open-vocabulary query. These methods can generate a heatmap of a scene that corresponds to aquery, but they do not provide instance-level segmentation, which limits their use in tasks that requirea robot to interact with specific object instances. PLA [40] constructs hierarchical 3D-text pairs for3D open-world learning and aims to perform not only 3D semantic segmentation but instance seg-mentation as well. Nevertheless, the method so far has been demonstrated only on certain furniture-scale objects, and performance in other categories is unclear. On the other hand, our method focuseson instance-level, open-vocabulary 3D segmentation without manual 3D annotation.3 Problem FormulationA 3D scan XNrepresented by Npoints is reconstructed from an RGB-D video V={I1,I2, . . . ,IT}given known camera intrinsics Cand camera poses Pt, where Itis the videoframe at time t. The objective in open-vocabulary 3D instance retrieval is to return a set of Kranked instances represented as binary 3D masks MN={mi|i∈[1, K]}over the 3D scan XN,given a text query Qand the desired number of instances Kto be retrieved. The ranking of instancemasks is based on the semantic similarity between the 3D instance and the text query, where themost similar instance should be ranked first.4 MethodThe overall pipeline of the proposed method is illustrated in figure 2. To summarize, given a videoframe, the method first generates 2D region proposals R2D={r1, .., r k}with text-aligned fea-turesF2D={f2D1, .., f2Dk}using an off-the-shelf 2D open-vocabulary method trained on large 2Ddatasets. 
The 2D region proposals R2Dof each frame Itare then projected to the reconstructed 3Dpoint cloud given the camera intrinsics Cand poses Pt. The projected 3D regions R3Dare eithermatched to existing 3D object instances O={o1, .., o b}with 3D features F3D={f3D1, .., f3Db}stored in the memory bank B, or added as a new instance if not matched with anything. The 2Dregion to 3D instance matching is based on feature similarity sij=cos(f2Di, f3Dj)and region over-lapping IoU(r3Di, oj)in the 3D space. Matched regions are integrated into the 3D instance. Toremove unreliable detections and improve segmentation quality, periodic filtering and merging of3Figure 2: Pipeline of the proposed method.3D instances in the memory bank Bis performed every Tframes. A final post-processing step re-moves 3D instances that are too small and separates object instances that are isolated in 3D space butincorrectly merged. During inference time, the text query qwill be used to match with a set of rep-resentative features of each 3D instance, and the instances Owill be ranked based on the similarityand returned. Details of the proposed method are presented below.4.1 Text-aligned 2D Region ProposalLearning-based region proposal networks have served as a critical module for many instance seg-mentation methods, such as MaskRCNN [41]. However, directly generating 3D region proposalsfor open-vocabulary instance retrieval is hard due to the lack of annotated 3D data with enoughcategory varieties. This work views the 3D region proposal problem as a fusion problem from 2Dregion proposals. In particular, it leverages the power of an off-the-shelf open-vocabulary 2D detec-tor Detic [2], which is trained with multiple large image datasets, to generate 2D region proposals(masks) R2Dby querying it with an extensive number of categories, i.e. all 21k categories fromImageNet21k [42] dataset. In addition to 2D masks R2D, associated text-aligned features F2Dare extracted before the final classification layer of the 2D model. The output category labels aredropped without being used as they are rather noisy given the input vocabulary size. Both regionproposals R2Dand their text-aligned features F2Dare used for data association in the fusion step.Though the proposed method does not critically depend on a specific 2D detector, not all open-vocabulary 2D detectors can serve as a region proposal backbone. There are two requirements thata 2D detector must exhibit: 1) It must be able to generate pixel-wise masks for a wide varietyof objects in a timely manner; and 2) It should provide a text-aligned feature for each region sothat it can be queried with language. During the development of this work, it’s found that Detic [2]naturally meets the requirement without additional modifications and thus it is adopted for this work.Some other options, such as SAM [43] and Grounding-DINO [44] were not adopted because theyare either too slow or unable to directly output text-aligned features for proposed regions, whichturn out to be critical for data association in the experiments. Detic [2] also has a fast inferencespeed even when queried with all the categories from ImageNet21k [42] ( ∼10fps on an NVIDIARTX3090), which makes it favorable for this task.4.2 2D-to-3D Instance Fusion2D region proposals R2D={r2D1, .., r2Dk}and their corresponding features F2D={f2D1, .., f2Dk}for each frame Itare first projected to the 3D scan using camera intrinsics Cand pose Pt. 
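As a concrete illustration of this projection step, the sketch below back-projects a 2D instance mask into the world frame and associates it with scan points by nearest neighbor. The helper name, the KD-tree association, and the 2 cm cutoff are illustrative assumptions rather than details taken from the paper, which only specifies that regions are projected using the known intrinsics and poses.

```python
import numpy as np
from scipy.spatial import cKDTree

def project_region_to_scan(mask, depth, K, T_wc, scan_xyz, scan_tree, max_dist=0.02):
    """Back-project a 2D region mask into the world frame and mark the scan
    points it covers (illustrative sketch, not the released implementation).
    mask:      (H, W) boolean instance mask from the 2D detector
    depth:     (H, W) depth image in meters, aligned with the color frame
    K:         (3, 3) camera intrinsics C
    T_wc:      (4, 4) camera-to-world pose P_t
    scan_xyz:  (N, 3) points of the reconstructed scan
    scan_tree: cKDTree built once over scan_xyz
    """
    v, u = np.nonzero(mask & (depth > 0))        # pixels in the mask with valid depth
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]              # pinhole back-projection
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
    pts_world = (T_wc @ pts_cam.T).T[:, :3]      # move points into the world frame
    dist, idx = scan_tree.query(pts_world)       # nearest scan point for each pixel
    region = np.zeros(len(scan_xyz), dtype=bool)
    region[idx[dist < max_dist]] = True          # keep only close associations
    return region                                # boolean mask: the projected 3D region
```

A tree over the scan points (cKDTree(scan_xyz)) can be built once per scene and reused for every frame.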
The projected 3D regions $R^{3D}$ are either matched to existing 3D object instances $O=\{o_1, \dots, o_b\}$ with 3D features $F^{3D}=\{f^{3D}_1, \dots, f^{3D}_b\}$, where $b$ is the number of 3D instances already stored in the memory bank $B$, or added as a new instance if it is not matched with anything. The 3D feature $f^{3D}_i$ of instance $i$ used for matching is simply the average of the 2D features $f^{2D}_j$ from all the associated 2D regions, i.e., $f^{3D}_i = \frac{1}{n}\sum_{j=1}^{n} f^{2D}_j$, where $j$ indexes the 2D regions that are associated with 3D instance $i$, and $n$ is the number of 2D regions that have already been merged. The memory bank is empty at the beginning.
The matching of a 2D region $r_i$ to a 3D instance $o_j$ is based on the cosine similarity $s_{ij}=\cos(f^{2D}_i, f^{3D}_j)$ and the 3D intersection over union $\mathrm{IoU}(r^{3D}_i, \hat{o}_j)$ between the projected region $r^{3D}_i$ and the visible part $\hat{o}_j$ of the 3D instance in the current frame. If $s_{ij}$ is greater than a predefined threshold $\theta_s$ (default $\theta_s = 0.75$) and the overlap $\mathrm{IoU}(r^{3D}_i, \hat{o}_j)$ is also greater than a predefined threshold $\theta_{iou}$ (default $\theta_{iou} = 0.25$), then they are considered a match. Matched regions are aggregated into the 3D instance, i.e., $o_j := o_j \cup r^{3D}_i$ and $f^{3D}_j := \frac{n}{n+1} f^{3D}_j + \frac{1}{n+1} f^{2D}_i$. The matching is not restricted to one-to-one, as multiple 2D region proposals may correspond to the same instance.
4.3 Periodic 3D Instance Filtering and Merging
The fusion process is fast, but it will generate redundant 3D instances when a 2D region proposal fails to match properly, potentially leading to low-quality segmentation and inaccurate data association. To address this, periodic filtering and merging of the 3D instances stored in the memory bank $B$ occurs every $T$ (default $T = 300$) frames. Point filtering is based on the detection rate $r^{det}_p$ of a point $p$, where $r^{det}_p = c^{oi}_p / c^{vis}_p$. Intuitively, this is the frequency of a point being considered part of the instance ($c^{oi}_p$) over the frequency of it being visible ($c^{vis}_p$). Points with $r^{det}_p < \theta_{det}$ (default $\theta_{det} = 0.2$) are removed from instance $o_i$. Meanwhile, the number of points in each projected 3D segment from a single view that corresponds to $o_i$ is recorded. If, after point filtering, instance $o_i$ contains fewer points than the median number of points in its corresponding segments, then it is filtered entirely. This dynamic threshold, which automatically adapts to instance sizes, is critical in the filtering process.
Merging of two instances $o_p, o_q$ is determined by the feature similarity $s_{pq}=\cos(f^{3D}_p, f^{3D}_q)$ and the 3D intersection over union $\mathrm{IoU}(o_p, o_q)$ between the two instance segments, using the same thresholds $\theta_s$ and $\theta_{iou}$ as in Section 4.2. Additionally, instances $o_p$ and $o_q$ are merged if $\mathrm{recall}(o_p, o_q) = |o_p \cap o_q| / |o_q| \geq \theta_{recall}$ (default $\theta_{recall} = 0.25$) and $s_{pq} \geq \theta_s$, indicating that $o_q$ is mostly contained in $o_p$ and both instances have similar features.
Hyper-parameters are justified through ablation studies in Section 6, and these values remain fixed for the experiments in Section 5 across different datasets.
4.4 Post-processing
A simple post-processing step is executed to separate object instances that are isolated in 3D space and to filter small segments that are likely to be noise. This is achieved by using DBSCAN [45] to find 3D point clusters in each instance, where the distance parameter eps is set to 10 cm.
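Stepping back to the matching and update rules of Sections 4.2 and 4.3, the sketch below spells them out for a single projected region, assuming instances are stored as boolean point masks with unit-normalized features. The dictionary layout, the use of the full instance mask in place of its currently visible part, and the first-match return are simplifications for illustration, not the released code.

```python
import numpy as np

def match_or_add(region_mask, region_feat, instances, theta_s=0.75, theta_iou=0.25):
    """Associate one projected region with a stored 3D instance, or create a new
    one (schematic version of the fusion test).
    region_mask: (N,) bool mask of the projected region over the scan points
    region_feat: (D,) unit-normalized text-aligned feature of the 2D region
    instances:   list of dicts with keys 'mask' (N,) bool, 'feat' (D,), 'n' int
    """
    for inst in instances:
        sim = float(region_feat @ inst["feat"])               # cosine similarity s_ij
        inter = np.count_nonzero(region_mask & inst["mask"])  # (paper uses only the visible part)
        union = np.count_nonzero(region_mask | inst["mask"])
        if sim > theta_s and inter / max(union, 1) > theta_iou:
            n = inst["n"]
            inst["mask"] |= region_mask                        # aggregate the region into o_j
            feat = (n * inst["feat"] + region_feat) / (n + 1)  # running mean of 2D features
            inst["feat"] = feat / (np.linalg.norm(feat) + 1e-8)
            inst["n"] = n + 1
            return inst
    instances.append({"mask": region_mask.copy(), "feat": region_feat.copy(), "n": 1})
    return instances[-1]
```

Periodic filtering and merging reuse the same similarity and overlap tests over pairs of stored instances.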
If an instanceoihas segments not connected in 3D space, DBSCAN will return more than one point cluster andoiwill be separated in multiple instances.4.5 InferenceDuring inference time, a text query qis converted to a feature vector fq= Θ( q)using CLIP [4].Instead of representing each 3D instance with the average feature of associated 2D regions, the Kclustering centers by K-Means of associated features, which can be viewed as representative featuresfrom a set of viewpoints, are used. The 3D instances are then ranked by the largest cosine similarity sbetween the text query qandKrepresentative features of an instance. An ablation study on differentstrategies of feature ensemble is presented in section 6.3.5 Experiments5.1 DatasetsThe first dataset used for the experiment is ScanNet200 [14], which contains a validation set of 312indoor scans with 200 categories of objects. Uncountable categories ”floor”, ”wall”, and ”ceiling”and their subcategories are not evaluated. The second dataset is YCB-Video [15], which containsa validation set of 12 videos. It’s a tabletop dataset that was originally designed for object 6DoF5pose estimation for robot manipulation. The 3D scans of the tabletop scene are reconstructed byKinectFusion [22]. The ground truth instance segmentation labels are automatically generated giventhe object mesh models and annotated 6DoF poses.5.2 MetricsStandard mean average precision ( mAP ) metric for instance retrieval at different IoU thresholds isadopted for the evaluation purpose. In particular, mAP 25andmAP 50at the IoU threshold θ= 0.25andθ= 0.5respectively, as well as the overall mAP , i.e110PmAP θ, where θ= [0.5:0.05:0.95]are reported. Only annotated object categories in a 3D scene are used as text queries for evaluation.The mAP results were computed for each 3D scene and then averaged for the whole dataset.5.3 BaselinesOpenScene [11], which is the most relevant work to date, is used as the first comparison point. Givenan object query, it returns a heatmap of the input point cloud. A set of thresholds θ= [0.5:0.03:0.9]are tested for each category to convert the heatmap into a binary mask and then foreground points areclustered into 3D instances using DBSCAN [45], similar to the post-processing step in section 4.4.The one with the best overall performance is reported. Furthermore, a series of prior research hasfocused on semantic mapping using closed-vocabulary detectors. Two representative works, Fu-sion++ [9] and PanopticFusion [10], are used as comparison points with two revisions: 1) Instead ofusing their whole SLAM system, this work assumes the 3D reconstruction and ground truth cameraposes are given, and only tested their data association and instance mapping algorithms. 
2) Theirbackbone detector MaskRCNN [41] is replaced with Detic [2] for open-vocabulary detection, andthe mean feature of associated 2D detections for each instance is used to match text queries.ScanNet200 [14] YCB-Video [15]Method mAP 25mAP 50mAP mAP 25mAP 50mAPOpenScene [11] 0.268 0.190 0.089 0.421 0.333 0.116*Fusion++ [9] 0.414 0.253 0.094 0.817 0.464 0.120*PanopticFusion [10] 0.539 0.370 0.150 0.851 0.803 0.393Ours 0.564 0.443 0.211 0.863 0.848 0.465Table 1: Results on ScanNet200 [14] and YCB-Video [15] datasetmAP 50 mAPMethod head common tail head common tailOpenScene [11] 0.308 0.178 0.067 0.150 0.076 0.033*Fusion++ [9] 0.235 0.243 0.288 0.094 0.090 0.098*PanopticFusion [10] 0.335 0.360 0.424 0.145 0.146 0.162Ours 0.417 0.433 0.469 0.224 0.214 0.193Table 2: Results on three sets of categories with different frequencies in ScanNet200 [14]5.4 ResultsQuantitative results of instance retrieval on ScanNet200 [14] and YCB-video [15] datasets are shownin Table 1. Furthermore, results on different sets of categories with different frequencies in Scan-Net200 are shown in Table 2. The proposed method outperforms all other baselines by a largemargin in terms of instance retrieval mAP . It seems that OpenScene[11] does not perform well onthis task even with an automatically tuned threshold for each category because fused point featuresare not distinguishable enough. As a result, grouping points into segments with accurate boundariesby thresholding is rather difficult. The proposed method, on the other hand, directly fuses instance-level information and improves segment quality by periodic merging and filtering. The proposedmethod outperforms the other two baselines primarily because of the use of instance feature simi-larity as an additional metric for data association while the baselines only consider 3D overlapping,which can easily fail when the 2D detections are noisy, especially in the open-vocabulary setup.5.5 Running TimeThe inference time is nearly instant ( ∼20ms) for a text query. The running time for the fusionprocess depends on the number of detections ( N) in a frame and the number of fused 3D instances(M), i.e. O(MN). As mentioned in Section 4.2, it requires two dot products to compute the regionoverlapping and feature similarity, which in practice is a fast process that operates at ∼30FPS withan NVIDIA RTX 3090 GPU for most 3D scans in the ScanNet200 [14] dataset.66 Ablation Studies6.1 Input queries to the 2D region proposal methodThe proposed method utilizes an open-vocabulary 2D detector as a region proposal method byquerying it with a large vocabulary. One concern is whether the input query of the 2D methodwould affect the performance of the 3D instance retrieval. This ablation study tests queries frommultiple datasets as input to the region proposal method and displays their impact on the overallperformance. In addition to ScanNet200 [14] and ImageNet21K [42], COCO [46] (80 categories),LVIS [47] (1203 categories), and more aggressively, queries with ImageNet21k categories but with-out ScanNet200 categories are tested. Results of 3D instance retrieval on the ScanNet200 datasetare shown in Table 4. It turns out that an extensive vocabulary is helpful for generating regions ofinterest for arbitrary objects. 
Furthermore, the results show that the region proposal network has cer-tain generalizability, such that even when ScanNet200 categories are completely removed from theImageNet21k categories, it can still find most regions based on similar categories in the vocabulary,and the final performance of retrieving objects in ScanNet200 only slightly dropped.COCO ScanNet200 LVIS ImageNet21k ImageNet21k - ScanNet200mAP 50 0.228 0.419 0.429 0.443 0.410Table 3: Results on ScanNet200 [14] with different input queries to the 2D region proposal network6.2 Instance features and 2D masksIn this ablation study, alternative approaches are explored to replace instance features and 2D masksderived from Detic in order to demonstrate the adaptability of the proposed method across differentbackbones. Rather than utilizing the mask feature directly extracted from Detic, detected boundingboxes are cropped and fed to the CLIP model to extract features from these cropped regions. For thegeneration of 2D masks, SAM [43] is adopted to create segmentations for the regions proposed byDetic bounding boxes. In this experimental setup, instance fusion is performed every three frames,primarily due to the relatively slow performance of SAM. Substituting the Detic feature with theCLIP feature from cropped images yields slightly inferior results, whereas replacing the Detic maskwith the SAM mask leads to an improvement in performance. This outcome is expected since SAMgenerally produces higher-quality masks, albeit with a tradeoff in terms of speed. It is anticipatedthat as open-vocabulary 2D detection techniques advance, more potent and efficient methods mayemerge as viable alternatives to the current backbone.Proposed w/ CLIP feature w/ SAM segmentationmAP 50 0.414 0.406 0.440Table 4: Results on ScanNet200 [14] with different instance features and 2D masks6.3 Feature ensemble strategiesThree different feature ensemble strategies are tested to represent a 3D instance based on associated2D features. The first strategy is to compute the average of all 2D features. The second strategyinvolves clustering the 2D features from different viewpoints using the K-Means algorithm, andthe clustering centers are used to represent each instance. During instance retrieval, the featuresimilarity is determined as the maximum similarity between the query feature and the clusteringcenters. The third strategy is to use the feature from the largest associated 2D region. Results of3D instance retrieval on the ScanNet200 dataset are presented in Table 5. The approach of usingmultiple features through clustering outperforms simple averaging, while using the feature from thelargest associated 2D region yields the poorest results.average clustered (K=16) clustered (K=64) feature from largest 2D detectionmAP 50 0.428 0.429 0.443 0.380Table 5: Results on ScanNet200 [14] with different feature ensemble strategies6.4 Time intervals and visibility threshold for periodic instance filtering and mergingThis ablation study tested different time intervals Tand visibility threshold θvisfor filtering men-tioned in section 4.3. Results of 3D instance retrieval on the ScanNet200 dataset are shown in7Table 6 and Table 7 respectively. 
The frame interval T= 300 and visibility threshold θvis= 0.2yields the best results.T= 1 T= 100 T= 300 T= 500 T= 1000mAP 50 0.340 0.417 0.443 0.410 0.412Table 6: Results on ScanNet200 [14] with different time intervals of periodic filtering and mergingθvis= 0 θvis= 0.1θvis= 0.15 θvis= 0.2θvis= 0.25 θvis= 0.3mAP 50 0.256 0.386 0.407 0.443 0.418 0.408Table 7: mAP 50on ScanNet200 [14] dataset7 Robotics ExperimentsFigure 3: Robot Experiments.Contrasting to conventional closed-set semantic methods, the superiority of open-vocabulary detec-tors for manipulation lies in their ability to pinpoint the grasp region with a language specification.A part-based grasping experiment was devised given this inspiration. In particular, the robot is askedto grasp the ”bottle cap” and ”handle” respectively in two sets of experiments with five distinct tablesetups in each set. An RGB-D camera is mounted on the robot’s wrist that captures videos of theobjects on the table. For each scene, there is a short scanning phase as shown in Figure 3(b) thatthe robot arm went through a predefined trajectory to get a more complete view of objects on thetable. Reconstruction is performed using KinectFusion. OVIR-3D is compared against OpenSceneto segment parts given the text query and located segments are used to guide the robot’s grasp.For OVIR-3D, the 5/5 graspable bottle caps and 4/5 ”handle” of the pitcher were detected and therobot grasping success rate was 90%. The detection rate for OpenScene was low at 0/5 and 3/5respectively, and the overall grasping success rate was 30%. An object is considered detected if areasonable visual segment is found.8 Conclusion and LimitationsThis paper presents OVIR-3D, a rather straightforward but effective method for open-vocabulary 3Dinstance retrieval. By utilizing an off-the-shelf open-vocabulary 2D instance segmentation methodfor region proposal and fusing its output 2D regions and text-aligned features in 3D space, theproposed method can achieve much better performance than other baselines without using any 3Dinstance annotation, additional training, or manual heatmap thresholding during inference. Thismethod can also be used for 3D instance pseudo-label generation for self-supervised learning.A limitation of the proposed method is that it is not always able to merge segments of very largeinstances, such as long dining tables. It can also miss tiny objects as they are likely to be treated asnoise and removed during the fusion process. Furthermore, while the proposed method can improvesegmentation quality due to multi-view noisy filtering, it still relies on the 2D region proposal modelto not consistently miss an object or generate bad segmentation, since it does not use any 3D datafor fine-tuning. A promising direction is to integrate this method with a 3D learning-based methodto utilize the scarcer but cleaner 3D annotations.8AcknowledgmentsWe thank Dr. Yu Wu for providing useful suggestions during the development of this work. Thiswork is supported by NSF awards 2309866, 1846043, and 2132972.References[1] X. Gu, T.-Y . Lin, W. Kuo, and Y . Cui. Open-vocabulary object detection via vision and lan-guage knowledge distillation. In International Conference on Learning Representations .[2] X. Zhou, R. Girdhar, A. Joulin, P. Kr ̈ahenb ̈uhl, and I. Misra. Detecting twenty-thousand classesusing image-level supervision. In Computer Vision–ECCV 2022: 17th European Conference,Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IX , pages 350–368. Springer, 2022.[3] F. Liang, B. 
Wu, X. Dai, K. Li, Y . Zhao, H. Zhang, P. Zhang, P. Vajda, and D. Mar-culescu. Open-vocabulary semantic segmentation with mask-adapted clip. arXiv preprintarXiv:2210.04150 , 2022.[4] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In International conference on machine learning , pages 8748–8763. PMLR, 2021.[5] C. Jia, Y . Yang, Y . Xia, Y .-T. Chen, Z. Parekh, H. Pham, Q. Le, Y .-H. Sung, Z. Li, and T. Duerig.Scaling up visual and vision-language representation learning with noisy text supervision. InInternational Conference on Machine Learning , pages 4904–4916. PMLR, 2021.[6] L. Yuan, D. Chen, Y .-L. Chen, N. Codella, X. Dai, J. Gao, H. Hu, X. Huang, B. Li, C. Li, et al.Florence: A new foundation model for computer vision. arXiv preprint arXiv:2111.11432 ,2021.[7] J. McCormac, A. Handa, A. Davison, and S. Leutenegger. Semanticfusion: Dense 3d seman-tic mapping with convolutional neural networks. In 2017 IEEE International Conference onRobotics and automation (ICRA) , pages 4628–4635. IEEE, 2017.[8] M. Runz, M. Buffier, and L. Agapito. Maskfusion: Real-time recognition, tracking and recon-struction of multiple moving objects. In 2018 IEEE International Symposium on Mixed andAugmented Reality (ISMAR) , pages 10–20. IEEE, 2018.[9] J. McCormac, R. Clark, M. Bloesch, A. Davison, and S. Leutenegger. Fusion++: V olumetricobject-level slam. In 2018 international conference on 3D vision (3DV) , pages 32–41. IEEE,2018.[10] G. Narita, T. Seno, T. Ishikawa, and Y . Kaji. Panopticfusion: Online volumetric semantic map-ping at the level of stuff and things. In 2019 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 4205–4212. IEEE, 2019.[11] S. Peng, K. Genova, C. M. Jiang, A. Tagliasacchi, M. Pollefeys, and T. Funkhouser. Open-scene: 3d scene understanding with open vocabularies. In CVPR , 2023.[12] A. Submission. Clip-fields: Weakly supervised semantic fields for robotic memory. RSS 2023 ,2023.[13] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recog-nition (CVPR), IEEE , 2017.[14] D. Rozenberszki, O. Litany, and A. Dai. Language-grounded indoor 3d semantic segmentationin the wild. In Proceedings of the European Conference on Computer Vision (ECCV) , 2022.9[15] Y . Xiang, T. Schmidt, V . Narayanan, and D. Fox. Posecnn: A convolutional neural network for6d object pose estimation in cluttered scenes. 2018.[16] X. Zhai, X. Wang, B. Mustafa, A. Steiner, D. Keysers, A. Kolesnikov, and L. Beyer. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 18123–18133, 2022.[17] B. Li, K. Q. Weinberger, S. Belongie, V . Koltun, and R. Ranftl. Language-driven semanticsegmentation. arXiv preprint arXiv:2201.03546 , 2022.[18] G. Ghiasi, X. Gu, Y . Cui, and T.-Y . Lin. Scaling open-vocabulary image segmentation withimage-level labels. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv,Israel, October 23–27, 2022, Proceedings, Part XXXVI , pages 540–557. Springer, 2022.[19] J. Xu, S. De Mello, S. Liu, W. Byeon, T. Breuel, J. Kautz, and X. Wang. Groupvit: Semanticsegmentation emerges from text supervision. arXiv preprint arXiv:2202.11094 , 2022.[20] A. S. M. N. D. W. A. D. A. M. A. A. M. D. Z. S. X. W. X. Z. T. K. N. H. 
Matthias Minderer,Alexey Gritsenko. Simple open-vocabulary object detection with vision transformers. ECCV ,2022.[21] L. H. Li, P. Zhang, H. Zhang, J. Yang, C. Li, Y . Zhong, L. Wang, L. Yuan, L. Zhang, J.-N. Hwang, et al. Grounded language-image pre-training. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 10965–10975, 2022.[22] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohi, J. Shot-ton, S. Hodges, and A. Fitzgibbon. Kinectfusion: Real-time dense surface mapping and track-ing. In 2011 10th IEEE international symposium on mixed and augmented reality , pages127–136. Ieee, 2011.[23] T. Whelan, S. Leutenegger, R. Salas-Moreno, B. Glocker, and A. Davison. Elasticfusion:Dense slam without a pose graph. Robotics: Science and Systems, 2015.[24] A. Dai, M. Nießner, M. Zollh ̈ofer, S. Izadi, and C. Theobalt. Bundlefusion: Real-time glob-ally consistent 3d reconstruction using on-the-fly surface reintegration. ACM Transactions onGraphics (ToG) , 36(4):1, 2017.[25] E. Sucar, S. Liu, J. Ortiz, and A. J. Davison. imap: Implicit mapping and positioning in real-time. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages6229–6238, 2021.[26] Z. Zhu, S. Peng, V . Larsson, W. Xu, H. Bao, Z. Cui, M. R. Oswald, and M. Pollefeys. Nice-slam: Neural implicit scalable encoding for slam. In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition (CVPR) , 2022.[27] J. L. Sch ̈onberger and J.-M. Frahm. Structure-from-motion revisited. In Conference on Com-puter Vision and Pattern Recognition (CVPR) , 2016.[28] J. L. Sch ̈onberger, E. Zheng, M. Pollefeys, and J.-M. Frahm. Pixelwise view selection forunstructured multi-view stereo. In European Conference on Computer Vision (ECCV) , 2016.[29] B. Curless and M. Levoy. A volumetric method for building complex models from rangeimages. In Proceedings of the 23rd annual conference on Computer graphics and interactivetechniques , pages 303–312, 1996.[30] H. Pfister, M. Zwicker, J. Van Baar, and M. Gross. Surfels: Surface elements as renderingprimitives. In Proceedings of the 27th annual conference on Computer graphics and interactivetechniques , pages 335–342, 2000.10[31] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. Nerf:Representing scenes as neural radiance fields for view synthesis. Communications of the ACM ,65(1):99–106, 2021.[32] T. M ̈uller, A. Evans, C. Schied, and A. Keller. Instant neural graphics primitives with amultiresolution hash encoding. ACM Trans. Graph. , 41(4):102:1–102:15, July 2022. doi:10.1145/3528223.3530127. URL https://doi.org/10.1145/3528223.3530127 .[33] Sara Fridovich-Keil and Alex Yu, M. Tancik, Q. Chen, B. Recht, and A. Kanazawa. Plenoxels:Radiance fields without neural networks. In CVPR , 2022.[34] C. Sun, M. Sun, and H.-T. Chen. Direct voxel grid optimization: Super-fast convergencefor radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 5459–5469, 2022.[35] X. Kong, S. Liu, M. Taher, and A. J. Davison. vmap: Vectorised object mapping for neuralfield slam. arXiv preprint arXiv:2302.01838 , 2023.[36] J. Zhang, R. Dong, and K. Ma. Clip-fo3d: Learning free open-world 3d scene representationsfrom 2d dense clip. arXiv preprint arXiv:2303.04748 , 2023.[37] D. Hegde, J. M. J. Valanarasu, and V . M. Patel. Clip goes 3d: Leveraging prompt tuning forlanguage grounded 3d recognition. 
arXiv preprint arXiv: , 2022.[38] K. Jatavallabhula, A. Kuwajerwala, Q. Gu, M. Omama, T. Chen, S. Li, G. Iyer, S. Saryazdi,N. Keetha, A. Tewari, J. Tenenbaum, C. de Melo, M. Krishna, L. Paull, F. Shkurti, and A. Tor-ralba. Conceptfusion: Open-set multimodal 3d mapping. arXiv , 2023.[39] J. Kerr, C. M. Kim, K. Goldberg, A. Kanazawa, and M. Tancik. Lerf: Language embeddedradiance fields. arXiv preprint arXiv:2303.09553 , 2023.[40] R. Ding, J. Yang, C. Xue, W. Zhang, S. Bai, and X. Qi. Language-driven open-vocabulary 3dscene understanding. arXiv preprint arXiv:2211.16312 , 2022.[41] K. He, G. Gkioxari, P. Doll ́ar, and R. Girshick. Mask r-cnn. In Proceedings of the IEEEinternational conference on computer vision , pages 2961–2969, 2017.[42] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-ScaleHierarchical Image Database. In CVPR09 , 2009.[43] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead,A. C. Berg, W.-Y . Lo, P. Doll ́ar, and R. Girshick. Segment anything. arXiv:2304.02643 , 2023.[44] S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, C. Li, J. Yang, H. Su, J. Zhu, et al. Groundingdino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprintarXiv:2303.05499 , 2023.[45] E. Schubert, J. Sander, M. Ester, H. P. Kriegel, and X. Xu. Dbscan revisited, revisited: whyand how you should (still) use dbscan. ACM Transactions on Database Systems (TODS) , 42(3):1–21, 2017.[46] T.-Y . Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll ́ar, and C. L. Zitnick.Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th EuropeanConference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13 , pages 740–755. Springer, 2014.[47] A. Gupta, P. Dollar, and R. Girshick. Lvis: A dataset for large vocabulary instance segmenta-tion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition ,pages 5356–5364, 2019.11 |
ZFjgfJb_5c | Embodied Lifelong Learning forTask and Motion PlanningJorge Mendez-MendezMIT CSAILjmendez@csail.mit.eduLeslie Pack KaelblingMIT CSAILlpk@csail.mit.eduTom ́as Lozano-P ́erezMIT CSAILtlp@csail.mit.eduAbstract: A robot deployed in a home over long stretches of time faces a truelifelong learning problem. As it seeks to provide assistance to its users, therobot should leverage any accumulated experience to improve its own knowledgeand proficiency. We formalize this setting with a novel formulation of lifelonglearning for task and motion planning (TAMP), which endows our learner withthe compositionality of TAMP systems. Exploiting the modularity of TAMP,we develop a mixture of generative models that produces candidate continuousparameters for a planner. Whereas most existing lifelong learning approachesdetermine a priori how data is shared across various models, our approach learnsshared and non-shared models and determines which to use online during planningbased on auxiliary tasks that serve as a proxy for each model’s understanding of astate. Our method exhibits substantial improvements (over time and compared tobaselines) in planning success on 2D and BEHA VIOR domains [1].Keywords: task and motion planning, lifelong learning, generative models1 IntroductionConsider a home assistant robot operating over a lifetime in a home. The robot initially comesequipped with a number of basic capabilities for planning and control that enable it to executecertain actions, such as NAVIGATE TO(object )andGRASP (object ). The robot’s user expects it toleverage these abilities to immediately assist with house chores, and to become increasingly capableover time, adapting to the types of problems that arise in its new home environment. This settingevokes a novel lifelong learning formulation for embodied intelligence that necessitates learning inthe field , which we formalize and address in this work (Figures 1 and A.1). Notably, we forgo anyartificial separation between training and testing: the robot is continually asked to solve problems tothe best of its abilities, and it is free to use any data it collects to improve its knowledge for future use.One promising tool for tackling this lifelong learning challenge is planning. Planning modelscomprise relatively independent prediction modules, which permits learning disentangled knowledgeand reusing it compositionally. Similar forms of modularity have been shown to provide substantialleverage for training models continually by targeting the learning to individual modules, composingexisting modules into new solutions, and adding new modules over time [2].In this paper, we focus on learning generative models to address the most difficult aspect of task andmotion planning (TAMP) [ 3]: finding continuous parameters (grasps, poses, paths) that guarantee thesuccess of a high-level plan. Our approach learns to generate samples that lead to problem completion,implicitly considering the effect of the current sample on future samples’ success. Compared toother applications of generative models, one peculiarity of the lifelong sampler learning problem isthat it is inherently multitask, since TAMP systems often apply similar basic skills (e.g., GRASP )to varied object types (e.g., ball ,box). At two extremes of possible approaches to this multitaskproblem, we could learn an independent, specialized sampler for each object type, or we could learna single, generic sampler across all types. 
Intuitively, specialized models are likely to yield better7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.time k=1 k=2 k=3 k=3sort books store leftovers organize office close windows Figure 1: The learning robot will face a sequence of diverse TAMP problems in a true lifelongsetting. It will use its current models to solve each problem as efficiently as possible, and then useany collected data to improve those models for the future. Images captured from BEHA VIOR [1].predictions, but they are trained on a smaller pool of data, making them potentially less robust. Wepropose a hybrid solution that learns both general and specialized samplers, and generates samplesby determining online which model is most appropriate given a state. To do so, we pair each samplerwith an auxiliary predictor. When drawing samples for solving a problem, the measured error onthe auxiliary tasks serves as a proxy for the sampler’s accuracy at the given state, which we use toconstruct a mixture distribution that assigns more weight to lower-error samplers.We evaluate the resulting TAMP system for its cumulative performance as it encounters a sequenceof diverse problems. This realistic formulation measures how effective the robot is, in aggregate, atsolving problems over its entire lifetime. This contrasts with standard (artificial) lifelong learningevaluations, which instead focus on the final performance over the sequence of problems usedfor training. Concretely, we evaluate our approach on problems from a 2D domain and from theBEHA VIOR benchmark [1], demonstrating substantial improvements in planning performance.2 The Lifelong Sampler Learning ProblemOur formulation of lifelong sampler learning emphasizes a realistic setting in which a robot isdeployed in an environment and, from that moment on, is continually evaluated for its ability to solveTAMP problems. This section first formalizes the problem in terms of general TAMP systems, andnext discusses how it applies specifically to search-then-sample (SeSamE) bilevel planning strategies.2.1 Learning Over a Sequence of TAMP ProblemsThe lifelong learning robot faces a continual stream of planning problems τ(1), τ(2), . . .. Eachproblem τ(k)=⟨s(k)init, g(k)⟩ ∼ D(k)is defined by an initial state s(k)initand a goal g(k), drawnsequentially from some problem distribution D(k), which is potentially nonstationary (as indicatedby the superscript k). We assume that the robot has access to a sound and probabilistically completeplanner; while the planner is guaranteed to solve any problem given sufficient computation time, weexpect many problems to be infeasible within a reasonable time. In consequence, the robot shouldleverage any available data to focus its planning process on choices that are likely to be successful.Like in the real world, there are no distinct training and evaluation stages. Instead, at each time k,the robot seeks to construct a plan for a new problem τ(k). We record whether the robot succeeds atthe problem within a bounded number of samples B, and how many samples it attempts. The robotmay use any experience it collects during the planning attempt as training data to improve its models.We measure performance as the cumulative number of problems solved within a given number ofattempted samples. 
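To make the protocol concrete, a minimal sketch of this lifelong loop is given below; the planner, sampler, and dataset interfaces are hypothetical placeholders rather than the paper's API.

```python
def lifelong_loop(problem_stream, planner, samplers, dataset, budget_B):
    """Attempt each incoming problem with the current samplers, log the outcome,
    then learn from whatever data the attempt produced (schematic placeholders)."""
    solved, samples_attempted = 0, 0
    for k, problem in enumerate(problem_stream, start=1):      # tau^(k) drawn sequentially
        result = planner.solve(problem, samplers, max_samples=budget_B)
        samples_attempted += result.num_samples
        solved += int(result.success)
        dataset.add(result.experience)                         # e.g., (s, phi, z) tuples
        samplers.update(dataset)                               # continual training step
        print(k, solved, samples_attempted)                    # cumulative solved vs. samples attempted
    return solved
```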
A system that learns efficiently from the data available so far will solve newproblems more quickly, gaining even more data from successful plans, in a virtuous cycle.One attribute of this formulation is that the agent is never evaluated on any previous problem—aftertimek, problem τ(k)is never exactly encountered again. This departs from existing lifelong learningformulations, which evaluate the agent on previous problems τ(1), . . . , τ(k−1)to measure forgetting.Yet, avoiding forgetting still contributes to attaining good performance in our setting. Since we2evaluate the robot continually on newproblems, it must generalize from prior experience. If the robotforgets past knowledge that remains relevant and overfits to the latest problems, then it will neverimprove over the distribution of problems that it may face in the future. Even in nonstationary cases,where later problems differ substantially from earlier ones, retaining knowledge of past instancesmay improve generalization to future problems; we demonstrate this empirically in Section 5.3.In order for the robot to generalize in this fashion, the problem distribution D(k)must consist ofproblems that have some common underlying structure, enabling samplers to be shared acrossproblems to serve as the medium through which prior experience informs future planning.2.2 SeSamE Bilevel PlanningAlgorithm 1 SeSamE (τ, N, M )skeleton gen←discreteSearchSolutionGen( τ)forj= 1, . . . , N : ▷Loop over skeletonsskel←skeleton gen.next() ▷Next skeletoncnt←zeros(len(skel)) ▷Step sample counterfori= 1, . . . , len(skel) : ▷Loop over stepsφ[i]←sample(skel[ i], s[i]);cnt[i]++s[i+ 1]←simulate( s[i],skel[i], φ[i])ifnotvalid( s[i+ 1]) :while cnt[i] = M:cnt[i--]←0▷Backtracki--▷Setiso latest cnt[i]< M is re-sampledifi= 0:break ▷Try new skeletonifs[i]∈τ.goal : return skel, φOne strategy to solve TAMP problems is toseparate the search into two disentangledlevels to perform bilevel planning [ 4]. Weadopt this strategy and formalize the bilevelplanning in a variant of PDDL [ 5] aug-mented with continuous parameters. Therobot will be deployed in a world W=⟨Θ,R,S,O,A⟩. Each object eof typeθe∈Θis described by a feature vectorx. A state s∈ S is characterized by thefeatures xdescribing all objects. The pos-itive predicates r∈ R onsproduce an abstract state s↑(e.g., IN(ball,box)∧ON(box,floor )).An abstract action a↑= (o,C)is an operator o∈ O, with predicate-level preconditions and effects,augmented with a parameterized controller Ca↑φ,e. The parameters of the controller are hybrid: earediscrete typed parameters that specify which objects to apply the controller to, while φare continuousparameters that dictate how the action is applied. For example, a NAVIGATE TOaction applied toe=table could take continuous parameters φ= (∆ x,∆y)that specify the target coordinates,relative to the table . An action a∈ A is the execution of a controller with given parameters.We assume that all symbolic elements of Ware given, and focus on determining the continuousparameters φthat result in actions that lead to complete plans. A planning problem τ=⟨sinit, g⟩isgiven by a continuous-level initial state sinitand a predicate-level goal g. A satisficing plan πis asequence of actions a1, . . . , a jwhich move the robot from sinitto an abstrct state where s↑⊆g.The SeSamE strategy (Algorithm 1) first obtains a skeleton plan at the discrete level using standardplanning techniques (e.g., A∗), and subsequently refines the skeleton into a low-level plan via sam-pling. 
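A condensed rendering of this refinement loop, in the spirit of Algorithm 1, is sketched below; sample, simulate, valid, and the task accessors are placeholders for the sampler, the simulator, the state check, and the problem definition.

```python
def sesame(task, skeleton_generator, sample, simulate, valid, N, M):
    """Bilevel refinement sketch: try up to N skeletons, refining each with at
    most M samples per step, backtracking when a sampled parameter leads to an
    invalid simulated state."""
    for _ in range(N):                                   # loop over candidate skeletons
        skel = next(skeleton_generator)
        n = len(skel)
        s = [task.init_state] + [None] * n               # s[i] is the state before step i
        phi = [None] * n                                 # continuous parameters per step
        cnt = [0] * n                                    # samples tried per step
        i = 0
        while i < n:
            phi[i] = sample(skel[i], s[i]); cnt[i] += 1  # propose parameters for step i
            s[i + 1] = simulate(s[i], skel[i], phi[i])
            if not valid(s[i + 1]):
                while i >= 0 and cnt[i] == M:            # this step is exhausted: backtrack
                    cnt[i] = 0
                    i -= 1
                if i < 0:                                # every step exhausted: next skeleton
                    break
                continue                                  # re-sample the latest unexhausted step
            if task.goal_satisfied(s[i + 1]):
                return skel, phi                         # satisficing low-level plan found
            i += 1
    return None                                          # no refinement within the budget
```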
While generating samples is inexpensive, the simulation step that evaluates each sample can beNavigate Engage PushValid, observed Success, observed Valid, learned Success, learnedFigure 2: Sample distributions optimized for TAMP prob-lem completion. Top: a 2D robot tasked with pushing ablock upward. Bottom: distribution of navigation param-eters that achieve reachability (valid) and that solve theproblem (success), generated by an expert (observed) andlearned by a diffusion model (learned). Samples that con-sider global success yield a distribution over promising(and not merely valid) actions. Diffusion models representthe observed (success or valid) distributions well.quite expensive (e.g., due to inverse kine-matics, collision checking, or path plan-ning). Reducing the number of samplesrequired to obtain a satisficing plan, andhence the number of calls to the simu-lator, can substantially improve the effi-ciency of the TAMP system, vastly in-creasing the space of problems that canbe solved in a tractable amount of time.One challenge in this sampling-based for-mulation is specifying a distribution that1) covers the space of plausible solutionsand 2) generates only promising candi-dates. Learning such a distribution fromdata, as we do in this work, would consti-tute a major step toward constructing aneffective TAMP system.32.3 Samplers as Generative Models of Plausible Candidate Action ParametersThe sampler for each abstract action a↑should generate action parameters from a distributionconditioned on the current state: φ∼p(· |s, a↑). This distribution should capture the whole spaceof action parameters that, given the current state, may lead to successful execution of an overall plan.To illustrate this point, consider the block-pushing domain in Figure 2 (see Section 5.1 for details).A na ̈ıve navigation sampler could bring the robot within reach of the block, but a more intelligentsampler would only consider navigating to the bottom of the block to enable pushing upward.To learn a generative model that meets these criteria, we require access to a pool of paired statesand action parameters (s, φ)that is representative of the choices that eventually (after successfullysampling all subsequent actions in a skeleton plan) lead to a satisficing plan. Critically, the actionparameters that succeed for a given abstract action pa↑depend on the samplers for the remainingabstract actions, and as such the data must be collected jointly to ensure that the learned distributionsare compatible with each other. The lifelong problem formulation of Section 2.1 satisfies this criterion.3 Nested Models for Sparse DataIn our lifelong learning setting, the robot should use whatever data it has to make the best predictionspossible. To handle the sparse-data setting that inevitably occurs early in the robot’s experience, weformulate a solution that nests predictors of different generality.In order to train neural generative models as the samplers for our TAMP system, we construct a vectorrepresentation of the conditioning variables. The sampling distribution for each abstract action a↑isgiven by p(φ|s,Ca↑φ,e). We first represent the state sin terms of the continuous low-level featuresof the objects involved in the action, xo, following prior work [ 4]. A trivial second step is to trainseparate models for each form of controller, pC, since different controllers generally have differentparameterizations, and so there are no commonalities across their sample distributions. 
However, each such controller may act on objects of different natures in ways that require substantially different parameters: this distinction is encoded in the discrete variable corresponding to the object types $\theta_o$. Thus, we would like the predictions of our model to be specialized to each individual object type. For this, we can consider the following two strategies (omitting the $C$ subscript for clarity):
1. Learn a single model $p(\phi \mid x_o, \theta_o)$ using, for example, a one-hot encoding of the types. This would enable the learning of a single, shared latent representation of $x_o$ in the neural net model. However, in cases where the amount of data is small, it is more likely to simply overfit.
2. Learn a separate, simpler model $p_l(\phi \mid x_o)$ for each discrete value $l$ of $\theta_o$.
However, early on, when the robot has observed very little data, we may prefer an even more aggressive strategy that pools data over all values of $\theta_o$, and learns a single simple model $p(\phi \mid x_o)$. This model will not be highly specific or accurate, but it may learn more quickly to generate reasonable suggestions, due to pooled data. It is also important to observe that, in the lifelong setting, we may have an asymmetric distribution of experience with samplers for different object types, so that some specific models would have substantially more training data available than others, in a way that cannot be predicted in advance and that will change over time.
For all these reasons, we propose a nested approach: train both a generic sampler $p(\phi \mid x_o)$ and a collection of specialized samplers $p_l(\phi \mid x_o)$ for all values $l$ of $\theta_o$. Then, we decide online, given an input with $\theta_o = l$, based on an assessment of prediction reliability, how much to value predictions from the generic versus the specialized samplers, and actually sample from a mixture distribution.
To construct this assessment of the reliability of each generative model, we construct an auxiliary training task to predict a variable $z$, and augment each training example, so we have $(s, \phi, z)$. Critically, $z$ must be a value that can be directly measured by the robot, so that the error between its learned predictor $f(s, \phi)$ and the actual observed $z$ can serve as a measure of how well trained $p(\phi \mid s)$ is in the part of the input space near $s$. So, in parallel with training $p(\phi \mid s)$, we will train $f(s, \phi)$, and in parallel with training each $p_l(\phi \mid s)$, we will train a separate specialized $f_l(s, \phi)$.
At planning time, if the robot must draw a sample in state $s$ for the action applied to objects with types $l = \theta_o$, it will use the following mixture distribution:
$$p_{\mathrm{mix}}(\phi \mid s, z) = \frac{\rho(f(s, \phi), z)\, p(\phi \mid s) + \rho(f_l(s, \phi), z)\, p_l(\phi \mid s)}{\rho(f(s, \phi), z) + \rho(f_l(s, \phi), z)}, \qquad (1)$$
where $\rho(f_l(s, \phi), z)$ and $\rho(f(s, \phi), z) \in \mathbb{R}_+$ are reliability measures of the specialized and generic models at $(s, \phi)$, constructed by comparing their predictions to the observed $z$ (for example, the inverse of the squared prediction error). Note that, since the auxiliary signals $z$ depend on the action parameters $\phi$, the sampling process must draw samples from the two mixture components, use them to compute the reliability measures, and then choose between the samples based on the resulting mixture weights.
Implementation. In our implementation, we use simple observable geometric properties of the world state as $z$ values, restricted to the objects $e$ that parameterize the action.
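The sampling procedure implied by Eq. (1) can be sketched as follows; the sampler/predictor bundles and the measure_z hook (which computes the directly measurable geometric signals for a candidate) are assumed interfaces, and the inverse squared error is the example reliability given above.

```python
import numpy as np

def sample_nested(state, generic, specialized, measure_z, eps=1e-6):
    """Draw one candidate from the nested samplers, weighting each model by how
    well its auxiliary predictor f reproduces the measurable signals z at its
    own candidate (schematic; `generic`/`specialized` are assumed to bundle a
    sampler and its auxiliary predictor)."""
    phi_g = generic.sample(state)                       # candidate from the generic p(phi | x_o)
    phi_s = specialized.sample(state)                   # candidate from the specialized p_l(phi | x_o)
    # Reliability = inverse squared error of the auxiliary prediction.
    rho_g = 1.0 / (np.sum((generic.predict_z(state, phi_g) - measure_z(state, phi_g)) ** 2) + eps)
    rho_s = 1.0 / (np.sum((specialized.predict_z(state, phi_s) - measure_z(state, phi_s)) ** 2) + eps)
    w_specialized = rho_s / (rho_g + rho_s)             # mixture weights of Eq. (1)
    return phi_s if np.random.rand() < w_specialized else phi_g
```

Here z stands for the geometric auxiliary signals just described.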
Although they would notsuffice for selecting good samples φ, since they do not consider the effect of φon future steps nor theinteractions with objects outside of e—like a complete simulator would—our ability to predict themaccurately is a signifier that the accompanying model has obtained sufficient data in the neighborhoodof(s, φ)to consider φa good prediction. As an example, consider the NAVIGATE TO(block )actionfrom Figure 2. A useful set of auxiliary signals for this case may be: the orientation of the robotfacing the block , the distance to the center of the block , the distance to the nearest point on theblock ’s boundaries, and the relative coordinates of this point and the robot’s center in the block ’scoordinate frame. Combined, these signals contain abundant information about the effects of theaction, including those relevant for reachability and collision avoidance with the block . A model thatlearns to map a (s, φ)pair to accurate predictions of all these signals has likely observed sufficientlysimilar training pairs to generalize, and is therefore likely to generate high-quality samples φ. SeeAppendices B.2.2 and B.3.2 for the exact auxiliary signals used in our experiments.4 Diffusion Models for Parameter SamplingWe use diffusion models to represent our learned samplers, due to their stability of training andtheir ability to model complex distributions. A diffusion model transforms Gaussian noise intoa distribution over the sample space. To do so, it generates training data by following a forwarddiffusion process, which progressively adds Gaussian noise to observed training samples, and trains areverse diffusion process that gradually denoises Gaussian noise to produce a sample from the learneddistribution [ 6]. Concretely, when learning TAMP samplers, the forward diffusion process is givenbyq(φ0:T|s) =q(φ0|s)QTt=1q(φt|φt−1), where each step q(φt|φt−1)adds Gaussian noise toφt−1andq(φ0|s)denotes the observed distribution of successful action parameters φ. The reverseprocess is the generative model parameterized by ψ, and is similarly defined as a Markov chain:pψ(φT:0|s) =p(φT)Q1t=Tpψ(φt−1|φt, s), where p(φT)is a standard Gaussian prior. Eachsteppψ(φt−1|φt, s) =N(φt−εψ(φt, s, t), σ2I)estimates the mean of a Gaussian by subtractingpredicted noise given a noisy version of the action parameters, the state, and the time step. Oncetrained, the planner can sample from the model by simulating the reverse process pψ(φT:0|s).Separating the learning process into various diffusion models, each in charge of representing aspecific distribution, as described in Section 3, reduces the difficulty of lifelong training as comparedto training a single model to fit all distributions. Even so, each such diffusion model still requirescontinual training: a specialized sampler pl(φ|xo)receives additional data as the robot uses it toattempt to solve new problems, and a generic sampler p(φ|xo)additionally receives data from newobject types as the robot encounters them. Given a previous pool of data Zoldand a newly acquiredset of data points Znew, we consider the following training schemes:•Finetuning Starting from the previous model, train on Znew, likely forgetting previous knowledge.•Retraining Train a new model on ZnewandZoldjointly. In practice, this method has been shownto outperform all true continual training approaches, at the cost of high computational expense.•Replay Starting from the previous model, train over a balanced sampling of data from ZnewandZold[7]. 
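A sketch of the replay scheme's balanced minibatches is given below; the buffer handling and batch size are illustrative choices, and the diffusion loss is hidden behind a train_step placeholder.

```python
import random

def replay_update(model, Z_old, Z_new, epochs, batch_size=64):
    """Continue training a sampler on new data while rehearsing old data: every
    minibatch mixes the two pools roughly half and half (illustrative sketch;
    model.train_step stands in for one diffusion-loss gradient step)."""
    half = batch_size // 2
    steps_per_epoch = max(1, (len(Z_old) + len(Z_new)) // batch_size)
    for _ in range(epochs):
        for _ in range(steps_per_epoch):
            old_part = random.choices(Z_old, k=half) if Z_old else []
            new_part = random.choices(Z_new, k=batch_size - len(old_part))
            model.train_step(old_part + new_part)     # one gradient step on (s, phi) pairs
    Z_old.extend(Z_new)                               # new data joins the replay pool
    return model
```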
While more advanced strategies are possible, we found that this simple method matchedthe retraining performance with only 10% of the training epochs used for retraining.55 Experimental EvaluationPlace A Place BValid,observedSuccess,observedValid,learnedSuccess,learnedFigure 3: Top: a planner must fittwo blocks in a container that re-stricts the longer block’s placement.Bottom: distribution of parametersfor achieving placement of the smallblock (valid) or solving the problem(success), generated by an expert(observed) and a diffusion model(learned). The distributions match.Our initial experiments sought to validate that diffusion modelscan learn useful TAMP samplers and that the mixture distri-bution proposed in Section 3 achieves good performance inan offline setting. The results of these experiments set up ourmain evaluations over lifelong sequences of 2D and BEHA V-IOR domains. Additional details are provided in Appendix B.5.1 Visualizing Sampling DistributionsTo illustrate the usefulness of our approach to learning sam-plers for TAMP, we created two short-horizon, 2D domainsthat permit us to visualize the learned distributions. The firstdomain (Figure 2, top) requires a robot to push a block upward.We focus on the navigation action, parameterized by the 2Dcoordinates, which succeeds if the block becomes reachable.However, the block can only be pushed from the bottom, somost valid actions do not lead to success. The second domain(Figure 3, top) requires placing two blocks in an L-shapedcontainer, where the first block fits in any of the two sections,but the second block only fits in the long section. The blocksare placed individually by selecting their 2D pose, but not allvalid placements of the first block enable problem success.We collected two data sets for each domain: one with all valid actions and one with only actions thatlead to problem success. We then trained a diffusion-based sampler on each data set. Figures 2 and 3(bottom) show the observed and learned distributions, demonstrating that optimizing for problemsuccess yields distributions that consider the long-term effects of actions, even if future constraintsare not observable by the sampler. The learned distributions match the observed distributions well.5.2 Learning Samplers from Fixed Data on 2D DomainsWe next studied the impact of using our nested models in solving a variety of TAMP problems inan offline setting. We created five 2D robotics domains, each of which requires navigating, picking,and placing different objects in a target container. The domains vary in the object size ranges,graspable regions, and container size ranges and shapes (see Appendix B.2.1 for additional details).We collected demonstration data for solving Kproblems from each domain, trained the samplers onthe fixed data sets, and evaluated their efficiency in solving K′= 50 unseen problems. We repeatedthe evaluation over 10 trials with various random seeds controlling the test problem generation.Our experiments considered the following (primarily) diffusion-based sampler learning methods:• Specialized: separate for each typed abstract action• (CD) generic: shared across actions with a common controller within (across) domains• (CD) mixture: our approach mixing specialized and (CD) generic samplers•NSRTs: The sampler learning component of Chitnis et al.’s [ 4] work—a non-diffusion-based modelFigure 4 shows average results for varying numbers of training problems K(standard errors inAppendix C). 
As expected, all methods become more efficient at generating good samples as they are trained on increasing amounts of data. In particular, the specialized samplers (which observe the least amount of data) are the most inefficient when trained on K = 50 problems, but become as efficient as generic samplers upon training on K = 50,000 problems. Our mixtures of nested models are substantially more efficient than alternative methods when trained with little data. Appendix D analyzes various mechanisms for selecting between specialized and generic samplers, demonstrating that using geometric auxiliary signals as a proxy for sampler generalization is a strong choice.
[Figure 4 plots (a) the average number of samples per solved problem and (b) the average percentage of problems solved against the number of training tasks (log scale), for the Specialized, Generic, Mixture, CD generic, CD mixture, Uniform, and NSRTs [4] samplers.]
Figure 4: Results of learning diffusion models as TAMP samplers from offline data. Given sufficient data, all samplers solve the majority of problems efficiently. In the small-data regime (note the log-scale of the x-axis), sharing data across samplers improves sampling efficiency. The mixture distributions learned either individually on each domain (Mixture) or across all domains (CD mixture) are best, thanks to their ability to automatically select generic or specialized samplers during planning.
5.3 Lifelong Evaluation on 2D Domains
[Figure 5 plots the cumulative number of problems solved against the number of samples for Mixture+Replay, Mixture+Finetune, Specialized+Retrain, Specialized+Finetune, Generic+Retrain, Generic+Finetune, and Uniform.]
Figure 5: Lifelong learning results on 2D domains (avg. over 10 seeds). The mixture distributions are vastly superior. Finetuning the models directly fails due to forgetting, especially with generic and specialized samplers.
Our lifelong evaluation presented the agent with a sequence of problems from each domain in turn (first domain 1, then domain 2...), imposing a nonstationary distribution. The agent attempted to solve each problem given its current sampler, and subsequently used the collected data for additional training. There was no separate test phase: the agent's performance was assessed over its attempts to solve the problems in the sequence (before training on them). We updated the models of each method using the retraining, finetuning, and replay strategies of Section 4.
Figure 5 shows the cumulative number of problems solved against the number of generated samples. We used retraining for specialized and generic samplers as an approximate upper bound on their performance. Our mixture sampler is substantially more efficient than baselines, and is the only approach that outperforms a naïve uniform sampler. In the lifelong setting, agents use their current samplers to collect data, which explains why even the retraining version of the generic baseline fails. The agent uses the model trained on previous types to generate samples for novel types; early on, those samplers are overfit to the small set of initial types, causing them to fail to solve any problems and preventing the robot from acquiring useful data for subsequent updates. Specialized samplers work reasonably well, but are slowed down by their inability to quickly generate samples when trained over little data.
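For reference, this evaluation protocol can be summarized by the following sketch; attempt_solve, update_sampler, and sample_problem are placeholders for the TAMP solver call, the chosen training scheme (finetuning, retraining, or replay), and the problem generator, and are not names from the implementation.

```python
def lifelong_evaluation(domains, sampler, attempt_solve, update_sampler,
                        problems_per_domain=500, update_every=50):
    """Nonstationary stream: domains are presented one after another, and the agent
    is scored on each problem before it is allowed to train on the resulting data."""
    solved, data_pool = 0, []
    for domain in domains:
        for i in range(problems_per_domain):
            problem = domain.sample_problem()            # assumed problem generator
            success, new_data = attempt_solve(problem, sampler)
            solved += int(success)
            data_pool.extend(new_data)                   # successful (state, parameter) pairs
            if (i + 1) % update_every == 0:              # periodic model update
                sampler = update_sampler(sampler, data_pool)
    return solved
```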
The finetuning approaches, which are known to suffer from forgetting, performnoticeably worse, demonstrating the need to retain knowledge throughout the prlblem sequence.5.4 Lifelong Evaluation on BEHA VIOR0 500 1000# samples 1e30500100015002000# cumulative solvedMixture+ReplaySpecialized+RetrainGeneric+RetrainHand-craftedFigure 6: Lifelong learning results onBEHA VIOR. Our lifelong learner pro-gressively improves over its initial (hand-crafted) samplers, becoming increas-ingly better at solving diverse problems.We next applied the evaluation protocol of Section 5.3 to10 families of BEHA VIOR problems (see Appendix B.3for details). Figure 6 demonstrates that our method en-ables continual learning over this more complex and real-istic domain. Notably, the agent improves even over thehand-crafted samplers that we provided as a starting point,demonstrating the usefulness of our approach even whenengineered solutions are already in place.76 Related WorkLifelong learning Recent literature on lifelong learning has primarily focused on avoiding catas-trophic forgetting [ 8] in supervised settings [ 9,10,11,12,13]. Various techniques achieve thisby replaying past data [ 14,15,7], which we adopt. Our focus is on enabling robots to becomeincreasingly capable over time via compositionality. Few works have studied compositionality inlifelong supervised [ 16,17,18] and reinforced [ 19,20,21] domains. Prior lifelong learning workconsiders training an agent over a sequence of machine learning tasks and subsequently evaluating thesystem over those same tasks. We propose a more natural evaluation setting without train/test splits,where the agent continually seeks to solve TAMP problems and uses the data from those problemsto learn. While seemingly this formulation does not require forgetting avoidance (since previousproblems are never encountered again), prior work has suggested that knowledge retention is usefulfor generalization to future problems [2]. Our results in Sections 5.3 and 5.4 exemplify this notion.Learning for TAMP Numerous recent methods seek to broaden the capabilities of TAMP systemsbeyond engineered solutions via learning. A majority of such methods focus on symbolic aspects ofthe problem: given a partial symbolic description of a domain, use demonstration data to fill the gapsin symbolic space to enable solving additional problems. This can take the form of learning operators(i.e., action abstractions) [ 22,23,4] or predicates (i.e., state abstractions) [ 24,25,26]. Given thedifficulty of mapping abstract actions to continuous robot actions, our focus is on learning at thecontinuous level, specifically in the form of samplers. The most closely related approach, whichlearns samplers for SeSamE planners, considers a very simple class of regression samplers witha learned rejection classifier [ 4]. Other methods for learning TAMP samplers use more powerfulgenerative models, but not diffusion models [ 27,28,29,30,31]. 
None of these prior works studythe underlying multitask problem, nor do they operate in a lifelong setting as our method does.Other (less) related work uses reinforcement learning to bridge between the discrete and continuousactions [32, 33] or automatically decomposes motion plans into discrete components [34].7 Conclusion and LimitationsWe proposed a novel formulation of the embodied lifelong learning problem for TAMP systems,which emphasizes realistic evaluation of the robot as it attempts to solve problems over its lifetime.Our solution approach learns a mixture of nested generative models, assigning higher weight tomodels that attain low error on auxiliary prediction tasks. Our experiments on 2D and BEHA VIORdomains demonstrate the ability of our approach to acquire knowledge over a lifetime of planning.Limitations of chosen TAMP methods We adopt a SeSamE strategy for planning, which requiresa faithful environment simulator. In environments that are difficult to simulate (e.g., because theycontain many objects), we would prefer a different TAMP method; we leave this problem for futurework. Moreover, our samplers are conditioned only on the objects involved in the action. Conditioningon all other objects or on the plan skeleton could generate better samples for problem completion.Limitations of the lifelong setting and approach One direction to improve the learning is todevelop better exploration strategies. Our method directly uses the learned models to generatesamples; trading off exploration and exploitation could yield additional benefits. Additionally, ourlifelong training replays all past data. In our experiments, this represents only a small portion ofthe computation, but it would become infeasible in longer deployments, so subsampling strategies—which our approach can trivially adopt—should be employed. Our lifelong learning formulationbuilds upon operations that the robot can already execute. It would also be valuable to study theautonomous learning of newoperations. We encourage future research in this direction.Limitations of the experimental setting We measure number of samples as a proxy for planningtime. In practice, there is a trade-off between simulation cost and sampling cost; expensive diffusionmodels might lead to higher sampling cost than simulation cost in some cases. Relatedly, we havenot yet evaluated our approach on physical robots. The facts that our method builds upon a workingTAMP system and that it is highly data efficient suggests that hardware deployment is probable.8AcknowledgmentsWe thank Nishanth Kumar, Willie McClinton, and Kathryn Le for developing and granting usaccess to the base TAMP system for BEHA VIOR. We also thank Yilun Du for initial discussionsand guidance on diffusion models. The research of J. Mendez-Mendez is funded by an MIT-IBMDistinguished Postdoctoral Fellowship. We gratefully acknowledge support from NSF grant 2214177;from AFOSR grant FA9550-22-1-0249; from ONR MURI grant N00014-22-1-2740; from ARO grantW911NF-23-1-0034; from the MIT-IBM Watson AI Lab; from the MIT Quest for Intelligence; andfrom the Boston Dynamics Artificial Intelligence Institute.References[1]S. Srivastava, C. Li, M. Lingelbach, R. Mart ́ın-Mart ́ın, F. Xia, K. E. Vainio, Z. Lian, C. Gokmen,S. Buch, K. Liu, S. Savarese, H. Gweon, J. Wu, and L. Fei-Fei. BEHA VIOR: Benchmarkfor everyday household activities in virtual, interactive, and ecological environments. InProceedings of the 5th Conference on Robot Learning (CoRL-22) , pages 477–490, 2022.[2]J. A. 
Mendez and E. Eaton. How to reuse and compose knowledge for a lifetime of tasks: Asurvey on continual learning and functional composition. Transactions on Machine LearningResearch (TMLR) , 2023.[3]C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-P ́erez.Integrated task and motion planning. Annual Review of Control, Robotics, and AutonomousSystems , 4(1):265–293, 2021.[4]R. Chitnis, T. Silver, J. B. Tenenbaum, T. Lozano-P ́erez, and L. P. Kaelbling. Learning neuro-symbolic relational transition models for bilevel planning. In 2022 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS-22) , pages 4166–4173, 2022.[5]D. McDermott, M. Ghallab, A. Howe, C. Knoblock, A. Ram, M. Veloso, D. Weld, andD. Wilkins. PDDL: The planning domain definition language. Technical report, Yale Center forComputational Vision and Control. , 1998.[6]J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. In Advances in NeuralInformation Processing Systems 33 (NeurIPS-20) , pages 6840–6851, 2020.[7]A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ran-zato. On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486 ,2019.[8]M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist networks: Thesequential learning problem. In Psychology of Learning and Motivation , volume 24, pages109–165. Elsevier, 1989.[9]J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan,J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, andR. Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the NationalAcademy of Sciences, PNAS , 114(13):3521–3526, 2017.[10] F. Zenke, B. Poole, and S. Ganguli. Continual learning through synaptic intelligence. InProceedings of the 34th International Conference on Machine Learning, ICML-17 , pages3987–3995, 2017.[11] Z. Li and D. Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis andMachine Intelligence, TPAMI , 40(12):2935–2947, 2017.[12] C. V . Nguyen, Y . Li, T. D. Bui, and R. E. Turner. Variational continual learning. In 6thInternational Conference on Learning Representations, ICLR-18 , 2018.9[13] J. Serr `a, D. Sur ́ıs, M. Miron, and A. Karatzoglou. Overcoming catastrophic forgetting with hardattention to the task. In Proceedings of the 35th International Conference on Machine Learning,ICML-18 , pages 4548–4557, 2018.[14] D. Lopez-Paz and M. Ranzato. Gradient episodic memory for continual learning. In Advancesin Neural Information Processing Systems 30, NIPS-17 , pages 6467–6476, 2017.[15] A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny. Efficient lifelong learning withA-GEM. In 7th International Conference on Learning Representations, ICLR-19 , 2019.[16] J. A. Mendez and E. Eaton. Lifelong learning of compositional structures. In 9th InternationalConference on Learning Representations, ICLR-21 , 2021.[17] T. Veniat, L. Denoyer, and M. Ranzato. Efficient continual learning with modular networksand task-driven priors. In 9th International Conference on Learning Representations, ICLR-21 ,2021.[18] O. Ostapenko, P. Rodriguez, M. Caccia, and L. Charlin. Continual learning via local modulecomposition. In Advances in Neural Information Processing Systems 34, NeurIPS-21 , pages30298–30312, 2021.[19] E. Brunskill and L. Li. 
PAC-inspired option discovery in lifelong reinforcement learning.InProceedings of the 31st International Conference on Machine Learning, ICML-14 , pages316–324, 2014.[20] C. Tessler, S. Givony, T. Zahavy, D. Mankowitz, and S. Mannor. A deep hierarchical approach tolifelong learning in Minecraft. In Proceedings of the Thirty-First AAAI Conference on ArtificialIntelligence, AAAI-17 , 2017.[21] J. A. Mendez, H. van Seijen, and E. Eaton. Modular lifelong reinforcement learning via neuralcomposition. In 10th International Conference on Learning Representations, ICLR-22 , 2022.[22] T. Silver, R. Chitnis, J. Tenenbaum, L. P. Kaelbling, and T. Lozano-P ́erez. Learning symbolicoperators for task and motion planning. In 2021 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS-21) , pages 3182–3189, 2021.[23] N. Kumar, W. McClinton, R. Chitnis, T. Silver, T. Lozano-P ́erez, and L. P. Kaelbling. Over-coming the pitfalls of prediction error in operator learning for bilevel planning. arXiv preprintarXiv:2208.07737 , 2022.[24] J. Loula, K. Allen, T. Silver, and J. Tenenbaum. Learning constraint-based planning modelsfrom demonstrations. In 2020 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS-20) , pages 5410–5416, 2020. doi:10.1109/IROS45743.2020.9341535.[25] A. Curtis, T. Silver, J. B. Tenenbaum, T. Lozano-P ́erez, and L. Kaelbling. Discovering state andaction abstractions for generalized task and motion planning. In Proceedings of the Thirty-SixthAAAI Conference on Artificial Intelligence (AAAI-22) , volume 36, pages 5377–5384, 2022.[26] T. Silver, R. Chitnis, N. Kumar, W. McClinton, T. Lozano-P ́erez, L. P. Kaelbling, and J. Tenen-baum. Predicate invention for bilevel planning. In Proceedings of the Thirty-Seventh AAAIConference on Artificial Intelligence (AAAI-23) , 2023.[27] Z. Wang, C. R. Garrett, L. P. Kaelbling, and T. Lozano-P ́erez. Learning compositional modelsof robot skills for task and motion planning. International Journal of Robotics Research , 40(6–7):866–894, jun 2021.[28] Z. Wang, C. R. Garrett, L. P. Kaelbling, and T. Lozano-P ́erez. Active model learning and diverseaction sampling for task and motion planning. In 2018 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS-18) , pages 4107–4114, 2018.10[29] J. Ortiz-Haro, J.-S. Ha, D. Driess, and M. Toussaint. Structured deep generative models forsampling on constraint manifolds in sequential manipulation. In 5th Annual Conference onRobot Learning (ICLR-21) , 2021.[30] B. Kim, L. Kaelbling, and T. Lozano-P ́erez. Guiding search in continuous state-action spacesby learning an action sampler from off-target search experience. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence , 2018.[31] B. Kim, L. Shimanuki, L. P. Kaelbling, and T. Lozano-P ́erez. Representation, learning, andplanning algorithms for geometric task and motion planning. The International Journal ofRobotics Research , 41(2):210–231, 2022.[32] T. Silver, A. Athalye, J. B. Tenenbaum, T. Lozano-P ́erez, and L. P. Kaelbling. Learning neuro-symbolic skills for bilevel planning. In 6th Annual Conference on Robot Learning (CoRL-22) ,2022.[33] G. Liu, J. de Winter, D. Steckelmacher, R. K. Hota, A. Nowe, and B. Vanderborght. Synergistictask and motion planning with reinforcement learning-based non-prehensile actions. IEEERobotics and Automation Letters (RA-L) , 8(5):2764–2771, 2023.[34] J. J. Johnson, A. H. Qureshi, and M. Yip. 
Learning sampling dictionaries for efficient andgeneralizable robot motion planning with transformers. arXiv preprint arXiv:2306.00851 , 2023.11Appendices to“Embodied Lifelong Learning forTask and Motion Planning”Anonymous Author(s)A Visual Depiction of the Nested Lifelong Sampler Learning Approachtime k=1 k=2 k=3 k=4sort books organize office close windows TAMP system Accumulated samplers plan — NavigateTo(burger1, φ1) Grasp(burger1, φ2) NavigateTo(fridge, φ3) PlaceInside(burger1, fridge, φ4) ... 1. reuse samplers (s1, φ1)(s3, φ3)...NavigateTo(burger) NavigateTo(any) store leftovers init — ∀ burger: onTop(burger, counter) goal — ∀ burger: inside(burger, fridge) 2. construct plan 3. evaluate 4. train samplers 5. store samplers Figure A.1: The learning robot will face a sequence of diverse TAMP problems in a true lifelongsetting. It will use its current models to solve each problem as efficiently as possible, and then useany collected data to improve those models for the future. Images captured from BEHA VIOR [1].B Experimental DetailsThis section provides additional details about the experiments in Section 5 in the main paper.B.1 Network Architectures and Diffusion ModelsAll network architectures used throughout our work were simple multi-layer perceptrons with twohidden layers of 256nodes each. For our nested models, the hidden layers were shared between thegenerative model pψand the auxiliary predictor f, and only the output layer was separate—notethat there was no sharing of parameters between specialized pland generic psamplers. All trainingproceeded for 1,000epochs over the training data in mini-batches of size 512(when that manysamples were available, and a single batch otherwise), using an Adam optimizer with the defaulthyperparameters of PyTorch, including a learning rate of 10−3. The one exception was the replaymethod for lifelong training, which used 100epochs of training during model adaptation.To train the diffusion models, for each point (s, φ), a time step t∈[1, T]was randomly chosenforT= 100 , and a sample was drawn from the forward process φt=φ√ ̄αt+ε√1− ̄αt, whereε∼ N (0,I)is standard Gaussian noise and ̄αtis a scaling constant obtained by expanding theexpression of q(·). The loss function then measured how closely εψ(φt,x, t)approximated the truenoise ε:L(s, φ, t, ε ) =∥εψ(φt,x, t)−ε∥. Once trained, the planner sampled from the model bysimulating the reverse process pψ(φT:0|s).12Books Cups Boxes Sticks BlocksFigure B.2: 2D domains used to evaluate our sampler learning approaches. The objects in eachdomain have properties that ensure that samplers must generate diverse candidate action parametersto solve the problems.To process the input composed of three vectors (φt,x, t), the time index twas first processed usingsinusoidal positional embeddings [ 6] of the same dimension as x. Then, the three vectors wereconcatenated into a single input to the network. Since the auxiliary predictor fshared the same baselayers of the network, we used t= 0as a constant input to f.All inputs and outputs were scaled to [0,1], except when the range of the variable was less than 1, inwhich case the given variable was shifted to 0, but not rescaled.B.2 2D Domain Experimental SettingWe now provide the details of our evaluations on the 2D domains.B.2.1 Domain DescriptionsWe created five different 2D domains, specially crafted to require distinct sampling distributionsacross objects. Figure B.2 depicts the simulated domains. 
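As a concrete illustration of the training procedure described in Appendix B.1, the following is a minimal sketch of one training step. The network width and the noise-prediction loss follow the description above; the linear beta schedule used to form the scaling constants, the embedding dimension, the batch reduction of the loss, and all variable names are assumptions made for the example rather than details of the implementation.

```python
import math
import torch
import torch.nn as nn

T = 100                                          # diffusion steps (Appendix B.1)
betas = torch.linspace(1e-4, 0.02, T)            # assumed linear noise schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # \bar{alpha}_t

def time_embedding(t, dim):
    """Sinusoidal embedding of the diffusion step t (t = 0 is reserved for the auxiliary head f)."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half, dtype=torch.float32) / max(half - 1, 1))
    angles = t.float().unsqueeze(-1) * freqs
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

class NoisePredictor(nn.Module):
    """eps_psi(phi_t, x, t): an MLP with two hidden layers of 256 units, as in Appendix B.1."""
    def __init__(self, phi_dim, x_dim, emb_dim=16):
        super().__init__()
        self.emb_dim = emb_dim
        self.net = nn.Sequential(
            nn.Linear(phi_dim + x_dim + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, phi_dim),
        )

    def forward(self, phi_t, x, t):
        return self.net(torch.cat([phi_t, x, time_embedding(t, self.emb_dim)], dim=-1))

def diffusion_training_step(model, optimizer, phi0, x):
    """One gradient step on a batch of observed (state x, successful parameters phi0)."""
    t = torch.randint(0, T, (phi0.shape[0],))                 # random step per sample
    eps = torch.randn_like(phi0)                              # standard Gaussian noise
    ab = alpha_bars[t].unsqueeze(-1)
    phi_t = phi0 * ab.sqrt() + eps * (1.0 - ab).sqrt()        # forward-noised parameters
    loss = (model(phi_t, x, t) - eps).pow(2).mean()           # noise-prediction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```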
As described in Section 2.2 in the main paper, each problem within a domain is a sampled initial state and a goal. In this case, all goals are of the form “place all objects in the container.”
• Books Rectangular books of sides w_book ∈ [0.5, 1], l_book ∈ [1, 1.5] are scattered in a room and must be picked and placed on a rectangular shelf of sides w_shelf ∈ [2, 5], l_shelf ∈ [5, 10].
• Cups Square cups with sides l_cup ∈ [0.5, 1] must be picked by the handle (one specific side) and placed in a cupboard of sides w_cupboard ∈ [2, 5], l_cupboard ∈ [5, 10]. The cupboard is always against a wall.
• Boxes Boxes of sides w_box ∈ [0.5, 1], l_box ∈ [1, 1.5] must be placed on pockets at the extreme ends of a tray of width w_tray ∈ [3, 5] and length l_tray ∈ [11, 13].
• Sticks Long sticks of sides w_stick ∈ [0.5, 1], l_stick ∈ [5, 6] must be placed in a container of sides w_container ∈ [3, 5], l_container ∈ [7, 10].
• Blocks Small square blocks of sides l_block ∈ [0.25, 0.5] must be placed in a square bin of sides l_bin ∈ [4, 6]. While previous problems contain n ∈ [4, 5] objects, these require placing n ∈ [9, 10] blocks.
The robot has three controllers that it can execute: NAVIGATE TO(object), parameterized by the relative target coordinates normalized by the object’s size; PICK(object), parameterized by the length to extend the robot’s gripper and the angle to hold the object at; and PLACE(object, container), parameterized by the gripper extension.
B.2.2 Auxiliary Geometric Signals
This section describes the auxiliary signals we used to train our predictors f, and we later describe how those were used to construct the mixture distributions for our samplers. We used the following auxiliary signals for each form of controller:
• NAVIGATE TO: distance to the nearest point between the robot and the object—in the case of the cup, this signal measured distance to the handle, and in the case of the tray, it measured distance to the nearest pocket; coordinates of the nearest point on the object’s boundaries (in the object’s frame); and target coordinates (in the world frame and in the object’s frame).
• PICK: position of the gripper’s point (in the world frame and in the object’s frame).
• PLACE: center of mass of the object (in the world frame and in the container’s frame).
Note that all these signals measure the intended effects of an action, but cannot measure the actual attained effect, which would require knowledge of all objects in the world. However, by measuring the agent’s accuracy in predicting these signals, we can assess how well-trained it is in the neighboring region of the current state, and use that as a measure of how well its samples may generalize.
B.2.3 Constructing the Mixture Distribution
In these 2D domains, we created mixture distributions over three mixture components: a generic sampler trained on all object types, a specialized sampler for each object type, and a fixed uniform sampler over the parameter space of the controller. We used the inverse of the root mean square error as the assessment of reliability, ρ. For this, we first computed (offline) the average prediction error for random guessing via simulation, and assigned this fixed error value to the uniform sampler. Then, we used this value to normalize prediction errors across the various signals.
B.2.4 Lifelong Training Details
In the lifelong setting, upon facing a new problem, the agent used its mixture sampler to generate samples for any previously seen object type. For unknown types, the agent used a mixture over the generic and the uniform sampler with fixed weights of 0.5.
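One plausible reading of the reliability weighting in Appendix B.2.3 is sketched below: each component's auxiliary prediction error is normalized by the precomputed random-guessing error, inverted to obtain its reliability, and the weights are the normalized reliabilities. The aggregation across individual signals, the numerical floor, and the function and variable names are assumptions for illustration.

```python
def mixture_weights(aux_errors, random_guess_error):
    """aux_errors maps component name -> RMSE of its auxiliary predictions at the
    current (state, parameters). The uniform sampler has no predictor, so it is
    assigned the random-guessing error computed offline via simulation."""
    errors = dict(aux_errors)
    errors["uniform"] = random_guess_error
    rho = {name: 1.0 / max(err / random_guess_error, 1e-8)   # inverse normalized RMSE
           for name, err in errors.items()}
    total = sum(rho.values())
    return {name: r / total for name, r in rho.items()}

# Example: a well-trained specialized sampler (low auxiliary error) dominates the mixture,
# while the uniform component retains a small weight as a fallback.
weights = mixture_weights({"generic": 0.4, "specialized": 0.1}, random_guess_error=1.0)
```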
At the very beginning, the samplers wereinitialized with a uniform distribution over the parameter space.B.2.5 Evaluation ProtocolsIn the offline setting, we generated 50test problems for each of 10trials, with varying random seedscontrolling the sizes of objects and their placements.In the lifelong setting, each trial shuffled the order of the domains using the random seed, andpresented the agent with a sequence of 500problems from each domain. Instead of updating themodels after each problem, which would render most updates very minor, we updated the models atintervals of 50problems, resulting in a total of 10model updates per domain.In both settings, we used Fast-Downward as the skeleton generator, getting a single skeleton for eachproblem (i.e., N= 1) and setting the maximum total number of samples to B= 10,000. Duringsearch, a maximum of M= 100 samples were attempted at any given state before backtracking. Wedid not impose a timeout for these experiments.B.3 BEHA VIOR Domain Experimental SettingNext, we describe the precise details of our lifelong learning evaluation on BEHA VIOR problems.B.3.1 Domain DescriptionsWe considered 10BEHA VIOR problems using the simulated humanoid: boxing books up forstorage, collecting aluminum cans, locking every door, locking every window, organizing file cabinet,polishing furniture, putting leftovers away, re-shelving library books, throwing away leftovers,and unpacking suitcase. Following prior work to adapt BEHA VIOR domains to the TAMP set-ting [ 23], only actions with PLACE ONTOP(object ,surface ),PLACE INSIDE (object ,surface ),PLACE UNDER (object ,surface ),PLACE NEXTTO(object ,target ,surface ),NAVIGATE -TO(object ), and GRASP (object ,surface )controllers were implemented at the continuous level,14while other actions (e.g., CLEAN DUSTY orOPEN) were implemented only at the abstract level andassumed to always succeed if their abstract preconditions held.B.3.2 Auxiliary Geometric SignalsThe auxiliary signals that we used to assess each sampler’s reliability were:• N AVIGATE TO: sine and cosine of the robot’s yaw; distance to target; nearest point on theobject’s bounding box (in the object’s frame); distance to the nearest point on the object’sbounding box; and robot position (in the object’s frame).• G RASP : sine and cosine of the Euler angles of the robot’s gripper; distance of the gripperto the target and the surface; distance of the gripper to the nearest point on the target’s andsurface’s bounding boxes; position, and sine and cosine of the Euler angles of the gripper’spose (in the target’s and surface’s frames).• PLACE···: distance from hand and object to surface; nearest points from hand and objectto surface’s bounding box (in the surface’s frame); nearest point from hand to object’sbounding box (in the object’s frame); distances to these nearest points; positions, and sinesand cosines of Euler angles of the gripper’s and object’s poses (in the surface’s frame).ForPLACE NEXTTO, we additionally computed the relevant distances, nearest points, andrelative coordinates with respect to the target object.Like in the 2D domains, these signals measure only intended effects, but have no means to effectivelymeasure if those effects are attained (e.g., due to collisions with unforeseen objects).B.3.3 Constructing the Mixture DistributionIn BEHA VIOR domains, we only considered the trained specialized and generic samplers as mixturecomponents, since computing the uniform sampler’s error like in the 2D case would have requiredprecomputing the error of 
random predictors via simulation, which was prohibitively expensive forBEHA VIOR. In consequence, we used the root mean square error directly (without normalization) toweight the two mixture components.B.3.4 Lifelong Training DetailsIn the lifelong setting, we only used the learned samplers for exploration when both generic andspecialized samplers had been trained. Whenever a new object type was encountered, hand-craftedsamplers were used. At the start of the robot’s lifetime, all samplers were initialized to hand-crafteddistributions from prior work [ 23]—note that, for BEHA VIOR domains, a uniform distribution, likewe used in the 2D domains, would never complete problems within any reasonable time limit.B.3.5 Evaluation ProtocolsWe repeated the BEHA VIOR experiments over four trials with varying random seeds, which controlledboth the order of BEHA VIOR problem families and the sampled problems within each family. Wetrained the agent sequentially on all ten families in each trial. We presented the robot with 96problems of each family in sequence, and updated models every 48problems.We again used Fast-Downward as the skeleton generator with N= 1. We set the sample bound toB= 1,000, with up to M= 10 samples at each state before backtracking. We did not use a timeoutfor these experiments.C Complete Results of Learning from Fixed Data on 2D DomainsTables C.1 and C.2 show the mean and standard error across trials of the number of samples andnumber of solved problems in the experiments of Section 5.2 in the main paper.15Table C.1: Average ±standard error of the number of samples needed to solve problems in 2Ddomains after training diffusion models from offline data—accompanying table for Figure 4 in themain paper. Variance across trials is very small, demonstrating the statistical significance of ourresults.Sampler choice 50problems 500problems 5,000problems 50,000problemsSpecialized 3970.48±89.042071.54±43.93 1541.26±46.41 1161.38±41.87Generic 2538.38±64.121830.37±48.14 1454.18±49.16 1129.52±40.06Mixture 1904.38±57.821469.85±52.96 1302.50±42.83 1097.14±40.53CD generic 2324.82±48.251720.13±32.23 1376.16±54.48 1333.63±47.58CD mixture 1676.47±36.361426.21±32.79 1297.18±32.88 1134.70±47.31Uniform 3063.76±60.133063.76±60.13 3063.76±60.13 3063.76±60.13NSRTs [4] 8472.66±59.782674.26±42.31 2024.20±55.44 1927.29±54.28Table C.2: Average ±standard error of the number of solved problems in 2D domains after trainingdiffusion models from offline data—accompanying table for Figure 4 in the main paper. Varianceacross trials is very small, demonstrating the statistical significance of our results.Sampler choice 50problems 500problems 5,000problems 50,000problemsSpecialized 76.96±0.98 93.44±0.48 95.32±0.47 96.44±0.36Generic 89.84±0.55 94.32±0.49 95.36±0.43 96.28±0.32Mixture 94.28±0.55 96.32±0.52 95.60±0.31 96.44±0.39CD generic 90.64±0.33 94.48±0.33 95.84±0.36 95.60±0.46CD mixture 94.88±0.32 95.56±0.33 95.72±0.32 96.20±0.41Uniform 92.76±0.54 92.76±0.54 92.76±0.54 92.76±0.54NSRTs [5] 23.16±0.89 89.00±0.38 93.72±0.40 94.32±0.49D Additional ExperimentsIn this section, we present ablations and additional experiments to those presented in Section 5 in themain paper.D.1 Evaluating the Mixture Weight ConstructionWhile the results of our main experiments strongly support the choice of using mixture distributionsfor generating samples for TAMP, we were interested in more clearly understanding the choices ofhow to construct those mixture distributions. 
For this purpose, we implemented and evaluated fivealternative strategies for constructing the mixture distribution from our nested models:•Distance Our first alternative mixture still follows the process of training auxiliary modelsbut, unlike our main implementation, uses only a single auxiliary variable: distance to target.This allows us to verify whether a collection of auxiliary signals is necessary, or a singleone may suffice.•Reconstruction We similarly create an auxiliary model for directly reconstructing the statefeatures x. With this, we check the usefulness of including the action parameters φin theauxiliary tasks.•Uniform This strategy simply uses a uniform mixture distribution. The purpose ofevaluating this technique is to check whether all the gains of mixture distributions comefrom the mere fact of using a mixture, or the weighting plays an important role.•Proportional This cheating method observes the outcomes of the uniform mixture over alltest problems, and computes the mixture weights as the proportion of successful samplesthat were drawn from each mixture component. We include this strategy to check whetherthere may exist some fixed choice of mixture weights that works across all states .16102103104# train tasks1250150017502000avg # samples(a) Number of samples per solved problem102103104# train tasks93949596avg % solved (b) Number of problems solved102103104# train tasks150020002500avg # samplesGeometryDistanceReconstructionProportional (X)Classifier (X)UniformFigure D.3: Alternative choices of the mixture strategy to generate samples for TAMP in the 2Ddomains. Our geometric prediction method is most effective in the low-data setting.•Classifier This additional cheating method also observes the outcomes of the uniformmixture and trains a classifier to generate the mixture weights. This enables us to studywhether it may be possible to train a model to directly choose which sampler to use, as analternative to using auxiliary tasks as an assessment of reliability.The results across the five 2D domains are shown in Figure D.3. While all mixture choices performwell, in the low-data regime, which is crucial in lifelong settings, our geometric predictions lead tothe highest efficiency across all mixture choices. Notably, the uniform mixture solves the largestfraction of problems. This is expected, given that our mixtures include the uniform sampler over theaction space, which is guaranteed to eventually find a successful sample; only the uniform mixtureensures that this sampler is used sufficiently often to guarantee solving most problems (albeit lessefficiently than other samplers). The reconstruction error performs worst of all in the low-data setting,but eventually matches the performance of our geometry-prediction implementation; this validatesthe importance of including the action parameters φas part of the auxiliary signal computation.Neither of the strategies that cheat is especially strong, indicating that 1) fixed mixture weights arenot sufficiently flexible and 2) directly predicting which sampler to use, given the state, is difficult.D.2 Replay Training Matches Full Retraining in Lifelong Setting on 2D DomainsTo validate our use of balanced replay instead of full retraining, requiring 10×fewer training epochsdue to starting from previously trained models, we compared the performance of the two methodson the lifelong sequence of 2D domains from Section 5.3 in the main paper. 
Results in Figure D.4 demonstrate that the two methods perform equivalently.
[Figure D.4 plots the cumulative number of problems solved against the number of samples for Mixture+Replay, Mixture+Retrain, and Mixture+Finetune.]
Figure D.4: Comparison of retraining vs replay on the lifelong learning evaluation in 2D domains. Replay (which is more efficient) matches the performance of retraining over the sequence of problems.
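A minimal sketch of the balanced-replay update compared here is given below; train_on_batch stands in for one gradient step of the diffusion loss (Appendix B.1), and the half-and-half batch composition, the empty-pool guard, and all names are assumptions for illustration.

```python
import random

def replay_update(model, optimizer, z_old, z_new, train_on_batch,
                  epochs=100, batch_size=512):
    """Continue training the previous model on a balanced mix of new and stored data."""
    for _ in range(epochs):
        half = batch_size // 2
        if z_old:
            batch = (random.choices(z_new, k=half) +      # newly collected experience
                     random.choices(z_old, k=half))       # replayed past experience
        else:
            batch = random.choices(z_new, k=batch_size)   # nothing stored yet
        random.shuffle(batch)
        train_on_batch(model, optimizer, batch)
    z_old.extend(z_new)                                   # grow the stored pool for next time
    return model
```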
HYka22IcV6 | Online Learning for Obstacle AvoidanceDavid Snyder†,1,3Meghan Booker1Nathaniel Simon1Wenhan Xia2,3Daniel Suo2,3Elad Hazan2,3Anirudha Majumdar1,3Abstract: We approach the fundamental problem of obstacle avoidance for roboticsystems via the lens of online learning. In contrast to prior work that either assumesworst-case realizations of uncertainty in the environment or a stationary stochasticmodel of uncertainty, we propose a method that is efficient to implement andprovably grants instance-optimality with respect to perturbations of trajectoriesgenerated from an open-loop planner (in the sense of minimizing worst-case regret ).The resulting policy adapts online to realizations of uncertainty and provablycompares well with the best obstacle avoidance policy in hindsight from a richclass of policies. The method is validated in simulation on a dynamical systemenvironment and compared to baseline open-loop planning and robust Hamilton-Jacobi reachability techniques. Further, it is implemented on a hardware examplewhere a quadruped robot traverses a dense obstacle field and encounters inputdisturbances due to time delays, model uncertainty, and dynamics nonlinearities.Keywords: Regret Minimization, Obstacle Avoidance, Online Learning1 IntroductionThe problem of obstacle avoidance in motion planning is a fundamental and challenging task atthe core of robotics and robot safety. Successfully solving the problem requires dealing withenvironments that are inherently uncertain and noisy: a robot must take into account uncertainty— external disturbances and unmodeled effects, for example — in its own dynamics and thoseof other agents in the environment. Approaches for tackling the obstacle avoidance problem inrobotics typically fall under two categories: (i) methods that attempt to construct stochastic models ofuncertainty in the agents’ dynamics and use the resulting probabilistic models for planning or policylearning, and (ii) methods that construct plans that take into account worst-case behavior. In Sec. 2we give a more detailed overview of both classes of approaches.In this paper, we are motivated by Vapnik’s principle: “when solving a given problem, try to avoidsolving a harder problem as an intermediate step. ” Constructing accurate models of disturbances andagent dynamics is perhaps more complicated than the task of obstacle avoidance in motion planning,as practical uncertainties rarely conform to assumptions made by the two classes of approacheshighlighted above. As an example, consider a quadruped robot navigating through an ( a prioriunknown) obstacle field subject to unmodeled dynamics and external disturbances in the inputchannel (Fig. 1). Constructing accurate probabilistic models of disturbance and obstacle variationsis challenging, and may cause the robot to violate safety in out-of-distribution settings. In contrast,worst-case assumptions may lead to extremely conservative behavior. This motivates the need foronline learning methods that adapt to the particular context encountered by the robot.Statement of Contributions. In this work, we pose the problem of obstacle avoidance in a regretminimization framework and build on techniques from non-stochastic control. 
Our primary contribu-tion is a trust-region-based online learning algorithm for the task of obstacle avoidance, coupled withprovable regret bounds that show our obstacle avoidance policy to be comparable to the best policy in†Corresponding Author: dasnyder@princeton.edu.1Intelligent Robot Motion (IRoM) Lab, PrincetonUniversity.2Department of Computer Science, Princeton University.3Google DeepMind.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: A quadruped robot is tasked with traversing a course with densely placed obstacles to a goal. Duringtraversal, the robot is subject to sensor noise, time delays, input-channel disturbances, and nonlinearities in theclosed-loop dynamics, which can render optimistic nominal plans (orange) unsafe. Our online learning control(OLC) algorithm (dashed pink) corrects for this to give a wider margin of error when moving through the course.hindsight from a given class of closed-loop policies. This type of theoretical performance guaranteeis nonstandard, and allows us to flexibly adapt to the behavior of the uncertainty in any instance ofthe obstacle avoidance problem without making a priori assumptions about whether the uncertaintyis stochastic or adversarial. Further, the resulting method is computationally efficient. The methodis applied to dense obstacle environments with complex unmodeled dynamics, and demonstratesimproved performance where open-loop planners and overly-robust methods can respectively struggle.We additionally show the efficacy of our method with hardware demonstrations where a quadrupedrobot has to navigate dense obstacle fields subject to time delays and input-channel disturbances.2 Related WorkEffective motion planning is a central challenge within robotics of continuing and significant inter-est [1, 2, 3]. While planning is well-developed in deterministic settings, robustness remains a majorchallenge in the presence of unmodeled uncertainty. Existing techniques typically fall into one of twocategories: (i) methods that make assumptions on the distribution of uncertainty, or (ii) methods thatassume worst-case disturbances. Below, we set our work within this context and discuss techniquesfrom online learning; we will leverage the latter to develop our framework for obstacle avoidance.Planning under uncertainty. A popular approach to account for uncertainty in motion planningis to assume knowledge of the uncertainty distribution. One early method in this vein utilizeschance constraints to bound the probability of collision under stochastic uncertainty [ 4] and has beensubsequently extended to encompass many sources of stochastic uncertainty in robotics [ 5] - [8].Further development has utilized partially observable Markov decision processes (POMDPs) toaccount for state uncertainty [ 9] - [12]. These approaches are able to provide strong guaranteeson safety, albeit under generally restrictive assumptions on the uncertainty distribution (e.g., i.i.d.Gaussian uncertainty); our approach does not assume knowledge of the distribution of uncertainty,yet provides regret bounds even in the presence of non-Gaussian and non-stationary noise.Recently, learning-based planning techniques relying on domain randomization have demonstratedsignificant empirical success [ 13] - [19] by specifying a distribution of uncertainty over varioussimulation parameters to train robust policies. 
Combining this domain randomization with onlineidentification of uncertain parameters has been proposed [ 20,21]; however, despite these methods’empirical successes, they still rely on real-world environments being well-represented by the dis-tribution of uncertainty used in (relatively extensive) training. By contrast, our approach focuseson settings where it may be challenging to specify the uncertainty, and we do not require expensivetraining of policies in simulation to provide theoretical guarantees in the form of bounded regret.2Reachability-based robust planning. Hamilton-Jacobi (HJ) Reachability-based [ 22,23] and tra-jectory library-based [ 24,25] robust planning techniques assume worst-case realization of uncertainty.As such, they provide adversarial certificates of safety for path planning problems via the constructionof representations (or outer approximations) of the safe and unsafe regions of the state space condi-tional on the robot dynamics, obstacle placement, and disturbance size. This formalism is similarto related results in adaptive control [ 26,27], which prove stability in the face of disturbances forspecific planning controllers, even in the presence of certain non-convex obstacles. HJ methods,which generally suffer from the curse of dimensionality [ 28] (despite the existence of speed-ups incertain settings [ 29]), use the formalism of the differential game [ 30] to provide a “global” notionof safe and unsafe sets [ 31]. In comparison, robust trajectory libraries, which are usually computedusing convex programs [ 32], provide safety guarantees in the form of robust “tubes” or “funnels”[33, 25] that encompass only the nominal trajectory (or hypothesized trajectories) within the space.Our work differs from these methods via guaranteeing “instance-optimality;” namely, the regret mini-mization framework allows us to adapt to the specific nature of observed disturbances. Our method iseffective in both stochastic and adversarial regimes, we do not sacrifice too much performance in“benign” environments to provide guaranteed robust performance in more adversarial cases.Online learning for control. Our work makes significant use of online learning [34] to makeguarantees on regret , which is the difference between the algorithm’s performance and that of thebest policy in hindsight (once disturbances are realized) from a given class of closed-loop policies.Several canonical control-theoretic results have recently been cast as problems in online learning[35, 36], providing interesting generalizations to established control results like the linear-quadraticregulator [ 37,38,39] and H ∞robust control [ 40]. Results in optimal sample complexity [ 41] andsynthesis for unknown linear systems [ 42] illustrate further generalizations of standard control theory.Standard control formulations are efficiently solvable due to a convex objective. However, “higher-level” decision-making tasks like obstacle avoidance often have non-convex objective functions (e.g.,maximizing the distance to the nearest obstacle). Fortunately, some non-convex objectives admit“hidden convexity” — that they can be reformulated (via transformations, relaxations, or conversionsto a dual formulation — see [ 43] for a survey) into equivalent optimization problems that are convex .This allows for efficient solutions (e.g., [ 44,45,46]) to problems that nominally would be hard tosolve (e.g., [47]). 
Our work gives such a formulation for the task of obstacle avoidance.3 Problem Formulation and PreliminariesConsider a discrete-time dynamical system with state ̄xand control input ̄u. A planning oracleOT( ̄x0)takes in an initial state and generates a nominal state trajectory with associated control inputsT={ ̄x0t, ̄u0t−1}Tt=1. Here, x∈Rdxandu∈Rdu. We design a robust obstacle avoidance controllerthat will update the trajectory online to avoid local obstacles. Intuitively, this is a faster “safety innerloop” for the slower trajectory planning stage OT, keeping the agent safe from external features. Foranalytical tractability, we assume that the dynamics of perturbations of the nominal trajectory arediscrete-time linear. Defining xt= ̄xt− ̄x0tandut= ̄ut− ̄u0t, this assumption becomesxt+1=A0xt+But+wt, (1)where wt∈Rdwis a bounded, unknown, possibly-adversarial disturbance1. For many practicalsystems the linear dynamics assumption is reasonable; one example is a control-affine system withfeedback linearization. Additionally, wtcan encompass small, unmodeled nonlinearities. Our task isto construct this “residual" controller generating utto avoid obstacles. As such, OTis the optimistic,goal-oriented planner (in practice, an off-the-shelf algorithm, e.g., [ 48]) and our controller is thesafety mechanism that becomes active only when needed and in a provably effective manner.1The results here are presented for linear time-invariant (LTI) systems. Additionally, we present the casedw=dx, which affords the disturbances the most ‘power’; these results immediately extend to the inclusion ofa non-identity disturbance-to-state matrix and dw≤dxin Eqn. 1.33.1 Safety Controller ObjectiveA controller trying to avoid obstacles needs to maximize the distance to the nearest obstacle, subjectto regularization of state deviations and control usage. Assume a sensor mechanism that reports all“relevant” obstacle positions (e.g., within a given radius of the agent). The optimization problem, fora trajectory of length T, safety horizon of length L≤T, and obstacle positions pjtdenoting the jthsensed obstacle at time t, ismaxA∈ACobs(A),where: (2)Cobs(A) :=TXt=1minτ∈[L]minj∥xAt+τ−pjt∥22− ∥xAt∥2Q− ∥uAt∥2R.Here,Ais the set of online algorithms that choose actions for the controller, and xAtdenotes therealized state trajectory conditioned on actions uAt∼A∈ A. The last two terms represent quadraticstate and action costs; these costs2are very common objectives in the control literature, but serveprimarily here to regularize the solution to the obstacle avoidance task. Note that, though the collisionavoidance objective is relaxed, it remains a nonconvex quadratic penalty term rendered additionallycomplex due to the discrete selection over time and obstacle indices of the minimal-distance obstaclein the first term of Cobs. From here, we model the optimal policy search in the online controlparadigm [ 38]. This allows us to define the regret-based safety metric with respect to the bestachievable performance in hindsight. For a sufficiently powerful policy comparator class, we achievemeaningful guarantees on the safety of the resulting controller.3.2 Regret Framework for Obstacle AvoidanceLeveraging the class of linear dynamic controllers (LDCs) as comparators [ 40,38], we use adisturbance-action controller design, which combines state feedback with residual obstacle-avoidance:ut=Kxt+bt+HXi=1M[i]twt−i. (3)Here, [i]indexes the history length and tdenotes the time index of the decision. 
We rearrange theexpression by moving the state feedback out of ut, defining ̃ut=ut−Kxtand ̃A=A0+BK.Then the system dynamics (Eqn. 1) are equivalentlyxt+1= ̃Axt+B ̃ut+wt. (4)The comparator class Πwill be the class of LDCs parameterized by M; this class has provablyexpressive approximation characteristics [38]. The regret is defined using quantities from Eqn. 2:RegT(A) = supw1,...,w TmaxM∈ΠCobs(M)−Cobs(A). (5)A sublinear bound on Eqn. 5 implies that the adaptive sequence Mtselected by low-regret algorithm Awill perform nearly as well as the best fixed policy M∗in hindsight, for all realizations of uncertaintywithin the system. Thus, sublinear regret bounds establish finite-time (near) optimality; the policyperforms nearly as well as an optimal policy that has a priori knowledge of the realized disturbances.3.3 Trust Region OptimizationTo provide guarantees on regret, the method presented in Sec. 4.2 will construct sequential trustregion optimization instances. A trust region optimization problem [ 49] is a nominally non-convexoptimization problem that admits “hidden convexity”. One can reformulate a trust region instance(see Def. 1) via a convex relaxation in order to solve it efficiently [44, 50].2Here, we omit the fully general LQR costs (time-varying QtandRt) for simplicity of presentation; however,the results we show will also hold for this more general setting.4Definition 1 (Trust Region Instance) .A trust region instance is defined by a tuple (P,p, D)withP∈Rd×d,p∈Rd, and D > 0as the optimization problemmax∥z∥≤DzTPz+pTz. (6)Throughout the remainder of this paper, we will use “trust region instance” to refer interchangeablyto instances of Def. 1 and the implicit, equivalent convex relaxation.4 Methodology, Algorithm, and Regret Bound4.1 Intuitive Decomposition of the OLC Algorithm Control SignalThe intuition of our control scheme is to allow for online, optimization-based feedback to correctfor two key sources of failure in obstacle avoidance: (1) non-robust (risky) nominal plans, and (2)external disturbances. The former can be thought of as errors in planning – that is, they arise whenthe nominal path is followed exactly. Paths that move very close to obstacles or that pass throughthem (e.g., due to sensor noise) would be examples of this problem. The presence of the bias term inthe OLC framework accounts for control input to correct these errors. The latter challenge concernsdeviation from nominal trajectories caused by errors in modeling, time discretization, and physicaldisturbances. The linear feedback term in the OLC framework accounts for these errors.4.2 Theoretical Method and ContributionsWe utilize the “Follow the Perturbed Leader” (FPL) method [ 35] in tandem with an optimizationoracle [ 51,52] from the online convex optimization [ 53] literature. In this area, [ 40] develops amethod to generate adversarial disturbances for linear systems online, and extends the FPL algorithmto the setting with memory. Further, [ 54] gives regret bounds for many game scenarios in which theplayers have varying cost landscapes. Several contributions of this paper lie in reductions : we showthat optimizing Eqn. 8 is equivalent to finding the equilibrium solution of a general-sum two-playergame for which every control-player instance is a trust-region problem, and that both players havelow-regret algorithms to solve for the optimal policies. This allows us to use the results in [ 54].Additionally, we show that extension to the multi-step setting in Eqn. 
9 remains a trust region instancein a ‘lifted’ game, allowing us to use the FPL results ( [52, 51, 40]).4.3 Algorithm Exposition and Regret BoundWe formulate the obstacle avoidance controller as an online non-convex FPL algorithm in Alg. 1.At time t, the agent generates a control input ̃utvia Eqn. 3 by playing M[1:H]t∈ { ̃M}. The statedynamics are then propagated to reveal the new state xt+1, and the realized wtis reconstructed inhindsight from the state. The robot simultaneously uses its sensors to observe local obstacle positions.The key conceptual step is then the following: given past reconstructed disturbances and obstaclelocations, one can construct a instantaneous reward function (Eqn. 8) at time t+ 1. This is a functionof acounterfactual set of gains { ̃M}, where x ̃Mt+1and ̃u ̃Mtcorrespond to the state and control inputthatwould have resulted from applying ̃Mgiven the realized disturbances, obstacle locations, andthe initial state x0. The key algorithmic step is to solve for the optimum M[1:H]t+1∈ { ̃M}in Eqn. 9,which is the Follow-the-Perturbed-Leader component of the algorithm ( ̃M•P0is the perturbation).The resulting sublinear regret bound is given in Thm. 2.Theorem 2 (Regret Bound for Online Obstacle Avoidance) .Consider an instance of Alg. 1, with Alg.2 (see Supp. B) acting as a subroutine optimizing Eqn. 9. Then the regret attained by Memory FPLwill be ̃O(poly(L)√T), (7)where Lis a measure of the instance complexity.5Algorithm 1 Online Learning Control (OLC) for Obstacle AvoidanceInput : Observed obstacle positions {pj0}Kj=1, history length H=O(logT).Input : Full horizon T, algorithm parameters {η, ε, λ}, initial state x0.Input : Open-loop plan: ̄uotfort= 1, ..., T .Initialize : Closed-loop correction M[1:H]0 , fixed perturbation P0∼Exp(η)du×Hdw.Initialize : Play randomly for t={0, ..., H −1}, observe rewards, states, noises, and obstacles.fort=Huntil T−1doPlayM[1:H]t , and observe state xt+1and obstacles {pjt+1}j∈[k]t+1.Reconstruct disturbance wtusing observed xt+1.Construct the reward function in ̃M:lt+1( ̃M) = minj∈[k]n∥x ̃Mt+1−pjt+1∥22o− ∥x ̃Mt+1∥2Q− ∥ ̃u ̃Mt∥2R. (8)Receive realized reward lt+1(M[1:H]t).Solve for ‘perturbed leading policy’ M[1:H]t+1 as the solution to:arg max∥ ̃M∥≤DM(t+1Xτ=1lτ( ̃M) +λ( ̃M•P0)). (9)end for5 ExperimentsWe demonstrate the effectiveness of our method in simulated and physical environments. Thesimulated environment in Sec. 5.1 considers a 2D racing problem. The hardware experimentsin Sec. 5.2 are conducted on a Go1 quadruped robot [ 55] avoiding obstacles in the presence ofunmodeled nonlinear dynamics, sensor noise, and time delays, as shown in Fig. 1. In both settings,we demonstrate the benefits of our approach as compared to baselines that use, resp., RRT*/A* (withreplanning) for ‘optimistic’ path generation, and HJ reachability for robust planning. We illustratehow our method acts as an adaptive intermediary, providing a computationally tractable algorithmthat nonetheless maintains improved safety properties relative to purely optimistic planners.5.1 Simulation ExperimentsCenterline Experiment overview. In the simulated environment, a racing vehicle (using doubleintegrator dynamics) observes all obstacles within a fixed sensor range (Fig. 2). The nominal dynamicsare perturbed by disturbances with varying levels of structure (described further below). A “centerline”environment shown in Fig. 2 is utilized to demonstrate key safety and adaptivity criteria. 
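For concreteness, the core computations that Alg. 1 performs at every step can be sketched as follows: applying the disturbance-action control law of Eqn. 3, reconstructing the realized disturbance from Eqn. 1, and evaluating the counterfactual reward of Eqn. 8 for a candidate gain. The roll-out of counterfactual states under a candidate M and the perturbed-leader optimization of Eqn. 9 are omitted, and all names and shapes are assumptions for illustration.

```python
import numpy as np

def control_input(K, b, M, x, w_hist):
    """Eqn. 3: u_t = K x_t + b_t + sum_{i=1}^{H} M^[i] w_{t-i}, with w_hist = [w_{t-1}, ..., w_{t-H}]."""
    u = K @ x + b
    for M_i, w in zip(M, w_hist):
        u = u + M_i @ w
    return u

def reconstruct_disturbance(A, B, x, u, x_next):
    """Recover w_t in hindsight from Eqn. 1: w_t = x_{t+1} - A x_t - B u_t."""
    return x_next - A @ x - B @ u

def counterfactual_reward(x_cf, u_cf, obstacles, Q, R):
    """Eqn. 8, evaluated at the state and input a candidate gain would have produced."""
    clearance = min(float(np.sum((x_cf - p) ** 2)) for p in obstacles)
    return clearance - float(x_cf @ Q @ x_cf) - float(u_cf @ R @ u_cf)
```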
The nominaltrajectory is fixed to be a straight path through obstacles, requiring the obstacle avoidance behavior toemerge via the online adaptation of Alg. 1. For speed, the implementation of the environment andalgorithm is set up in JAX [ 56], using Deluca [ 57] as a framework for the control-theoretic simulationenvironment. Each simulation takes 1-2 minutes to run on a single CPU with H= 10 ,T≈1000 .Comparison with baselines. We utilize a HJ reachability planner [ 58] to generate robust trajecto-ries, as well as a kinodynamic RRT∗implementation [ 59] to generate “optimistic” plans (replannedat 1 Hz, with a path-following controller applied at 5 Hz). We demonstrate two key behaviors: (1)better performance than HJ methods with respect to LQR costs when disturbances are “benign;” (2)better performance than RRT∗when disturbances are adversarial. For each algorithm both stochastic[‘Rand’] and non-stochastic (sinusoidal [‘Sin’] and adversarial [‘Adv’]) disturbance profiles are tested,and metrics for both the safety (number of collisions) and performance (LQR state and input costs)are collected for runs spanning 50 centerline obstacles. Results are presented for each algorithm anddisturbance profile in Table 1. For space considerations, figures illustrating trajectories of simulatedruns referenced below are deferred to Supp. D.6Figure 2: Centerline environment. Thenominal path passes vertically throughobstacles; the sensor range is denotedby the blue shaded region.PlanDist.Rand Sin AdvRRT∗0.27±0.05 —- —-0.60 1 .00 1 .00OLC 0.51±0.09 0.49±0.13 1 .57±0.65(ours) 0.06 0 .04 0 .26HJ Plan 0.55±0.05 0 .59±0.14 1.01±0.030.00 0 .00 0 .00Table 1: Planner performance for each disturbance type. Costs aregiven in terms of linear-quadratic (LQ) costs (top) and fraction offailures (bottom). Best-performing cases for each column are bold .LQ costs are only computed for successful passes; as such, RRT∗isintentionally blank for two entries.Several aspects of Table 1 reflect the expected behavior of each algorithm. RRT∗follows efficientpaths, but fails to handle disturbances effectively, with a high collision rate. Second, HJ paths arerobust, but performance improvements are limited as ‘adversariality’ is reduced, and the method doesnot adapt to disturbance structures. In particular, the sinusoid case incentivizes the racer to pass theobstacles on a specific side; the HJ planner passes on each side in equal proportion In contrast, OLCadapts to the structure of the sinusoidal disturbances to take the “easier route” with lower cost.As shown in Table 1, OLC significantly reduces collisions across all disturbance profiles relativeto RRT∗planning, while also reducing control usage and state costs relative to HJ planners. Thisintermediate solution also provides computational speed-up to HJ, allowing it to be run moreefficiently online and allow for feedback on sensory information (as opposed to the privileged mapinformation that HJ requires in Sec. 5.2). Finally, OLC was tested on several other environments,including with dynamic obstacles – for further details, see Supp. D.5.2 Hardware ExperimentsExperiment overview. For our hardware experiments, we use the Unitree Go1 quadruped robot,shown in Fig. 1. The robot is equipped with LIDAR and an inertial measurement unit (IMU) whichenable obstacle detection and localization using LIO-SAM [ 60]. The robot’s task is to traverse a densecourse of cylindrical obstacles, while encountering time delays and residual, nonlinear dynamics. 
The plant is modeled as a Dubins' car; the equations of motion are included in Supp. C. The high-level inputs are then translated to joint-level torque commands by the robot's low-level controller. We note that the noisy estimates on hardware of both the state and the relative obstacle positions will further demonstrate the OLC algorithm's robustness to noise in the inputs to Alg. 1, including in the reconstructed disturbances and in the estimated state.

Controller architecture. All sensing and computation is performed onboard the robot. We use a Euclidean clustering algorithm for obstacle detection [61] based on the LIDAR measurements. This detection algorithm runs onboard the robot and provides updated obstacle locations to the controller in real time (not all obstacles are initially sensed due to occlusions and larger distances). At each replanning step, the robot generates a nominal set of waypoints from the estimated state and detected obstacles using an A* algorithm [62] over a discretization of the traversable space, which is converted to a continuous path by a down-sampled smoothing spline. We then wrap the additional OLC feedback controller (H = 5, T = 40) of the form presented in Eqn. 3 using Alg. 1. The use of A* in lieu of RRT* was undertaken to ensure smoother resulting paths for the quadruped. We justify the change by noting that, due to discretization, A* should be at least as robust as RRT* (using an equivalent margin), thereby reducing the apparent benefits of the OLC module.

Baselines. We compare OLC with the baselines used for the simulated experiments (Sec. 5.1), substituting A* for RRT* to improve speed and stability of the nominal planner. Concretely, the first baseline uses A* in a receding horizon re-planning scheme with the nominal dynamics model and obstacle padding of 0.25 m to account for the Go1's size. The robot executes the control inputs provided by the planner, replanning at 1 Hz and taking actions at 4 Hz. The second baseline is a robust controller that is generated using Hamilton-Jacobi (HJ) reachability. This baseline is provided an a priori map of the obstacles, and uses a provided model of the disturbances, taking into account velocity-dependent effects. This second baseline acts as a safety 'oracle,' with optimal trajectories synthesized offline from the true map. As before, we demonstrate that our algorithm improves the safety of a pure A* (with margin) while providing shorter and more intuitive paths than HJ methods.

Planner               Path Length (m)   Max Deviation (m)   Collisions
A*                    10.61 ± 0.40      0.62 ± 0.33         12
Online (OLC)          10.59 ± 0.41      0.63 ± 0.28         7
HJ Plan [simulated]   11.13 ± 0.45      1.16 ± 0.57         0

Table 2: Planner performance along several metrics for the hardware examples. We observe that our method ('Online') performs very similarly to A* in terms of path efficiency while reducing collisions by over 40%. Note that HJ methods choose significantly longer paths to account for worst-case uncertainty; HJ was not run on hardware and the optimal paths were calculated offline using the true obstacle positions.

Results. We run physical experiments using A* and OLC for 21 runs each, consisting of three trials over seven obstacle layouts (see the supplemental video and Supp. C for additional details). In each instance, the robot is required to traverse 10 m forwards. For each layout, the optimal HJ path is analyzed offline; for all runs, a 'crash' is defined as any contact made with an obstacle. As shown in Table 2, HJ methods choose an overly-conservative route that is generally safe but inefficient.
Conversely, A* – even with obstacle padding of 0.25m – does not sufficiently accountfor the disturbances, yielding a significant rate of crashes. However, in contrast to HJ, A∗andOLC take significantly shorter paths (note that the shortest possible path is at minimum 10m inlength). However, our algorithm reduces the number of collisions by nearly half versus the naiveobstacle-padded A∗approach, a result significant at p= 0.1using a Boschloo exact test (see Supp. C).Importantly, while OLC was run onboard the robot in real time, the HJ methods required severalseconds for each computation of the backward reachable set, and could not be run online.6 Conclusion and LimitationsWe develop a regret minimization framework for the problem of online obstacle avoidance. In contrastto prior approaches that either assume worst-case realization of uncertainty or a given stochasticmodel, we utilize techniques from online learning in order to adaptively react to disturbances andobstacles. To this end, we prove regret bounds that demonstrate that our obstacle avoidance policy iscomparable to the best policy in hindsight from a given class of closed-loop policies. Simulation andhardware experiments demonstrate that our approach compares favorably with baselines in terms ofcomputational efficiency and performance with varying disturbances and obstacle behaviors.Some limitations that we hope to address in future work include the limited classes of applicabledynamics (though our hardware experiments apply the method to time-varying linear dynamics)and obstacle geometries. Additionally, the cost function does not have multi-time-step lookahead(i.e., MPC-like). Additional lookahead would yield better foresight in the planning stage and likelyimprove performance. Finally, because the algorithm relies on solutions to optimization problemsthat run real-time, ensuring stability of the optimization procedure and automating selection ofhyperparameters (those used are satisficing but likely suboptimal) would be useful extensions thatmay further improve the runtime performance.8AcknowledgmentsThis work was partially funded by the NSF Career Award No. 2044149. Additionally, this material isbased upon work supported by the National Science Foundation Graduate Research Fellowship underGrant No. DGE-2039656. Any opinion, findings, and conclusions or recommendations expressedin this material are those of the authors(s) and do not necessarily reflect the views of the NationalScience Foundation.References[1] S. M. LaValle. Planning Algorithms . Cambridge University Press, 2006.[2]J. D. Gammell and M. P. Strub. Asymptotically optimal sampling-based motion planningmethods. Annual Review of Control, Robotics, and Autonomous Systems , 4:295–318, 2021.[3]Z. Kingston, M. Moll, and L. E. Kavraki. Sampling-based methods for motion planning withconstraints. Annual review of control, robotics, and autonomous systems , 1:159–185, 2018.[4]A. Charnes and W. W. Cooper. Chance-Constrained Programming. Management Science , 6(1):73–79, Oct. 1959. ISSN 0025-1909. doi:10.1287/mnsc.6.1.73. URL https://doi.org/10.1287/mnsc.6.1.73 .[5]L. Blackmore, H. Li, and B. Williams. A probabilistic approach to optimal robust path planningwith obstacles. In in Proceedings of the American Control Conference , 2006.[6]N. E. Du Toit and J. W. Burdick. Probabilistic Collision Checking With Chance Constraints.IEEE Transactions on Robotics , 27(4):809–815, Aug. 2011. ISSN 1941-0468. doi:10.1109/TRO.2011.2116190. Conference Name: IEEE Transactions on Robotics.[7]L. 
Janson, E. Schmerling, and M. Pavone. Monte Carlo motion planning for robot trajectoryoptimization under uncertainty. In Robotics Research: Volume 2 , pages 343–361. Springer,2017.[8]S. Jha, V . Raman, D. Sadigh, and S. A. Seshia. Safe autonomy under perception uncertaintyusing chance-constrained temporal logic. Journal of Automated Reasoning , 60:43–62, 2018.[9]L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observablestochastic domains. Artificial Intelligence , 101(1-2):99–134, May 1998. ISSN 00043702. doi:10.1016/S0004-3702(98)00023-X. URL https://linkinghub.elsevier.com/retrieve/pii/S000437029800023X .[10] S. Prentice and N. Roy. The Belief Roadmap: Efficient Planning in Belief Space by Factoringthe Covariance. The International Journal of Robotics Research , 28(11-12):1448–1465, Nov.2009. ISSN 0278-3649. doi:10.1177/0278364909341659. URL https://doi.org/10.1177/0278364909341659 . Publisher: SAGE Publications Ltd STM.[11] R. Platt, L. Kaelbling, T. Lozano-Perez, and R. Tedrake. Non-Gaussian belief space plan-ning: Correctness and complexity. In 2012 IEEE International Conference on Robotics andAutomation , pages 4711–4717, May 2012. doi:10.1109/ICRA.2012.6225223. ISSN: 1050-4729.[12] H. Kurniawati. Partially observable markov decision processes and robotics. Annual Review ofControl, Robotics, and Autonomous Systems , 5:253–277, 2022.[13] F. Muratore, F. Ramos, G. Turk, W. Yu, M. Gienger, and J. Peters. Robot learning fromrandomized simulations: A review. arXiv preprint arXiv:2111.00956 , 2021.[14] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomizationfor transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJinternational conference on intelligent robots and systems (IROS) , pages 23–30. IEEE, 2017.9[15] I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino,M. Plappert, G. Powell, R. Ribas, et al. Solving rubik’s cube with a robot hand. arXiv preprintarXiv:1910.07113 , 2019.[16] K.-C. Hsu, A. Z. Ren, D. P. Nguyen, A. Majumdar, and J. F. Fisac. Sim-to-lab-to-real: Safereinforcement learning with shielding and generalization guarantees. Artificial Intelligence , 314:103811, 2023.[17] F. Sadeghi and S. Levine. Cad2rl: Real single-image flight without a single real image. arXivpreprint arXiv:1611.04201 , 2016.[18] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of roboticcontrol with dynamics randomization. In 2018 IEEE international conference on robotics andautomation (ICRA) , pages 3803–3810. IEEE, 2018.[19] G. B. Margolis and P. Agrawal. Walk these ways: Tuning robot control for generalization withmultiplicity of behavior. arXiv preprint arXiv:2212.03238 , 2022.[20] A. Kumar, Z. Fu, D. Pathak, and J. Malik. Rma: Rapid motor adaptation for legged robots.arXiv preprint arXiv:2107.04034 , 2021.[21] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning robustperceptive locomotion for quadrupedal robots in the wild. Science Robotics , 7(62):eabk2822,2022.[22] I. M. Mitchell and C. J. Tomlin. Overapproximating Reachable Sets by Hamilton-JacobiProjections. Journal of Scientific Computing , 19(1):323–346, Dec. 2003. ISSN 1573-7691.doi:10.1023/A:1025364227563. URL https://doi.org/10.1023/A:1025364227563 .[23] S. Bansal, M. Chen, S. Herbert, and C. J. Tomlin. Hamilton-Jacobi reachability: A briefoverview and recent advances. 
In 2017 IEEE 56th Annual Conference on Decision and Control(CDC) , pages 2242–2253, Melbourne, Australia, Dec. 2017. IEEE Press. doi:10.1109/CDC.2017.8263977. URL https://doi.org/10.1109/CDC.2017.8263977 .[24] S. Singh, A. Majumdar, J.-J. Slotine, and M. Pavone. Robust online motion planning viacontraction theory and convex optimization. In 2017 IEEE International Conference on Roboticsand Automation (ICRA) , pages 5883–5890, May 2017. doi:10.1109/ICRA.2017.7989693.[25] A. Majumdar and R. Tedrake. Funnel libraries for real-time robust feedback motionplanning. The International Journal of Robotics Research , 36(8):947–982, 2017. doi:10.1177/0278364917712421. URL https://doi.org/10.1177/0278364917712421 .[26] C. K. Verginis and D. V . Dimarogonas. Adaptive robot navigation with collision avoidancesubject to 2nd-order uncertain dynamics. Automatica , 123:109303, Jan. 2021. ISSN 0005-1098. doi:10.1016/j.automatica.2020.109303. URL https://www.sciencedirect.com/science/article/pii/S0005109820305021 .[27] C. K. Verginis, D. V . Dimarogonas, and L. E. Kavraki. KDF: Kinodynamic Motion Planning viaGeometric Sampling-Based Algorithms and Funnel Control. IEEE Transactions on Robotics ,39(2):978–997, Apr. 2023. ISSN 1941-0468. doi:10.1109/TRO.2022.3208502. ConferenceName: IEEE Transactions on Robotics.[28] J. Darbon and S. Osher. Algorithms for overcoming the curse of dimensionality for certainHamilton-Jacobi equations arising in control theory and elsewhere. Research in the Mathemati-cal Sciences , 3(1):19, Sept. 2016. ISSN 2197-9847. doi:10.1186/s40687-016-0068-7. URLhttps://doi.org/10.1186/s40687-016-0068-7 .10[29] S. L. Herbert, M. Chen, S. Han, S. Bansal, J. F. Fisac, and C. J. Tomlin. FaSTrack: Amodular framework for fast and guaranteed safe motion planning. In 2017 IEEE 56th AnnualConference on Decision and Control (CDC) , pages 1517–1522, Melbourne, Australia, Dec.2017. IEEE Press. doi:10.1109/CDC.2017.8263867. URL https://doi.org/10.1109/CDC.2017.8263867 .[30] R. Isaacs. Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit,Control and Optimization . Dover books on mathematics. Wiley, 1965. ISBN 978-0-471-42860-2.URL https://books.google.com/books?id=gtlQAAAAMAAJ .[31] I. Mitchell, A. Bayen, and C. Tomlin. A time-dependent Hamilton-Jacobi formulation ofreachable sets for continuous dynamic games. IEEE Transactions on Automatic Control , 50(7):947–957, July 2005. ISSN 1558-2523. doi:10.1109/TAC.2005.851439. Conference Name:IEEE Transactions on Automatic Control.[32] P. A. Parrilo. Semidefinite programming relaxations for semialgebraic problems. MathematicalProgramming , 96(2):293–320, May 2003. ISSN 1436-4646. doi:10.1007/s10107-003-0387-5.URL https://doi.org/10.1007/s10107-003-0387-5 .[33] R. R. Burridge, A. A. Rizzi, and D. E. Koditschek. Sequential Composition of DynamicallyDexterous Robot Behaviors. The International Journal of Robotics Research , 18(6):534–555,June 1999. ISSN 0278-3649. doi:10.1177/02783649922066385. URL https://doi.org/10.1177/02783649922066385 . Publisher: SAGE Publications Ltd STM.[34] E. Hazan et al. Introduction to online convex optimization. Foundations and Trends ®inOptimization , 2(3-4):157–325, 2016.[35] J. Hannan. Approximation to Bayes Risk in Repeated Play. In Approximation to Bayes Riskin Repeated Play , pages 97–140. Princeton University Press, 1958. ISBN 978-1-4008-8215-1.doi:10.1515/9781400882151-006. URL https://www.degruyter.com/document/doi/10.1515/9781400882151-006/html?lang=en .[36] N. Cesa-Bianchi and G. Lugosi. 
Prediction, Learning, and Games . Cambridge University Press,2006. doi:10.1017/CBO9780511546921.[37] A. Cohen, A. Hasidim, T. Koren, N. Lazic, Y . Mansour, and K. Talwar. Online Linear QuadraticControl. In Proceedings of the 35th International Conference on Machine Learning , pages 1029–1038. PMLR, July 2018. URL https://proceedings.mlr.press/v80/cohen18b.html .ISSN: 2640-3498.[38] N. Agarwal, B. Bullins, E. Hazan, S. Kakade, and K. Singh. Online Control with AdversarialDisturbances. In Proceedings of the 36th International Conference on Machine Learning , pages111–119. PMLR, May 2019. URL https://proceedings.mlr.press/v97/agarwal19c.html . ISSN: 2640-3498.[39] N. Agarwal, E. Hazan, and K. Singh. Logarithmic Regret for Online Con-trol. In Advances in Neural Information Processing Systems , volume 32. Cur-ran Associates, Inc., 2019. URL https://papers.nips.cc/paper/2019/hash/78719f11fa2df9917de3110133506521-Abstract.html .[40] U. Ghai, D. Snyder, A. Majumdar, and E. Hazan. Generating Adversarial Disturbances forController Verification. In Proceedings of the 3rd Conference on Learning for Dynamics andControl , pages 1192–1204. PMLR, May 2021. URL https://proceedings.mlr.press/v144/ghai21a.html . ISSN: 2640-3498.[41] S. Dean, H. Mania, N. Matni, B. Recht, and S. Tu. On the Sample Complexity of theLinear Quadratic Regulator. Foundations of Computational Mathematics , 20(4):633–679,Aug. 2020. ISSN 1615-3375. URL https://resolver.caltech.edu/CaltechAUTHORS:20190806-130716007 . Number: 4 Publisher: Springer.11[42] E. Hazan, S. Kakade, and K. Singh. The Nonstochastic Control Problem. In Proceedings of the31st International Conference on Algorithmic Learning Theory , pages 408–421. PMLR, Jan.2020. URL https://proceedings.mlr.press/v117/hazan20a.html . ISSN: 2640-3498.[43] Y . Xia. A Survey of Hidden Convex Optimization. Journal of the Operations Research Societyof China , 8:1–28, Jan. 2020. doi:10.1007/s40305-019-00286-5.[44] A. Ben-Tal and M. Teboulle. Hidden convexity in some nonconvex quadratically constrainedquadratic programming. Mathematical Programming , 72(1):51–63, Jan. 1996. ISSN 1436-4646.doi:10.1007/BF02592331. URL https://doi.org/10.1007/BF02592331 .[45] H. Konno and T. Kuno. Multiplicative Programming Problems. In R. Horst and P. M. Pardalos,editors, Handbook of Global Optimization , Nonconvex Optimization and Its Applications,pages 369–405. Springer US, Boston, MA, 1995. ISBN 978-1-4615-2025-2. doi:10.1007/978-1-4615-2025-2_8. URL https://doi.org/10.1007/978-1-4615-2025-2_8 .[46] J. V oss, M. Belkin, and L. Rademacher. The hidden convexity of spectral clustering. InProceedings of the Thirtieth AAAI Conference on Artificial Intelligence , AAAI’16, pages2108–2114, Phoenix, Arizona, Feb. 2016. AAAI Press.[47] T. Matsui. NP-hardness of linear multiplicative programming and related problems. Journal ofGlobal Optimization , 9(2):113–119, Sept. 1996. ISSN 1573-2916. doi:10.1007/BF00121658.URL https://doi.org/10.1007/BF00121658 .[48] I. A. ̧ Sucan, M. Moll, and L. E. Kavraki. The Open Motion Planning Library. IEEE Robotics& Automation Magazine , 19(4):72–82, December 2012. doi:10.1109/MRA.2012.2205651.https://ompl.kavrakilab.org .[49] D. C. Sorensen. Newton’s Method with a Model Trust-Region Modification, Sept. 1980. URLhttps://digital.library.unt.edu/ark:/67531/metadc283479/ . Number: ANL-80-106 Publisher: Argonne National Laboratory.[50] E. Hazan and T. Koren. A linear-time algorithm for trust region problems. MathematicalProgramming , 158(1):363–381, 2016.[51] N. Agarwal, A. Gonen, and E. 
Hazan. Learning in non-convex games with an optimizationoracle. In Conference on Learning Theory , pages 18–29. PMLR, 2019.[52] A. S. Suggala and P. Netrapalli. Online Non-Convex Learning: Following the Perturbed Leaderis Optimal. In Proceedings of the 31st International Conference on Algorithmic LearningTheory , pages 845–861. PMLR, Jan. 2020. URL https://proceedings.mlr.press/v117/suggala20a.html . ISSN: 2640-3498.[53] E. Hazan. Introduction to Online Convex Optimization . Dec. 2021. URL http://arxiv.org/abs/1909.05207 . arXiv: 1909.05207.[54] E. Hazan. Approximate Convex Optimization by Online Game Playing. arXiv:cs/0610119 , Oct.2006. URL http://arxiv.org/abs/cs/0610119 . arXiv: cs/0610119.[55] Unitree. unitreerobotics, 2023. URL https://github.com/unitreerobotics .[56] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke,J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations ofPython+NumPy programs, 2018. URL http://github.com/google/jax .[57] P. Gradu, J. Hallman, D. Suo, A. Yu, N. Agarwal, U. Ghai, K. Singh, C. Zhang, A. Majumdar,and E. Hazan. Deluca–a differentiable control library: Environments, methods, and benchmark-ing. Differentiable Computer Vision, Graphics, and Physics in Machine Learning (Neurips2020 Workshop) , 2020.12[58] S. ASL. Hj-reachability in jax. https://github.com/StanfordASL/hj_reachability ,2021.[59] H. Zhou. Path planning. https://github.com/zhm-real/PathPlanning , 2020.[60] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and R. Daniela. Lio-sam: Tightly-coupledlidar inertial odometry via smoothing and mapping. In IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 5135–5142. IEEE, 2020.[61] S. Sun. Lidar obstacle detector, 12 2021. URL https://github.com/SS47816/lidar_obstacle_detector .[62] P. Hart, N. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimumcost paths. IEEE Transactions on Systems Science and Cybernetics , 4(2):100–107, 1968.doi:10.1109/tssc.1968.300136. URL https://doi.org/10.1109/tssc.1968.300136 .[63] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predic-tors. Information and Computation , 132(1):1–63, 1997. ISSN 0890-5401. doi:https://doi.org/10.1006/inco.1996.2612. URL https://www.sciencedirect.com/science/article/pii/S0890540196926127 .13A Full Regret ProofA.0 Outline of Proof1. Reduce path planner to linear dynamical systems model2. Demonstrate that a general instance of Alg. 2 is a trust region problem3. Reduce Eqn. 9 in Alg. 1 to an instance of Alg. 24. Justify necessary analogous quantities in our problem to those of the proofs in [40, 38]5.Apply the results for Nonconvex FPL with Memory from [ 40] to Alg. 1 to obtain the regretboundA.1 Reduction of Path Planned case to Standard Controls caseAssume that the planner devises a nominal path (denoted with a ( ̄·)0notation) in coordinates xandinputs u: so the path Pis fully specified as P={ ̄x0t, ̄u0t}Tt=0. Assume that the path is chosen so thatat every xon or near the path, the following dynamics hold around perturbations of the path:xt− ̄x0t=A(xt−1− ̄x0t−1) +B(ut−1− ̄u0t−1) +Dwt−1. (10)Using this change of coordinates, we can essentially negate the path and study the relevant perturbationdynamics δxt:=xt− ̄x0tandδut:=ut− ̄u0t, we recover the desired equation:δxt=Aδxt−1+Bδut−1+Dwt−1. (11)For shorthand, we will define x:=δxandu=δuto ease exposition, remembering that theyrepresent perturbations from the nominal path. 
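To make the change of coordinates explicit, a small sketch follows (the matrices A, B, D and the nominal plan are placeholders; this is not the released code):

```python
# Sketch of the coordinate change in Eqns. 10-11: subtracting the nominal plan
# (x_bar, u_bar) turns the dynamics into perturbation dynamics driven by w.
import jax.numpy as jnp

def to_perturbation(x, u, x_bar, u_bar):
    # delta_x = x - x_bar^0,  delta_u = u - u_bar^0
    return x - x_bar, u - u_bar

def perturbation_step(A, B, D, dx_prev, du_prev, w_prev):
    # Eqn. 11: delta_x_t = A delta_x_{t-1} + B delta_u_{t-1} + D w_{t-1}
    return A @ dx_prev + B @ du_prev + D @ w_prev
```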
Intuitively, this is a reasonable model for ‘quasi-static’systems (e.g., a drone or car or aircraft using path planning for non-aggressive maneuvers).A.2 Algorithm 2 is a Trust Region SolverThe proof that Alg. 2 is a trust region solver is given in Supp. B.1- B.3.A.3 Algorithm 2 Solves Equation 9The proof that Eqn. 9 is a special instance of admissible arguments to Alg. 2 is shown in Supp. B.4.A.4 Technical NotesA.4.1 Continuity and Conditioning ParametersWe begin with an analysis of the Lipschitz constant for the approximate cost functions (this willfollow a similar path to [40]).First, note that the diameter of the decision set is 2DMand that the gradient of the quadratic costabove is ∇mlt= (P+PT)m+p. As such,L:= maxm,t{∥∇ mlt(m)∥∞}≤maxm,t{(∥P∥1+∥P∥∞)DM+∥p∥∞}≤2HdwRD+RWe consider as well a bound on the conditioning number of the optimization problem. Becausethe size of the optimization grows linearly in time, the condition number grows at most linearly aswell. Therefore, the run-time of the algorithm is polynomial (neither the condition number nor thedimension grows too rapidly).Finally, we note bounds on the elements of Pandpin the trust region instance. The bounds on costs,states, inputs, and disturbances together imply that the elements of Ptare bounded by C2uκ2ξ, andthe elements of pare bounded by C2uκ2ξβ(this again follows [40]).14A.4.2 Truncated State ApproximationThe idea of this proof follows directly from [ 40]; however, we show the proof in detail because inour case the truncated state history affects the resulting vectors ajin the optimization, leading to adifferent instantiation of the problem.To give a sense of the added subtlety for obstacle avoidance, observe that in certain scenarios, smallperturbations in the observed relative obstacle positions could yield large changes in the optimalpolicy. For example, imagine that there is one obstacle, located directly on the centerline of thenominal planned motion. Then a small perturbation of the obstacle to the right makes the optimalaction “Left," while a small perturbation of the obstacle to the left makes the optimal action “Right."This phenomenon is not a problem in the regret outline because, while the optimal decision is fragile,the loss incurred of choosing incorrectly is bounded by the quadratic (and therefore, continuous)nature of the cost functions themselves.For the dynamics and control we have assumed thatxt+1=Axt+But+Dwtut=Kxt+bt+Mt ̃wt=Kxt+HXi=1M[i]twt−i,(12)where the bias is included by one-padding the disturbance vector. For simplicity we will omit theexplicit bias from ensuing analysis; in all cases it can be understood to be incorporated into themeasured disturbance. We can then show (as in [ 40]) that the state can be expressed as the sum ofdisturbance-to-state transfer function matrices Ψt,i:xAt+1= ̃AH+1xAt−H+2HXi=0Ψt,iwt−i,where ̃A=A+BK andΨt,i= ̃AiD 1[i≤H] +HXj=0 ̃AjBM[i−j]t−j 1[i−j∈ {1, ..., H}].We define the state estimate and cost asyt+1:=2HXi=0Ψt,iwt−ilt(Mt−H:t) =ct(yt+1(Mt−H:t), ̃ut)where ̃ut=Mt ̃wt(the residual input on top of the closed-loop controller).Now, assume that ∥ ̃A∥ ≤1−γ, that∥ ̃A∥,∥B∥,∥D∥,∥K∥ ≤β, and that for all tit holds that∥wt∥ ≤Cw,∥ut∥ ≤Cu, and∥Qt∥,∥Rt∥ ≤ξ. Then we can show that the approximation error ofthe costs is sufficiently small. Let the condition number be defined as k=∥ ̃A∥∥ ̃A−1∥.A.4.3 Bounding the States Along a TrajectoryNote that ̃ut=Mt ̃wt; this implies that ∥ ̃ut∥ ≤HDC w. This implies further that ∥B ̃ut+Dwt∥ ≤2βHDC wby the triangle inequality. 
Assuming that there exists τsuch that∥xτ∥2≤2βHDC wγ,we have that for every t > H +τ+ 1,∥xAt−H−1∥2≤2βHDC wγ. (WLOG, we can assume the initialstatex0is bounded in this domain - that is, that the assumption is satisfied with τ= 0; the regiondefined above is the long-term reachable set of the state xtdriven by bounded disturbances wtand (implicitly bounded) residual inputs ̃ut[the norm is limited by the stability parameter γof theclosed-loop ̃A-matrix]).15A.4.4 Bounding the Change in CostsNow, we analyze the change in costs|ct(xAt+1, ̃ut)−lt(Mt−H:t)|=|minj∈[p]∥aj,t+BM t ̃wt∥22−minj∈[p]∥ˆaj,t+BM t ̃wt∥22|Noting the definition of ˆaj,tand of aj,t, we can bound the difference between them as a function ofthe error in approximation of xt(see [40]):aj,t:=pj,t−xt=⇒ˆaj,t−aj,t= (pj,t−ˆxt)−(pj,t−xt)=xt−ˆxt=⇒ ∥ˆaj,t−aj,t∥2=∥xt−ˆxt∥2≤kCxe−γHNow, we argue that the loss incurred due to the noise in ˆxtis less than simply twice the change in costdue to the error in ˆaj,t. Letˆj∗:= arg minj∈[p]{∥ˆaj,t−BM t ̃wt∥22}. Letj∗be defined analogously.Ifj∗=ˆj∗, then the difference in cost is less than or equal to the extra loss incurred by the error in ˆa.Ifj∗̸=ˆj∗, then it is possible that the true ‘binding obstacle’ was biased away, and that the ‘guessed’binding obstacle was ‘biased towards’; therefore, the cost error is possibly due to deviations up totwice the error in the ˆaj,tvectors. This means that, defining δtsuch that ∥δt∥2= 2∥xt−ˆxt∥2, wehave that the following holds:∆ =|ct(xAt+1, ̃ut)−lt(Mt−H:t)|=|minj∈[p]∥aj,t+BM t ̃wt∥22−minj∈[p]∥ˆaj,t+BM t ̃wt∥22|≤ maxδt:∥δt∥2≤2kCxe−γHn∥(ˆaj,t+δt) +BM t ̃wt∥22− ∥ˆaj,t+BM t ̃wt∥22o=δTtδt+ 2δTtˆaj,t−2δTt(BM t ̃wt)≤ ∥δt∥22+ 2(Cx+∥δt∥2)∥δt∥2+ 2∥δt∥2CwβDM= 3∥δt∥22+ 2Cx∥δt∥2+ 2CwβDM∥δt∥2≤5Cx∥δt∥2+ 2CwβDM∥δt∥2≤5(k2C2xe−γH(1 +βDMCw)).Letting H=⌈γ−1log (5 k2Cx(1 +βDMCw)T)⌉, we have that∆≤CxT.16Remark 3. Recursive Definition of HandCx:Currently, there is a recursive nature to the definition of Hand Cx;H :=⌈γ−1log (5 k2Cx(1 +βDC w)T)⌉andCx:=2βHDC wγ. However, this is not problematic becausethe definitions will have a solution (that can be found efficiently); namely:H≥c1log (c2Cx)Cx=k1H=⇒H≥c1log (c2k1H)And for any c1, c2, k1∈R+and fixed T >0, there exists a positive integer Hsuch that the aboveresult holds (e.g., following from the fact that logH=o(H)). Further, the resulting Hwill not betoo large wrt Tfor sufficiently large T(e.g., large enough Tto overcome the constants).A.5 Finalizing the Regret BoundA.5.1 Apply Nonconvex Memory Follow-the-Perturbed-LeaderThis result is from [ 40], Theorem 13 (Corollary 14 gives an equivalent result to our setting in theasymptotic regret behavior; our optimal choice of ηandεdiffers slightly).A.5.2 Completing the BoundFinally, we use Alg. 1 (which acts as an efficient ε-oracle) with an approximate trust region imple-mentation of our desired optimization problem (Alg. 2) acting as a subroutine, in order to composethe regret components into a complete bound.Regret (A):= maxM∈ΠTXt=Hct(xMt, ̃ut(M))−TXt=Hct(xAt, ̃ut(A))≤maxM∈ΠTXt=H(ft(M, M, ..., M ) +CxT)−TXt=H(ft(Mt−H:t) +CxT)=hmaxM∈ΠTXt=Hft(M, M, ..., M )−TXt=Hft(Mt−H:t)i+O(logT)≤ ̃O(poly(L)√T)(13)To clarify the steps: the second line incorporates the approximation error from Section A.4.2 (whichis logarithmic in T, as noted in the third line) and the final line follows from the Nonconvex MemoryFPL result of [40].17B Convex-Concave Game: Algorithm and CorrectnessFor completeness and easier reference, we include a copy of Alg. 1 below. 
To improve clarity,references to equations within this section (Supp. B) will use their numbering as given in Supp. B,rather than in the main text. The key technical results of this section are to demonstrate: (1) thatAlg. 2 is a trust region instance that can be solved efficiently, and (2) that the optimization procedurein Eqn. 15 is solved correctly and efficiently by the trust region procedure Alg. 2. Given these results,our obstacle avoidance algorithm will be computationally efficient and attain low regret.Algorithm 1 Online Learning Control (OLC) for Obstacle AvoidanceInput : Observed obstacle positions {pj0}Kj=1, history length H=O(logT).Input : Full horizon T, algorithm parameters {η, ε, λ}, initial state x0.Input : Open-loop plan: ̄uotfort= 1, ..., T .Initialize : Closed-loop correction M[1:H]0 , fixed perturbation P0∼Exp(η)du×Hdw.Initialize : Play randomly for t={0, ..., H −1}, observe rewards, states, noises, and obstacles.fort=H...T−1doPlayM[1:H]t , and observe state xt+1and obstacles {pjt+1}j∈[k]t+1.Reconstruct disturbance wtusing observed xt+1.Construct the reward function in ̃M:lt+1( ̃M) = minj∈[k]n∥x ̃Mt+1−pjt+1∥22o− ∥x ̃Mt+1∥2Q− ∥ ̃u ̃Mt∥2R. (14)Receive realized reward lt+1(M[1:H]t).Solve for ‘perturbed leading policy’ M[1:H]t+1 as the solution to:arg max∥ ̃M∥≤DM(t+1Xτ=1lτ( ̃M) +λ( ̃M•P0)). (15)end forAlgorithm 2 (General) Hidden-Convex Formulation for Objective in Eqn. 9Input : Set of vectors {a[τ]j}k,Htj=1,τ=1, matrix B, vectors b,b0, time history Ht≤tInput : Iterations N, learning rate η, approx. error ε, perturbation P0, diameter DM.Initialize : Vector c[τ]0=1k1k,τ= 1, . . . , H t.forn=0...N do(1) Solve for MnMn= arg max∥M∥≤DMnHtXτ=1kXj=1cn(j)[τ]∥a[τ]j+BMb[τ]∥22− ∥b[τ]0+BMb[τ]∥2Q− ∥Mb[τ]∥2R+λ(M•P0)o.(16)(2) Update cn+1c[τ]n+1= Π∆khc[τ]ne−η∇cPjc[τ]n(j)∥a[τ]j+BMb[τ]∥22i,∀τ∈ {1, . . . , H t}. (17)end forreturn MN18B.1 Non-convex Memory FPL for Obstacle AvoidanceIntuitively, Alg. 1 operates by updating the gain matrices M[1:H]t+1 via counterfactual reasoning: inhindsight , given the actual observed disturbances and obstacle locations, what gain matrices wouldhave resulted in good performance (in terms of obstacle avoidance and the state-input penalties)? InSupp. A, we demonstrate that this algorithm results in low regret as formalized in Eqn. 5 of the maintext. For reference: Eqn. 9 (main text) corresponds to Eqn. 15 (Supp. B) henceforth; similarly, Eqn. 8(main text) corresponds to Eqn. 14 (Supp. B).B.2 Efficient Solution of Eqn. 15 (Part 1): Reduction of Alg. 2 to Trust Region InstanceWe now prove that Alg. 2 does indeed solve Eqn. 9. Consider the relaxed optimization problemmaxM∈MHtXτ=1Xjλ[τ]j∥a[τ]j+BMb[τ]∥22 (18)We will first describe some useful quantities and (physically-motivated) assumptions. The physicalquantities of interest have the following characteristics: x∈Rdx,u∈Rdu, andw∈Rdw.Assumption 4. B∈Rdx×du, with du≤dx, and rank (B) =du. This corresponds to the followingphysical assumptions: (1) there are no more inputs than states, and (2) there are no ‘extraneous’inputs (if there are such inputs, then we can find a minimal realization of Band wlog set extraneousinputs to always be zero). Similarly, if (1) fails, then we can remove extraneous inputs by againsetting them equal to zero uniformly (just as in (2)).The dimension of several other variables of interest are: b∈RHdwand for the decision variable M,M∈Rdu×Hdw. We want to show that the optimization in Eqn. 
18 is equivalent to a convex trustregion problem, which is efficiently solvable.Further, let the quantity BTBhave a singular value decomposition denoted by:BTB=UTΛU.We see that this decomposition has an orthogonal U∈Rdu×duand positive definite Λbecauserank(B) =duand thus the symmetric BTB∈Rdu×duhasBTB≻0.Now, consider the problem of Eqn. 18, assuming that ∀τ∈ {1, . . . , H t}, there are fixed λ[τ]∈∆p, B,{a[τ]j}kj=1,b[τ]. For now, choose a single time element, notated as [i]. For simplicity, wewill only include this notation at the beginning and end, where we sum the result back together. Werearrange the objective for this case as follows:OBJ partial=Xjλ[i]j∥a[i]j+BMb[i]∥22=Xjλj(aj+BMb)T(aj+BMb)=Xjλj(aTjaj+ 2aTjBMb+bTMTBTBMb)≡Xjλj(2cTjMb+bTMTUTΛUMb)Here, we are searching for the argmax M∗, so the aTjajis irrelevant. Further, we have definedcTj=aTjB. Now, let Mr:=UM, and decompose Mr= [mT1;mT2;...;mTdu]. The optimization19objective is now:OBJ partial=Xjλj∥aj+BMb∥22≡Xjλj(2cTjMb+bTMTUTΛUMb)=Xjλj(2cTjMb+bTMTrΛMrb)= [Xjλj(2cTjMb)] +bTMTrΛMrbThe last simplification follows from the fact thatPj(λj) = 1 (because λ∈∆du). Now, using ourknowledge of the diagonal nature of Λand the column partition of Mr, we can see thatMTrΛMr=duXj=1σ2jmjmTjSubstituting into the OPT formulation, we can further simplify all the way to the desired form:OBJ partial=Xiλi∥ai+BMb∥22≡hXiλi(2cTiMb)i+bTMTrΛMrb=hXiλi(2cTiMb)i+bT(duXj=1σ2jmjmTj)b=hXiλi(2cTiUTMrb)i+ (duXj=1σ2jbTmjmTjb)= 2hXiλi(Xj ̃ci,jmTjb)i+ (duXj=1σ2jmTjbbTmj)=Xjh(2Xiλi ̃ci,j)bTimj) + (duXj=1mTj(σ2jbbT)mj)=mTPm+pTm.where mis a vector concatenation of the transposed rows of Mr. Now, to combine the results overtime, utilizing the convexity-preserving property of function addition, we simply reintroduce the[i]-indexing and sum:OBJ partial=Xjλ[i]j∥a[i]j+BMb[i]∥22=mTP[i]m+ (p[i])Tm=⇒OBJ full=XiXjλ[i]j∥a[i]j+BMb[i]∥22=XimTP[i]m+ (p[i])Tm=XimTPm+pTmOnce we solve for m∗using a trust region solver, we unpack it into M∗r, and get M∗=UTM∗r(=UT(UM∗) =M∗)as desired. Further, we can translate a norm bound on Minto an equivalent oneonm(at least, for appropriate choice of norm bound - e.g., the Frobenius norm on Mbecomes the2-norm on m).20B.3 Solving the FPL Sub-ProblemIn order to feasibly engage the obstacle avoidance algorithm, it is necessary to solve for optimalsolutions to the objective in Eqn. 15, which is an instance of the following max-min problem:arg max∥M∥≤DMt+1Xτ=1minj∥a[τ]j+BMb[τ]∥22− ∥b[τ]0+BMb[τ]∥2Q−. . .···−∥ Mb[τ]∥2R+λ(M•P0),where a[τ]j, B,b[τ]0,b[τ]depend on the dynamics (Eqn. 4, main text) and the obstacle locations. Theresulting algorithm is shown in Alg. 2, which converges to M[1:H]t+1 in Eqn. 15 of Alg. 1.Finally, we need to demonstrate that Alg. 2 will converge to a pair {cN, MN}that corresponds to theoptimal solution of Eqn. 15. This follows immediately from Theorem 7, Part II of [ 54]. We have aninstance of a repeated game in which an optimization oracle efficiently solves Eqn. 16, and then thelow-regret exponentiated gradient algorithm [ 63] iteratively updates cn(Eqn. 17). Again, we utilizethe fact that the operation of function addition preserves, respectively, the convexity and concavityproperties of pairs of operand functions as a necessary tool to allow for the results to still hold whenapplying the summation over time steps.B.4 Efficient Solution of Eqn. 15 (Part 2): Reduction of Obstacle Avoidance to Alg. 2This proof works as a reduction, where we show that the obstacle avoidance problem (Eqn. 
15)constitutes a particular set of inputs to the general formulation of Alg. 2. Recall that at time t, thealgorithm Ahas access to the state trajectory {xAτ}tτ=1, the disturbance history {wτ}t−1τ=1, and the setsof sensed obstacles {pjτ}tτ=1. For any τ∈ {H, ..., t }, the loss function can be written as an instanceof Alg. 2. Specifically, let ajτ= ̃Axτ−1+Dwτ−1−pjτ,bτ=wτ−H:τ−1,b0,τ= ̃Axτ−1+Dwτ−1.Then, for an appropriate cτ∈∆kτ, the optimization problems are equivalent. Concatenating over theτ-formulations, we have an instance of Eqn. 16. Specifically, for some choice of c, we will have anequivalent problem as Eqn. 15; here cis an encoding – unknown a priori – of the relevant (nearest)obstacles at each time step.21C Hardware Experiment DetailsC.1 Equations of MotionThe equations of motion for the high-level Go1 control are:"xt+1yt+1ψt+1#="xtytψt#+dt"cosψ0sinψ00 1#uxuψ+"wx,twy,twψ,t#, (19)where uxis the commanded forward velocity and uψis the commanded yaw rate. The disturbanceswx, wyandwψcapture unmodeled disturbances including imperfect velocity tracking by the robot’slow-level controller and deviations due to localization noise.C.2 Boschloo Test for SignificanceIn order to evaluate the improvement of OLC vs A∗in our hardware experiments, we perform aBoschloo Exact Test on the outcomes of the experiment. This test (for our purposes) essentiallyprovides a statistical evaluation of the difference in means of two Bernoulli variables in the small-dataregime. In our experiments, we have a 2x2 test matrix in which OLC and A∗are each run 21 times,with the number of failures a random variable depending on each algorithm’s behavior subject to therealized obstacle layouts. While we do not claim that our distribution of layouts exactly matches the‘true distribution in the world,’ we believe it to be similar enough such that the statistic should bemeaningful here.To be precise, the statistic encapsulates the probability, in the null hypothesis that the two means areequal (or that one mean is greater), of achieving a small sample of realizations that is more extremethan that observed in the true data. By choosing the alternative hypothesis “collision rate of A∗ishigher,” a significant result (say, at α= 10% orα= 5% significance) would mean that we reject thenull hypothesis. This, then, would provide evidence that the use of OLC meaningfully reduced thecollision rate.Using the existing scipy implementation (scipy.stats.boschloo_exact), we find that for the datapresented in Table 2, the test statistic is 0.1073 withp= 0.074, indicating significance at α= 0.1but not α= 0.05. Though not conclusive, this provides reasonable evidence that the reportedimprovement in the collision rate is not due to random chance in the obstacle layout realizations.C.3 Obstacle LayoutsWe provide a bird’s eye view of the layout configurations used during the experiments in Figure 3.Additionally see the supplementary video for the physical instantiation of the layouts used in theexperiments.22Figure 3: Obstacle layouts from a bird’s eye view used during the experiments. Obstacles are denoted by the redcircles. The quadruped was placed at (0, 0) and tasked with traversing 10m in the the x direction.23D Simulation Details and ResourcesD.1 Simulation Implementation: Parameters, Setup, and RuntimeIn this section, we report the hyperparameters used for the experiments results in the main text. Weimplemented our algorithm and environments in JAX. 
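For flavor, a minimal sketch of such a JAX rollout scaffold is shown below; the environment step and policy here are hypothetical stand-ins rather than the released code.

```python
# Illustrative rollout scaffold in JAX; env_step and policy are hypothetical
# stand-ins. jax.lax.scan unrolls a full T-step episode efficiently on CPU.
import jax
import jax.numpy as jnp

def rollout(env_step, policy, x0, disturbances):
    def body(x, w):
        u = policy(x)
        x_next = env_step(x, u, w)   # e.g., perturbed double-integrator step
        return x_next, (x_next, u)
    _, (states, controls) = jax.lax.scan(body, x0, disturbances)  # ws: (T, d_w)
    return states, controls

rollout_jit = jax.jit(rollout, static_argnums=(0, 1))
```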
All experiments were carried out on a singleCPU in minutes.We set the full horizon T to 100 and the history length H to 10. For random perturbation acrossenvironments, we sample noise from Gaussian distribution with mean 0 and standard deviation 0.5.For directional perturbation, we sample Gaussian noise with mean 0.5 and standard deviation 0.5. Arandom seed of 0 is used for all experiments. We obtain the nominal control from LQR with Q set to0.001 and R set to 1. We then learn the residual obstacle-avoiding parameter M via gradient descent.The learning rate of gradient descent is 0.008 in the centerline environment and is 0.001 for the otherenvironments.An important note for these experiments: we do not implement existing heuristic techniques likeobstacle padding to improve the RRT∗collision-avoidance performance. As such, this performanceis not meant to suggest that RRT∗cannot work robustly in these settings, only that its nominal(and theoretically grounded) form does not account for disturbances or uncertainty and is therefore“optimistic” as compared to HJ methods, etc.D.2 Additional Figures and Trajectories – Centerline EnvironmentThis appendix includes sample trajectories and other relevant visualizations for each algorithm.D.2.1 RRT∗/ A∗Here, we demonstrate some sample paths for RRT∗in each disturbance regime. Fig. 4 shows uniformrandom noise, Fig. 5 shows sinusoidal noise, and Fig. 6 shows adversarial noise. In each case, theshift at the final time (goal position, top of image) causing a horizontal shift in the path should beignored.Figure 4: RRT∗Planner trajectories against uniform random disturbances. Obstacle is the gray sphere, with thenominal trajectory a dashed black (vertical) line.24Figure 5: RRT∗Planner trajectories against sinusoidal disturbances. Obstacle is the gray sphere, with thenominal trajectory a dashed black (vertical) line.Figure 6: RRT∗Planner trajectories against adversarial disturbances. Obstacle is the gray sphere, with thenominal trajectory a dashed black (vertical) line.Figure 7: Racer backwards reachable set (inside thick black line) and the obstacle (dashed black line).D.2.2 HJ Reachability PlannerFor the centerline example, the HJ Reachability planner constructs in Fig. 7 the backwards-reachableset for a given obstacle (dashed line), subject to the dynamics constraints imposed on the racer.25Note that every positive-value region denotes an unsafe region. The interpretation is that there is a“pseudo-cone" in front of the obstacle from which the vehicle cannot escape hitting the obstacle if thedisturbances are sufficiently adversarial . Note that this means that HJ planning is independent of theactual disturbances. For each of the disturbance patterns (random, sinusoid, adversarial), we plota collapsed view of sample trajectories around an obstacle for the HJ planner in Fig. 8. Note howsimilar each plot is, due to this independence of the control from the actual observed disturbances.D.2.3 Online Learned PlannerThe key illustration here is that the trajectories of the online planner follow the structure of thedisturbances, as illustrated by the following comparison of the uniform random and sinusoidaldisturbances in Fig. 9 and Fig. 10.Figure 8: HJ Planner trajectories against (L) uniform random, (C) sinusoid, and (R) adversarial disturbances.Obstacles are black spheres, with the nominal trajectory a dashed black (vertical) line.Figure 9: Collapsed trajectories of the racer usingthe online planner with random disturbances. 
Theracer passes on each side evenly.Figure 10: Collapsed trajectories of the racer usingthe online planner with sin disturbances. The racerlearns to pass on the right.26D.3 Experiment Details: Dynamic Pedestrian EnvironmentThe pedestrian environment comprises many dynamic obstacles moving through the Racer scene, inwhich the Racer must avoid collision (contact). In this setting, the robot is not given direct obstaclestate information, but must act instead on a history of observations of each (visible) obstacle’s positiononly. The difficulty of the instances of this setting are illustrated in the following figures, whichhighlight several particular instances of success and collision respectively within the experimentsconducted.Figure 11: (L) Non-crash sequence of still images in which OLC successfully avoids a set of 4 close obstacles;(R) Crash sequence of images in which OLC cannot avoid a crash with one of 6 close obstacles.We note several important details for the simulations. The pedestrians are non-avoiding, and in somecases move nearly as fast as the simulated Racer agent; the majority of collisions are observed due toa combination of (1) local density of pedestrians, (2) high relative speed of at least 2 pedestrians, and(3) non-avoiding nature of the pedestrian trajectories. These environments are more dynamic andchaotic than most in the related literature, as they are intended to demonstrate a ‘proof-of-concept,’not to rigorously compare to the many available baselines (the comparison in static environmentsallows for more interpretable cases that highlight the relative advantages of stochastic (e.g., RRT),adversarial (e.g., HJ), and hybrid (e.g., OLC) methodologies, respectively). We think that adaptationto moving obstacles is very natural future work, and a possibly valuable point of future developmentfor our algorithm.D.4 Failures in the Slalom SettingWe include an image of the four major environments in Fig. 12. Our simulated performance wasstrong in all environments except for the slalom environment, which we discuss further below.The “slalom" setting allows us to tune its difficulty by varying the x-position (i.e. offset) of the gatesor by narrowing their width. Fig. 13 illustrates the effect of increasing gate offset from center (i.e.,error in the nominal planned trajectory – x-axis) and decreasing gate width (i.e., greater sensitivityto disturbances – y-axis) on failure rate through the slalom gates. As expected, reduced gate width2740 20 0 20 40X (m)020406080100Y (m)random40 20 0 20 40X (m)centerline40 20 0 20 40X (m)pedestrian40 20 0 20 40X (m)slalomFigure 12: Illustration of the four environments used as a proof-of-concept for our Online algorithm.and increased offset broadly increase failure rates. This is due to the online planner being forcedto overcome a poor nominal planned trajectory; in combination with the gated passageways, thisrequires very precise sequences of inputs and a longer memory of previously observed gates (due tothe limited sensing horizon). This is discussed further in Supp. D.4.1.D.4.1 DiscussionThe first answer is relatively direct: in all of our examples, we are implicitly acting in a kind of Frenetframe, where all obstacle positions and other referencing is to the ego vehicle (racer) position. Assuch, the nominal planned trajectory can always be thought of as mapped to a straight line ahead ofthe racer. 
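For concreteness, a hedged sketch of that ego-relative referencing follows (the rotation convention and names are ours, not from the released code):

```python
# Sketch of the ego-frame referencing discussed above: obstacle positions are
# expressed relative to the racer's position and heading, so the nominal path
# appears as a straight line ahead of the vehicle.
import jax.numpy as jnp

def to_ego_frame(obstacles, ego_pos, ego_heading):
    c, s = jnp.cos(ego_heading), jnp.sin(ego_heading)
    R = jnp.array([[c, s], [-s, c]])        # world -> ego rotation
    return (obstacles - ego_pos) @ R.T      # rows are obstacle positions
```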
In this context, some slalom gates represent a 20m deviation from the nominal trajectory.However, this flies in the face of the central modeling intuition of the online framework – thatobstacle avoidance is local, with local sensing, local deviations from the nominal trajectory, and“reactive" control to disturbances as they arise. In this vein, the nominal slalom is a challengingtask, precisely because it stretches the limits of what can be met by our setup. Concretely: limitedsensing makes each slalom wall a kind of “gradient-less" observation (shifting left and right yieldsonly a continuation of the wall unless the gap is already sensed ), meaning that choosing the correctLeft/Right action is difficult. Additionally, the map displays memory, because going the wrong wayearly through one gate can render the next gate infeasible.It is in light of these considerations that we argue that the slalom case is actually a case for our model,because it interpretably creates a setting in which the key assumptions are broken. Just like an actualskier who overshoots through one gate and cannot recover for the next gate, so too does our obstacleavoidance algorithm run the risk of “dooming" itself due to a wrong turn – but this is, as described,fundamental to the hardness of the obstacle avoidance problem! As such, we consider the slalom gateas a fundamentally hard problem, and consider a case for future work a fuller characterization of howour planner works for slaloms of varying difficulty, as measured by the sensor range, the distancebetween gates (both laterally and longitudinally), and the fundamental “cost memory" as it dependson these and other parameters.D.4.2 Experimental Parameter Sweep – Slalom CourseIn an effort to better represent the effects of the slalom setting and the dependence on gate width andoffset, Fig. 13 illustrates the effect of increasing gate offset from center (i.e., error in the nominalplanner trajectory – x-axis) and decreasing gate width (i.e., greater sensitivity to disturbances – y-axis)on failure rate through the slalom gates. As expected, reduced gate width and increased offset broadlyincrease failure rates.We note that for narrow gates, a zero-offset slalom is actually quite challenging to ensure - we believethis is due to the fundamental H ∞limit of stabilization of disturbances for this system; namely, atoo-narrow gate requires too-strong robustness about the setpoint (origin), causing failure. This alsoexplains why failure rates are high but not one for moderate offsets in the narrow-gate environmentas well: specifically, they allow some freedom away from the zero-offset regularization problem.28Figure 13: Failure rate heatmap for increasing gate offset (left to right) and decreasing gate width (bottom totop). As expected, failure increases with narrower gates (top) and larger offsets (right).Besides this, the trend quite clearly demonstrates the fundamental increases in difficulty observed fornarrower gates and larger offsets, as expected.29 |
h8halpbqB- | Im2Contact: Vision-Based Contact LocalizationWithout Touch or Force SensingLeon Kim, Yunshuang Li, Michael Posa, and Dinesh JayaramanGRASP LaboratoryUniversity of Pennsylvania, USA{leonmkim, sheylali, posa, dineshj }@seas.upenn.eduAbstract: Contacts play a critical role in most manipulation tasks. Robots to-day mainly use proximal touch/force sensors to sense contacts, but the informa-tion they provide must be calibrated and is inherently local, with practical ap-plications relying either on extensive surface coverage or restrictive assumptionsto resolve ambiguities. We propose a vision -based extrinsic contact localizationtask: with only a single RGB-D camera view of a robot workspace, identifywhen and where an object held by the robot contacts the rest of the environ-ment. We show that careful task-attuned design is critical for a neural networktrained in simulation to discover solutions that transfer well to a real robot. Our fi-nal approach im2contact demonstrates the promise of versatile general-purposecontact perception from vision alone, performing well for localizing various con-tact types (point, line, or planar; sticking, sliding, or rolling; single or multiple),and even under occlusions in its camera view. Video results can be found at:https://sites.google.com/view/im2contact/home .Keywords: contact perception, manipulation, vision-based1 IntroductionPerceiving and reacting to contact is critical for performing manipulation tasks [1–5]. Consider whathappens when a person puts a book on a crowded shelf: they hold the book and aim for a gap untilit meets resistance, jostle the book to make room, then press sideways to line up the book against itsneighbor, and finally slide the book snugly into place. Throughout, the key events they must trackall have to do with the physical contacts between various surfaces: when and where they are made,broken, and transition from sticking to sliding. What means does a robot have today to sense suchcontacts between its body, its tools, and external objects?Current contact perception techniques for robots operate mainly from force torque and touch sens-ing. Force torque (F/T) sensors [6] located at the robot’s joints can inform model-based contactestimation techniques [7–11]. However, this estimation problem is under-determined. To see this,consider a two-fingered manipulator in two contact configurations: applying a 10 N force to onefinger, versus applying a 5 N force to each finger. A wrist F/T sensor located directly behind thegripper’s midpoint is aliased: it senses identical forces in both cases. For such direct contacts withthe robot, aliasing can be partially resolved with proximal touch sensors [12–15] applied to con-tacting surfaces on the robot, but over only a limited area. Further, consider as in our introductoryexample, a robot holding a book that contacts a bookshelf. Such “extrinsic” contact is still onlysensed indirectly at the fingers, and remains impossible to resolve. To overcome these issues, to-day’s contact estimation techniques reduce the number of unknowns by operating under restrictiveassumptions, such as about the number and types of contacts, and the shapes of the various objectsinvolved (Sec 2). 
Finally, even when operating within these assumptions, F/T and touch sensors areoften expensive, and drift or deteriorate quickly over use [15–18] requiring frequent and cumber-some re-calibration.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Prediction Probability > εFigure 1: Sequence of predictions of our method im2contact for localizing extrinsic contact in animage between a grasped object (such as a spatula) and the environment (such as a bowl)To address both the lack of global information available to F/T and touch sensors, and the sensordrift issue, we instead operate from camera images. Cameras can see the entire scene, and functionwithout deterioration for long durations. Since contacts are a function of the shapes and movementsof the objects and the environment, contact estimation would be easy if we could first get perfect3D shape and pose estimates from vision. However, we argue that contact from 3D would be a poorsolution. First, given occlusions around the critical contact regions and arbitrary-shaped objects,near-perfect shape and pose estimates would require impractically many camera views. Next, theoften-crucial binary distinction between near-contacts and true contacts can be hard to capture bycontinuous-valued object pose estimates.Rather than infer contacts from intermediate visual representations such as shapes and poses, wepropose im2contact , a more direct approach for data-driven, model-free visual contact locationestimation at the outputs of a deep neural network. Our system is trained entirely in simulationand incorporates the combination of cropped depth images of the salient regions of the scene, anadditional reference depth image to specify the grasped object, and motion cues from optical flow.In our zero-shot transfer evaluations on a real robot, im2contact models demonstrate the possibilityand promise of versatile general-purpose contact perception from vision alone, performing well forlocalizing various contact types (point, line, or planar; sticking, or sliding; single or multiple), andeven under occlusions in its camera view.2 Related WorkThe robotics community has long recognized the crucial role of external contact sensing in manipu-lation, particularly of contacts between the manipulator and environment (e.g. [8, 9, 19]), coined byBicchi et al. [7] as “intrinsic contact”. However, “extrinsic contact,” e.g. between a held object andthe environment, is similarly useful, but notably more difficult. Prior work on extrinsic contact hasprimairily focused on the use of force and touch sensors and typically found success through strongassumptions which ultimately limit the potential scope of the results. For example, related work hasrelied on the assumption and/or enforcement of pre-defined contact configurations [20–22], limitingapplication to grasped objects seen during training [23], or tight coupling with information gather-ing motions [24]. Other approaches have incorporated pointclouds with force and touch sensing, butassume full coverage of grasped objects [25, 26] or restrict interactions to line contacts [27].In order to enable robots to readily use unmodeled tools in unstructured environments, many ofthe above restrictions must be lifted. 
We make steps towards this via the choice of vision-basedsensing with no explicit assumptions made on possible contact configurations, object properties,and minimal access to privileged knowledge of the object or environment.3 Visually Localizing Extrinsic Contacts In General Manipulation ScenesFor a robot arm holding a grasped object, we are interested in localizing contact between the heldobject and the rest of the environment, as the robot arm moves in its workspace. Unlike much of the2(a)Annotation Prediction Probability > ε(b) (c) (d) (e) (f)Figure 2: U-Net Depth generally performs well in sim but poorly on real data in challenging casessuch as: (a,d) occluded contact (b,e) ambiguous grasped object geometry (c,f) near-contact withbackground objectprior work described above, we do not assume access to prior information about the held object orthe environment, other than the table.As inputs at each time instant, we assume a H ×W view of the robot workspace from a single fixedRGB-D camera as shown in Fig 3, alongside proprioceptive sensing of the robot state from jointencoders. This is a minimal sensing setup for vision-based robot manipulation, chosen to maximizethe scope of our problem formulation.As output, we would like our system to generate a H ×W map of estimated contact locations, that canbe overlaid on the camera image, similar to dense image-based features [28, 29]. To achieve this, wewill train neural networks with standard pixel-wise binary cross-entropy classification objectives.This treats the the output contact map values at each pixel as contact probabilities, and maximizesthe likelihood under the model of the annotated ground truth contact locations in the training data.3.1 Simulated Training DataIt is not practical to obtain ground truth contact location annotations from real video, but fortunatelysimulators provide this information. To generate target contact maps for training, we project 3Dcontact points from the Gazebo simulator [30] into the camera frame. Next, we generate trainingdata in an episodic fashion. In each 15-second episode, we randomize the geometry, masses, fric-tion coefficients, and initial poses of grasped and environment objects. Shapes are chosen to becylinders, spheres, and cuboids with random parameters. The grasped object is rigidly attachedto the robot arm throughout the episode, as the robot end-effector moves to randomly set targetswith a low-impedance controller to generate rich interaction data. Additional details are includedin Appendix A. Our 4500 episodes (675000 frames at 10 fps) of training data span many types ofextrinsic contacts: point, line, and plane contacts, instantaneous collisions and sustained sliding orrolling contacts along with simultaneous contact with multiple bodies.3.2 A Simple Baseline To Illustrate The Difficulty of Sim-To-Real TransferTo motivate our final method, we first showcase the difficulty of sim-to-real transfer with a baseline,U-Net Depth. U-Net Depth builds on the widely used U-Net [31, 32] architecture for image-to-imageproblems. In the U-Net, an encoder first assimilates information from over the entire 240 ×320 inputimage into a “bottleneck” representation of size 15 ×20×1024. Then, a decoder iteratively spatiallyupsamples this representation with the aid of skip connections from intermediate encoder layers,to finally produce an output over the original 240 ×320 input dimensions. 
U-Net Depth trains to near-zero training losses, and performs very well on held-out data in simulation. However, on real data, its performance deteriorates significantly. In Fig. 2, we anecdotally note a few challenging scenarios where this baseline struggles in the real world: (a) under significant occlusion, it misses the occluded contact and predicts irrelevant false positives on the table, (b) when the geometry of the grasped object is ambiguous, it predicts contact between an extraneous box and the table, and (c) it produces far more false positives in near-contact scenarios than in sim. We more thoroughly evaluate U-Net Depth in Sec. 4.
Figure 3: im2contact architecture. Depth and flow are both cropped and concatenated before being passed through the U-Net. The object reference is passed through a separate encoder whose output is concatenated at the bottleneck of the U-Net. The final output of the U-Net is a probability map over the original image dimensions.
3.3 Facilitating Generalizable Contact Localization With im2contact
To mitigate the failures of U-Net Depth on real data, we consider the possible causes of poor generalization. Machine learning systems commonly over-rely on spurious correlates of the target labels in the training data [34–36], a reliance that fails to generalize under distribution shift. During training, spurious correlates could distract the optimizer [36] from finding true "causes"; this can be alleviated by anticipating such distracting but irrelevant correlates in advance and removing them from the learner's inputs. Inputs might also be incomplete, lacking information on the true underlying cause of the outputs. We propose three potential improvements to U-Net Depth, one to remove distracting correlates, and two to add missing causal information into the inputs.
• Depth Cropping (+ crop): Assuming grasped objects that are not very large, all extrinsic contacts will occur close to the end-effector. With a calibrated camera and proprioceptive knowledge, it is easy to locate the end-effector in the image. We propose to focus the network on the most relevant regions by cropping the depth images into a 90×110 box around this end-effector location at each time step, before feeding them to the U-Net. In addition, we add three channels for coordinate convolution as proposed by Liu et al. [37] to provide the network with the spatial relationship of the cropped image with respect to the original image.
• Grasped Object Reference (+ obj-ref): We have thus far reduced contact localization to an instantaneous task, performed from the current sensory observations. However, consider the scenario when the grasped object is occluded by the environment: if we do not know its spatial extent, it may be impossible to know whether there is any extrinsic contact. Similarly, in cluttered views it may be difficult to disambiguate between the boundaries of the grasped object and the environment from a single depth image. We propose to provide such missing information through an additional input: a single reference depth image of the grasped object in the robot gripper, partially specifying the shape of the grasped object.
• Optical Flow (+ flow): From a single image, it may be difficult, or even impossible, to differentiate between contact and near-contact.
For example, a gap between the grasped object and theenvironment of a few millimeters is likely imperceptible without a perfect vantage. However, con-tact forces often induce motion in the environment, causing environmental objects to slide or roll.To capture these cues, we propose to use optical flow computed from the camera RGB images.In the rest of the paper, we use the name im2contact for the combined approach: U-Net Depth +crop + obj-ref + flow. Fig 3 schematically depicts the network with all inputs and outputs.Reducing To Discrete Contact Locations. Our training procedure generates contact probabilitymaps, but it is convenient for evaluation and useful for downstream tasks to identify discrete contactlocations. To this end, we adapt the greedy non-maximum suppression (NMS) technique [38–40],commonly used in object detection. To identify spatially separated peaks in the contact probability4map, we first reject all pixels scoring below ε= 0.01, sort pixels by descending score, and iterativelyreject pixels within a 5 px neighborhood of higher-scoring pixels. See Appendix B for pseudocode.Implementation Details. The pixel-wise cross-entropy loss treats each pixel independently of allothers and hence penalizes a 1 px deviation in predicted location equivalently to a 100 px deviation.To abate this, we spatially blur ground-truth contact maps before computing our training target, andfind that this accelerates training. We monitor performance on held-out simulation data to implementearly termination. For computing optical flow, we use the off-the-shelf RAFT [41] model. Code andmodels will be available at our website.4 ExperimentsWe test im2contact in simulation and in real to evaluate performance in realistic tabletop manipu-lation settings. We further test, via ablation studies, whether, and under what settings, our proposedchanges (crop, obj-ref, and flow) do improve real performance. We include additional anecdotalexperiments which push im2contact beyond its training settings, evaluating generalization.Real Robot Data, Annotations, and Performance Metrics. We perform teleoperated experi-ments in a table-top environment with a Franka Emika Panda robot arm and an Intel Realsense L515RGB-D camera, with reasonably well-matched simulation and real robot setups, as shown earlier inSec 3. To facilitate some quantitative comparison, we manually annotate contact locations for 30episodes (approximately 13 mins comprising 12,362 frames, of which 1/3 involve at least one con-tact) of real robot interaction data, spanning variations in the grasped object, environment objects,robot movements, and contact scenarios. Recall that for simulation experiments, annotated contactsare readily available as in Sec 3; we use 500 episodes (75,000 frames) for computing simulationperformance metrics.Our main metrics are precision, recall, and their harmonic mean, the F1 score. All three must be in[0,1], and higher is better. To compute them, we first follow the procedure from Sec 3.3 for reducingthe network’s output probability maps to discrete contact locations. We then match each ground truthcontact point to a predicted contact point with the Hungarian algorithm, and drop predictions wherethe ground truth is over 15px away as false positives. For our metrics, ground truth contacts withmatches are true positives, and those without are false negatives. We run 5 random seeds for eachmethod and report means and standard errors.Quantitative Results on Simulation and Real Data. 
Fig 4 plots contact localization precisionand recall in real (hollow circles) and simulated (solid circles) test data, for im2contact , the base-line U-Net Depth (Sec 3.2), and leave-one-out ablations of im2contact that in turn drop crop,obj-ref, and flow. Tabular results are presented in Appendix C.First, for the simple U-Net Depth baseline, these results clearly validate our qualitative observationsfrom Sec 3: precision and recall both deteriorate dramatically from sim to real. Next, U-Net Depthandim2contact are both among the best performing methods on sim data, but im2contact onlydrops marginally in performance on real data, so it remains among the best-performing methods inreal data, while U-Net Depth performs worst.Moving on to the leave-one-out ablations, “w/o crop” deteriorates nearly as much as U-Net Depth,while “w/o obj-ref” and “w/o flow” degrade more gracefully upon transfer to real data, crop con-tributes the most among the three components of im2contact . Fig 4 (middle) plots standard errorfor real data and the legend (right) lists F1 scores. From this initial coarse analysis with limitedquantitative metrics, losing obj-ref marginally hurts precision, and losing flow does not significantlyaffect these scores. Note that these aggregate scores might not reflect performance in rare scenarios,of which there are many.5Figure 4: (Left) Degradation under sim-to-real transfer (Middle) Precision-Recall on real data(Right) Legend along with F1 scores on real data.Qualitative Analysis. Armed with the initial evidence from the coarse quantitative results, weperformed a thorough qualitative analysis over the aforementioned 30 real robot episodes. Figs 5and 6 present keyframes of selected video reels. Fig 7 compares the outputs of all methods at someselected frames across the dataset. These examples are selected to illustrate some key insights fromour more comprehensive analysis (remaining examples available on our website):• As our metrics above suggested, crop is indeed the most important contributor to im2contactperformance. In Fig 7, both im2contact “w/o crop” and U-Net Depth perform similarly poorly,demonstrating the criticality of focusing the model on relevant regions for finding good solutions.• The grasped object reference frame (obj-ref) frequently improves performance in occluded contactsituations (Fig 5, left), or when the grasped object moves coherently with others, creating a mis-leading flow field (Fig 5, right and Fig 7 (d)). This is consistent with our arguments in Sec 3.3; insuch situations, knowing the shape of the grasped object is critical for localizing extrinsic contacts.• Consistent with the metrics above, the effects of optical flow are less obvious, and w/o flow isvery similar to im2contact in Fig 7. We have however anecdotally observed that it improvesidx=112 idx=153 idx=328 idx=461 Annotation Prediction Probability > εim2contact w/o obj -refFigure 5: im2contact w/o obj-ref : obj-ref provides useful information about the grasped object’sshape, which is not otherwise available under occlusions or misleading flow fields. Optical flowfields overlaid on all images. (Left) The grasped can behind the cereal box, slides up its side totopple the box. Here, the grasped object is occluded. (Right) The robot pushes two boxes to topplethem, then pushes one of them back upright. These sustained pushes lead to coherent flow fields thatdo not help to separate a grasped object from a neighbor. 
In both cases, "w/o obj-ref" errs by treating the can-cum-boxes collective as a single grasped object, generating false positives at locations where the boxes contact their neighbors.
Figure 6: im2contact w/o flow (optical flow fields overlaid only on im2contact images). (Left) The robot uses a grasped can to press against and pivot a box into an upright position. The reference frame does not clearly show the can's shape, so flow must separate the grasped object from others. Sure enough, in frame 485 during the motion, "w/o flow" predicts irrelevant contacts at the foot of the shelf, but im2contact performs correctly. Later, in frame 571 immediately after the motion, im2contact no longer has the flow cue, and fails. (Right) The robot uses a large box to sweep a number of cans under a shelf. In this highly cluttered scene, "w/o flow" consistently predicts a false contact with a stationary can in the foreground rather than the true contact happening in the background underneath the shelf. im2contact, on the other hand, uses the flow on the moving background can to localize the true contacts.
performance in certain cases. In Fig. 6 (left), flow suppresses a false positive in a near-contact situation, and it helps identify a contact with a can within clutter in Fig. 6 (right).
Out-of-Distribution Evaluation of im2contact. We conclude with a brief highlight reel in Fig. 8 visualizing the output of im2contact on chosen out-of-distribution settings that are difficult to annotate manually. Despite significant distribution shift from the training domain, we find im2contact still produces reasonable estimates in examples including moderately deformable objects and human demonstrations. The promising results on the latter suggest our extrinsic contact estimates may be amenable for use in guiding robot policy learning from human videos, previously demonstrated by Bahl et al. [42] in the context of intrinsic contacts. The details of our human demonstration evaluations are included in Appendix E and additional examples are on our website.
5 Limitations and Future Work
Our method, im2contact, using only a single camera and proprioception, can localize contacts in the 2D image. While this is already surprising and useful, there is much information that our system does not capture: it does not localize contacts in 3D, perceive contact forces, or classify modes of contact. Our deliberately simple sensing setup might be fundamentally limited for solving these broader problems. For example, predictions over the RGB-D image may be backprojected into 3D spatial coordinates using the depth map. However, single-viewpoint depth images are only a 2.5D representation of geometry, so we are unable to localize full 3D coordinates of contact points during occluded contacts. Multiple cameras may help address this issue.
In addition, our cropping method assumes an upper bound on the dimensions of the grasped object. Integrating an off-the-shelf object segmentation method may enable more adaptive schemes for cropping and grasped object specification that may also improve robustness to occlusions.
Lastly, our method leverages the global information provided by vision over more local tactile sensing. In doing so, we sacrifice the precision of our method, which is prone to false positives during near-contact and occlusions, even with the addition of optical flow.
Our preliminary efforts to inte-7(a)(b)(c)(d)Occluded push Ambiguous geometry Contact in clutter Near-contactim2contact w/o obj -ref w/o flow w/o crop U-Net DepthAnnotation Prediction Probability > εFigure 7: Our full ablations on four illustrative samples (flow visualized on the models that usethem): (a)a grasped cereal box encounters numerous near-contacts in a highly cluttered scene.(b)From the same episode, the grasped cereal box comes into contact with a foreground can inthe cluttered scene. (c)A can is stacked on top of another in a cluttered scene (d)A significantlyoccluded grasped sugar box pushes a can across the tableDeformable object Human DemonstrationPrediction Probability > εFigure 8: im2contact predictions on (Left) interactions between two moderately deformableplushies (Right) human video of inserting a book into a tight spacegrate F/T sensing in Appendix D validate that F/T sensing can improve performance in sim, but willrequire substantial effort to transfer to real which we hope to actualize in future work.6 ConclusionWe present a method that enables extrinsic contact localization from vision alone with minimalprior knowledge or explicit assumptions on the grasped object and environment. By incorporatingwell-motivated inputs to our model in simulation, we show successful sim-to-real transfer of ourmodel whereas a naive baseline fares poorly. The method also shows promise on our chosen out-of-distribution settings that include deformable objects and human demonstrations. In future work,we hope to address the limitations of im2contact and explore its downstream utility in enablingcontrol tasks involving tool use or assembly.8AcknowledgmentsWe would like to thank Denny Cao for his contributions to the synthetic data generation pipelineand teleoperation interface, as well as the reviewers for their insightful feedback during the rebuttalperiod. Leon Kim was supported by the NSF GRFP, Yunshuang Li by the Chiang Chen OverseasFellowship, Michael Posa by NSF CAREER Award FRR-2238480, and Dinesh Jayaraman by NSFCAREER Award 2239301.References[1] R. S. Johansson and J. R. Flanagan. Coding and use of tactile signals from the fingertips inobject manipulation tasks. Nature Reviews Neuroscience , 10(5):345–359, 2009.[2] R. D. Howe. Tactile sensing and control of robotic manipulation. Advanced Robotics , 8(3):245–261, 1993.[3] S. Tian, F. Ebert, D. Jayaraman, M. Mudigonda, C. Finn, R. Calandra, and S. Levine. Ma-nipulation by feel: Touch-based control with deep predictive models. In 2019 InternationalConference on Robotics and Automation (ICRA) , pages 818–824. IEEE, 2019.[4] A. Aydinoglu, P. Sieg, V . M. Preciado, and M. Posa. Stabilization of complementarity systemsvia contact-aware controllers. IEEE Transactions on Robotics , 38(3):1735–1754, 2021.[5] J. M. Romano, K. Hsiao, G. Niemeyer, S. Chitta, and K. J. Kuchenbecker. Human-inspiredrobotic grasp control with tactile sensing. IEEE Transactions on Robotics , 27(6):1067–1079,2011.[6] M. Y . Cao, S. Laws, and F. R. y Baena. Six-axis force/torque sensors for robotics applications:A review. IEEE Sensors Journal , 21(24):27238–27251, 2021.[7] A. Bicchi, J. K. Salisbury, and D. L. Brock. Contact sensing from force measurements. 12(3):249–262. ISSN 0278-3649. doi:10.1177/027836499301200304. URL https://doi.org/10.1177/027836499301200304 . Publisher: SAGE Publications Ltd STM.[8] A. De Luca, A. Albu-Schaffer, S. Haddadin, and G. Hirzinger. Collision detection and safereaction with the DLR-III lightweight manipulator arm. 
In 2006 IEEE/RSJ International Con-ference on Intelligent Robots and Systems , pages 1623–1630. doi:10.1109/IROS.2006.282053.ISSN: 2153-0866.[9] L. Manuelli and R. Tedrake. Localizing external contact using proprioceptive sensors: Thecontact particle filter. In 2016 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 5062–5069. IEEE. ISBN 978-1-5090-3762-9. doi:10.1109/IROS.2016.7759743. URL http://ieeexplore.ieee.org/document/7759743/ .[10] A. Zwiener, C. Geckeler, and A. Zell. Contact Point Localization for Articulated Manipulatorswith Proprioceptive Sensors and Machine Learning. In 2018 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 323–329. doi:10.1109/ICRA.2018.8462869. URLhttps://ieeexplore.ieee.org/document/8462869 .[11] T. Pang, J. Umenberger, and R. Tedrake. Identifying external contacts from joint torque mea-surements on serial robotic arms and its limitations. In 2021 IEEE International Conference onRobotics and Automation (ICRA) , pages 6476–6482. doi:10.1109/ICRA48506.2021.9561761.ISSN: 2577-087X.[12] J. A. Fishel and G. E. Loeb. Sensing tactile microvibrations with the biotac—comparison withhuman sensitivity. In 2012 4th IEEE RAS & EMBS international conference on biomedicalrobotics and biomechatronics (BioRob) , pages 1122–1127. IEEE, 2012.9[13] W. Yuan, S. Dong, and E. H. Adelson. Gelsight: High-resolution robot tactile sensors forestimating geometry and force. Sensors , 17(12):2762, 2017.[14] M. Lambeta, P.-W. Chou, S. Tian, B. Yang, B. Maloon, V . R. Most, D. Stroud, R. Santos,A. Byagowi, G. Kammerer, et al. Digit: A novel design for a low-cost compact high-resolutiontactile sensor with application to in-hand manipulation. IEEE Robotics and Automation Letters ,5(3):3838–3845, 2020.[15] R. Bhirangi, T. Hellebrekers, C. Majidi, and A. Gupta. Reskin: versatile, replaceable, lastingtactile skins. arXiv preprint arXiv:2111.00071 , 2021.[16] B. Wu, Z. Wu, and F. Shen. Research on calibration system error of 6-axis force/torque sensorintegrated in humanoid robot foot. In 2010 8th World Congress on Intelligent Control andAutomation , pages 6878–6882. IEEE, 2010.[17] F. J. A. Chavez, G. Nava, S. Traversaro, F. Nori, and D. Pucci. Model based in situ cali-bration with temperature compensation of 6 axis force torque sensors. In 2019 InternationalConference on Robotics and Automation (ICRA) , pages 5397–5403. IEEE, 2019.[18] M. Reinvee, K. Jansen, et al. Utilisation of tactile sensors in ergonomic assessment of hand–handle interface: a review. Agronomy Research , 12(3):907–914, 2014.[19] P. Grady, J. A. Collins, S. Brahmbhatt, C. D. Twigg, C. Tang, J. Hays, and C. C. Kemp.Visual pressure estimation and control for soft robotic grippers. In 2022 IEEE/RSJ In-ternational Conference on Intelligent Robots and Systems (IROS) , pages 3628–3635. doi:10.1109/IROS47612.2022.9982073. ISSN: 2153-0866.[20] K.-T. Yu and A. Rodriguez. Realtime state estimation with tactile and visual sensing for in-serting a suction-held object. doi:10.1109/IROS.2018.8594077.[21] D. Ma, S. Dong, and A. Rodriguez. Extrinsic contact sensing with relative-motion trackingfrom distributed tactile measurements. In 2021 IEEE International Conference on Roboticsand Automation (ICRA) , pages 11262–11268. doi:10.1109/ICRA48506.2021.9561781. ISSN:2577-087X.[22] S. Kim, D. K. Jha, D. Romeres, P. Patre, and A. Rodriguez. Simultaneous tactile estimationand control of extrinsic contact. URL http://arxiv.org/abs/2303.03385 .[23] A. Molchanov, O. Kroemer, Z. Su, and G. S. 
Sukhatme. Contact localization on grasped objectsusing tactile sensing. In 2016 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 216–222. doi:10.1109/IROS.2016.7759058. ISSN: 2153-0866.[24] Y . Karayiannidis, C. Smith, F. E. Vina, and D. Kragic. Online contact point estimation foruncalibrated tool use. In 2014 IEEE International Conference on Robotics and Automa-tion (ICRA) , pages 2488–2494. IEEE. ISBN 978-1-4799-3685-4. doi:10.1109/ICRA.2014.6907206. URL http://ieeexplore.ieee.org/document/6907206/ .[25] C. Higuera, S. Dong, B. Boots, and M. Mukadam. Neural contact fields: Tracking extrinsiccontact with tactile sensing. URL http://arxiv.org/abs/2210.09297 .[26] A. Sipos and N. Fazeli. Simultaneous contact location and object pose estimation using pro-prioception and tactile feedback. In 2022 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 3233–3240. doi:10.1109/IROS47612.2022.9981762. ISSN:2153-0866.[27] M. Van der Merwe, D. Berenson, and N. Fazeli. Learning the dynamics of compliant tool-environment interaction for visuo-tactile contact servoing. URL http://arxiv.org/abs/2210.03836 .10[28] A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu,E. Romo, N. Fazeli, F. Alet, N. C. Dafle, R. Holladay, I. Morona, P. Q. Nair, D. Green, I. Taylor,W. Liu, T. Funkhouser, and A. Rodriguez. Robotic pick-and-place of novel objects in clutterwith multi-affordance grasping and cross-domain image matching. URL http://arxiv.org/abs/1710.01330 .[29] P. R. Florence, L. Manuelli, and R. Tedrake. Dense object nets: Learning dense visual objectdescriptors by and for robotic manipulation. URL http://arxiv.org/abs/1806.08756 .[30] N. Koenig and A. Howard. Design and use paradigms for gazebo, an open-source multi-robotsimulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS)(IEEE Cat. No. 04CH37566) , volume 3, pages 2149–2154. IEEE, 2004.[31] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical imagesegmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings,Part III 18 , pages 234–241. Springer, 2015.[32] N. Siddique, S. Paheding, C. P. Elkin, and V . Devabhaktuni. U-net and its variants for medicalimage segmentation: A review of theory and applications. Ieee Access , 9:82031–82057, 2021.[33] M. Danielczuk, M. Matl, S. Gupta, A. Li, A. Lee, J. Mahler, and K. Goldberg. Segmentingunknown 3d objects from real depth images using mask r-cnn trained on synthetic data. In2019 International Conference on Robotics and Automation (ICRA) , pages 7283–7290. IEEE,2019.[34] H. Ye, C. Xie, T. Cai, R. Li, Z. Li, and L. Wang. Towards a theoretical framework of out-of-distribution generalization. Advances in Neural Information Processing Systems , 34:23519–23531, 2021.[35] R. Geirhos, J.-H. Jacobsen, C. Michaelis, R. Zemel, W. Brendel, M. Bethge, and F. A. Wich-mann. Shortcut learning in deep neural networks. Nature Machine Intelligence , 2(11):665–673,2020.[36] V . Nagarajan, A. Andreassen, and B. Neyshabur. Understanding the failure modes of out-of-distribution generalization. arXiv preprint arXiv:2010.15775 , 2020.[37] R. Liu, J. Lehman, P. Molino, F. Petroski Such, E. Frank, A. Sergeev, and J. Yosinski. Anintriguing failing of convolutional neural networks and the coordconv solution. Advances inneural information processing systems , 31, 2018.[38] G. 
Burel and D. Carel. Detection and localization of faces on digital images. Pattern Recogni-tion Letters , 15(10):963–967, 1994.[39] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale,deformable part model. In 2008 IEEE conference on computer vision and pattern recognition ,pages 1–8. Ieee, 2008.[40] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate objectdetection and semantic segmentation. In Proceedings of the IEEE conference on computervision and pattern recognition , pages 580–587, 2014.[41] Z. Teed and J. Deng. RAFT: Recurrent all-pairs field transforms for optical flow. URL http://arxiv.org/abs/2003.12039 .[42] S. Bahl, R. Mendonca, L. Chen, U. Jain, and D. Pathak. Affordances from human videos as aversatile representation for robotics. URL http://arxiv.org/abs/2304.08488 .11A Randomized Simulation Data GenerationEnvironment objects: At the beginning of each 15 second episode, 8 objects are spawned intothe environment with either box or cylinder geometry with equal probability. We do not includespherical geometries as they are rare in household/kitchen settings, though we do introduce theminto the possible grasped object geometries.The spawn position of each object is sampled uniformly in a box above the table ( x: [0.2m,0.8m],y: [−0.38m,0.38m],z: [0.1m,0.4m]). Euler angles are sampled uniformly between ±π. Therespective dimensions for each primitive geometry (height, width, length for box, diameter andlength for cylinder, and diameter for sphere) is sampled uniformly between 0.02m and 0.3m, masssampled uniformly between 0.05kg and 2kg, and friction sampled uniformly between 0and1.Grasped object: The grasped object is spawned with the same procedure used to generate thegeometry, dimensions, mass, friction as above, but the position is sampled uniformly in a cylinderdefined in the end-effector frame with radius 0.001m and end-points at −0.007m and 0.001m alongthe z-axis. The orientation is sampled uniformly in a cone about the z-axis of the end-effector framewith an aperture angle of 0.7π.Robot policy: Desired delta end-effector positions are sampled at 50Hz from an Ornstein-Uhlenbeck (OU) process which is tracked by an impedance controller. Desired orientation is con-stant and chosen to match the world frame, though we use very low orientation stiffness in theimpedance controller to encourage diverse orientations during contact.In the general form of the OU process: dxt=θ(μ−xt)dt+σ dW t, the first term is deterministicand draws the process back to a constant μ(referred to as “drift”) with linear gain θ, while the secondterm is the stochastic wiener process where the variance is scaled by σ.The desired x, ytrajectory is sampled independently with different parameters from the desired ztrajectory as we’d like to keep the motion in the xy-plane as diverse as possible, while ensuring theend-effector is close enough to the table to make frequent contact with the environment objects andtable surface.For the x, yprocess, we define four halfspaces in polar coordinates that contribute to the first driftterm in the OU process to keep the desired trajectories within a polar rectangle around the robot’sworkspace. These terms only become active when the end-effector leaves the respective halfspace.Hence, when the robot is within all halfspaces, the x, yprocess becomes simply a wiener process.The polar rectangle boundaries are defined such that the radius is between 0.35m and 0.7m and theangle is between ±2.2radians. 
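To make the exploration target process concrete, the following is a minimal Euler–Maruyama sampler for a scalar OU process of the form above. It is only a sketch: the half-space drift terms that keep the end-effector inside the workspace are omitted, and the gains and noise scales shown are placeholders (the values actually used are stated next).

```python
import numpy as np

def sample_ou(x0, theta, mu, sigma, dt, n_steps, rng):
    """Euler-Maruyama discretization of dx = theta * (mu - x) dt + sigma dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))                     # Wiener increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
dt = 1.0 / 50.0                                                # targets sampled at 50 Hz
n_steps = 750                                                  # 15-second episode
# Placeholder parameters: no drift in the xy-plane while inside the workspace
# (a pure Wiener process), and a drift pulling z toward a height above the table.
xy_targets = sample_ou(x0=0.5, theta=0.0, mu=0.5, sigma=0.2, dt=dt, n_steps=n_steps, rng=rng)
z_targets = sample_ou(x0=0.3, theta=1.0, mu=0.2, sigma=0.05, dt=dt, n_steps=n_steps, rng=rng)
```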
For θwe choose both x, yto be 20and for the variance matrix wechoose a diagonal with both elements equal to 0.22For the zprocess, we choose a drift that is 0.2m above the table surface, θequal to 1, and a varianceof0.052B Adapted Greedy NMS AlgorithmAlgorithm 1 Adapted NMS AlgorithmInput: P∈[0,1]H×W,pt∈[0,1],rnms∈R+,Kiter∈NOutput: C ⊂N21C ← { (i, j)|Pi,j> pt}2C ← sort(C,by=Pi,j,order =descending )3fork= 1, . . . , min(Kiter,|C|)do4 C ← C \ { (i, j)|((i−ik)2,(j−jk)2)2< rnms andPi,j< Pik,jk}5end for6return C12C Tabular ResultsRecall ↑ Precision ↑ F1↑ Avg. TP distance ↓methodU-Net depth 0.577±0.012 0.524±0.014 0.548±0.003 3.317±0.052w/o obj-ref+flow 0.585±0.011 0.467±0.008 0.519±0.004 3.42±0.08w/o obj-ref 0.559±0.01 0.5±0.002 0.528±0.005 3.341±0.048w/o flow 0.574±0.011 0.518±0.01 0.544±0.004 3.453±0.029w/o crop 0.551±0.011 0.53±0.015 0.539±0.005 3.265±0.061im2contact 0.613±0.017 0.514±0.017 0.558±0.005 3.526±0.05Table 1: Metrics on simulation data with all ablationsRecall ↑ Precision ↑ F1↑ Avg. TP distance ↓methodU-Net Depth 0.41±0.022 0.36±0.017 0.38±0.01 4.1±0.138w/o obj-ref+flow 0.58±0.018 0.44±0.014 0.5±0.009 4.09±0.109w/o obj-ref 0.57±0.009 0.47±0.006 0.52±0.006 4.07±0.103w/o flow 0.56±0.019 0.5±0.011 0.53±0.005 3.92±0.085w/o crop 0.44±0.025 0.39±0.025 0.41±0.007 4.47±0.065im2contact 0.56±0.022 0.5±0.017 0.53±0.005 4.08±0.094Table 2: Metrics on real data with all ablationsD Preliminary Evaluations of Adding Force-Torque SensingWe have found in our initial efforts to integrate force-torque sensing that this does indeed improveperformance in simulation, with an increase in F1 score from 0.56 to 0.60 as seen in Table 3. How-ever, sim-to-real transfer is challenging because of calibration errors, drift, and deterioration. Toaccount for this, we compensate the available joint-torque sensing with a model of the robot to-gether with estimated grasped-object mass to approximate the external joint-torques which can beattributed to contact. However, unmodeled effects remain: robot joint friction is difficult to identifywell, and object inertial properties are only coarsely estimated. As a result, im2contact + F/T per-forms worse in real data: F1 score drops from 0.53 to 0.50 (Table 4). We show qualitative videoexamples on real data comparing im2contact with the addition of F/T sensing at our website:https://sites.google.com/view/im2contact .We may conclude that F/T sensing does not offer a simple silver bullet solution for consistentlyimproving extrinsic contact sensing in our settings. Incorporating additional sensors to improve theperformance of our vision-only im2contact approach will require non-trivial additional contribu-tions.Recall ↑ Precision ↑ F1↑ Avg. TP distance ↓methodim2contact+F/T 0.62±0.004 0.584±0.011 0.601±0.006 3.507±0.042im2contact 0.613±0.017 0.515±0.018 0.558±0.005 3.526±0.05Table 3: Metrics on simulation data comparing im2contact with and without F/T sensing13Recall ↑ Precision ↑ F1↑ Avg. TP distance ↓methodim2contact+F/T 0.479±0.024 0.54±0.017 0.504±0.011 4.069±0.123im2contact 0.56±0.022 0.501±0.017 0.526±0.004 4.078±0.093Table 4: Metrics on real data comparing im2contact with and without F/T sensingD.1 Implementation Details of Adding Force-Torque SensingThe Franka Panda robot provides estimated external joint-torques by compensating the joint-torquemeasurements with an internal model of the robot’s inertial and kinematic properties. 
These are thentransformed into a “virtual” external force-torque measurement at the end-effector by applying thepseudo-inverse of the jacobian, followed by alignment into the world-frame.We attempt to compensate the unknown grasped-object’s inertial effects by coarsely estimating theinduced gravitational wrench. On real data, we assume the robot does not move at the beginningof the episode and average the first 0.5 seconds of the world-frame z-component of the externalwrench, followed by division by gravitational acceleration to obtain an estimated mass. In sim, wesimply use the ground-truth object mass. In both sim and real, We assume the CoM location is fixedand located 0.15m along -z-axis of the end-effector frame to obtain the adjoint map to compute theapproximate gravitational wrench as a function of the end-effector pose.On real data, we apply a causal low-pass Butterworth filter to reduce the observed force-torqueoscillations during free-space motion that we suspect are attributable to coupled effects between theunmodelled joint-friction and the impedance controller.We integrate the current-most external wrench estimate to im2contact by first passing the 6-dimensional wrench vector through a small MLP, followed by tiling and concatenation to the15×20×1024 bottleneck of the U-Net. We train with the same training procedure, hyperparam-eters, and dataset as before.E Implementation Details of Human Demonstration EvaluationsWe localize the human hand in the image by affixing a passive reflective ball to the hand which caneasily be thresholded and localized in the RGB-D camera’s infrared image stream. We apply thesame 90 ×110 cropping window to the ball pixel coordinate with a relative shift downward by 55pixels.We modify the cropping window hyperparameter during training of im2contact by additionallyshifting the window down by 14 pixels to mitigate effects of domain shift in the agent’s end-effector.Otherwise, we retain the same training procedure, hyperparameters, and dataset as before.14 |
q0VAoefCI2 | Task-Oriented Koopman-Based Control withContrastive EncoderXubo LyuSimon Fraser Universityxlv@sfu.caHanyang HuSimon Fraser Universityhha160@sfu.caSeth SiriyaUniversity of Melbournessiriya@student.unimelb.edu.auYe PuUniversity of Melbourneye.pu@unimelb.edu.auMo ChenSimon Fraser Universitymochen@cs.sfu.caAbstract: We present task-oriented Koopman-based control that utilizes end-to-end reinforcement learning and contrastive encoder to simultaneously learn theKoopman latent embedding, operator, and associated linear controller within aniterative loop. By prioritizing the task cost as the main objective for controllerlearning, we reduce the reliance of controller design on a well-identified model,which, for the first time to the best of our knowledge, extends Koopman controlfrom low to high-dimensional, complex nonlinear systems, including pixel-basedtasks and a real robot with lidar observations. Code and videos are available here.Keywords: Learning and control, Koopman-based control, Representation learning1 IntroductionRobot control is crucial and finds applications in various domains. Nonlinear and linear controlare two primary approaches for robot control. Nonlinear control [1, 2, 3] is suitable for complexsystems when a good nonlinear dynamical model is available. But such a model is not easy to obtainand the nonlinear computation can be sophisticated and time-consuming. Linear control [4, 5, 6]is relatively simple to implement and computationally efficient for systems with linear dynamics,but can exhibit poor performance or instability in realistic systems with highly nonlinear behaviors.Based on Koopman operator theory [7], Koopman-based control [8, 9, 10, 11] offers a data-drivenapproach that reconciles the advantages of nonlinear and linear control to address complex robotcontrol problems. It transforms the (unknown) nonlinear system dynamics into a latent space inwhich the dynamics are (globally) linear. This enables efficient control and prediction of nonlinearsystems using linear control theory.Numerous studies have been done on Koopman-based control and they typically follow a two-stagemodel-oriented process [11, 12, 13]. The first stage is to identify a Koopman model – that is, a glob-ally linear model – from system data, which involves finding a Koopman operator and its associatedembedding function to represent linearly evolving system dynamics in the latent space. Classicalmethods use matrix factorization or solve least-square regression with pre-defined basis functions,while modern methods leverage deep learning techniques [14, 12, 11, 13, 15, 16], such as deep neu-ral networks (DNNs) and autoencoder frameworks, to enhance Koopman model approximation. Inthe second stage, a linear controller is designed over the latent space based on the Koopman model.Various optimal control methods for linear systems, including Linear Quadratic Regulator (LQR)[17, 13, 12] and Model Predictive Control (MPC) [18, 19, 20, 16], have been employed.The model-oriented approach in the aforementioned works prioritizes Koopman model accuracy forprediction rather than control performance. While it allows the model to be transferred and reusedacross different tasks, it has certain limitations. Firstly, it involves a sequential two-stage process,where the performance of the controller is highly dependent on the prediction accuracy of the Koop-7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.man model. 
Thus, slight prediction inaccuracies of the learned model can significantly degrade thesubsequent control performance. Secondly, even if the model is perfect, the cost function parametersfor the linear controller (e.g. Q and R matrices in LQR controller) need careful manual tuning in bothobserved and latent space in order to have good control performance. These challenges are partic-ularly pronounced in problems with high-dimensional state spaces, thus restricting the applicabilityof Koopman-based control to low-dimensional scenarios.Contributions . In this paper, we propose a task-oriented approach with a contrastive encoder forKoopman-based control of robotic systems. Unlike existing works that prioritize the Koopmanmodel for prediction, our task-oriented approach emphasizes learning a Koopman model with theintent of yielding superior control performance. To achieve this, we employ an end-to-end rein-forcement learning (RL) framework to simultaneously learn the Koopman model and its associatedlinear controller over latent space within a single-stage loop. In this framework, we set the mini-mization of the RL task cost to be the primary objective, and the minimization of model predictionerror as an auxiliary objective. This configuration has the potential to alleviate the aforementionedlimitations: (1) RL optimization provides a dominant, task-oriented drive for controller update, re-ducing its reliance on accurate model identification, (2) manual tuning of cost function parametersis unnecessary as they can be learned implicitly along with the controller in the end-to-end loop.We adopt a contrastive encoder as the Koopman embedding function to learn the linear latent rep-resentation of the original nonlinear system. In contrast to the commonly-used autoencoder, thecontrastive encoder is demonstrated to be a preferable alternative, delivering latent embedding thatis well-suited for end-to-end learning, especially in high-dimensional tasks such as pixel-based con-trol. To design the Koopman controller, we develop a differentiable LQR control process to be thelinear controller for the Koopman latent system. This controller is gradient-optimizable, allowing usto integrate it into the end-to-end RL framework and optimize its parameters through gradient back-propagation. We empirically evaluate our approach across various simulated tasks, demonstratingsuperior control performance and accurate Koopman model prediction. We compare our approachwith two-stage Koopman-based control and pure RL, and deploy it on a real robot.Figure 1: Overview of our method. We adopt an end-to-end RL framework to simultaneously learna Koopman model and its associated controller. The Koopman model includes a contrastive encoderas the embedding function and a linear matrix as the operator. The Koopman controller is integratedinto the loop as a differentiable LQR controller that allows for gradient-based updates. We optimizethe entire loop by considering the task cost as the primary objective and incorporating contrastiveand model prediction losses as auxiliary objectives.2 Related WorkKoopman-Based Control. B.O. Koopman [7] laid the foundation for analyzing nonlinear systemsthrough an infinite-dimensional linear system via the Koopman operator. Subsequent works pro-2posed efficient computation algorithms such as dynamical mode decomposition (DMD) [21, 22] andextended DMD (EDMD) [23, 24] to approximate the Koopman operator from observed time-seriesdata. 
Recent research has expanded the Koopman operator theory to controlled systems [9, 25], and explored its integration with various control techniques such as LQR [26], MPC [20, 18, 27, 19], and pulse control [28]. The emergence of deep learning has further enhanced the learning of the Koopman embedding and operator using neural networks and autoencoders [14, 15], enabling their integration with optimal control techniques [11, 12, 16].
Contrastive Representation Learning. Contrastive representation learning has emerged as a prominent approach to self-supervised learning in computer vision and natural language processing [29, 30, 31, 32, 33, 34, 35]. It employs an encoder to learn a latent space in which the representations of similar sample pairs are close while those of dissimilar pairs are far apart. Recent works have extended contrastive learning to RL for robot control. In particular, CURL [36] learns a visual representation for RL tasks by matching embeddings of two data-augmented versions of the raw pixel observation in a temporal sequence. The use of a contrastive encoder in RL enables effective robot control directly from high-dimensional pixel observations.
Relations to Our Work. Our work falls into the realm of using deep learning for Koopman-based control. In contrast to existing two-stage approaches [11, 12] involving model identification and controller design, we propose a single-stage, end-to-end RL loop that simultaneously learns the Koopman model and controller in a task-oriented way. We also draw inspiration from the use of a contrastive encoder [36], and specifically tailor it as a Koopman embedding function for nonlinear systems with physical states and pixel observations. Our approach extends Koopman-based control to high-dimensional control tasks beyond the traditional low-dimensional settings.
3 Problem Formulation
Consider an optimal control problem over a nonlinear, controlled dynamical system
\min_{u_{0:T-1}} \sum_{k=0}^{T-1} c(x_k, u_k) \quad \text{subject to} \quad x_{k+1} = f(x_k, u_k), \quad (1)
where the state x evolves at each time step k following a dynamical model f. We aim to find a control sequence u_{0:T-1} to minimize the cumulative cost c(x_k, u_k) over T time steps. Koopman operator theory [7, 9] allows the lifting of the original state and input spaces x ∈ X and u ∈ U to an infinite-dimensional latent embedding space z ∈ Z via a set of scalar-valued embedding functions g: (X, U) → Z, where the evolution of the latent embedding z_k = g(x_k, u_k) can be globally captured by a linear operator \mathcal{K}, as shown in Eq. (2):
\mathcal{K} g(x_k, u_k) \triangleq g(f(x_k, u_k), u_{k+1}). \quad (2)
Identifying the Koopman operator \mathcal{K} as well as the embedding function g is the key to Koopman-based control. In practice, \mathcal{K} is often approximated using a finite-dimensional matrix K, and the choice of g is typically determined through heuristics or learning from data. Recent research [11, 12] has employed neural networks ψ(·) to encode the state x, and defined the Koopman embedding function as g(x, u) = [ψ(x)\ u]. Correspondingly, K is decoupled into state and control components, denoted by matrices A and B, to account for ψ(x) and u respectively. This results in a linear time-invariant system with respect to ψ(x) and u in Eq.
(3), facilitating linear system analysis and control synthesis:
K g(x_k, u_k) = [A\ B]^\top [\psi(x_k)\ u_k] = A\psi(x_k) + B u_k = \psi(x_{k+1}). \quad (3)
The goal of Koopman-based control is to identify the Koopman operator K = [A\ B]^\top, the embedding function ψ(x), as well as a linear controller u = π(x) to minimise the total task cost.
4 Method: Task-Oriented Koopman Control with Contrastive Encoder
4.1 Contrastive Encoder as Koopman Embedding Function
Deep neural networks are extensively employed as flexible and expressive nonlinear approximators for learning Koopman embeddings in a latent space. Inspired by the success of contrastive learning, we adopt a contrastive encoder to parameterize the embedding function ψ(·). Specifically, for each state x_i in the state set X = {x_i | i = 0, 1, 2, ...}, we create its associated query sample x_i^q and a set of key samples x_i^k that include positive and negative samples x_i^+ and {x_j^- | j ≠ i}. x_i^+ is generated by using different versions of augmentations on x_i, while {x_j^- | j ≠ i} are generated by applying similar augmentations to all the other states X \ {x_i} = {x_j | j ≠ i}.
Following [32, 36], we use two separate encoders ψ_{θ_q} and ψ_{θ_k} to compute the latent embeddings z_i^q = ψ_{θ_q}(x_i^q), z_i^+ = ψ_{θ_k}(x_i^+), and z_j^- = ψ_{θ_k}(x_j^-). We compute the contrastive loss L_cst over an RL data batch B based on Eq. (4) to update the encoder parameters θ_q, θ_k, and W, which is a learnable parameter matrix that measures the similarity between the query and key samples. Two encoders ψ_{θ_q} and ψ_{θ_k} are used for contrastive loss computation, but eventually only ψ_{θ_q} serves as the Koopman embedding function, and we simplify its notation as ψ_θ. We use t = (z, u, z', r, d) to denote a tuple with current and next latent states z, z', action u, reward r = −c, and done signal d.
L_{cst} = \mathbb{E}_{t \sim B}\left[ \log \frac{\exp(z_i^{q\top} W z_i^+)}{\exp(z_i^{q\top} W z_i^+) + \sum_{j \neq i} \exp(z_i^{q\top} W z_j^-)} \right] \quad (4)
Different encoder structures and augmentation strategies are required to handle system states depending on how they are represented. For pixel-based states, we adopt convolutional layers as the encoder structure and apply random cropping for augmentation [32, 36]. For physical states, we utilize fully connected layers as the encoder structure and augment the states by adding uniformly distributed, scaled random noise as defined in Eq. (5), where x_{|\cdot|} denotes the element-wise absolute value of x:
\Delta x \sim U(-\eta x_{|\cdot|}, \eta x_{|\cdot|}); \quad x^+ = x + \Delta x. \quad (5)
4.2 Linear Matrices as Koopman Operator
The Koopman operator describes linearly evolving system dynamics over the latent embeddings and can be represented by a matrix K. Following Eq. (3), we decompose K into two matrices A and B, representing the state and control coefficients of a linear latent dynamical system:
z_{k+1} = A z_k + B u_k. \quad (6)
To learn A and B, we optimise a model prediction loss L_m, described by the mean-squared error (MSE) defined in Eq. (7). \hat{z}_{k+1} is the latent embedding obtained through the contrastive encoder at step k+1, and it supervises the predicted latent embedding at step k+1 from Eq. (6):
L_m = \mathbb{E}_{t \sim B} \|\hat{z}_{k+1} - A z_k - B u_k\|^2; \quad \hat{z}_{k+1} = \psi_\theta(x_{k+1}). \quad (7)
4.3 LQR-In-The-Loop as Koopman Linear Controller
Algorithm 1: Iterative solution of DARE
1: Set the total number of iterations M
2: Prepare current A, B, Q, R; initialise P_M = Q
3: for m = M, M−1, M−2, ..., 1 do
4:   P_m = A^\top P_{m+1} A − A^\top P_{m+1} B (R + B^\top P_{m+1} B)^{-1} B^\top P_{m+1} A + Q
5: end for
6: Compute linear gain: G = (B^\top P_1 B + R)^{-1} B^\top P_1 A
7: Generate optimal control for latent embedding z: u^* = −G z
Given Koopman embeddings z = ψ_θ(x) and the associated linear latent system parameterized by K = [A\ B]^\top shown in Eq. (6), Koopman-based approaches allow for linear control synthesis over the latent space Z.
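A compact rendering of the recursion in Algorithm 1 is sketched below. PyTorch is assumed only so that the resulting gain remains differentiable with respect to (A, B, Q, R); the matrices are random placeholders standing in for the learned latent system, and the iteration count is illustrative.

```python
import torch

def lqr_gain(A, B, Q, R, n_iters=8):
    """Riccati recursion as in Algorithm 1: returns the LQR gain G with u* = -G z."""
    P = Q
    for _ in range(n_iters):
        BtP = B.T @ P
        # P <- A^T P A - A^T P B (R + B^T P B)^{-1} B^T P A + Q
        P = A.T @ P @ A - A.T @ P @ B @ torch.linalg.solve(R + BtP @ B, BtP @ A) + Q
    return torch.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Placeholder latent system: 6-dimensional embedding, 1-dimensional control.
A = torch.eye(6) + 0.01 * torch.randn(6, 6)
B = 0.1 * torch.randn(6, 1)
Q = torch.diag(torch.rand(6) + 0.1)   # diagonal, positive definite
R = torch.diag(torch.rand(1) + 0.1)
G = lqr_gain(A, B, Q, R)
z = torch.randn(6)                    # current latent embedding
u = -G @ z                            # control applied to the robot
```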
Formally, consider the infinite-time-horizon LQR problem in the Koopman latent space, formulated as Eq. (8), where Q and R are the state and control cost matrices. In practice, we choose to represent Q and R as diagonal matrices to maintain their symmetry and positive definiteness. The LQR latent reference, denoted as z_ref, can be obtained from ψ(x_ref) if x_ref is provided. z_ref can also be set to 0 if x_ref is not available, which is particularly useful in cases where the LQR problem does not have an explicit, static goal reference, such as controlling the movement of a cheetah.
\min_{u_{0:\infty}} \sum_{k=0}^{\infty} \left[ (z_k - z_{\text{ref}})^\top Q (z_k - z_{\text{ref}}) + u_k^\top R u_k \right] \quad \text{subject to} \quad z_{k+1} = A z_k + B u_k. \quad (8)
Solving the LQR problem in Eq. (8) involves solving the Discrete-time Algebraic Riccati Equation (DARE). One way this can be done is to take a standard iterative procedure that recursively updates the solution of the DARE until convergence, as shown in Algo. 1. In practice, we find that performing a small number of iterations, typically M < 10, is adequate to obtain a satisfactory and efficient approximation of the DARE solution. Thus, we build an LQR control policy π_LQR over the Koopman latent embedding z that depends on the set of parameters A, B, Q, R, as described by Eq. (9):
u \sim \pi_{\text{LQR}}(z \mid G) \triangleq \pi_{\text{LQR}}(z \mid P_1, A, B, R) \triangleq \pi_{\text{LQR}}(z \mid A, B, Q, R). \quad (9)
Together with z = ψ_θ(x), Eq. (9) implies that the Koopman control policy π_LQR is differentiable with respect to the parameter group Ω = {Q, R, A, B, ψ_θ} over the input x. Therefore, this process can be readily used in our gradient-based, end-to-end RL framework. During learning, we follow Algo. 1 to dynamically solve an LQR problem (8) at each step k with the current parameters Ω to derive a control u_k for the robot. To optimize the controller π_LQR towards lowering the task-oriented cost, we adopt a well-known RL algorithm, soft actor-critic (SAC) [37], to maximize the objective of Eq. (10) via off-policy gradient ascent over data sampled from the batch buffer B, where Q_1, Q_2 are the two Q-value approximators used in SAC. In principle, any other RL algorithm can also be utilized.
L_{\text{sac}} = \mathbb{E}_{t \sim B}\left[ \min_{i=1,2} Q_i(z, u) - \alpha \log \pi_{\text{sac}}(u \mid z) \right]; \quad z = \psi_\theta(x). \quad (10)
4.4 End-to-End Learning for Koopman Control
Algorithm 2: End-to-End Learning for Koopman Control
1: Initialise Koopman control parameters Q, R, A, B, ψ_θ
2: Reset task environment E
3: Initialise a data replay buffer D
4: for iteration η = 0, 1, 2, ... do
5:   Collect new roll-outs τ_η from E by running policy π_LQR following Algo. 1, and save τ_η to D
6:   Sample a batch of data B from D
7:   Compute L_sac, L_cst, L_m based on B
8:   Update Ω = {Q, R, A, B, ψ_θ} based on L_sac
9:   Update ψ_θ and A, B based on L_cst and L_m respectively
10: end for
We summarise the previous discussions and present the end-to-end learning process for task-oriented Koopman control with a contrastive encoder, as illustrated in Fig. 1 and Algo. 2. We repeatedly collect batches of trajectory data from the task environment E and utilise three objectives to update the parameter group Ω = {Q, R, A, B, ψ_θ} at each iteration. We take the RL task loss L_sac from Eq. (10) to be the primary objective, used to optimize all parameters in Ω for better control performance on the task. We use the contrastive learning loss L_cst and the model prediction loss L_m from Eq. (4) and Eq. (7) as two auxiliary objectives to regularise the parameter learning.
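For concreteness, one auxiliary update inside this loop might look like the following sketch, with an InfoNCE-style term standing in for Eq. (4) and the MSE of Eq. (7) for the latent model. The encoder, similarity matrix W, matrices A and B, and the data batch are placeholders; a single encoder stands in for the query/key pair; and the SAC term is omitted since it follows the standard formulation of [37].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, state_dim, ctrl_dim, batch = 50, 4, 1, 32
psi = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))  # encoder psi_theta
W = nn.Parameter(torch.eye(latent_dim))              # learnable similarity matrix
A = nn.Parameter(torch.eye(latent_dim))              # Koopman state matrix
B = nn.Parameter(torch.zeros(latent_dim, ctrl_dim))  # Koopman control matrix
opt = torch.optim.Adam(list(psi.parameters()) + [W, A, B], lr=1e-3)

# Placeholder batch of transitions (x, u, x_next).
x, x_next = torch.randn(batch, state_dim), torch.randn(batch, state_dim)
u = torch.randn(batch, ctrl_dim)

# Eq. (5)-style augmentation: uniform noise scaled by the element-wise magnitude of x (eta = 0.1 here).
augment = lambda s: s + 0.1 * s.abs() * (2 * torch.rand_like(s) - 1)
z_q, z_k = psi(augment(x)), psi(augment(x))

logits = z_q @ W @ z_k.T                                 # pairwise similarities; diagonal entries are positives
loss_cst = F.cross_entropy(logits, torch.arange(batch))  # InfoNCE over the batch

z, z_next = psi(x), psi(x_next)
loss_m = F.mse_loss(z @ A.T + u @ B.T, z_next)           # Eq. (7): || z_{k+1} - A z_k - B u_k ||^2

opt.zero_grad()
(loss_cst + loss_m).backward()
opt.step()
```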
L_cst is used to update ψ_θ(·) to ensure a contrastive Koopman embedding space, while L_m is used to update A, B to ensure an accurate Koopman model in the embedding space.
Figure 2: Dynamical system behaviors obtained by the learned Koopman controller: (a) controlled states of the 4D CartPole system; (b) visualization of pixel-based CartPole control; (c) visualization of cheetah 18D control.
5 Simulation Results
We present simulated experiments to mainly address the following questions: (1) Can our method achieve desirable Koopman control performance for problems involving different state spaces with different dimensionalities? (2) Are we able to obtain a well-fitted globally linear model in the latent space? For all control tasks, we assume the true system models are unknown.
5.1 Task Environments
We include three robotic control tasks with varying dimensions in their state and control spaces from the DeepMind Control Suite simulator [38]: (1) 4D CartPole Swingup. The objective of this task is to swing up a cart-attached pole that initially points downwards and maintain its balance. To achieve this, we need to apply proper forces to the cart. This task has 4D physical states of cart-pole kinematics as well as 1D control. (2) 18D Cheetah Running. The goal of this task is to coordinate the movements of a planar cheetah to enable its fast and stable locomotion. It has 18D states describing the kinematics of the cheetah's body, joints, and legs. The 6D torques are used as the control to be applied to the cheetah's joints. (3) Pixel-Based CartPole Swingup. The CartPole swingup task with the third-person image as state.
5.2 Result Analysis
We report the results in Fig. 2, 3, 4 to demonstrate the effectiveness of our method. Fig. 2 showcases dynamical system behaviors by running a learned Koopman controller. The state evolution and temporal visual snapshots of the three tasks illustrate the successful control achieved by our method. Fig. 3 shows the Koopman controller's performance by comparing its evaluation cost with the reference cost at various learning stages. The reference cost, obtained from [36], is considered the optimal solution to the problem. All experiments are tested over 5 random seeds. Across all three tasks, our method can eventually reach within 10% of the reference cost and continues to make further progress. This indicates our method is generally applicable to both simple, low-dimensional systems and very complex systems involving high-dimensional physical and pixel states.
Figure 3: Mean and standard deviation of the error between the reference cost and our controller's cost during learning for (a) 4D CartPole swingup, (b) 18D cheetah running, and (c) pixel-based CartPole swingup.
Figure 4: Distribution maps of 2D data points projected via t-SNE from latent trajectories for (a) 4D CartPole swingup (mean model error 2.72 × 10^-3), (b) 18D cheetah running (mean model error 8.10 × 10^-2), and (c) pixel-based CartPole (mean model error 3.71 × 10^-1). z_next denotes true trajectories while z_pred denotes predicted trajectories using the learned Koopman model.
Fig. 4 shows the Koopman model's prediction accuracy in the latent space. We employ t-SNE [39] to project the latent trajectories from a 50D latent space onto a 2D map for improved visualization. The plot in Fig. 4 shows the true and predicted states from trajectories consisting of 1000 steps. Significant overlapping and matching patterns are observed in the distribution of the data points for the 4D CartPole and 18D cheetah systems.
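A minimal version of this qualitative check is sketched below; scikit-learn's TSNE is assumed, and the trajectories are random placeholders standing in for the encoder embeddings and the linear model's predictions.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
z_true = rng.normal(size=(1000, 50))                       # encoder embeddings along a trajectory
z_pred = z_true + 0.05 * rng.normal(size=(1000, 50))       # stand-in for one-step linear-model predictions

# Embed both sets jointly so they share the same 2D map, then split them back for plotting.
joint_2d = TSNE(n_components=2, init="pca", random_state=0).fit_transform(
    np.concatenate([z_true, z_pred], axis=0))
true_2d, pred_2d = joint_2d[:1000], joint_2d[1000:]

# Illustrative prediction-error summary in the original latent space.
mean_model_error = float(np.mean(np.sum((z_true - z_pred) ** 2, axis=1)))
```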
These matching patterns, together with the low model prediction error, indicate the potential of utilizing a globally linear latent model to capture the state evolution in both simple and highly complex nonlinear systems. However, for pixel-based CartPole control, the projected states do not perfectly match, suggesting difficulties in accurately modeling the pixel space. Nevertheless, our method still achieves good control performance, even with slight modeling inaccuracies. This highlights the advantage of our approach, where the controller is less affected by the model.
6 Comparison with Other Methods
6.1 Ours vs. Model-Oriented Koopman Control
We compare our method with model-oriented Koopman control (MO-Kpm), which often requires a two-stage process of Koopman model identification and linear controller design. We compare with the most recent work [12] and conduct analysis through the 4D CartPole-swingup task.
Table 1: Total control cost and its variation under different levels of model error using MO-Kpm and our method.
Model Error | MO-Kpm Total Cost | MO-Kpm Cost Var | TO-Kpm (Ours) Total Cost | TO-Kpm (Ours) Cost Var
~10^-4 | -188.10 | - | -872.18 | -
~10^-3 | -107.67 | 42.75% | -846.88 | 2.90%
~10^-2 | -64.32 | 40.27% | -784.01 | 7.42%
Controller More Robust to Model Inaccuracy. Table 1 presents the performance of the Koopman controller under varying levels of Koopman model accuracy. MO-Kpm experiences a rapid increase in total control cost with slightly increasing model error. In contrast, our method demonstrates superior and consistent control performance, indicating its better control quality as well as less dependency on the model's accuracy. This advantage arises from designing the controller primarily based on task-oriented costs rather than relying heavily on the model. Thus, our method is applicable not only to low-dimensional systems but also to complex and high-dimensional scenarios, such as the cheetah and pixel-based CartPole, where MO-Kpm cannot obtain a reasonable control policy.
Figure 5: Learned Q matrix.
Table 2: Manually tuned and learned Q matrices for latent LQR, and their associated control costs.
MO-Kpm | Total Cost
Q1 = Diag(84.12, 62.07, 65.79, 0.04, 0.04, 0, ...) | -188.10
Q2 = Diag(0.01, 10, 10, 0.01, 0.01, 0, 0, ...) | -109.23
Q3 = Diag(10, 60, 60, 0.01, 0.01, 0, 0, ...) | -70.65
Q4 = Diag(10, 60, 60, 10, 10, 0.1, 0.1, ...) | -124.80
TO-Kpm (Ours) | Total Cost
Q is shown in Fig. 5 | -846.88
Automatic Learning of Q Matrix in Latent Space. One major challenge of MO-Kpm is the difficulty in determining the state weight matrix Q for the latent cost function (Eq. (8)), especially for latent dimensions that may not have direct physical meanings. This challenge can lead to poor control performance, even when the identified model is perfect. Table 2 compares the control costs obtained from several manually tuned Q matrices under the best-fitted Koopman model (10^-4 level) with the learned Q using our method. Our approach enables automatic learning of Q over the latent space and achieves the best control performance.
6.2 Ours vs. CURL
We compare our method with CURL [36], a model-free RL method that uses a contrastive encoder for latent representation learning and a neural network policy for control.
System Analysis using Control Theory. Our method differs from CURL in that we learn a linear Koopman model, whereas CURL does not. The presence of a Koopman model (parameterized by A, B in Eq. 6) allows us to analyze the system using classical control theory and provides insights for optimizing the controller design.
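As a sketch of the kind of analysis an explicit (A, B) pair enables, the snippet below computes discrete-time pole magnitudes and the rank of the controllability matrix [B, AB, ..., A^{n-1}B]; the matrices here are random placeholders rather than a learned latent system, and the quantities it reports are the ones discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 1                                    # latent and control dimensions (placeholders)
A = np.eye(n) + 0.01 * rng.normal(size=(n, n))  # a learned Koopman state matrix would go here
B = rng.normal(size=(n, m))                     # a learned Koopman control matrix would go here

poles = np.linalg.eigvals(A)                    # discrete-time poles; |pole| > 1 indicates instability
n_unstable = int(np.sum(np.abs(poles) > 1.0))

blocks, AkB = [], B
for _ in range(n):                              # build the controllability matrix [B, AB, ..., A^{n-1}B]
    blocks.append(AkB)
    AkB = A @ AkB
ctrb_rank = int(np.linalg.matrix_rank(np.hstack(blocks)))
print(f"{n_unstable} unstable pole(s); controllability rank = {ctrb_rank} of {n}")
```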
For the CartPole system, we perform stability analysis on boththe 50D latent and the 4D true systems, and draw the pole-zero plots in Fig. 6. We find that thelearned system demonstrates the same inherent instability as the true system, with the true system’spoles accurately reflected in the poles of the latent system (overlapping blue and red dots).We analyze the controllability of the learned latent system and find its matrix rank to be 6, whichindicates that a latent dimension of 50 results in excessive uncontrollable states. Using this informa-tion, we apply our method with a lower-dimensional 6D latent space and it is able to maintain thesame control and model performance. Further decreasing the latent dimension to 4 leads to degradedcontrol performance, suggesting that the controllability matrix rank is valuable for controller design.This demonstrates the benefit of having an interpretable representation of the state space.Figure 6: Pole-zero plot of true andlearned latent CartPole systems.Latent System Dimensions Total Cost Model ErrorDim (Z) = 50 -846.88 7.76×10−3Dim (Z) = rank( WZ) = 6 -834.18 6.3×10−3Dim (Z) = 4 -253.80 5.4×10−2CURL Control Performance -841 -Table 3: Our method achieves comparable control cost to CURLwhile providing more interpretable information about the system.7 Zero-Shot Sim-to-Real EvaluationWe deploy our algorithm trained from the Gazebo simulator to the turtlebot3 burger ground robot.We use 2D Lidar measurements as well as the odometry information as observation, and the linearLQR policy generates the linear and angular velocities as control. We aim to control the robot tonavigate through a narrow curved path without any collisions. We directly transfer the trained policy(which is trained with only 40 episodes and each episode contains approximately 700 steps) to thehardware without any fine-tuning, demonstrating the applicability of our approach to a real robot.(a) 4s (b) 9s (c) 15s (d) 20s (e) 26s (f) 29s (g) 30sFigure 7: Snapshots of real robot curved trajectory at different time stamps (seconds).8 Conclusion and LimitationsIn this work, we propose task-oriented Koopman-based control with a contrastive encoder to enablesimultaneous learning of the Koopman embedding, model, and controller in an iterative loop whichextends the application of Koopman theory to high-dimensional, complex systems.Limitation: End-to-end RL sometimes suffers from poor data efficiency. Therefore, it can be bene-ficial to leverage an identified model from model-oriented approaches to derive a linear controller toinitialise end-to-end RL and improve efficiency. Furthermore, the method has only been validatedthrough simulations and needs hardware deployment for a more practical evaluation.8AcknowledgmentsWe thank reviewers for their invaluable feedback and express a special appreciation to our labmate Rakesh Shrestha for his help on the hardware setup. This project received support fromthe NSERC Discovery Grants Program, the Canada CIFAR AI Chairs program, and Huawei Tech-nologies Canada Co., Ltd. Ye Pu’s research was supported by the Australian Research Council(DE220101527).References[1] J.-J. E. Slotine, W. Li, et al. Applied nonlinear control , volume 199. Prentice hall EnglewoodCliffs, NJ, 1991.[2] B. Lantos and L. M ́arton. Nonlinear control of vehicles and robots . Springer Science &Business Media, 2010.[3] C. Dawson, Z. Qin, S. Gao, and C. Fan. Safe nonlinear control using robust neural lyapunov-barrier functions. 
In Conference on Robot Learning , pages 1724–1735, Auckland, NewZealand, December 2022. PMLR.[4] H. L. Trentelman, A. A. Stoorvogel, and M. Hautus. Control theory for linear systems . SpringerScience & Business Media, 2012.[5] Y . Li, X. Chen, and N. Li. Online optimal control with linear dynamics and predictions:Algorithms and regret analysis. Advances in Neural Information Processing Systems , 32, 2019.[6] F. Rinaldi, S. Chiesa, and F. Quagliotti. Linear quadratic control for quadrotors uavs dynamicsand formation flight. Journal of Intelligent & Robotic Systems , 70:203–220, 2013.[7] B. O. Koopman. Hamiltonian systems and transformation in hilbert space. Proceedings of theNational Academy of Sciences , 17(5):315–318, 1931.[8] B. E. Jackson, J. H. Lee, K. Tracy, and Z. Manchester. Data-efficient model learning for controlwith jacobian-regularized dynamic-mode decomposition. In Conference on Robot Learning ,pages 2273–2283, Atlanta, GA, November 2023. PMLR.[9] J. L. Proctor, S. L. Brunton, and J. N. Kutz. Dynamic mode decomposition with control. SIAMJournal on Applied Dynamical Systems , 15(1):142–161, 2016.[10] J. H. Tu. Dynamic mode decomposition: Theory and applications . PhD thesis, PrincetonUniversity, 2013.[11] Y . Han, W. Hao, and U. Vaidya. Deep learning of koopman representation for control. In2020 59th IEEE Conference on Decision and Control (CDC) , pages 1890–1895, Jeju Island,Republic of Korea, December 2020. IEEE.[12] H. Shi and M. Q.-H. Meng. Deep koopman operator with control for nonlinear systems. IEEERobotics and Automation Letters , 7(3):7700–7707, 2022.[13] P. Laferri `ere, S. Laferri `ere, S. Dahdah, J. R. Forbes, and L. Paull. Deep koopman representationfor control over images (dkrci). In 2021 18th Conference on Robots and Vision (CRV) , pages158–164, Burnaby, British Columbia, May 2021. IEEE.[14] B. Lusch, J. N. Kutz, and S. L. Brunton. Deep learning for universal linear embeddings ofnonlinear dynamics. Nature communications , 9(1):4950, November 2018.[15] Y . Xiao, X. Xu, and Q. Lin. Cknet: A convolutional neural network based on koopman operatorfor modeling latent dynamics from pixels. arXiv preprint arXiv:2102.10205 , 2021.9[16] B. van der Heijden, L. Ferranti, J. Kober, and R. Babu ˇska. Deepkoco: Efficient latent planningwith a task-relevant koopman representation. In 2021 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 183–189, Prague, Czech Republic, September2021. IEEE.[17] G. Mamakoukas, M. Castano, X. Tan, and T. Murphey. Local koopman operators for data-driven control of robotic systems. In Robotics: science and systems XV , Freiburg im Breisgau,Germany, June 2019.[18] I. Abraham, G. De La Torre, and T. D. Murphey. Model-based control using koopman opera-tors. arXiv preprint arXiv:1709.01568 , 2017.[19] E. Kaiser, J. N. Kutz, and S. L. Brunton. Data-driven discovery of koopman eigenfunctions forcontrol. Machine Learning: Science and Technology , 2(3):035023, 2021.[20] M. Korda and I. Mezi ́c. Linear predictors for nonlinear dynamical systems: Koopman operatormeets model predictive control. Automatica , 93:149–160, 2018.[21] P. J. Schmid. Dynamic mode decomposition of numerical and experimental data. Journal offluid mechanics , 656:5–28, 2010.[22] P. J. Schmid. Application of the dynamic mode decomposition to experimental data. Experi-ments in fluids , 50:1123–1130, 2011.[23] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A data–driven approximation of thekoopman operator: Extending dynamic mode decomposition. 
Journal of Nonlinear Science ,25:1307–1346, 2015.[24] M. O. Williams, C. W. Rowley, and I. G. Kevrekidis. A kernel-based approach to data-drivenkoopman spectral analysis. arXiv preprint arXiv:1411.2260 , 2014.[25] M. O. Williams, M. S. Hemati, S. T. Dawson, I. G. Kevrekidis, and C. W. Rowley. Extend-ing data-driven koopman analysis to actuated systems. IFAC-PapersOnLine , 49(18):704–709,2016.[26] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz. Koopman invariant subspacesand finite linear representations of nonlinear dynamical systems for control. PloS one , 11(2):e0150171, 2016.[27] M. Korda and I. Mezi ́c. Optimal construction of koopman eigenfunctions for prediction andcontrol. IEEE Transactions on Automatic Control , 65(12):5114–5129, 2020.[28] A. Sootla, A. Mauroy, and D. Ernst. Optimal control formulation of pulse-based control usingkoopman operator. Automatica , 91:217–224, 2018.[29] S. Chopra, R. Hadsell, and Y . LeCun. Learning a similarity metric discriminatively, withapplication to face verification. In 2005 IEEE Computer Society Conference on ComputerVision and Pattern Recognition (CVPR’05) , volume 1, pages 539–546. IEEE, 2005.[30] A. v. d. Oord, Y . Li, and O. Vinyals. Representation learning with contrastive predictive coding.arXiv preprint arXiv:1807.03748 , 2018.[31] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learningof visual representations. In International conference on machine learning , pages 1597–1607.PMLR, 2020.[32] K. He, H. Fan, Y . Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visualrepresentation learning. In Proceedings of the IEEE/CVF conference on computer vision andpattern recognition , pages 9729–9738, 2020.10[33] K. Kotar, G. Ilharco, L. Schmidt, K. Ehsani, and R. Mottaghi. Contrasting contrastive self-supervised representation learning pipelines. In Proceedings of the IEEE/CVF InternationalConference on Computer Vision , pages 9949–9959, virtually, October 2021. IEEE.[34] X. Zhao, T. Du, Y . Wang, J. Yao, and W. Huang. Arcl: Enhancing contrastive learning withaugmentation-robust representations. arXiv preprint arXiv:2303.01092 , 2023.[35] Lilian Weng. Contrastive Representation Learning. Blog article, 2021-05-31. URL https://lilianweng.github.io/posts/2021-05-31-contrastive/ . Accessed on 28 May 2023.[36] M. Laskin, A. Srinivas, and P. Abbeel. Curl: Contrastive unsupervised representations forreinforcement learning. In International Conference on Machine Learning , pages 5639–5650,The Baltimore Convention Center, July 2020. PMLR.[37] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropydeep reinforcement learning with a stochastic actor. International Conference on MachineLearning (ICML) , 2018.[38] Y . Tassa, Y . Doron, A. Muldal, T. Erez, Y . Li, D. d. L. Casas, D. Budden, A. Abdolmaleki,J. Merel, A. Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690 , 2018.[39] L. Van der Maaten and G. Hinton. Visualizing data using t-sne. Journal of machine learningresearch , 9(11), 2008.[40] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latentdynamics for planning from pixels. In International Conference on Machine Learning , pages2555–2565, Long Beach, California, US, June 2019. PMLR.[41] D. Hafner, T. Lillicrap, M. Norouzi, and J. Ba. 
Mastering atari with discrete world models.arXiv preprint arXiv:2010.02193 , 2020.11AppendixComparison with Other MBRL Methods(a) Cart-Pole with Pixel Observation (b) Cart-Pole with 4D state (c) Cheetah Running with 18D stateFigure 8: Comparison with well-known model-based RL methods.To benchmark our approach against established model-based RL methods, we select two widely rec-ognized methods: PlaNet [40] and Dreamer (Version 2) [41]. This comparison spans three tasks inthe main paper. Remarkably, across all three tasks, our method demonstrates a clear superiority overPlaNet in terms of both data efficiency and peak performance. Additionally, our method achievescompetitive performance with Dreamer-v2 in two Cart-Pole experiments, along with comparabledata efficiency in cheetah running tasks. It’s important to note that this comparable performance isachieved while our method learns a globally linear model, whereas Dreamer employs a much largernonlinear network to approximate a world model. Therefore, our approach achieves a substantialreduction in both computational demands and structural complexity while not compromising con-trol performance too much. It is also worth noting that our approach stands out from commonlyused Model-Based RL methods as it’s rooted in Koopman theory, offering the advantage of enablingcontrol theory analysis for the system.Comparison with Other Encoders and Losses(a) Cart-Pole with Pixel Observation (b) Cart-Pole with 4D state (c) Cheetah Running with 18D stateFigure 9: Comparison with autoencoder and varying losses.We perform experiments to validate our selection of the contrastive embedding function. To achievethis, we replace the contrastive encoder with a canonical autoencoder (AE), a commonly usedmethod for learning condensed representations from high-dimensional observations. Specifically,we utilize the AE along with two types of loss functions: one involving only reconstruction loss,and another involving a combination of reconstruction and one-step prediction loss. Importantly, wekeep all other aspects of our method unchanged. Results are based on 3 random seeds.As shown in Figure. 9, our approach achieves comparable control performance with the use ofAE. This aligns with the notion that autoencoder excels in reconstructing and representing pixel-based observations. However, when tasks involve non-pixel observations, such as normal states, ourapproach still maintains significant control efficiency and performance, while the AE-based structurestruggles to learn a useful policy even with ample data. Particularly, we observed that the AE withonly reconstruction loss slightly outperforms the one employing the combined loss, but still falls12short of achieving the performance obtained by our method using the contrastive encoder. Theseresults provide validation for our choice of contrastive embedding function within our approach.Ablations Study of Hyper-parametersWe undertook an ablation study involving key parameters of the LQR solving iteration and Koopmanembedding dimension. Results are the mean over 3 random seeds and summarized in Table. 4.We found that our approach remains robust regardless of the specific number of iterations usedfor the LQR solution, as long as it falls within a reasonable range. This suggests that achieving acertain level of precision in solving the Riccati Equation contributes positively to both policy andlinear model learning. We also studied the impact of varying the latent embedding dimensions forthe encoder. 
Our findings indicate that using a smaller dimension that aligns with the encoder’sintermediate layers yields consistently good results, while excessively increasing the embedding di-mension (d=100) can diminish control performance. One reason could be the reduced approximationcapabilities of the encoder due to inappropriate high latent dimension. It might also be because thelatent state becoming overly sparse and failing to capture crucial information for effective control.LQR iteration Latent Dimensioniter=3 iter=5* iter=10 d=30 d=50* d=100cartpole pixelcontrol cost -863 -873 -835 -848 -873 -813model error 0.075 0.058 0.269 0.165 0.058 0.038cartpole statecontrol cost -751 -762 -769 -777 -762 -667model error 0.00047 0.00031 0.00042 0.00043 0.00031 0.0002cheetah runcontrol cost -436 -464 -448 -477 -464 -235model error 0.732 0.862 0.842 0.653 0.862 0.046Table 4: Ablation results of final cost and model-fitting error regarding LQR solving iteration and Koopmanlatent state dimension. The asterisk refers to the parameter used in the main paper.Interpretable Q Matrix in Latent Space.Figure 10: Relations of learned weights inlatentQmatrix and original pixel statesOne key distinction of our method from CURL [36] is theutilization of a structured LQR policy in the latent space.In Fig. 10, we illustrate that the LQR policy parameters,especially the Q matrix, can capture the relative signifi-cance weights of latent embedding and their relationshipto the original pixel states. We take the pixel-based Cart-Pole task as an example. The larger diagonal elementsin the learned Q matrix correspond to visual patches thatcontain the CartPole object, which provides interpretableinformation that captures useful latent information relatedto the CartPole object’s area in the image13 |
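For reference, the role of the LQR-iteration hyper-parameter ablated above can be seen in the following minimal NumPy sketch, which truncates the discrete-time Riccati recursion after a fixed number of iterations to obtain the latent feedback gain; the 6D system and the diagonal Q below are illustrative placeholders rather than the learned quantities.

# Minimal sketch: latent-space LQR gain from a fixed number of iterations of
# the discrete-time Riccati recursion (the "LQR iteration" ablated in Table 4).
import numpy as np

def lqr_gain(A, B, Q, R, iters=5):
    """Return the feedback gain K for u = -K z after `iters` Riccati steps."""
    P = Q.copy()
    for _ in range(iters):
        # P <- Q + A^T P A - A^T P B (R + B^T P B)^{-1} B^T P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Illustrative 6D latent system with 1D control and a diagonal cost weight Q
# (standing in for the learned Q matrix of the latent cost function).
rng = np.random.default_rng(0)
A = np.eye(6) + 0.05 * rng.normal(size=(6, 6))
B = rng.normal(size=(6, 1))
Q = np.diag(rng.uniform(0.1, 10.0, size=6))
R = np.eye(1)

K = lqr_gain(A, B, Q, R, iters=5)
u = -K @ rng.normal(size=6)      # latent LQR control action
print("gain shape:", K.shape, "control:", u)

More iterations drive P toward the stationary solution of the Riccati equation; the ablation above indicates that a handful of iterations already yields a sufficiently precise gain for both policy and model learning.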
hRZ1YjDZmTo | MimicPlay: Long-Horizon Imitation Learningby Watching Human PlayChen Wang1, Linxi Fan2, Jiankai Sun1, Ruohan Zhang1,Li Fei-Fei1, Danfei Xu23, Yuke Zhu24†, Anima Anandkumar25†1Stanford,2NVIDIA,3Georgia Tech,4UT Austin,5Caltech,†Equal AdvisingAbstract: Imitation learning from human demonstrations is a promising paradigmfor teaching robots manipulation skills in the real world. However, learning complexlong-horizon tasks often requires an unattainable amount of demonstrations. Toreduce the high data requirement, we resort to human play data —video sequences ofpeople freely interacting with the environment using their hands. Even with differentmorphologies, we hypothesize that human play data contain rich and salient informationabout physical interactions that can readily facilitate robot policy learning. Motivatedby this, we introduce a hierarchical learning framework named MIMICPLAYthat learnslatent plans from human play data to guide low-level visuomotor control trained ona small number of teleoperated demonstrations. With systematic evaluations of 14 long-horizon manipulation tasks in the real world, we show that MIMIC PLAYoutperformsstate-of-the-art imitation learning methods in task success rate, generalization ability,and robustness to disturbances. Code and videos are available at mimic-play.github.io.Keywords: Imitation Learning, Learning from Human, Long-Horizon Manipulation90 seconds robot data segment (slow) Latent planLow-level controlHigh-level plannerMimicPlayHuman play dataRobot demosUnlabelled in-domain videosMulti-task teleoperation data5 seconds human data segment (fast)Figure 1: Human is able to complete a long-horizon task much faster than a teleoperated robot. Thisobservation inspires us to develop MIMIC PLAY, a hierarchical imitation learning algorithm that learnsa high-level planner from cheap human play data and a low-level control policy from a small amount ofmulti-task teleoperated robot demonstrations. We show that MIMIC PLAYsignificantly improves sampleefficiency and robustness of imitation learning algorithms in long-horizon tasks.1 IntroductionEfficiently teaching robots to perform general-purpose manipulation tasks is a long-standing challenge.Imitation Learning (IL) has recently made considerable strides towards this goal, especially throughsupervised training using either human teleoperated demonstrations or trajectories of expert policies [ 1,2].Despite this promise, IL methods have been confined to learning short-horizon primitives, such as openinga door or picking up a specific object. The time cost and labor intensity of collecting long-horizondemonstrations, especially for complex real-world tasks with wide initial and goal condition distributions,remains a key barrier to the widespread adoption of IL methods.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Two connected directions have emerged in recent literature to scale up imitation learning to complexlong-horizon tasks: hierarchical imitation learning andlearning from play data . The former aims toincrease learning sample efficiency by decoupling end-to-end deep imitation learning into the learningof high-level planners and low-level visuomotor controllers [ 3,4]. The latter leverages an alternative formof robot training data, named play data [5]. Play data is typically collected through human-teleoperatedrobots interacting with their environment without specific task goals or guidance. 
Prior works show thatdata collected this way covers more diverse behaviors and situations compared to typical task-orienteddemonstrations [ 5,6]. The methods for learning from such play data often seek to uncover such diversebehaviors by training a hierarchical policy [ 5], where the high-level planner captures the distribution ofintent and the low-level policies learn goal-directed control. However, collecting such play data in thereal world can be very costly. For example, C-BeT [ 6] requires 4.5 hours of play data to learn manipulationskills in a specific scene, and TACO-RL [7] needs 6 hours of play data for one 3D tabletop environment.In this work, we argue that the data required for learning high-level plan and low-level control can comein different forms, and doing so could substantially reduce the cost of imitation learning for complexlong-horizon tasks. Based on this argument, we introduce a new learning paradigm, in which robots learnhigh-level plans from human play data , where humans use their hands to interact with the environmentfreely. Human play data is much faster and easier to collect than robot teleoperation data (Fig. 1). Itallows us to collect data at scale and cover a wide variety of situations and behaviors. We show that suchscalability plays a key role in strong policy generalization. The robot then learns low-level manipulationpolicies from a small amount of demonstration data , which is collected by humans teleoperating withthe robots. Unlike human play data, demonstration data is expensive to collect but does not lead to issuesdue to the mismatch between human and robot embodiments.To scale imitation learning to long-horizon manipulation tasks, we present MIMIC PLAY, a new imitationlearning algorithm that leverages the complementary strengths of two data sources mentioned above: humanplay data and robot teleoperation data. MIMIC PLAYtrains a goal-conditioned latent planner from humanplay data to predict the future 3D human hand trajectories conditioned on the goal images. Such latentplans provide rich 3D guidance ( what to do and where to interact) at each time step, tackling the challenginglong-horizon manipulation problem by converting it into a guided motion generation process. Conditionedon these latent plans, the low-level controller incorporates state information essential for fined-grainedmanipulation to generate the final actions. We evaluate our method on 14 real-world long-horizon manip-ulation tasks in six environments. Our results demonstrate significant improvement over state-of-the-artimitation learning methods in terms of sample efficiency and generalization abilities. Moreover, MIMIC -PLAYintegrates human motion and robotic skills into a joint latent plan space, which enables an interfacethat allows using human videos directly as “prompts” for specifying goals in robot manipulation tasks.To summarize, the main contributions of our work are as follows:• Anovel paradigm for learning 3D-aware latent plans from cheap human play data.•Ahierarchical framework that trains a plan-guided multi-task robot controller to accomplishchallenging long-horizon manipulation tasks sample-efficiently.•In 14 real-world long-horizon evaluation tasks, MIMICPLAYshows state-of-the-art performancewith generalization to novel tasks and robustness against disturbance, which further allowsprompting robot motion with human videos.2 Related WorkImitation learning from demonstrations. 
Imitation Learning (IL) has enabled robots to successfullyperform various manipulation tasks [ 8–15]. Traditional IL algorithms such as DMP and PrMP [ 16–19]enjoy high learning sample efficiency but are limited in their ability to handle high-dimensionalobservations and settings that require closed-loop control. In contrast, recent IL methods built upon deepneural networks can learn reactive policies from raw demonstration data [ 20,2,3,21–24]. While thesemethods offer greater flexibility, they require a large number of human demonstrations to learn even simplepick-and-place tasks, which remains labor- and resource-intensive [ 25,26]. Our work instead proposes2to leverage human play data, which does not require robot hardware and can be collected efficiently, toreduce the need for on-robot demonstration data dramatically.Hierarchical imitation learning. Our idea of learning a hierarchical policy from demonstrations isalso related to prior works [ 27,3,5,4,28]. However, all previous methods focus on learning both planningand control with a single type of data—teleoperated robot demonstrations, which is expensive to collect.Our approach uses cheap human play data for learning high-level planning and a small number of robotdemonstrations for learning low-level control, which significantly strengthens the model’s planningcapability while keeping a low demand on demonstration data.Learning from human videos. A plethora of recent research has explored leveraging large-scale humanvideo data to improve robot policy learning [ 29–39]. Closely related to our work are R3M [ 40] andMVP [ 41], which use an Internet-scale human video dataset Ego4D [ 42] to pretrain visual representationsfor subsequent imitation learning. However, due to diversity in the data source and large domain gaps,transferring the pre-trained representation to a specific manipulation task might be difficult. Notably,Hansen et al. [ 43] found simple data augmentation techniques could have similar effects as thesepre-trained representations. To reduce the domain gap, another thread of work [ 29,44,35,45,34,46,47]utilizes in-domain human videos, where human directly interacts with the robot task environment withtheir own hands. Such type of data has a smaller gap between human and robot domains, which allowssample-efficient reward shaping for training RL agents [ 35,45–47] and imitation learning [ 29,44,34].However, these works focus on learning either task rewards or features from human videos, which doesn’tdirectly help the low-level robot action generation. In this work, we extract meaningful trajectory-level taskplans from human play data, which provides high-level guidance for the low-level controller for solvingchallenging long-horizon manipulation tasks.Learning from play data. Our idea of leveraging human play data is heavily inspired by learning fromplay [ 5,48,6], an alternative imitation learning paradigm that focuses on multi-task learning from play data,a form of teleoperated robot demonstration provided without a specific goal. Although play data exhibitshigh diversity in behavior [ 5], it requires the laborious teleoperation process (4.5 hours [ 6] and 6 hours [ 7]).In this work, we instead learn from human play data, where humans freely interact with the scene withtheir hands. This method of data collection is not only time-effective, requiring a mere 10 minutes, but italso provides rich trajectory-level guidance for the robot’s motion generation. 
Consequently, the robot onlyrequires a minimal amount of teleoperation data, empirically less than 30 minutes, to translate the guidanceinto its own motor commands and successfully perform complex long-horizon manipulation tasks.3 MimicPlayTraining a robot for long-horizon tasks is challenging, as it requires high-level planning to determine whereandwhat to interact during different task stages, as well as low-level motor controls to handle how toachieve the goals. MIMIC PLAY is based on the key insight that high-level planning can be effectivelylearned from human play data that are fast to collect. Meanwhile, low-level control skills are best acquiredfrom teleoperated demonstration data that do not have any embodiment gap. In particular, due to thedifference in the embodiments of humans and robots, it is critical to find an intermediate representationthat can bridge the gap between the two data sources. MIMICPLAYaddresses this challenge by learning a3D-aware latent planning space to extract diverse plans from cost-effective human play data. The overviewof M IMIC PLAYis illustrated in Fig. 2.3.1 Collecting human play dataHuman play data. We leverage human play data, where a human operator freely interacts with thescene with one hand driven by their curiosity. For instance, in the kitchen, the operator might open theoven then pull out the tray or pick up a pan and place it on the stove. This type of data contains rich statetransitions and implicit human knowledge of objects’ affordance and part functionalities. More importantly,collecting human play data is cheaper and much more efficient than teleoperation, as it does not requiretask labeling or environment resetting and takes only a small fraction of time—a human operator canfinish a task that would take 90-second robot teleoperation time in just five seconds (Fig. 1). In this work,3Robotpolicy!Goal image"!"Currentimageo!"GMMdecoder3D hand trajectoryHuman play(multi-view)(a) Trainingstage1 -Learninglatentplans(b) Trainingstage2 -Plan-guidedimitation learningGoal image"!#Currentimage#!#SmallamountrobotdataWrist img'!Proprio.(!Robotaction!!3Dhand location,!Latent plan "!Latent plan"!LatentplannerRobotpolicy!(c) TestingstageGoal image"!"or"!#Currentimage#!#Proprio.(!Robotaction!"Latent plan"!LatentplannerWrist img'!orLatentplannerFigure 2: Overview of MIMIC PLAY.(a) Training Stage 1 : using cheap human play data to train agoal-conditioned trajectory generation model to build a latent plan space that contains high-level guidancefor diverse task goals. (b) Training Stage 2 : using a small amount of teleoperation data to train a low-levelrobot controller conditioned on the latent plans generated by the pre-trained (frozen) planner. (c) Testing :Given a single long-horizon task video prompt (either human motion video or robot teleoperation video),MIMIC PLAYgenerates latent plans and guides the low-level controller to accomplish the task.we collect 10 minutes of human play video as the training dataset for each task environment, which isapproximately equivalent to a 3-hour dataset of robot teleoperation video.Tracking 3D human hand trajectories. When performing the play, human constructs a sequence ofmovements within their mind, then engages its hand to physically interact with the environment. Thisinteraction creates a hand trajectory that contains rich information regarding the individual’s underlyingintentions. 
Based on the hand trajectories, the robot can learn to mimic human’s motion planning capabilityby reconstructing the hand trajectory conditioned on the goals. We will show how to train a latentplanner from human hand trajectory data in Sec. 3.2. However, common human video datasets comprisesingle-view observations, providing only 2D hand trajectories. Such trajectories present ambiguitiesalong the depth axis and suffer from occlusions. We instead use two calibrated cameras to track 3D handtrajectories from human play data. We use an off-the-shelf hand detector [ 49] to identify hand locationsfrom two viewpoints, reconstructing a 3D hand trajectory based on the calibrated camera parameters.Details of the data collection process and system are introduced in the Appendix.3.2 Learning 3D-aware latent plans from human play dataGiven a long-term task represented by a goal image, the policy should generate actions conditioned on thisgoal. We formalize the problem into a hierarchical policy learning task, where a goal-conditioned high-levelplanner Pdistills key features from the goal observation gtand transforms them into low-dimensionallatent plans pt. These plans are then employed to guide the low-level motor controller toward the goal.However, learning such a vision-based motion planner Prequires a large dataset since it needs to becapable of handling the multimodality inherent in the goal distribution. We address this issue by leveraginga cheap and easy-to-collect data source— human play data .Learning multimodal latent plans. With the collected human play data and corresponding 3D handtrajectory τ, we formalize the latent plan learning process as a goal-conditioned 3D trajectory generationtask. More specifically, an observation encoder E, implemented as convolutional networks, processesthe visual observations ohtand goal image ghtfrom the human video Vhinto low-dimensional features,which are further processed by an MLP-based encoder network into a latent plan vector pt(as shown inFig. 2(a)). Based on the latent plan ptand the hand location lt, an MLP-based decoder network generatesthe prediction of the 3D hand trajectory. However, simple regression of the trajectory cannot fully coverthe rich multimodal distribution of human motions. Even for the same human operator, one task goal canbe achieved with different strategies. To address this issue, we use an MLP-based Gaussian Mixture Model(GMM) [50] to model the trajectory distribution from the latent plan pt. For a GMM as Eq. (1) shows:p(τ|θ)=Xzp(τ|θ,z)p(z|θ), (1)4(a) Kitchen (b) Study desk(c) Flower (d) Whiteboard (e) Sandwich (f) ClothFigure 3: Evaluation Tasks. We design six environments with long-horizon tasks for a Franka Emika robotarm, with initial (left) and goal (right) states shown in images. Tasks include: (a) Kitchen environment :3 individual tasks including cooking food with oven. (b) Study Desk environment : 7 individual tasksincluding tidying up the desk. (c) Flower : flower insertion into a vase. (d) Whiteboard : erasing curvelines. (e) Sandwich : ingredient selection for cheeseburger or sandwich. (f) Cloth : folding a towel twice.See the Appendix for details.where θ={μk,σk,ηk}Kk=1are the parameters of the GMM and p(τ|θ,zk)is a Gaussian distributionN(τ|μk,σk)withzconsisting of Kcomponents. A specific weight ηkrepresents the probability of thek-th component. GMMs are more expressive than simple MLPs, because they are designed to capturethe multi-modality which is inherited in the human play data. 
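A minimal PyTorch sketch of such a GMM head is given below; it assumes a 64-dimensional latent plan and a flattened sequence of ten future 3D waypoints as the prediction target (both sizes are illustrative), uses the K = 5 mixture components reported in the appendix implementation details, and its loss is the negative log-likelihood objective stated formally in Eq. (2) next.

# Minimal sketch: an MLP-based GMM head over future 3D hand waypoints,
# trained by maximizing the likelihood of the detected hand trajectory tau.
import torch
import torch.nn as nn
import torch.distributions as D

class GMMTrajectoryHead(nn.Module):
    def __init__(self, plan_dim=64, horizon=10, n_modes=5):
        super().__init__()
        self.horizon, self.n_modes = horizon, n_modes
        out_dim = horizon * 3                               # flattened 3D waypoints
        self.backbone = nn.Sequential(
            nn.Linear(plan_dim + 3, 256), nn.ReLU(),        # latent plan p_t + hand location l_t
            nn.Linear(256, 256), nn.ReLU())
        self.mu = nn.Linear(256, n_modes * out_dim)         # component means mu_k
        self.log_sigma = nn.Linear(256, n_modes * out_dim)  # component scales sigma_k
        self.logits = nn.Linear(256, n_modes)               # mixture weights eta_k

    def forward(self, plan, hand_loc):
        h = self.backbone(torch.cat([plan, hand_loc], dim=-1))
        B, K, out_dim = plan.shape[0], self.n_modes, self.horizon * 3
        mu = self.mu(h).view(B, K, out_dim)
        sigma = self.log_sigma(h).view(B, K, out_dim).clamp(-5, 2).exp()
        components = D.Independent(D.Normal(mu, sigma), 1)  # diagonal Gaussians
        mixture = D.Categorical(logits=self.logits(h))
        return D.MixtureSameFamily(mixture, components)

head = GMMTrajectoryHead()
plan = torch.randn(8, 64)          # latent plan p_t from the planner
hand = torch.randn(8, 3)           # current 3D hand location l_t
tau = torch.randn(8, 30)           # ground-truth future trajectory (flattened)
loss = -head(plan, hand).log_prob(tau).mean()   # negative log-likelihood
loss.backward()

torch.distributions handles the log-sum-exp over components, so the multimodality of the play data enters the objective without any hard assignment of trajectories to modes.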
The final learning objective of our GMMmodel is to minimize the negative log-likelihood of the detected 3D human hand trajectory τas Eq. (2)LGMM(θ)=−Eτlog KXk=1ηkN(τ|μk,σk)!,where 0≤ηk≤1,KXk=1ηk=1 (2)Handling visual gap between human and robot domains. We consider the setup where the humanand the robot interact in the same environment. However, different visual appearances (for example, top vs.bottom row in Fig. 1) between human and robot domains pose a challenge in transferring the learned latentplanner to the downstream robot control. We introduce a new learning objective to minimize the visualrepresentation gap between the two domains. Given human video frames oh∈Vhand on-robot video framesor∈Vr, we calculate the distribution (mean and variance) of the feature embeddings outputted by the visualencoder Eof the human domain Qh=E(oh)and the robot domain Qr=E(or)in each training data batch.We then minimize the distance between QhandQrwith a Kullback–Leibler (KL) divergence loss: LKL=DKL(Qr||Qh). Note that, our approach does not require paired human-robot video data. VhandVrcan bedifferent behavior and solving different tasks. Only the image frames from the video are used to minimizethe representation gap between the two domains. The final loss function for training the latent planner is:L=LGMM+λ·LKL, where λis a hyperparameter that controls the weights between the two losses.3.3 Plan-guided multi-task imitation learningMIMICPLAYfocuses on multi-task imitation learning settings, where a single policy is trained to performmultiple tasks conditioned on different goals. Prior works often learn multi-task visuomotor policiesend-to-end from scratch [ 6,48,5]. However, given the large goal space, these methods require a largeamount of teleoperation data for policy training (4.5 hours [ 6] and 6 hours [ 7]). In this work, weleverage the latent planner P, pretrained with cost-effective human play data (10 minutes), to condensehigh-dimensional inputs into low-dimensional latent plan vectors pt. Since these latent plans ptcan offerrich 3D guidance for formulating low-level robot actions at, the low-level policy πcan focus on learningthe conversion between the low-dimensional plans ptand actions at- a task it can learn efficiently due5Subgoal (first subgoal) Long horizon ( ≥3 subgoals)20 demos 40 demos 20 demos 40 demosTask-1 Task-2 Task-3 ALL Task-1 Task-2 Task-3 ALL Task-1 Task-2 Task-3 ALL Task-1 Task-2 Task-3 ALLGC-BC (BC-RNN) [20] 0.1 0.0 0.1 0.07 0.1 0.2 0.2 0.17 0.0 0.0 0.0 0.00 0.0 0.0 0.1 0.03GC-BC (BC-trans) [52] 0.2 0.0 0.0 0.07 0.3 0.7 0.6 0.53 0.0 0.0 0.0 0.00 0.0 0.0 0.1 0.03C-BeT [6] 0.5 0.6 0.0 0.37 0.4 1.0 0.0 0.47 0.0 0.0 0.0 0.00 0.0 0.0 0.0 0.00LMP [5] 0.3 0.1 0.2 0.20 0.6 0.3 0.2 0.37 0.1 0.0 0.1 0.07 0.3 0.1 0.0 0.13R3M-BC [40] 0.9 0.0 0.0 0.30 0.5 0.4 0.0 0.30 0.0 0.0 0.0 0.00 0.5 0.0 0.0 0.17Ours (0 %human) 1.0 0.5 0.3 0.60 1.0 0.5 0.5 0.67 0.3 0.1 0.3 0.23 0.4 0.3 0.5 0.40Ours 1.0 0.8 0.7 0.83 1.0 0.9 0.8 0.90 0.7 0.3 0.4 0.47 0.7 0.6 0.8 0.70Table 1: Quantitative evaluation results in the Kitchen environment.to the decreased dimensionality. In the following, we introduce how to generate the latent plan ptandthe details of training the plan-guided low-level controller πwith a small amount of data.Video prompting for latent plan generation. Instructing a robot to perform visuomotor long-horizontasks is challenging due to the complexity of goal specifications. 
Our latent planner P, learned from humanplay videos, is capable of interpolating 3D-aware task-level plans directly from human motion videos,which can serve as an interface for promoting long-horizon robot manipulation. More specifically, we usea one-shot video V(either human video Vhor robot video Vr) as a goal specification prompt sent to thepre-trained latent planner to generate robot-executable latent plans pt. The one-shot video is first convertedinto a sequence of image frames. At each time step, the high-level planner Ptakes one image from thesequence as a goal-image input gtand generates a latent plan ptto guide the generation of low-level robotaction at. After executing at, the next image frame in the sequence is used as a new goal image. Duringthe training (Fig. 2(a)(b)), the goal image grt(grt∈Vr) is specified as the frame Hsteps after the currenttime step in the demonstration. His a uniformly sampled integer number within the range of [200,600](10-30 seconds), which performs as a data augmentation process.Transformer-based plan-guided imitation. Decoupling planning from control allows the low-levelpolicy πto focus on learning how to control the robot by following the guidance pt. The plan-guidedimitation learning process is illustrated in Fig. 2(b). However, to execute fine-grained behaviors likegrasping an oven handle, merely having high-level guidance is insufficient. It is equally important toconsider low-level specifics of the robot end-effector during the action-generation process. Therefore, weconvert the robot’s wrist camera observation wtand proprioception data etinto low-dimensional featurevectors, both with a shape of R1×d. We then combine these features with the generated latent plan ptto create a one-step token embedding st=[wt,et,pt]. The sequence of these embeddings over Ttime steps,s[t:t+T], is processed through a transformer architecture[ 51]ftrans. The transformer-based policy, knownfor its efficacy in managing long-horizon action generation, produces an embedding of action predictionxtin an autoregressive manner. The final robot control commands atare computed by processing theaction feature xtthrough a two-layer fully connected network. To address the multimodal distributionof robot actions, we utilize an MLP-based Gaussian Mixture Model (GMM) [ 50] for action generation.Details regarding the model architecture are outlined in the Appendix.Multi-task prompting. Learning from human play data enables the planner to handle diverse task goals.We demonstrate this empirically by designing all of our evaluation environments to be multi-task and sharethe same planner Pand the policy πmodels across all tasks in the same environment. For each trainingsample, the prompting video is uniformly sampled from the training videos of the same task category.4 Experiment SetupsEnvironments and Tasks. We create six environments with 14 tasks (Fig. 3), featuring tasks such as contact-rich tool use, articulated-object handling, and deformable object manipulation. Three tasks are designedfor the Kitchen environment and four for the Study Desk environment, focusing on long-horizon taskswith different goals. We assess methods using Subgoal andLong horizon task categories. The Study Deskenvironment examines compositional generalization with three unseen tasks: Easy,Medium , and Hard . TheEasy task combines two trained tasks, while the Medium andHard tasks require novel motions for unseensubgoal compositions, i.e., the transition from subgoal Ato subgoal Bis new. 
The horizon of all tasksis between 2000 to 4000 action steps, which equals to 100-200 seconds of robot execution (20Hz controlfrequency). For more details about the environments and simulation results, please refer to the Appendix.6Trained tasks Unseen tasksTask-1 Task-2 Task-3 Task-4 ALL Easy Medium Hard ALLGC-BC (BC-trans) [52] 0.0 0.0 0.0 0.0 0.00 0.0 0.0 0.0 0.00LMP [5] 0.0 0.0 0.0 0.0 0.00 0.0 0.0 0.0 0.00Ours (0 %human) 0.2 0.3 0.1 0.2 0.20 0.2 0.1 0.0 0.10Ours (50 %human) 0.3 0.4 0.1 0.4 0.30 0.4 0.3 0.1 0.27Ours (w/o KL) 0.3 0.7 0.3 0.2 0.38 0.4 0.2 0.0 0.20Ours (w/o GMM) 0.4 0.2 0.2 0.3 0.28 0.2 0.0 0.0 0.07Ours 0.6 0.7 0.4 0.5 0.55 0.7 0.5 0.2 0.47Table 2: Ablation evaluation results in the Study Deskenvironment (20 demos).SpatialgeneralizationExtremelong horizonDeformableFlower Whiteboard Sandwich Cloth ALLLMP-single 0.1 0.0 0.1 0.3 0.13LMP [5] 0.0 0.0 0.0 0.2 0.05R3M-single 0.2 0.1 0.3 0.4 0.25R3M [40] 0.1 0.1 0.2 0.2 0.15Ours-single 0.5 0.5 0.6 0.7 0.58Ours 0.4 0.2 0.8 0.8 0.55Table 3: Quantitative evaluation results ofmulti-task learning.Baselines. We evaluate five methods: 1) GC-BC (BC-RNN) and 2) GC-BC (BC-trans), goal-conditionedBC variants of [ 20] using RNN and GPT-based transformer architectures, respectively; 3) C-BeT [ 6], analgorithm using Behavior Transformer [ 53] to learn from teleoperated robot play data; 4) LMP [ 5], a methodthat learns to generate plans and actions in an end-to-end fashion from robot play data; and 5) R3M-BC,a goal-conditioned BC variant of [ 20] using R3M pre-trained visual representation [ 40]. All methods,including ours, train on the same robot teleoperation demos (20 or 40 demos per task). Besides this commondataset, baselines add an extra 10-minute robot demos, while MIMICPLAYuses 10-minute human play data.The total data collection time is consistent across methods. Task success rate (%) is the primary metric.5 ResultsLearning latent plans from human play data significantly improves performance. Our methodoutperforms Ours (0 %human) by more than 23% in long-horizon task settings over all trained tasks, asshown in both Tab. 1 (ALL) and 2 (Trained tasks ALL). This result showcases that learning a latent planspace does not need to rely fully on teleoperated robot demonstration data. A 10-minute of cheap andunlabelled human play data brings large improvements in the task success rate and sample efficiency.Hierarchical policy is important for learning long-horizon tasks. Ours (0 %human) trained with ourtwo-stage framework outperform prior end-to-end learning methods in the long-horizon task settings bymore than 15%, as is shown in Tab. 1 (ALL) and 2 (Trained tasks ALL). This result shows that end-to-endlearning for planning and control is less effective than learning to act based on pre-trained latent plans forlong-horizon tasks. We also find the same conclusion in simulation results as is introduced in the Appendix.Latent plan pre-training benefits multi-task learning. In Tab. 3, we study how each method performswhen training each task with a separate model. For end-to-end learning approaches (e.g., LMP and R3M-BC), training task-specific models will lead to better performance (LMP-single vs. LMP; R3M-single vs.R3M). These results showcase the difficulty of learning multiple tasks with a single model. However, ourapproach has the smallest performance drop in multi-task training (Ours-single vs. Ours). These findingshighlight the advantage of learning plan-guided low-level robot motions based on the pre-trained latentplan space. 
However, we do observe an uneven performance drop with our method (the success rate of thewhiteboard task drops from 0.5to0.2). We hypothesize this is due to the reason that the length of the demon-stration for the whiteboard task is shorter than the other tasks, which leads to an imbalanced training dataset.GMM is crucial for learning latent plans from human play data. In Tab. 2, our full with GMMmodel largely outperforms Ours (w/o GMM). Although being trained with full human play data,Task-2 Task-3 Medium ALL0.00.20.40.60.8Success rate (%)Ours (human prompt)Ours (robot prompt)Figure 4: Evaluation of multi-task policyprompted with robot/human videos in theStudy Desk environment.Ours (w/o GMM) even fails to match the performance ofOurs (0 %human) in the generalization task settings. Wevisualize the trajectories generated by all of the model variantsin the Appendix, where we found Ours (w/o GMM) has theworst quality of trajectory generation. This result highlightsthe importance of using the GMM model to handle themultimodal distribution of the hand trajectory when learningthe latent plan space from human play data.KL loss helps minimize the visual gap between humanand robot data. In Tab. 2, although Ours (w/o KL) baselineoutperforms most baselines in trained tasks, its success rate is17% lower than Ours. In the generalization setting, Ours (w/o7KL) fails to match the performance of Ours (50 %human). These results showcase that the visual gapbetween the human play data and robot data exists, and KL loss helps close the gap when training the latentplanner. More analysis of the distribution shifts between human play data and robot data can be found inthe Appendix.Observation and latent plan before disturbanceHuman disturbanceReal-time replanningRecoveryFigure 5: Qualitative visualization of the latent plansbefore the disturbance and re-planning. Column 1 : third-person view. Column 2 : visualization of the latent planbefore disturbance. Column 3 : human disturbance; thered arrow indicts the direction of disturbance. Column 4 :visualization of real-time re-planning capabilities, whichshow robustness against disturbance. Column 5 : robotrecovers with the updated task plan.The scale of the human play data matters.In Tab. 2, we compared the model variants with50% human play data (Ours (50 %human)) andfound it fails to match the performance of Ours,which has access to 100% human play data.Most critically, in the unseen task settings, usingmore human play data to cover unseen cases inthe training set significantly benefits generaliza-tion (Ours vs Ours (50 %human)).Human play data improves generalization tonew subgoal compositions. In Tab. 2 unseentasks, Ours surpasses all baselines by more than35%. This result highlights that our approachextracts novel latent plans from human play dataand guides the robot’s low-level policy to gener-alize to new compositions of subgoals.An intuitive interface for prompting robotmotion with human videos. Fig. 4 showsthat our policy model, when prompted with hu-man videos, retains competitive performances asprompted with robot oracle videos across threeStudy Desk tasks. The reason is that MIMIC -PLAYintegrates human motion and robot skillsinto a joint latent plan space, which enables an intuitive interface for directly specifying robot manipulationgoals with human videos. More results can be found in the supplementary video.Real-time planning capability is robust against disturbance. In Fig. 5, we showcase how our modelreacts to unexpected human disturbance. 
None of these disturbances appears in the teleoperated robotdemonstration data. For instance, in the cloth folding task (Fig. 5 third row), the human unfolds the towelafter the robot has folded it, and the robot replans and folds the towel again. Since our whole system(including the vision-based latent planner, low-level guided policy, and robot control) is running at a speedof 17Hz, our model is able to achieve real-time re-planning capability against disturbance and manipulationerrors. For more details, please refer to the supplementary video.6 Conclusion and LimitationsExisting limitations of the MIMIC PLAY include: 1) The current high-level latent plan is learned fromscene-specific human play data. The scalability of MIMIC PLAY can greatly benefit from training onInternet-scale data. 2) The current tasks are limited to table-top settings. However, humans are mobileand their navigation behaviors contain rich high-level planning information. The current work can beextended to more challenging mobile manipulation tasks, and 3) There is plenty of room to improve onthe cross-embodiment representation learning. Potential future directions include temporal contrastivelearning [54] and cycle consistency learning [55] from videos.We introduce MIMIC PLAY, a scalable imitation learning algorithm that exploits the complementarystrengths of two data sources: cost-effective human play data and small-scale teleoperated robot demonstra-tion. Using human play data, the high-level controller learns goal-conditioned latent plans by predictingfuture 3D human hand trajectories given the goal image. Using robot demonstration data, the low-levelcontroller then generates the robot actions from the latent plans. With this hierarchical design, MIMICPLAYoutperforms prior arts by over 50% in 14 challenging long-horizon manipulation tasks. MIMICPLAYpavesthe path for future research to scale up robot imitation learning with affordable human costs.8AcknowledgmentsThis work is partially supported by ONR MURI N00014-21-1-2801. L. F-F is partially supported by theStanford HAI Hoffman-Y ee Research Grant. This work is done during Chen Wang’s internship at NVIDIA.Stanford provides the computing resources and robot hardware for this project. We are extremely gratefulto Yifeng Zhu, Ajay Mandlekar for their efforts in developing the robot control library Deoxys[ 22] andRoboTurk[ 56]. We would also like to thank Y ucheng Jiang for assisting with multi-seed evaluation insimulation.References[1]D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances in neuralinformation processing systems , 1, 1988.[2]T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P . Abbeel. Deep imitationlearning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 5628–5635. IEEE, 2018.[3]A. Mandlekar, D. Xu, R. Mart ́ın-Mart ́ın, S. Savarese, and L. Fei-Fei. Learning to generalize acrosslong-horizon tasks from human demonstrations. arXiv preprint arXiv:2003.06085 , 2020.[4]K. Shiarlis, M. Wulfmeier, S. Salter, S. Whiteson, and I. Posner. Taco: Learning task decompositionvia temporal alignment for control. In International Conference on Machine Learning , pages4654–4663. PMLR, 2018.[5]C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and P . Sermanet. Learning latentplans from play. In Conference on robot learning , pages 1113–1132. PMLR, 2020.[6]Z. J. Cui, Y . Wang, N. Muhammad, L. Pinto, et al. 
From play to policy: Conditional behaviorgeneration from uncurated robot data. arXiv preprint arXiv:2210.10047 , 2022.[7]E. Rosete-Beas, O. Mees, G. Kalweit, J. Boedecker, and W. Burgard. Latent plans for task-agnosticoffline reinforcement learning. arXiv preprint arXiv:2209.08959 , 2022.[8]S. Calinon, F. D’halluin, E. L. Sauser, D. G. Caldwell, and A. G. Billard. Learning and reproductionof gestures by imitation. IEEE Robotics & Automation Magazine , 17(2):44–54, 2010.[9]A. Ijspeert, J. Nakanishi, and S. Schaal. Movement imitation with nonlinear dynamical systems inhumanoid robots. In Proceedings 2002 IEEE International Conference on Robotics and Automation(Cat. No.02CH37292) , volume 2, pages 1398–1403 vol.2, 2002. doi:10.1109/ROBOT.2002.1014739.[10] S. Schaal. Is imitation learning the route to humanoid robots? Trends in cognitive sciences , 3(6):233–242, 1999.[11] J. Kober and J. Peters. Imitation and reinforcement learning. IEEE Robotics & Automation Magazine ,17(2):55–62, 2010.[12] P . Englert and M. Toussaint. Learning manipulation skills from a single demonstration. TheInternational Journal of Robotics Research , 37(1):137–154, 2018.[13] C. Finn, T. Y u, T. Zhang, P . Abbeel, and S. Levine. One-shot visual imitation learning via meta-learning. In Conference on robot learning , pages 357–368. PMLR, 2017.[14] A. Billard, S. Calinon, R. Dillmann, and S. Schaal. Robot programming by demonstration. InSpringer handbook of robotics , pages 1371–1394. Springer, 2008.[15] B. D. Argall, S. Chernova, M. V eloso, and B. Browning. A survey of robot learning from demonstra-tion. Robotics and autonomous systems , 57(5):469–483, 2009.9[16] S. Schaal. Dynamic movement primitives-a framework for motor control in humans and humanoidrobotics. In Adaptive motion of animals and machines , pages 261–280. Springer, 2006.[17] J. Kober and J. Peters. Learning motor primitives for robotics. In 2009 IEEE International Conferenceon Robotics and Automation , pages 2112–2118. IEEE, 2009.[18] A. Paraschos, C. Daniel, J. R. Peters, and G. Neumann. Probabilistic movement prim-itives. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Weinberger,editors, Advances in Neural Information Processing Systems , volume 26. Curran Asso-ciates, Inc., 2013. URL https://proceedings.neurips.cc/paper/2013/file/e53a0a2978c28872a4505bdb51db06dc-Paper.pdf .[19] A. Paraschos, C. Daniel, J. Peters, and G. Neumann. Using probabilistic movement primitives inrobotics. Autonomous Robots , 42(3):529–551, 2018.[20] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese, Y . Zhu,and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrations for robotmanipulation. In 5th Annual Conference on Robot Learning , 2021. URL https://openreview.net/forum?id=JrsfBJtDFdI .[21] P . Florence, L. Manuelli, and R. Tedrake. Self-supervised correspondence in visuomotor policylearning. IEEE Robotics and Automation Letters , 5(2):492–499, 2019.[22] Y . Zhu, A. Joshi, P . Stone, and Y . Zhu. VIOLA: Object-centric imitation learning for vision-based robot manipulation. In 6th Annual Conference on Robot Learning , 2022. URL https://openreview.net/forum?id=L8hCfhPbFho .[23] C. Wang, R. Wang, A. Mandlekar, L. Fei-Fei, S. Savarese, and D. Xu. Generalization throughhand-eye coordination: An action space for learning spatially-invariant visuomotor control. In 2021IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages 8913–8920.IEEE, 2021.[24] P . Florence, C. Lynch, A. 
Zeng, O. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, andJ. Tompson. Implicit behavioral cloning. Conference on Robot Learning (CoRL) , 2021.[25] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman,A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. Joshi, R. Julian,D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manjunath, I. Mordatch,O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao, M. Ryoo, G. Salazar,P . Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran, V . V anhoucke, S. V ega,Q. Vuong, F. Xia, T. Xiao, P . Xu, S. Xu, T. Y u, and B. Zitkovich. Rt-1: Robotics transformer forreal-world control at scale. In arXiv preprint arXiv:2212.06817 , 2022.[26] brian ichter, A. Brohan, Y . Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan,E. Jang, R. Julian, D. Kalashnikov, S. Levine, Y . Lu, C. Parada, K. Rao, P . Sermanet, A. T. Toshev,V . V anhoucke, F. Xia, T. Xiao, P . Xu, M. Yan, N. Brown, M. Ahn, O. Cortes, N. Sievers, C. Tan,S. Xu, D. Reyes, J. Rettinghouse, J. Quiambao, P . Pastor, L. Luu, K.-H. Lee, Y . Kuang, S. Jesmonth,K. Jeffrey, R. J. Ruano, J. Hsu, K. Gopalakrishnan, B. David, A. Zeng, and C. K. Fu. Do as i can, notas i say: Grounding language in robotic affordances. In 6th Annual Conference on Robot Learning ,2022.[27] A. Mandlekar, F. Ramos, B. Boots, S. Savarese, L. Fei-Fei, A. Garg, and D. Fox. Iris: Implicitreinforcement without interaction at scale for learning control from offline robot manipulation data. In2020 IEEE International Conference on Robotics and Automation (ICRA) , pages 4414–4420. IEEE,2020.[28] D. Xu, S. Nair, Y . Zhu, J. Gao, A. Garg, L. Fei-Fei, and S. Savarese. Neural task programming:Learning to generalize across hierarchical tasks. In 2018 IEEE International Conference on Roboticsand Automation (ICRA) , pages 3795–3802. IEEE, 2018.10[29] H. Xiong, Q. Li, Y .-C. Chen, H. Bharadhwaj, S. Sinha, and A. Garg. Learning by watching: Physicalimitation of manipulation skills from human videos. In 2021 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 7827–7834. IEEE, 2021.[30] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier. Model-based inverse reinforce-ment learning from visual demonstrations. In Conference on Robot Learning , pages 1930–1942.PMLR, 2021.[31] K. Zakka, A. Zeng, P . Florence, J. Tompson, J. Bohg, and D. Dwibedi. Xirl: Cross-embodimentinverse reinforcement learning. In Conference on Robot Learning , pages 537–546. PMLR, 2022.[32] L. Shao, T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg. Concept2robot: Learning manipulationconcepts from instructions and human demonstrations. The International Journal of RoboticsResearch , 40(12-14):1419–1434, 2021.[33] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from” in-the-wild”human videos. Robotics: Science and Systems (RSS) , 2021.[34] P . Sharma, D. Pathak, and A. Gupta. Third-person visual imitation learning via decoupled hierarchicalcontroller. Advances in Neural Information Processing Systems , 32, 2019.[35] L. Smith, N. Dhawan, M. Zhang, P . Abbeel, and S. Levine. Avid: Learning multi-stage tasks viapixel-level translation of human videos. arXiv preprint arXiv:1912.04443 , 2019.[36] K. Schmeckpeper, A. Xie, O. Rybkin, S. Tian, K. Daniilidis, S. Levine, and C. Finn. Learningpredictive models from observation and interaction. 
In European Conference on Computer Vision ,pages 708–725. Springer, 2020.[37] A. D. Edwards and C. L. Isbell. Perceptual values from observation. arXiv preprint arXiv:1905.07861 ,2019.[38] K. Schmeckpeper, O. Rybkin, K. Daniilidis, S. Levine, and C. Finn. Reinforcement learning withvideos: Combining offline observations with interaction. In J. Kober, F. Ramos, and C. Tomlin, editors,Proceedings of the 2020 Conference on Robot Learning , volume 155 of Proceedings of MachineLearning Research , pages 339–354. PMLR, 16–18 Nov 2021. URL https://proceedings.mlr.press/v155/schmeckpeper21a.html .[39] K. Shaw, S. Bahl, and D. Pathak. Videodex: Learning dexterity from internet videos. CoRL , 2022.[40] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual representationfor robot manipulation. In 6th Annual Conference on Robot Learning , 2022. URL https://openreview.net/forum?id=tGbpgz6yOrI .[41] T. Xiao, I. Radosavovic, T. Darrell, and J. Malik. Masked visual pre-training for motor control. arXivpreprint arXiv:2203.06173 , 2022.[42] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang,M. Liu, X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings ofthe IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 18995–19012, 2022.[43] N. Hansen, Z. Y uan, Y . Ze, T. Mu, A. Rajeswaran, H. Su, H. Xu, and X. Wang. On pre-training forvisuo-motor control: Revisiting a learning-from-scratch baseline. arXiv preprint arXiv:2212.05749 ,2022.[44] Y . Liu, A. Gupta, P . Abbeel, and S. Levine. Imitation from observation: Learning to imitate behaviorsfrom raw video via context translation. In 2018 IEEE International Conference on Robotics andAutomation (ICRA) , pages 1118–1125. IEEE, 2018.[45] S. Kumar, J. Zamora, N. Hansen, R. Jangir, and X. Wang. Graph inverse reinforcement learning fromdiverse videos. Conference on Robot Learning (CoRL) , 2022.11[46] M. Sieb, Z. Xian, A. Huang, O. Kroemer, and K. Fragkiadaki. Graph-structured visual imitation. InConference on Robot Learning , pages 979–989. PMLR, 2020.[47] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. arXiv preprintarXiv:2207.09450 , 2022.[48] E. Rosete-Beas, O. Mees, G. Kalweit, J. Boedecker, and W. Burgard. Latent plans for task agnosticoffline reinforcement learning. In Proceedings of the 6th Conference on Robot Learning (CoRL) ,2022.[49] D. Shan, J. Geng, M. Shu, and D. Fouhey. Understanding human hands in contact at internet scale.InCVPR , 2020.[50] C. M. Bishop. Mixture density networks. 1994.[51] A. V aswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin.Attention is all you need. Advances in neural information processing systems , 30, 2017.[52] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P . Dhariwal, A. Neelakantan, P . Shyam,G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural informationprocessing systems , 33:1877–1901, 2020.[53] N. M. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodeswith one stone. arXiv preprint arXiv:2206.11251 , 2022.[54] I. Dave, R. Gupta, M. N. Rizve, and M. Shah. Tclr: Temporal contrastive learning for videorepresentation. Computer Vision and Image Understanding , 219:103406, 2022.[55] D. Dwibedi, Y . Aytar, J. Tompson, P . Sermanet, and A. Zisserman. Temporal cycle-consistencylearning. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition ,pages 1801–1810, 2019.[56] A. Mandlekar, Y . Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta, E. Orbay,et al. Roboturk: A crowdsourcing platform for robotic skill learning through imitation. In Conferenceon Robot Learning , pages 879–893. PMLR, 2018.[57] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings ofthe IEEE conference on computer vision and pattern recognition , pages 770–778, 2016.[58] O. Khatib. A unified approach for motion and force control of robot manipulators: The operationalspace formulation. IEEE Journal on Robotics and Automation , 3(1):43–53, 1987. doi:10.1109/JRA.1987.1087068.[59] A. Graves and A. Graves. Long short-term memory. Supervised sequence labelling with recurrentneural networks , pages 37–45, 2012.[60] B. Liu, Y . Zhu, C. Gao, Y . Feng, Q. Liu, Y . Zhu, and P . Stone. Libero: Benchmarking knowledgetransfer for lifelong robot learning, 2023.[61] Y . Zhu, J. Wong, A. Mandlekar, R. Mart ́ın-Mart ́ın, A. Joshi, S. Nasiriany, and Y . Zhu. robosuite: Amodular simulation framework and benchmark for robot learning. arXiv preprint arXiv:2009.12293 ,2020.[62] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In 2012IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026–5033. IEEE,2012. doi:10.1109/IROS.2012.6386109.[63] C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Mart ́ın-Mart ́ın, C. Wang, G. Levine,M. Lingelbach, J. Sun, et al. Behavior-1k: A benchmark for embodied ai with 1,000 everydayactivities and realistic simulation. In Conference on Robot Learning , pages 80–93. PMLR, 2023.12[64] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of MachineLearning Research , 9:2579–2605, 2008. URL http://www.jmlr.org/papers/v9/vandermaaten08a.html .13A Implementation detailsHere we lay down the details of the data collection, training, and testing process.Collecting human play data and training details. The human play data is collected by letting a humanoperator directly interact with the scene with a single hand for 10 minutes for each scene. The entiretrajectory τis recorded at the speed of 60 frames per second and is used without cutting or labeling. The3D hand trajectory is detected with an off-the-shelf multi-view human hand tracker [ 49]. The total numberof video frames within 10 minutes of human play video is around 36k. We train one latent planner for eachenvironment with the collected human play data. For the multi-environment setup (for the experiments inTab. 3), we merge the human play data from each scene to train a single latent planner. The latent plannercontains two ResNet-18 [ 57] networks for image processing and MLP-based encoder-decoder networkstogether with a GMM model, which has K=5distribution components. We train 100k iterations for thelatent planner which takes a single GPU machine for 12 hours.Collecting robot demonstrations and training details. The robot teleoperation data is collected with anIMU-based phone teleoperation system RoboTurk [ 56]. The control frequency of the robot arm is 17-20Hzand the gripper is controlled at 2Hz. For each task, we collect 20 demonstrations. In the experiments, wealso have a 40 demonstration dataset for testing the sample efficiency of different approaches. 
The robotpolicy model is a GPT-style transformer [ 52], which consists of four multi-head layers with four heads.We train 100k iterations for the policy with a single GPU machine in 12 hours. For a fair comparisonwith our method, the baseline approaches trained without human play data have five more demonstrationsduring training the latent planner Pand the low-level policy π.Video prompting. In this work, we use a one-shot video V(either human video Vhor robot video Vr)to prompt the pre-trained latent planner to generate corresponding plans pt=P(ot,gt,lt),gt∈V. Duringtraining (Fig. 2(b)), we specify the goal image grt(grt∈Vr) as the frame Hsteps after the input observationortin the robot demonstration. His a uniformly sampled integer number within the range of [200,600],which equals 10-30 seconds in wall-clock time. lthere is the 3D location of the robot’s end-effector. Duringinference (Fig. 2(c)), we assume access to a task video (either human or robot video) which is used as asource of goal images. The goal image will start at the 200frame of the task video and move to the next iframe after each step. We use i=1in all our experiments. Based on the inputs, the latent planner generatesa latent plan feature embedding ptof shape R1×d, which is used as guidance for the low-level robot policy.Data visualization. We visualize the collected human play data and robot demonstration data in Tab. 9.For the human play data, we use an off-the-shelf hand detector [ 49] to localize the hand’s 2D locationon the left and right image frame, which are visualized as red bounding boxes in Tab 9. For the robotdemonstration data, we directly project the 3D location of the robot end-effector to the left and right imageframes, which are visualized as blue bounding boxes in Tab 9.Testing. We perform real-time inference on a Franka Emika robot arm with a control frequency of 17Hz—directly from raw image inputs to 6-DoF robot end-effector and gripper control commands with our trainedmodels. The robot is controlled with the Operational Space Control (OSC) [58].B Experiment setupsEnvironments. We design six environments with a total of 14 tasks for a Franka Emika robot arm, asillustrated in Fig. 3. These environments feature several manipulation challenges, such as contact-rich toolmanipulation (cleaning the whiteboard), articulated-object manipulation (opening the oven and the box onthe study desk), high-precision tasks (inserting flowers and turning on the lamp by pressing the button),and deformable object manipulation (folding cloth).Tasks. We design three tasks in the Kitchen environment and four tasks in the Study desk environment.All these tasks have different goals. In this work, we focus on long-horizon tasks that require the robot tocomplete several subgoals. To better analyze the performance of each method, we define the Subgoal taskcategory that only counts whether the first subgoal of the task has been achieved and the Long horizontask category which is the full task. 
In the Study desk environment, we design three tasks for testing the compositional generalization ability of the models to novel task goal sequences, which are not included in the training dataset. These three tasks are classified as Easy, Medium, and Hard depending on their difference compared to the training tasks. The Easy task is a simple concatenation of two trained tasks and their subgoals. The Medium task contains an unseen composition of a pair of subgoals that is not covered by any trained task, i.e., the transition from subgoal A to subgoal B is new. The model needs to generate novel motions to reach these subgoals. The Hard task contains two such unseen transitions. For the remaining four environments, each scene has one task goal and features a different type of manipulation challenge, e.g., generalization to new spatial configurations, extremely long horizons, and deformable object manipulation.

Table 4: Quantitative evaluation results in simulation (success rates averaged over 5 seeds).
Subgoal (first subgoal), Tasks 1-5:
  GC-BC (BC-RNN):   0.96±0.02, 0.00±0.00, 0.00±0.00, 0.00±0.00, 0.00±0.00
  GC-BC (BC-trans): 0.95±0.03, 0.01±0.02, 0.00±0.00, 0.01±0.02, 0.00±0.00
  Ours (0% human):  0.92±0.06, 0.70±0.05, 0.80±0.07, 0.74±0.11, 0.78±0.04
Long Horizon (≥3 subgoals), Tasks 1-5:
  GC-BC (BC-RNN):   0.00±0.00, 0.00±0.00, 0.00±0.00, 0.00±0.00, 0.00±0.00
  GC-BC (BC-trans): 0.00±0.00, 0.00±0.00, 0.00±0.00, 0.00±0.00, 0.00±0.00
  Ours (0% human):  0.58±0.04, 0.29±0.10, 0.62±0.11, 0.59±0.05, 0.67±0.09

Baselines. We compare with five prior approaches: (1) GC-BC (BC-RNN) [20]: a goal-conditioned behavior cloning algorithm [5] implemented with recurrent neural networks (RNN) [59]. (2) GC-BC (BC-trans) [52]: another goal-conditioned behavior cloning algorithm, implemented with a GPT-like transformer architecture. (3) C-BeT [6]: a goal-conditioned algorithm for learning from teleoperated robot play data, implemented with a Behavior Transformer (BeT) [53]. (4) LMP [5]: an algorithm for learning from teleoperated robot play data, designed to handle variability in the play data by learning an embedding space. LMP (single) is a variant that trains a separate model for each task. (5) R3M-BC [40]: a goal-conditioned imitation learning framework that leverages the R3M visual representation pre-trained on the internet-scale human video dataset Ego4D [42]. R3M-BC (single) is a variant that trains a separate model for each task.

Ablations. We compare five variants of our model to showcase the effectiveness of our architecture design: (1) Ours: MimicPlay with the full collection (10 min) of human play data; Ours (single) is a variant that trains a separate model for each task. (2) Ours (0% human): a variant of our model that does not use human play data; the pre-trained latent plan space is trained only with the teleoperated robot demonstrations. (3) Ours (50% human): a variant of our model where the latent planner is trained with 50% of the human play data (5 min). (4) Ours (w/o GMM): a variant without the GMM model for learning the latent plan space from human play data. (5) Ours (w/o KL): our approach without the KL loss for addressing the visual gap between human and robot data when pre-training the latent planner.

C Supplementary Experiment Results
Results in simulation. To extensively evaluate the methods with more testing trials and training seeds, we conduct an experiment in the simulated benchmark LIBERO [60], which is a multitask robot manipulation benchmark based on robosuite [61] and MuJoCo [62].
We choose LIBERO due to its utilization of the BDDLlanguage [ 63] for goal specification, which facilitates the multitask evaluation for learning from play data.Note that, in our main paper, we leverage human play data. However, in simulation, there is no way to getsuch dataset, which will always end up being robot teleoperation. Therefore, in this experiment, we use thesame teleoperated robot play dataset to train both high-level planner and low-level controller, and report theresults of Ours (0% human) and baselines in Table 4. For each method, we train with 5 random seeds andreport the average success rate over 100 testing trials. The results showcase the advantage of MIMICPLAY’shierarchical policy learning framework over the baselines, which is consistent with the real-world results(Tab. 1, 2). The implementation code is available at https://github.com/j96w/MimicPlay .Visualization of the trajectory prediction results. We visualize the 3D trajectory decoded from thelatent plan by projecting it onto the 2D image in Fig. 6. In the last two rows, we showcase the results of15Sandwich makingWhiteboard cleaningFlower arranging(a) Trajectory prediction results decoded from the latent plans (b) t-SNE visualization of the latent plansStudy desk cleaningFlowerSandwichGener. EasyCurrent view Goal image Ours Ours(0-human) Ours(w/o GMM) Ground truthGener. MediumFigure 6: Qualitative visualization of the learned latent plan. ( a) Visualization of the trajectory predictionresults decoded from the latent plans learned by different methods. The fading color of the trajectory fromblue to green indicates the time step from 1 to 10. ( b) t-SNE visualization of latent plans, the latent plans ofthe same task tend to cluster in the latent space.(a) Distribution overlap of Ours (w /oKL) (b) Distribution overlap of OursFigure 7: t-SNE visualization of the generated feature embeddings by taking human data and robot data asinputs. The slashes refer to the overlap region of two data distributions. ( a) Feature visualization results ofour method without using KL divergence loss. ( b) Feature visualization results of our method with KLdivergence loss. Our approach covers 23% more area than the baseline.two unseen subgoal transitions. The trajectory generated by our model is most similar to the ground truthtrajectory, while Ours (0% human) is overfitted to the subgoal transitions in the training set and generatesthe wrong latent plan. For instance, in the training data, the robot only learns to open the box after turningoff the lamp, meanwhile in the Easy setting of generalization tasks, the robot is prompted to pick up thepen after turning off the lamp. Ours (0% human) variant still outputs a latent plan to open the box, whichcauses the task to fail since the box is already open.Transformer architecture helps multi-task learning. In Tab. 1, GC-BC (BC-trans) with the GPTtransformer architecture outperforms GC-BC (BC-RNN) by more than 30% in a 40-demos Subgoal setting.However, the performance of GC-BC (BC-trans) quickly drops to the same level as GC-BC (BC-RNN) in1620-demos settings. The result showcases that training vision-based transformer policy end-to-end requiresmore data.Visualization of the learned latent plans. We use t-SNE [ 64] to visualize the generated latent plansconditioned on different tasks, as shown in Fig. 6(b). 
We find that the latent plans of the same task tend tocluster in the latent space, which shows the effectiveness of our approach in distinguishing different tasks.Left cameraRight cameraHuman play(a) Human play data collectionPhone teleoperation (RoboTurk)(b) Robot demonstration data collectionLeft cameraRight cameraWrist cameraFigure 8: System setups for the data collection. ( a)Human play data collection. A human operator directlyinteracts with the scene with one of its hand and performinteresting behaviors based on its curiosity without a spe-cific task goal. ( b) Robot demonstration data collection.A human demonstrator uses a phone teleoperation sys-tem to control the 6 DoF robot end-effector. The gripperof the robot is controlled by pressing a button on thephone interface.Analysis of the visual gap between humanand robot data. As is introduced in themethod Sec. 3.2, to minimize the visual gapbetween human play data and robot demonstra-tion data, we use a KL divergence loss over thefeature embeddings outputted by the visual en-coders. In Fig. 7, we use t-SNE to process andvisualize the learned feature embeddings gener-ated by Ours and the model variant Ours (w/oKL) on the 2D distribution plots. To better vi-sualize the distribution overlap, we use slashesto highlight the overlap area in both plots. Weobserve that our approach with KL loss has a23% larger overlap between the human data andthe robot data compared to Ours (w/o KL). Thisresult showcases the effectiveness of our KL di-vergence loss and supports the result in Tab. 2(Ours (w/o KL) is inferior to Ours in task successrate).D Details of system setupsWe illustrate the system designs for the datacollection in Fig. 8. The human play data iscollected by having a human operator directlyinteract with the environment with one of itshands (Fig. 8(a)). The left and right camerasrecord the video at the speed of 100 frames persecond. During the collection process of humanplay data, no specific task goal is given and thehuman operator freely interacts with the scenefor interesting behaviors based on its curiosity.For each scene in our experiments, we collect 10minutes of human play data.The robot teleportation demonstration is col-lected with a phone teleoperation system Robo-Turk [ 56] (Fig. 8(b)). The left, right, and end-effector wrist cameras record the video at thespeed of 20 frames per second, which is alignedwith the control speed of the robot arm (20Hz).Each sequence of robot demonstration has a pre-defined task goal. During the data collection, the humandemonstrator completes the assigned sub-goals one by one and finally solves the whole task. For eachtraining task in our experiments, we collect 20 demonstrations. In the Kitchen environment, we collect 40demonstrations for each task to figure out which approach is more sample inefficiency.17E Details of the task designsThe definition of our long-horizon tasks is listed below. For each task, the initial state and subgoals arepre-defined. The whole task is completed if and only if all subgoals are completed in the correct order.E.1 Kitchen•Task-1–Initial state: A drawer is placed on the left side of the table. The drawer is not fully open andcontains pumpkin and lettuce. A closed microwave oven is placed on the right side of thedesktop. A bowl and a stove are placed on the lower edge of the tabletop. There is a carrotinside the bowl. A pan is placed on top of the stove.–Subgoals: a) Open the microwave oven door. b) Pull out the microwave oven tray. c) Pick upthe bowl. 
d) Place the bowl on the microwave tray.•Task-2–Initial state: same as Kitchen Task-1.–Subgoals: a) Open the drawer. b) Pick up the carrot. c) Put the carrots in the drawer.•Task-3–Initial state: same as Kitchen Task-1.–Subgoals: a) Pick up the pan. b) Place the pan on the table. c) Pick up the bowl. d) Place thebowl on the stove.E.2 Study desk•Task-1–Initial state: The book is on the rack. The lamp is on. The box is opened and closed in a randomstate. The pen is located either in the center of the table or in the box.–Subgoals: a) Turn off the lamps. b) Pick up the book. c) Place the book on the shelf position.•Task-2–Initial state: The location of the book is either on the shelf or on the rack. The lamp is off. Thebox is closed. The pen is in the center of the table.–Subgoals: a) Turn on the lamps. b) Open the box. c) Pick up the pen. d) Put it in the box.•Task-3–Initial state: The book is on the rack. The state of the lamp is random. The box is closed. Thepen is in the center of the table.–Subgoal a) Open the box. b) Pick up the pen. c) Place the pen in the box. d) Pick up the book.e) Place the book on the shelf.•Task-4–Initial state: The location of the book is either on the shelf or on the rack. The lamp is on. Thebox is closed. The pen is located either in the center of the table or in the box.–Subgoals: a) Open the box. b) Turn off the lamp.•Easy–Initial state: The location of the book is either on the shelf or on the rack. The lamp is off. Thebox is closed. The pen is located either in the center of the table or in the box.–Subgoals: a) Turn on the lamp. b) Open the box. c) Turn off the lamp.•Medium–Initial state: The location of the book is either on the shelf or on the rack. The lamp is on. Thebox is closed. The pen is in the center of the table.18–Subgoals: a) Open the box. b) Turn off the lamp. c) Pick up the pen. d) Place the pen in the box.•Hard–Initial state: The book is on the shelf. The lamp is on. The box is closed. The pen is locatedeither in the center of the table or in the box.–Subgoals: a) Turn off the lamp. b) Open the box. c) Pick up the book. d) Place the book on theshelf.E.3 Flower•Initial state: Two flowers and a vase are placed on the table. The vase will randomly be placed on thetop left or top right corner of the table.•Subgoals: a) Picking up a flower. b) Insert the flower into the vase. c) Pick up the other flower. d)Insert the flower into the vase.E.4 Whiteboard•Initial state: A whiteboard and board eraser are placed on the table. The board eraser is placed on theleft side of the whiteboard.•Subgoals: a) Pick up the board eraser. b) Moves over the curve line. c) Erase the curve line. d) Returnthe eraser to the original location.E.5 Sandwich•Initial state: A circular ingredient selector is placed in the upper right corner of the table. Half of thecircle holds ingredients for a sandwich (bread, lettuce, sliced tomato) and half holds ingredients for acheeseburger (bread, cheese, burger patty). A white plate is placed in the lower left corner of the table.•Subgoals for a sandwich: a) Rotate the ingredient selector to the right position. Pick up a piece ofbread from it and place it on the plate. b) Rotate the ingredient selector to the correct position. Pickup the lettuce and place it on top of the bread. c) Rotate the ingredient selector to the right position.Pick up the sliced tomato and place it on top of the lettuce. d) Rotate the ingredient selector to theright position. 
Pick up another piece of bread and place it on top of the tomato.

E.6 Cloth
•Initial state: An unfolded brown cloth is randomly placed on the table.
•Subgoals: a) The robot folds the cloth in half once so that it becomes 1/2 of its original size. b) The robot folds the cloth once more so that it becomes 1/4 of its original size.

F Training hyperparameters
We list the hyperparameters for training the models in Tab. 5 for the latent planner P and Tab. 6 for the robot policy π. The hyperparameters whose names start with GMM are related to the MLP-based GMM model. The hyperparameters whose names start with GPT are related to the transformer architecture. We also list the hyperparameters for the baseline GC-BC (BC-trans) in Tab. 7.

Table 5: Hyperparameters - Ours (Latent Planner P). Batch Size: 16; Learning Rate (LR): 1e-4; Num Epoch: 1000; LR Decay: None; KL Weight λ: 1000; MLP Dims: [400, 400]; Image Encoder (Left View): ResNet-18; Image Encoder (Right View): ResNet-18; Image Feature Dim: 64; GMM Num Modes: 5; GMM Min Std: 0.0001; GMM Std Activation: Softplus.

Table 6: Hyperparameters - Ours (Robot Policy π). Batch Size: 16; Learning Rate (LR): 1e-4; Num Epoch: 1000; Train Seq Length: 10; LR Decay Factor: 0.1; LR Decay Epoch: [300, 600]; MLP Dims: [400, 400]; Image Encoder (Wrist View): ResNet-18; Image Feature Dim: 64; GMM Num Modes: 5; GMM Min Std: 0.01; GMM Std Activation: Softplus; GPT Block Size: 10; GPT Num Head: 4; GPT Num Layer: 4; GPT Embed Size: 656; GPT Dropout Rate: 0.1; GPT MLP Dims: [656, 128].

Table 7: Hyperparameters - GC-BC (BC-trans). Batch Size: 16; Learning Rate (LR): 1e-4; Num Epoch: 1000; Train Seq Length: 10; LR Decay Factor: 0.1; LR Decay Epoch: [300, 600]; MLP Dims: [400, 400]; Image Encoder (Wrist View): ResNet-18; Image Encoder (Left View): ResNet-18; Image Encoder (Right View): ResNet-18; Image Feature Dim: 64; GMM Num Modes: 5; GMM Min Std: 0.01; GMM Std Activation: Softplus; GPT Block Size: 10; GPT Num Head: 4; GPT Num Layer: 4; GPT Embed Size: 656; GPT Dropout Rate: 0.1; GPT MLP Dims: [656, 128].

G Network Architecture
Transformer-based policy network. The embedding sequence of T time steps is represented as s_[t:t+T] = [w_t, e_t, p_t, ..., w_{t+T}, e_{t+T}, p_{t+T}], which passes through a transformer architecture [51]. The transformer model f_trans processes the input embeddings using its N layers of self-attention and feed-forward neural networks. Given an embedding sequence of T−1 time steps, f_trans generates the embedding of the trajectory prediction in an autoregressive way, x_T = f_trans(w_{1:T−1}, e_{1:T−1}, p_{1:T−1}), where x_T is the predicted action embedding at time step T. The transformer architecture uses the multi-head self-attention mechanism to gather context and dependencies from the entire history trajectory at each step.
The final robot control commands a_t are computed by processing the action feature x_t with a two-layer fully-connected network. To handle the multimodal distribution of robot actions, we also use an MLP-based GMM model [50] for the action generation.

Figure 9: Dataset visualization. Human play data and robot multi-task demonstration data in the Kitchen and Study desk environments, shown from the left and right camera views at a series of sampled timestamps.
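To make the policy architecture described in Appendix G more concrete, below is a minimal sketch of a GPT-style causal transformer with a GMM action head, roughly following the hyperparameters in Table 6 (4 layers, 4 heads, embedding size 656, 5 GMM modes, Softplus standard deviations). The class names, the use of PyTorch's built-in TransformerEncoder, and the action dimension are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.distributions as D

class GMMActionHead(nn.Module):
    """Maps a per-step action feature x_t to a Gaussian-mixture action distribution."""
    def __init__(self, feat_dim=656, act_dim=7, num_modes=5, min_std=0.01):
        super().__init__()
        self.num_modes, self.act_dim, self.min_std = num_modes, act_dim, min_std
        self.net = nn.Sequential(nn.Linear(feat_dim, 400), nn.ReLU(),
                                 nn.Linear(400, num_modes * (2 * act_dim + 1)))

    def forward(self, x):                                  # x: (B, T, feat_dim)
        out = self.net(x)
        logits = out[..., :self.num_modes]                 # mixture weights
        mu, raw_std = out[..., self.num_modes:].chunk(2, dim=-1)
        mu = mu.reshape(*x.shape[:-1], self.num_modes, self.act_dim)
        std = nn.functional.softplus(raw_std).reshape_as(mu) + self.min_std
        comp = D.Independent(D.Normal(mu, std), 1)         # per-mode action Gaussian
        return D.MixtureSameFamily(D.Categorical(logits=logits), comp)

class PolicyTransformer(nn.Module):
    """Causal transformer over a history of per-step embeddings (context length 10)."""
    def __init__(self, embed_dim=656, n_layers=4, n_heads=4, dropout=0.1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, dim_feedforward=4 * embed_dim,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = GMMActionHead(embed_dim)

    def forward(self, tokens):                             # tokens: (B, T, embed_dim)
        T = tokens.shape[1]
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=tokens.device), 1)
        x = self.encoder(tokens, mask=mask)                # attend only to past/current steps
        return self.head(x)                                # action distribution per step
```

Training then amounts to maximizing the GMM log-likelihood of the demonstrated actions under the returned distribution, which is how a mixture head typically handles the multimodality mentioned above.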
oqOfLP6bJy | Contrastive Value Learning:Implicit Models for Simple Offline RLBogdan Mazoure⇤†1Benjamin Eysenbach⇤†2Ofir Nachum3Jonathan Tompson31Apple2Princeton University3Google DeepMindAbstract: Model-based reinforcement learning (RL) methods are appealing in theoffline setting because they allow an agent to reason about the consequences of actionswithout interacting with the environment. While conventional model-based methodslearn a 1-step model, predicting the immediate next state, these methods must beplugged into larger planning or RL systems to yield a policy. Can we model theenvironment dynamics in a different way, such that the learned model directly indicatesthe value of each action? In this paper, we propose Contrastive Value Learning (CVL),which learns an implicit, multi-step dynamics model. This model can be learnedwithout access to reward functions, but nonetheless can be used to directly estimate thevalue of each action, without requiring any TD learning. Because this model representsthe multi-step transitions implicitly, it avoids having to predict high-dimensionalobservations and thus scales to high-dimensional tasks. Our experiments demonstratethat CVL outperforms prior offline RL methods on complex robotics benchmarks.1I n t r o d u c t i o nWhile control from offline demonstrations is relevant to many real-world applications (e.g. sample-efficientpre-training for robots, [ 1]) in case the ability for online data collection is limited, it often requiresthe algorithms to find policies that are not well-supported by the training data. Instead of learning viatrial-and-error, offline RL algorithms must leverage logged historical data to learn about the outcome ofdifferent actions, potentially by capturing environment dynamics as a proxy signal. Many prior approachesfor this offline learning setting have been proposed, whether in model-free [ 2,3,4]o rm o d e l - b a s e d[ 5,6]settings. Our focus will be on those that address this prediction problem head-on: by learning a predictivemodel of the environment which can be used in conjunction with most model-free algorithms.Prior model-based methods [ 7,8,5,6]l e a r nam o d e lt h a tp r e d i c t st h eo b s e r v a t i o na tt h en e x tt i m es t e p .This model is then used to generate synthetic data that can be passed to an off-the-shelf RL algorithm.While these approaches can work well on some benchmarks, they can be complex and expensive: themodel must predict high-dimensional observations, and determining the value of an action may requireunrolling the model for many steps. Learning a model of the environment has not made the RL problemany simpler. Moreover, as we will show later in the paper, the environment dynamics are intertwined withthe policy inside the value function; model-based methods aim to decouple these quantities by separatelyestimating them. On the other hand, we show that one can directly learn a long-horizon transition modelfor a given policy, which is then used to estimate the value function. A natural use case for learning thislong-horizon transition model (specifically, a state occupancy measure) from unlabelled data is multi-taskpretraining, where the implicit dynamics model is trained on trajectory data across a collection of tasks,often exhibiting positive transfer properties. 
As we demonstrate in our experiments, this multi-task occupancy measure can then be finetuned using reward-labelled states on the task of interest, greatly improving performance upon existing pretraining methods as well as tabula rasa approaches.

In this paper, we propose to learn a different type of model for learning from offline data, a model which (1) will not require predicting high-dimensional observations and (2) can be directly used to estimate Q-values without requiring either model-based rollouts or model-free temporal difference learning. Precisely, we will learn an implicit model of the discounted state occupancy measure, i.e., a function which takes in a state, action, and future state and outputs a scalar proportional to the likelihood of visiting the future state under some fixed policy. We will learn this implicit model via contrastive learning, treating it as a classifier rather than a generative model of observations. Once learned, we predict the likelihood of reaching every reward-labeled state. By weighting these predictions by the corresponding rewards, we form an unbiased estimate of the Q-function. Whereas methods like Q-learning estimate the Q-function of a state by "backing up" reward values, our approach goes in the opposite direction, "propagating forward" predictions about where the robot will go.

*Work done while at Google. †These authors have contributed equally.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Figure 1: Contrastive Value Learning: A stylized illustration of trajectories (grey) and the rewards at future states (e.g., +8, -5). (Left) Q-learning estimates Q-values by "backing up" the rewards at future states. (Right) Our method learns the Q-values by fitting an implicit model to estimate the likelihoods of future states (blue), and taking the reward-weighted average of these likelihoods.

We name our proposed algorithm Contrastive Value Learning (CVL). CVL is a simple algorithm for model-free control from offline data which learns the future state occupancy measure using contrastive learning and re-weights it with future reward samples to construct a quantity proportional to the true value function. Because CVL represents multi-step transitions implicitly, it avoids having to predict high-dimensional observations and thus scales to high-dimensional tasks. Using the same algorithm, we can handle settings where reward-free data is provided, which cannot be directly handled by classical offline RL methods such as FQI [9] or BCQ [3]. We compare our proposed method to competitive offline RL baselines, notably CQL [4] and CQL+UDS [10], on an offline version of the multi-task Metaworld benchmark [11], and find that CVL greatly outperforms the baseline approaches as measured by the rliable library [12]. Additional experiments on image-based tasks from this same benchmark show that our approach scales to high-dimensional tasks more seamlessly than the baselines. We also conduct a series of ablation experiments highlighting critical components of our method.

2 Related works
Prior work has given rise to multiple offline RL algorithms, which often rely on behavior regularization in order to be well-supported by the training data.
The key idea of offline RL methods is to balance interpolation and extrapolation errors, while ensuring proper diversity of out-of-dataset actions. Popular offline RL algorithms such as BCQ and CQL rely on a behavior regularization loss [2] as a way to control the extrapolation error. This regularization term ensures that the learned policy is well-supported by the data, i.e., does not stray too far away from the logging policy. The major issue with current offline RL algorithms is that they fail to fully capture the entire distribution over state-action pairs present in the training data.

To directly learn a value function using policy or value iteration, one needs information about the transition model in the form of sequences of state-action pairs, as well as the reward emitted by each transition. However, in some real-world scenarios, the reward might only be available for a small subset of the data. For instance, when recommending products from an online catalog to a user, the true long-term reward (the user buys the product) is only available for users who have browsed the item list for long enough and have purchased a given item. It is possible to decompose the value function into reward-dependent and reward-free parts, as was done by [13] through the successor representation framework [14]. More recent approaches [15, 16, 17] use a generative model to learn the occupancy measure over future states for each state-action pair in the dataset; its expectation corresponds to the successor representation. However, learning an explicit multi-step model such as [15] can be unstable due to the bootstrapping term in the temporal difference loss. Similarly to model-based approaches, our method will learn a reward-free representation of the world, but will do so without having to predict high-dimensional observations and without having to perform costly autoregressive rollouts. Thus, while our critic is trained without requiring rewards, it is much more similar to a value function than to a standard 1-step model.

Learning a conditional probability distribution over a highly complex space can be challenging, which is why it is often easier to instead approximate it using a density ratio specified by an inner product in a much lower-dimensional latent space. To learn an occupancy measure over future states without passing via the temporal difference route, one can use noise-contrastive estimation [NCE, 18, 19] to approximate the corresponding log-ratio of densities as an implicit function. Contrastive learning was originally proposed as an alternative to classical maximum likelihood estimation, but has since seen successes in static self-supervised learning [20, 21]. In reinforcement learning, NCE was shown to improve the robustness of state representations to exogenous noise [22, 23, 24] and, more recently, to be an efficient replacement for traditional goal-conditioned methods [17].

3 Preliminaries
Reinforcement learning. We assume a Markov decision process M defined by the tuple ⟨S, S_0, A, P[·|s, a], r, γ⟩, where S is a state space, S_0 ⊆ S is the set of starting states, A is an action space, P[·|s_t, a_t]: S × A → Δ(S) is a one-step transition function (Δ(X) denotes the entire set of distributions over the space X), r: S × A → [r_min, r_max] is a reward function, and γ ∈ [0, 1) is a discount factor.
The system starts in one of the initial states s_0 ∈ S_0. At every timestep t = 1, 2, 3, ..., the policy π: S → Δ(A) samples an action a_t ∼ π(·|o_t). The environment transitions into a next state s_{t+1} ∼ P[·|s_t, a_t] and emits a reward r_t = r(s_t, a_t). With a Markovian policy π(a|s), we define the discounted occupancy measure conditioned on (s_t, a_t) to be

P^\pi_{t:t+K}(s_t, a_t) = (1 - \gamma) \sum_{\Delta t = 1}^{K} \gamma^{\Delta t - 1} P[S_{t+\Delta t} \mid s_t, a_t; \pi].

With this notation in place, the objective is to maximize the discounted sum of returns over H steps:

\max_{\pi \in \Pi} \; \mathbb{E}_{P^\pi_{0:H}(S_0)} \Big[ \sum_{t=1}^{H} \gamma^{t-1} r(s_t, a_t) \Big].   (1)

We will study this problem in the offline setting: rather than learning by trial and error (by interacting with the environment), the algorithm instead must learn from an offline dataset of logged trajectories. Value-based RL algorithms maximize cumulative episodic rewards by estimating the state-action value function under a policy π, which can equivalently be expressed as an expectation under the discounted occupancy measure:

Q^\pi(s_t, a_t) = \mathbb{E}_{P^\pi_{t:H}(s_t, a_t)} \Big[ \sum_{\Delta t = 1}^{H-t} \gamma^{\Delta t - 1} r(s_{t+\Delta t}, a_{t+\Delta t}) \Big] = \frac{1}{1-\gamma} \, \mathbb{E}_{s, a \sim P^\pi_{t:H}(s_t, a_t), \, \pi(s)} [r(s, a)].   (2)

Note that the occupancy measure can equivalently be re-written in terms of the geometric distribution over the time interval [0, ∞) for infinite-horizon rollouts:

P^\pi_{0:\infty}(s_0, a_0) = \mathbb{E}_{\Delta t \sim \mathrm{Geom}(1-\gamma)} \big[ P[S_{t+\Delta t} \mid s_0, a_0; \pi] \big].   (3)

This decomposition of the value function has already been used in previous works based on the successor representation [14, 13] and, more recently, γ-models [15]. We will use it to efficiently learn an implicit density ratio proportional to the state occupancy measure using contrastive learning.

Noise-contrastive estimation. Noise-contrastive estimation [NCE, 18] spans a broad class of learning algorithms, at the core of which is negative sampling [25]. NCE learns a metric space from positive and negative examples. Given reference samples, samples from a positive distribution (high similarity with the reference points) and samples from a negative distribution (low similarity with the reference points), contrastive learning methods learn an embedding where positive examples are located closer to the reference points than negative examples. One of the most well-known and commonly used NCE objectives is InfoNCE [19]:

\max_{\phi, \psi} \; \mathbb{E}_{x, y, y^{-}} \left[ \log \frac{e^{\phi(x)^\top \psi(y)}}{\sum_{y' \in y^{-} \cup \{y\}} e^{\phi(x)^\top \psi(y')}} \right]   (4)

over some hypothesis class Φ = {φ: X → Z} for input space X, latent space Z, x ∼ P(X), y ∼ P_positives(X), and y^- ∼ P_negatives(X). Contrastive learning has been widely studied in static unsupervised and supervised learning settings [26, 21, 20], as well as in reinforcement learning [27, 23] for learning state representations with desirable properties such as alignment and uniformity [28].

Solving Equation (4) for (φ*, ψ*) yields a ratio estimator f: X × Y → R which decomposes as f*(x, y) = φ*(x)^⊤ ψ*(y) and, at optimality (see [29] for an exact derivation), captures the log-ratio of P_positives(X) and P_negatives(X):

f^{*}(x, y) \propto \log \frac{P[y \mid x]}{P[y]}.   (5)
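As a small numerical illustration of the decomposition in Equations (2)-(3): for a single logged trajectory, drawing the time offset from a geometric distribution truncated to the remaining horizon and averaging the rewards observed at those offsets recovers the discounted return in expectation. The sketch below is only illustrative; the flat array of future rewards is an assumed input format.

```python
import numpy as np

def mc_q_estimate(future_rewards, gamma=0.99, num_samples=10_000, rng=None):
    """Monte-Carlo view of Q(s_t, a_t) as a reward average under the occupancy measure.

    future_rewards[k] is the reward observed k+1 steps after taking (s_t, a_t).
    Offsets are sampled from a geometric distribution truncated to the remaining
    horizon; re-scaling by the geometric normalizer recovers the discounted return.
    """
    rng = rng or np.random.default_rng(0)
    n = len(future_rewards)
    offsets = np.arange(1, n + 1)
    probs = gamma ** (offsets - 1)
    normalizer = probs.sum()              # equals (1 - gamma**n) / (1 - gamma)
    probs = probs / normalizer            # truncated geometric over {1, ..., n}
    dt = rng.choice(offsets, size=num_samples, p=probs)
    sampled = np.asarray(future_rewards)[dt - 1]
    return sampled.mean() * normalizer    # approx. sum_k gamma**(k-1) * r_{t+k}
```

CVL replaces the true occupancy measure in this sampler with states drawn from the dataset and re-weighted by an exponentiated learned critic, which is what the remainder of this section builds toward.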
Implicit dynamics models via NCE. Various prior works [30, 23, 31] have studied the use of NCE to approximate a single-step dynamics model, where triplets (s_t, a_t, s_{t+1}) have higher similarity than (s_t, a_t, s_{t' ≠ t+1}), effectively defining positive and negative distributions over trajectory data. More recently, contrastive goal-conditioned RL [17] used InfoNCE to condition the ratio estimator on goal states sampled from the replay buffer. These methods use asymmetric encoders, φ(s_t, a_t) and ψ(s_{t+Δt}), where positive samples of s_{t+Δt} are drawn from the discounted state occupancy measure for Δt ≥ 0.

The conditional probability distribution of future states given the current state-action pair can be efficiently estimated using an implicit model trained via contrastive learning over positive and negative feature distributions, as shown in Equation (6). Within each batch, the states used as positive examples for one batch element are used as negative examples for every other batch element:

\ell_{InfoNCE}(\phi, \psi) = -\mathbb{E}_{s_t, a_t, \Delta t, \Delta t^{-}} \left[ \log \frac{e^{\phi(s_t, a_t)^\top \psi(s_{t+\Delta t})}}{\sum_{\Delta t' \in \Delta t^{-} \cup \{\Delta t\}} e^{\phi(s_t, a_t)^\top \psi(s_{t+\Delta t'})}} \right].   (6)

Minimizing \ell_{InfoNCE} over trajectory data yields a ratio estimator which, at optimality, approximates the future discounted state occupancy measure up to a multiplicative term, as per Equation (5):

f^{*}(s_t, a_t, s_{t+\Delta t}) \propto \log \frac{P[s_{t+\Delta t} \mid s_t, a_t; \pi]}{P[s_{t+\Delta t}; \pi]}.   (7)

Intuitively, f* approximates an H-step dynamics model which has an implicit dependence on the policy π that collected the training data, but is time-independent, since Equation (7) is optimized on average across multiple t, Δt. Ordinarily, training state-space models is hard when the dimensions are large, e.g., in image-based domains. However, by using contrastive learning, we can learn this model without requiring it to predict high-dimensional observations, as similarity is evaluated in a lower-dimensional latent space (observe that in Equation (6) the inner product is computed in Z, whose dimension we control, instead of X, which is specified externally). An apparent limitation of the approach is that the probability of future states s_{t+Δt} is recovered only up to a constant. However, it turns out that we can still use this model to get accurate estimates of the Q-values, as described in the next section.
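To make the contrastive critic of Equation (6) concrete, here is a minimal sketch of an in-batch version of the loss, including the optional log-partition regularizer introduced later in Equation (10). The encoder modules phi and psi, and the assumption that future_states[i] was sampled at a geometric time offset from the trajectory containing (states[i], actions[i]), are illustrative choices rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_critic_loss(phi, psi, states, actions, future_states,
                            partition_weight=0.0):
    """In-batch contrastive critic loss in the spirit of Eqs. (6) and (10).

    phi(states, actions) and psi(future_states) return (B, d) embeddings;
    future_states[i] is assumed to be a geometrically-offset future state of
    (states[i], actions[i]), so the diagonal of the logit matrix holds positives.
    """
    z_sa = phi(states, actions)                    # (B, d)
    z_fut = psi(future_states)                     # (B, d)
    logits = z_sa @ z_fut.T                        # (B, B); off-diagonal = negatives
    labels = torch.arange(logits.shape[0], device=logits.device)
    nce = F.cross_entropy(logits, labels)          # negated log-softmax of Eq. (6)
    # Optional penalty on the log-partition function, as in Eq. (10).
    reg = torch.logsumexp(logits, dim=1).pow(2).mean()
    return nce + partition_weight * reg
```

The cross-entropy over the rows of the logit matrix is exactly the negated log-softmax in Equation (6), with every other batch element's future state serving as a negative.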
4 Estimating and Maximizing Returns via Contrastive Learning
In this section, we show how NCE can be used to learn a quantity proportional to a value function, and how the latter can be used in a policy iteration scheme.

4.1 Estimating Q-values using the Contrastive Model
As shown in Equation (2), the Q-function at (s_t, a_t) can be thought of as evaluating the reward function at states sampled from the discounted occupancy measure P^π_{t:H}(s_t, a_t). That is, to estimate a quantity akin to Q^π, we can first estimate the occupancy measure and take a weighted average of rewards over future states using the probabilities from the log-density ratio learned by the contrastive model. Precisely, Equation (2) corresponds to using an importance-weighted estimator, where an optimal critic that minimizes Equation (6) approximates the density ratio from Equation (7). The positive samples come from the discounted state occupancy measure: we first sample a time offset Δt ∼ Geometric(1 − γ) (a column in the dataset), and then sample a state from the distribution of states at this given offset (a row in the dataset). Formally, we can view this as applying InfoNCE to a positive distribution P[s_t, a_t, s_{t+Δt}] and a negative distribution formed as the product of the marginal distributions, P[s_t, a_t] P[s_{t+Δt}].

The critic itself can be trained using the occupancy measure formulation specified in Equation (3) over all state-action pairs in a given episode. However, Equation (3) needs to be re-adjusted to account for the finite-horizon truncation of the geometric mass function, presented in Definition 1.

Definition 1 (Truncated distribution). Let X be a random variable with distribution function F_X. Y is called the truncated distribution of X with support [m, M], 0 < m < M, if

P[Y = y] = \frac{F_X(y - m) - F_X(y - 1 - m)}{F_X(M) - F_X(m)}, \quad y = m, m+1, m+2, ..., M.   (8)

We denote the special case of the truncated geometric distribution as TruncGeom(p, m, M).

The contrastive objective used to train the ratio estimator to approximate the discounted occupancy measure over a dataset D is then the dot product of the features of the current state and action (φ) with those of the future state (ψ), normalized by the exponentiated dot products with the negative samples:

\ell_{InfoNCE}(\phi, \psi) = -\mathbb{E}_{s_t, a_t \sim D, \; \Delta t \sim \mathrm{TruncGeom}(1-\gamma, t, H), \; \Delta t^{-} \sim \mathrm{TruncGeom}(1-\gamma, t' \neq t, H)} \left[ \log \frac{e^{\phi(s_t, a_t)^\top \psi(s_{t+\Delta t})}}{\sum_{\Delta t' \in \Delta t^{-} \cup \{\Delta t\}} e^{\phi(s_t, a_t)^\top \psi(s_{t+\Delta t'})}} \right].   (9)

It is possible that multiple optimal ratio estimators exist, such that the multiplicative proportionality constant depends on the action. To avoid this, we adopt a similar approach to [17] and introduce a regularization term over the partition function, making the ratio estimator training objective

\ell_{Contrastive} = \ell_{InfoNCE} + \lambda_{Partition} \, \mathbb{E}_{s_t, a_t, \Delta t, \Delta t^{-}} \left[ \left( \log \sum_{\Delta t' \in \Delta t^{-} \cup \{\Delta t\}} e^{\phi(s_t, a_t)^\top \psi(s_{t+\Delta t'})} \right)^{2} \right].   (10)

Now, suppose we have found an optimal ratio estimator f. Combining Equation (3) with Definition 1, we obtain the following form of the Q-function for an optimal ratio estimator f which minimizes Equation (6):

Q_{NCE}(s_t, a_t) = \sum_{\Delta t = 1}^{\infty} \gamma^{\Delta t - 1} \int r(s_{t+\Delta t}) \, P[s_{t+\Delta t} \mid s_t, a_t; \pi] \, ds_{t+\Delta t}
 = \frac{1 - \gamma^{H-t}}{1 - \gamma} \, \mathbb{E}_{\Delta t \sim \mathrm{TruncGeom}(1-\gamma, t, H)} \left[ \int r(s_{t+\Delta t}) \, e^{f(s_t, a_t, s_{t+\Delta t})} \, P[s_{t+\Delta t}; \pi] \, ds_{t+\Delta t} \right]
 \propto \frac{1 - \gamma^{H-t}}{1 - \gamma} \, \mathbb{E}_{\Delta t \sim \mathrm{TruncGeom}(1-\gamma, t, H)} \left[ \mathbb{E}_{P^\pi_{t+\Delta t}} \left[ r(s_{t+\Delta t}) \, e^{f(s_t, a_t, s_{t+\Delta t})} \right] \right].   (11)

Here, the offset Δt is a random variable sampled from TruncGeom(1 − γ, t, H), where H is the horizon of the MDP. Later on, we show that Q_NCE(s, a) ∝ Q(s, a) for all s ∈ S and a, a' ∈ A, which makes the contrastive Q-values suitable for policy evaluation.
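The estimator in Equation (11) is straightforward to approximate with samples: draw truncated-geometric offsets, look up the corresponding future states and rewards in the dataset, and average the rewards weighted by the exponentiated critic. The sketch below assumes the data pipeline already returns, for each (s, a) pair, K future states and rewards drawn at such offsets (the offset sampler is included for completeness); the batched shapes and encoder interfaces match the earlier critic sketch and are illustrative.

```python
import torch

def truncated_geometric_offsets(num_samples, gamma, max_offset, device=None):
    # Offsets Dt in {1, ..., max_offset} with P(Dt) proportional to gamma**(Dt - 1),
    # i.e. a geometric distribution truncated at the remaining episode horizon.
    probs = gamma ** torch.arange(max_offset, dtype=torch.float32, device=device)
    probs = probs / probs.sum()
    return torch.multinomial(probs, num_samples, replacement=True) + 1

def q_nce(phi, psi, states, actions, future_states, future_rewards):
    """Monte-Carlo estimate of Q_NCE (Eq. 11), up to a multiplicative constant.

    future_states:  (B, K, state_dim) -- K future states per (s, a) pair, taken
                    from the same trajectories at truncated-geometric offsets.
    future_rewards: (B, K)            -- rewards observed at those states.
    """
    z_sa = phi(states, actions)                          # (B, d)
    z_fut = psi(future_states)                           # (B, K, d)
    f = torch.einsum('bd,bkd->bk', z_sa, z_fut)          # critic logits f(s, a, s_future)
    return (future_rewards * torch.exp(f)).mean(dim=1)   # (B,) unnormalized Q-values
```

Because the critic only recovers the density ratio up to a constant, these values are proportional to, rather than equal to, the true Q-function, which is all that the subsequent policy step requires.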
4.2 Efficient Estimation using Random Fourier Features
A major issue with using Q_NCE out of the box is that it is computationally expensive, requiring evaluation of the inner product φ(s_t, a_t)^⊤ ψ(s_{t+Δt}) with a large number of future states. The underlying cause of this computational overhead is the RBF kernel term e^{φ(s_t, a_t)^⊤ ψ(s_{t+Δt})}. If we instead used a linear kernel, the term φ(s_t, a_t) could be factored out, and we could separately keep track of reward-weighted expected future features. This would (1) reduce the computational complexity of N actor updates over D from O(|D| · N) to O(|D| + N) and (2) reduce the variance of the representation if the features of future states are averaged with an exponential moving average. It turns out that the RBF kernel can be approximately linearized by using random Fourier features [32, 31].

Lemma 1 (Adapted from [32]). Let x, y ∈ R^d be unit vectors, and let F_{W,b}(x) = \sqrt{2e/d} \, \cos(Wx + b), where W ∼ Normal(0, I) and b ∼ Uniform(0, 2π). Then E[F_{W,b}(x)^\top F_{W,b}(y)] = e^{x^\top y}.

Lemma 1 is a straightforward modification of the result from [32] and allows us to reduce the RBF kernel to an expectation over d-dimensional random feature vectors:

Q_{NCE}(s_t, a_t) = \frac{1}{1-\gamma} \, \mathbb{E}_{\Delta t \sim \mathrm{TruncGeom}(1-\gamma, t, H)} \big[ \mathbb{E}_{P(s_{t+\Delta t}; \pi)} [ e^{\phi(s_t, a_t)^\top \psi(s_{t+\Delta t})} \, r(s_{t+\Delta t}) ] \big]
 = \frac{1}{1-\gamma} \, F_{W,b}(\phi(s_t, a_t))^\top \, \mathbb{E}_{\Delta t \sim \mathrm{TruncGeom}(1-\gamma, t, H)} \big[ \mathbb{E}_{P(s_{t+\Delta t}; \pi)} [ F_{W,b}(\psi(s_{t+\Delta t})) \, r(s_{t+\Delta t}) ] \big]
 = \frac{1}{1-\gamma} \, F_{W,b}(\phi(s_t, a_t))^\top \, \xi_t(\pi).   (12)

The advantage of using the RFF approximation is that it allows us to split the exponential term inside the expectation and separately keep track of the policy-dependent, reward-weighted future-state probability term, while the state-action-dependent term is learned online. Intuitively, the ξ_t(π) term accumulates the Fourier features of future states, re-weighted by the corresponding reward and averaged over the geometric mixture of future states. Since it does not depend on the current state s_t, it can be tracked using a memory bank ξ_0(π), ..., ξ_H(π), which is updated via an exponential moving average to reduce variance. (This idea can be adapted to online learning settings as well by clipping policy improvement steps so that ξ does not change too fast under newly collected data.)

4.3 Learning the Policy
Once the policy evaluation phase completes and we have an estimate Q_NCE, we optimize a policy to maximize the returns predicted by this Q-value. We can decode the policy by minimizing its Kullback-Leibler divergence to the Boltzmann Q-value distribution (see [33]), which can be done efficiently by minimizing the following objective:

\ell_{Policy}(\theta) = \mathbb{E}_{s_t \sim D} \left[ D_{KL}\!\left( \pi_\theta(s_t) \,\Big\|\, \frac{e^{Q(s_t, \cdot)/\tau}}{\int_{a \in A} e^{Q(s_t, a)/\tau} \, da} \right) \right].   (13)

Note that in discrete action spaces, minimizing Equation (13) leads to a soft version of the greedy policy decoding π_greedy(s) = arg max_{a ∈ A} Q_NCE(s, a) for s ∈ S. In practice, we approximate the KL term in Equation (13) using N_a Monte-Carlo action samples {a_i}_{i=1}^{N_a}.

Decoding π in such a way can lead to sampling out-of-distribution actions in regions with low dataset coverage, making the Q_NCE estimator less accurate. To mitigate this issue, we follow prior work [34, 35, 36] and add a policy behavior-cloning term which prevents the new policy from straying too far away from the data:

\ell_{BC}(\theta) = -\mathbb{E}_{a, s \sim D} [ \log \pi_\theta(a \mid s) ],   (14)

with the entropy estimator H(π(s)) = -\mathbb{E}_{a \sim \pi(s)}[\log \pi(a \mid s)]. We add this extra loss to \ell_{Policy} to learn a policy π which prioritizes high Q-values that are well-supported by the offline dataset D. Thus, the final policy optimization objective becomes

\bar{\ell}_{Policy}(\theta) = \ell_{Policy}(\theta) + \lambda_{BC} \, \ell_{BC}(\theta).   (15)

Lemma 2 tells us that using CVL as a surrogate Q-function corresponds to one step of conservative policy improvement, where π satisfies the soft constraints of Equation (13) and keeps E_{D_μ}[D_{KL}(π(s) ‖ μ(s))] small via the BC term. Its proof is located in Section 6.2.

Lemma 2 (Contrastive policy improvement). Let μ be a policy and let Q^μ_NCE = \min_{\phi, \psi \in \Phi} \mathbb{E}_{D_\mu}[\ell_{Critic}(\phi, \psi)]. If

\pi(s) = \arg\min_{\pi \in \Pi} D_{KL}\!\left( \pi(s) \,\Big\|\, \frac{e^{Q^\mu_{NCE}(s_t, \cdot)/\tau}}{\int_{a \in A} e^{Q^\mu_{NCE}(s_t, a)/\tau} \, da} \right),   (16)

then Q^\pi(s, a) \geq Q^\mu(s, a) for all (s, a) \in D_\mu.
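Below is a hedged sketch of the policy update in Equations (13)-(15). Rather than the exact forward KL, it uses the common weighted maximum-likelihood surrogate: sampled candidate actions are weighted by a self-normalized softmax of their Q-values and their log-probability is maximized, alongside the behavior-cloning term of Equation (14). The policy interface (a callable returning a torch.distributions.Distribution), the flat state representation, and the particular surrogate are assumptions, not the authors' exact estimator.

```python
import torch

def policy_objective(policy, q_fn, states, dataset_actions,
                     num_action_samples=16, tau=1.0, bc_weight=1.0):
    """Sketch of the policy objective in the spirit of Eqs. (13)-(15).

    policy: maps a state batch (B, obs_dim) to a Distribution over actions.
    q_fn:   maps (states, actions) batches to (unnormalized) Q estimates.
    """
    dist = policy(states)                                    # pi_theta(. | s)
    actions = dist.sample((num_action_samples,))             # (Na, B, act_dim)
    flat_states = states.unsqueeze(0).expand(num_action_samples, *states.shape)
    q = q_fn(flat_states.reshape(-1, states.shape[-1]),
             actions.reshape(-1, actions.shape[-1]))
    q = q.reshape(num_action_samples, -1)                    # (Na, B)

    # Weight sampled actions by a softmax of Q / tau (detached) and maximize
    # their log-probability: a weighted-ML stand-in for the KL of Eq. (13).
    weights = torch.softmax(q / tau, dim=0).detach()         # (Na, B)
    log_probs = dist.log_prob(actions)                       # (Na, B)
    kl_surrogate = -(weights * log_probs).sum(dim=0).mean()

    bc = -dist.log_prob(dataset_actions).mean()              # Eq. (14)
    return kl_surrogate + bc_weight * bc                     # Eq. (15)
```

Here q_fn can be the q_nce sketch from Section 4.1 (with suitably tiled future samples) or the RFF form of Equation (12) once the ξ memory bank is maintained.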
4.4 Practical Implementation
We now present our complete method, which can be viewed as an actor-critic method for offline RL. We learn the ratio estimator via contrastive learning (Equation (10)) and learn the policy via Equation (15). We interleave these steps in most of our experiments, but the experiments in Section 6.3 show that the ratio estimator can also be pretrained, e.g., in the presence of unlabeled data from related tasks. We summarize the method in Algorithm 1.

Algorithm 1: Contrastive Value Learning (CVL)
Input: dataset D ∼ μ, encoder networks φ and ψ, temperature parameter τ, exponential-moving-average parameter α.
1: for epoch j = 1, 2, ..., J do
2:   for minibatch B ∼ D do
3:     Update φ^(j+1), ψ^(j+1) using ∇_{φ,ψ} ℓ_NCE(φ^(j), ψ^(j))          /* update the density ratio estimator using Equation (10) */
4:     Q(s, a) ← Equation (12) if using RFF, otherwise Equation (11)      /* estimate the contrastive Q-function */
5:     Update π_θ using ∇_θ ℓ_Policy(θ)                                    /* decode the policy from the Q-function using Equation (15) */
6:     ψ^(j+1)_EMA ← α ψ^(j+1) + (1 − α) ψ^(j)_EMA                         /* update the future-state encoder with an EMA */
7:     ξ^(j+1)_EMA ← ψ^(j+1)_EMA · E_B[r_{t+Δt}]                            /* update the future-state features weighted by rewards */

4.5 Interpretations and Connections with Prior Work
The main distinction between Contrastive Value Learning and prior works lies specifically in representing the Q-values through a two-step decomposition: the Q-value is represented as an occupancy measure weighted by the reward signal, and the occupancy measure itself is represented using a powerful likelihood-based model parameterized by an implicit function. Decoupling the learning of the occupancy measure from reward maximization allows, among other things, for efficient pretraining strategies on unlabeled data, i.e., trajectory data without reward information, and can be used to learn provably optimal state representations for any reward function [37]. While CVL is similar in spirit to the successor representation [14, 13], the occupancy measure learned by CVL is much richer than that of SR, as it captures the entire distribution over future states instead of only the first moment. Another method, γ-models [15], is closely related to CVL, but uses a surrogate single-step TD objective to learn the occupancy measure, similarly to C-learning [16].
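For concreteness, the following is a minimal training loop in the spirit of Algorithm 1, gluing together the illustrative helpers sketched in Sections 4.1 and 4.3. The RFF memory bank (line 7) is omitted, Q-values are recomputed from the critic as in Equation (11), and a simplified advantage-weighted update stands in for the KL-based policy decoding of Equation (13); the data-loader format and optimizer choices are assumptions.

```python
import copy
import torch

def train_cvl(phi, psi, policy, dataloader, epochs=100, lr=3e-4,
              ema=0.005, tau=1.0, bc_weight=1.0):
    """Interleaved critic / policy training, loosely following Algorithm 1."""
    psi_ema = copy.deepcopy(psi)  # slowly-moving copy of the future-state encoder
    critic_opt = torch.optim.Adam(list(phi.parameters()) + list(psi.parameters()), lr=lr)
    policy_opt = torch.optim.Adam(policy.parameters(), lr=lr)

    for _ in range(epochs):
        for s, a, s_pos, futures, future_rewards in dataloader:
            # Line 3: update the density-ratio estimator.
            critic_loss = contrastive_critic_loss(phi, psi, s, a, s_pos)
            critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

            # Lines 4-5: estimate Q with the EMA critic, then update the policy
            # with detached softmax weights plus a behavior-cloning term.
            with torch.no_grad():
                q = q_nce(phi, psi_ema, s, a, futures, future_rewards)   # (B,)
                weights = torch.softmax(q / tau, dim=0) * q.shape[0]
            dist = policy(s)
            log_prob = dist.log_prob(a)
            pi_loss = -(weights * log_prob).mean() - bc_weight * log_prob.mean()
            policy_opt.zero_grad(); pi_loss.backward(); policy_opt.step()

            # Line 6: exponential moving average of the future-state encoder.
            with torch.no_grad():
                for p, p_ema in zip(psi.parameters(), psi_ema.parameters()):
                    p_ema.mul_(1.0 - ema).add_(p, alpha=ema)
```

The EMA copy plays the role of the slowly-updated ψ in Algorithm 1 and keeps the Q-targets used by the policy step from drifting with every critic update.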
Figure 2: Metaworld benchmark. (Left) We evaluate CVL on 50 tasks from Metaworld, a subset of which are shown here. (Right) Compared with three offline RL baselines, CVL achieves statistically significant improvements in offline performance. Results are reported over 5 random seeds.

5 Experiments
Our experiments aim to answer three questions. First, we study how CVL compares with baseline approaches on a large benchmark of state-based tasks. Our second set of experiments looks at image-based tasks, testing the hypothesis that CVL scales to these tasks more effectively than the baselines. We conclude with ablation experiments. Our main point of comparison will be a high-performing offline RL method, CQL [4]. While CVL learns an implicit model, that model is structurally more similar to value-based RL methods than to model-based methods, motivating our comparison to a value-based baseline (CQL).

Metaworld. We first test our approach on the MetaWorld benchmark [11], which consists of 50 robotic manipulation tasks, such as opening a door, picking up an object, or reaching a certain area of the table, executed by a robotic arm (see Figure 2, left). This domain is an ideal testbed for CVL, as it allows for both full-state and image-based experiments, has a dense and informative reward function (thus decoupling the problem of representation learning from exploration), and is challenging for model-free methods, which leaves room for improvement. While the original MetaWorld domain has been used to evaluate online RL agents, we create an ad hoc dataset suitable for offline learning. To do so, we train Soft Actor-Critic [33] from full states on each of the 50 tasks separately for 500k frames, and save the resulting replay buffer, which forms the training dataset. As shown in Figure 2 (right), CVL manages to considerably improve upon strong baselines such as behavior cloning, CQL, and CQL with UDS [10] (for CQL+UDS, we combine all data from the current task with unlabeled data from related tasks with rewards set to 0; in the absence of related tasks, we pre-train the ratio estimator on the current task with 0 rewards). We report the results on all tasks of the MetaWorld suite over 5 random seeds, according to the aggregation methodology proposed by [12]. Per-environment scores are available in Table 7.

Table 1: Offline RL with images. We compare CVL to baselines on four offline, image-based tasks from MetaWorld over 5 random seeds.
  door-close:   BC 571±9.9,     CQL 4249±269.9,  CVL 4480±305.1
  door-open:    BC 178±4.0,     CQL 2099±0.9,    CVL 3389±76.6
  drawer-close: BC 2414±1736.5, CQL 3964±1634.9, CVL 2177±1679.5
  drawer-open:  BC 1030±104.2,  CQL 820±56.0,    CVL 2543±115.0

Figure 3: RFF ablation. CVL with RFF (orange) performs slightly better than without RFF (blue).

Table 2: Offline RL with full states. We compare CVL to CQL on the robotics suite D4RL [38].
  walker2d: medium-replay +56±10, medium −43±12, random +415±72
  ant:      medium-replay +9±3,   medium +21±6,  random +23±5
  hopper:   medium-replay +59±11, medium −15±5,  random +40±8

Table 3: Comparison to successor features. We compare successor features ([39]) to CQL on D4RL.
  walker2d: medium-replay +11±3, medium 83±21,  random 92±23
  ant:      medium-replay 66±11, medium 21±14,  random 14±7
  hopper:   medium-replay +21±5, medium −14±10, random +270±54

Table 4: Comparison to IQL. We compare IQL ([40]) to CQL on D4RL.
  walker2d: medium-replay 4.27, medium +8,    random -
  ant:      medium-replay -,    medium -,     random -
  hopper:   medium-replay 0.32, medium +13.3, random -

D4RL. Table 2 shows the relative improvement in normalized scores of CVL over CQL [4], a strong offline RL baseline, on the offline RL robotics suite D4RL [38]. Notably, CVL is able to outperform CQL on data coming from a random policy. Moreover, Table 3 and Table 4 compare two baselines, successor features (inspired by [39]) and IQL [40], to CQL.

Image-based experiments. Our working hypothesis is that the contrastive formulation of the value function acts in itself as a pre-training mechanism through the lens of representation learning. For this reason, we conduct further experiments on 4 image-based tasks from the MetaWorld suite (similarly to the full-state setting, the dataset was obtained from the SAC replay buffer trained on rendered images). The results presented in Section 5 show that CVL is also able to learn meaningful Q-values and achieve good empirical performance on hard image-based tasks.

Ablations. In Section 6.3, we assess the similarity between contrastive and true Q-values on the continuous Mountain Car environment [41] by fitting CVL to the data from SAC's [33] replay buffer. Figure 8 (left) shows the contrastive Q-values on a log scale, evaluated on trajectories from the SAC replay; for comparison, we also show the Q-values learned by online SAC in Figure 8 (right). Note that the value function learned by CVL preserves the same topology as the true value function, up to a multiplicative rescaling.

6 Discussion
This paper presented an RL algorithm that learns a contrastive model of the world, and uses that model to obtain Q-values by estimating the likelihood of visiting future states.
Our experiments demonstrate that this approach can effectively solve a large number of offline RL tasks, including from image-based observations. Our pretraining results hinted that CVL can be pretrained on datasets from other tasks, and we are excited to pretrain our model on datasets of increasing size.

Limitations. One limitation of our approach is that it corresponds to a single step of policy improvement. This limitation might be lifted by training the contrastive model using a temporal difference update for the contrastive model [16, 42]. A second limitation is that the RFF approximation can be poor when the Fourier dimension is small, which has not been the case in our experiments, as CVL+RFF performed on par with full-kernel CVL. We tried to train the contrastive model using non-exponentiated features (akin to [43]), but failed to achieve satisfactory results. Figuring out how to effectively train these spectral models remains an important question.

References
[1] A. Kumar, A. Singh, F. Ebert, Y. Yang, C. Finn, and S. Levine. Pre-training for robots: Offline RL enables learning new tasks from a handful of trials. arXiv preprint arXiv:2210.05178, 2022.
[2] Y. Wu, G. Tucker, and O. Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
[3] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pages 2052–2062. PMLR, 2019.
[4] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative Q-learning for offline reinforcement learning. arXiv preprint arXiv:2006.04779, 2020.
[5] R. Kidambi, A. Rajeswaran, P. Netrapalli, and T. Joachims. MOReL: Model-based offline reinforcement learning. arXiv preprint arXiv:2005.05951, 2020.
[6] T. Yu, A. Kumar, R. Rafailov, A. Rajeswaran, S. Levine, and C. Finn. COMBO: Conservative offline model-based policy optimization. Advances in Neural Information Processing Systems, 34:28954–28967, 2021.
[7] T. Yu, G. Thomas, L. Yu, S. Ermon, J. Y. Zou, S. Levine, C. Finn, and T. Ma. MOPO: Model-based offline policy optimization. Advances in Neural Information Processing Systems, 33:14129–14142, 2020.
[8] A. Argenson and G. Dulac-Arnold. Model-based offline planning. arXiv preprint arXiv:2008.05556, 2020.
[9] R. Munos. Error bounds for approximate policy iteration. In ICML, volume 3, pages 560–567, 2003.
[10] T. Yu, A. Kumar, Y. Chebotar, K. Hausman, C. Finn, and S. Levine. How to leverage unlabeled data in offline reinforcement learning. arXiv preprint arXiv:2202.01741, 2022.
[11] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, pages 1094–1100. PMLR, 2020.
[12] R. Agarwal, M. Schwarzer, P. S. Castro, A. C. Courville, and M. Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34:29304–29320, 2021.
[13] A. Barreto, W. Dabney, R. Munos, J. J. Hunt, T. Schaul, H. Van Hasselt, and D. Silver. Successor features for transfer in reinforcement learning. arXiv preprint arXiv:1606.05312, 2016.
[14] P. Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613–624, 1993.
[15] M. Janner, I. Mordatch, and S. Levine. Generative temporal difference learning for infinite-horizon prediction.
arXiv preprint arXiv:2010.14496, 2020.
[16] B. Eysenbach, R. Salakhutdinov, and S. Levine. C-learning: Learning to achieve goals via recursive classification. arXiv preprint arXiv:2011.08909, 2020.
[17] B. Eysenbach, T. Zhang, R. Salakhutdinov, and S. Levine. Contrastive learning as goal-conditioned reinforcement learning. arXiv preprint arXiv:2206.07568, 2022.
[18] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 297–304. JMLR Workshop and Conference Proceedings, 2010.
[19] A. v. d. Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[20] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.
[21] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607. PMLR, 2020.
[22] A. Srinivas, M. Laskin, and P. Abbeel. CURL: Contrastive unsupervised representations for reinforcement learning. International Conference on Machine Learning, 2020.
[23] B. Mazoure, R. T. d. Combes, T. Doan, P. Bachman, and R. D. Hjelm. Deep reinforcement and infomax learning. Neural Information Processing Systems, 2020.
[24] R. Agarwal, M. C. Machado, P. S. Castro, and M. G. Bellemare. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. arXiv preprint arXiv:2101.05265, 2021.
[25] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[26] R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
[27] H. Kim, J. Kim, Y. Jeong, S. Levine, and H. O. Song. EMI: Exploration with mutual information. arXiv preprint arXiv:1810.01176, 2018.
[28] T. Wang and P. Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929–9939. PMLR, 2020.
[29] Z. Ma and M. Collins. Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency. arXiv preprint arXiv:1809.01812, 2018.
[30] Y. Du and I. Mordatch. Implicit generation and modeling with energy based models. Advances in Neural Information Processing Systems, 32, 2019.
[31] O. Nachum and M. Yang. Provable representation learning for imitation with contrastive Fourier features. Advances in Neural Information Processing Systems, 34:30100–30112, 2021.
[32] A. Rahimi and B. Recht. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems, 20, 2007.
[33] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pages 1861–1870. PMLR, 2018.
[34] K. W. Cobbe, J. Hilton, O. Klimov, and J. Schulman. Phasic policy gradient.
In International Conference on Machine Learning, pages 2020–2027. PMLR, 2021.
[35] Y. Zhao, R. Boney, A. Ilin, J. Kannala, and J. Pajarinen. Adaptive behavior cloning regularization for stable offline-to-online reinforcement learning. 2021.
[36] M. Schwarzer, N. Rajkumar, M. Noukhovitch, A. Anand, L. Charlin, D. Hjelm, P. Bachman, and A. Courville. Pretraining representations for data-efficient reinforcement learning. arXiv preprint arXiv:2106.04799, 2021.
[37] A. Touati and Y. Ollivier. Learning one representation to optimize all rewards. Advances in Neural Information Processing Systems, 34, 2021.
[38] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4RL: Datasets for deep data-driven reinforcement learning, 2020.
[39] A. Filos, C. Lyle, Y. Gal, S. Levine, N. Jaques, and G. Farquhar. PsiPhi-learning: Reinforcement learning with demonstrations using successor features and inverse temporal difference learning. In International Conference on Machine Learning, pages 3305–3317. PMLR, 2021.
[40] I. Kostrikov, J. Tompson, R. Fergus, and O. Nachum. Offline reinforcement learning with Fisher divergence critic regularization. arXiv preprint arXiv:2103.08050, 2021.
[41] A. W. Moore. Efficient memory-based learning for robot control. 1990.
[42] L. Blier, C. Tallec, and Y. Ollivier. Learning successor states and goal-dependent values: A mathematical viewpoint. arXiv preprint arXiv:2101.07123, 2021.
[43] J. Z. HaoChen, C. Wei, A. Gaidon, and T. Ma. Provable guarantees for self-supervised deep learning with spectral contrastive loss. Advances in Neural Information Processing Systems, 34:5000–5011, 2021.
[44] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In International Conference on Machine Learning, pages 1407–1416. PMLR, 2018.
[45] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.
[46] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[47] C. X. Wang and W. P. Tay. Practical bounds of Kullback-Leibler divergence using maximum mean discrepancy. arXiv preprint arXiv:2204.02031, 2022. |
RQ_7yVV8vA | Learning to See Physical Properties with ActiveSensing Motor PoliciesGabriel B. Margolis Xiang Fu Yandong Ji Pulkit AgrawalImprobable AI Lab, Massachusetts Institute of Technologyhttps://gmargo11.github.io/active-sensing-locoAbstract: To plan efficient robot locomotion, we must use the information abouta terrain’s physics that can be inferred from color images. To this end, we train avisual perception module that predicts terrain properties using labels from a smallamount of real-world proprioceptive locomotion. To ensure label precision, weintroduce Active Sensing Motor Policies (ASMP). These policies are trained toprefer motor skills that facilitate accurately estimating the environment’s physics,like swiping a foot to observe friction. The estimated labels supervise a visionmodel that infers physical properties directly from color images and can be reusedfor different tasks. Leveraging a pretrained vision backbone, we demonstrate ro-bust generalization in image space, enabling path planning from overhead imagerydespite using only ground camera images for training.1 IntroductionAerial Imagesat̂etotItActive Sensing Motor PolicyPretrained BackboneFeaturesPhysics AdapterDense PhysicsTrainInferGrounded Perception ModuleπCost Function from SimulatorNominal PlanPayload PlanEgocentric ImagesTraversal LabelsFigure 1: Learning to see how terrains feel. Wepropose (1) learning an optimized gait for collect-ing informative proprioceptive terrain labels thatare (2) used to supervise training for a vision mod-ule, which can (3) be used for navigation planningwith new tasks and image sources.In recent years, legged locomotion controllershave exhibited remarkable stability and controlacross a wide range of terrains such as pave-ment, grass, sand, ice, slopes, and stairs [1, 2,3, 4, 5, 6, 7, 8]. State-of-the-art approaches usesim-to-real learning with a combination of pro-prioception and depth sensing to perceive theground beneath the robot or obstacles aroundit [5, 7, 8, 9, 10, 11, 12, 13, 14] but have dis-carded a valuable signal about terrain’s physi-cal properties that is conveyed by color images.To utilize this information, some works learnto predict locomotion performance or interac-tion properties from terrain appearance usingdata collected in the real world [15, 16, 17, 18].However, the learned representations in theseworks are task- or policy-specific. Instead, wepropose directly predicting the terrain’s physi-cal properties (e.g. friction, roughness) that (a)can be simulated and (b) are invariant to the pol-icy and task. With this approach, we can learna cost map from simulated rollouts to informtraversal planning when performing a new task(like payload dragging) or optimizing a new ob-jective (like a preference for speed vs energyefficiency).A natural way of estimating the terrain’s physical parameters during data collection is by traininga neural network to predict them from the proprioceptive sensor history, supervised by the groundtruth labels available in simulation [4, 5]. 
We discovered that the estimates obtained through this approach can be imprecise because the locomotion behavior often makes the terrain properties hard to predict. Therefore, unlike prior works in terrain perception that predict the terrain character or traversal cost from passive data [15, 16, 17, 18], we propose training a specialized data collection policy that directly optimizes for terrain property estimation. This Active Sensing Motor Policy (ASMP) learns emergent locomotion behavior, such as dragging the feet on the ground to better estimate friction, and improves the informativeness of its proprioceptive traversals.

[Figure 2 diagram: (left) passive estimation in RL (concurrent state estimation / RMA): the state history s_{t−H:t} feeds a policy and an estimator; the policy is trained with L(θ) = Ê_t[log π_θ(a_t | s_t, ê_t) Â_t] plus a supervised estimation loss ‖e_t − ê_t‖₂ against a privileged oracle e_t, under reward r(s_t, a_t), and the estimation error is not propagated to the policy. (right) Active estimation in RL (Active Sensing Motor Policies): the same architecture, but the reward r(s_t, a_t, ‖e_t − ê_t‖₂) includes an active estimation reward, so the estimation error is propagated to the policy through the RL objective Ê_t[log π^est_θ(a_t | s_t, ê_t) Â^est_t].]
Figure 2: Active Sensing Motor Policies optimize for estimation. Unlike passive methods (left), which estimate the state only to the extent that it is observable as a byproduct of control relevance, Active Sensing Motor Policies (right) directly incentivize improved state estimation through the advantage function. This incentivizes a policy to adopt information-gathering behaviors, like intentionally swiping the robot's foot during legged locomotion, to improve the estimate quality.

We use the improved data obtained through ASMP as self-supervision to learn a visual perception module that predicts terrain material properties. The same model can inform efficient plans for nominal locomotion and for dragging objects by considering the impact of terrain properties on traversal cost. Because the robot is low to the ground, its onboard cameras only provide enough range for local planning. Sometimes the robot is in a position where it must plan its motion using only the information in front of it, but in other cases, it might have access to some global information about the environment. Therefore, we also consider the scenario of a teamed drone that flies above the legged robot and provides an extended view of the environment. Despite being trained solely with images captured from an onboard camera, our resulting model can also be evaluated to predict terrain properties using images from various cameras and viewpoints, which allows this type of global planning to succeed.

2 Method

Our approach consists of the following stages, which are also illustrated graphically in Figure 1:

1. Active Sensing: We estimate the terrain dynamics parameter, e_t, from the proprioceptive sensor history during an initial blind traversal. Our Active Sensing Motor Policy (ASMP) crucially provides better-calibrated estimates than the baseline policy. In our experiments, the estimated parameter e_t is the ground friction coefficient, the ground roughness magnitude, or both. (Section 2.1)

2. Self-Supervised Vision Learning: Using labels of e_t recorded from the real-world traversal of the robot, we learn a function, ê = f(I), that predicts the per-pixel value of e_t for a given image I. The labels for training are only available at the pixels corresponding to the places the robot traversed, but the resulting model can be queried to predict the terrain parameter at any pixel.
(Section 2.2)

3. Cost Function Learning: To inform planning, we learn cost functions that relate the terrain dynamics to various performance metrics. First, we create terrains with a range of e_t values in simulation. Then, we perform rollouts in simulation to measure a cost function C(e_k) that relates dynamics parameters to performance. We learn a separate cost function for each task. (Section 2.3)

4. Dynamics-Aware Path Planning: Combining (2–3), we compute cost maps directly from color images and use them for path planning. (Section 2.4)

2.1 Active Sensing Motor Policies: Learning Whole-Body Active Estimation

In learning control policies under partial observations, it is commonplace to train with an implicit [1, 2, 3, 5, 8] or explicit [4, 19] incentive to form representations within the policy network that correspond to the unobserved dynamics parameters. Consider the concurrent state estimation framework of Ji et al. [4], under which a state estimation network is trained simultaneously with the policy network to predict the unobserved parameters. The predictions of the state estimation network are concatenated with the rest of the observation to construct the policy network input. This approach optimizes a two-part loss consisting of the standard policy objective and the state estimation error:

L(θ, θ′) = Ê_t[log π_θ(a_t | s_t, ê_t) Â_t] + ‖e_t − ê_θ′(s_t)‖₂.

This has been empirically shown to yield better policy performance in environments with randomized dynamics or unobserved state variables [4].

In the formulation above, the estimation error is used to update the state estimator weights θ′, but not the policy weights θ. This does not incentivize the policy to adjust its actions to improve estimation performance beyond what is required for control. Typically, this is no problem because it allows the policy to maximize its performance at the current control task. However, our end goal is to use the output of the state estimator to train a visual perception module that may be reused with other controllers and tasks. To support this, the labels should be as accurate as possible even when that is not necessary for control. To obtain the most accurate perception module, we would like a mechanism to improve the state estimate quality of the proprioceptive data collection policy as much as possible by adapting the policy's behavior. To this end, we propose Active Sensing Motor Policies, in which the policy π_est is trained with an additional estimation reward

r_est = c · exp(−‖e − ê‖₂),

which decreases with the estimation error (a minimal code sketch incorporating this term is given below). Figure 2 illustrates the policy architecture. In practice, we observe that an Active Sensing Motor Policy that is rewarded for estimating the ground friction coefficient slides one foot along the ground or swipes it vigorously to improve the friction coefficient observability in the state history.

2.2 Grounding Visual Features in Physics from Real-world Experience

We collect paired proprioceptive and vision data from the state estimation policy in the real world in order to learn about the relationship between visual appearance and terrain physics. Specifically, we collect data of the form (I, ê, x)_t, where I is a camera image, ê are the estimated dynamics parameters, and x is the position and orientation of the robot in a fixed reference frame. We obtain x by training an additional 2D output of the final MLP layer in our learned state estimator to predict the displacement in the ground plane of the base from its location at the previous timestep, Δx, and then integrate the estimated displacements.
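As a concrete illustration of how the estimation bonus of Section 2.1 and the (I, ê, x) logging just described fit together, here is a minimal sketch under assumed interfaces; the estimator and policy signatures, the constant C_EST, and the toy stand-ins are hypothetical placeholders, not the released implementation.

```python
import numpy as np

C_EST = 0.1  # weight of the estimation bonus (illustrative value, not from the paper)

def asmp_reward(task_reward, e_true, e_hat):
    """Training-time reward (simulation only, where the privileged e_true is available):
    task reward plus the ASMP bonus r_est = c * exp(-||e - e_hat||)."""
    err = np.linalg.norm(np.asarray(e_true) - np.asarray(e_hat))
    return task_reward + C_EST * np.exp(-err)

def log_traversal(estimator, frames, observations):
    """Deployment-time logging (Section 2.2): integrate the estimator's per-step ground-plane
    displacement into a dead-reckoned position x and pair it with the terrain estimate e_hat."""
    x = np.zeros(2)
    records = []
    for image, obs in zip(frames, observations):
        e_hat, delta_x = estimator(obs)      # hypothetical estimator interface
        x = x + np.asarray(delta_x)          # integrate the estimated displacement
        records.append({"image": image, "e_hat": float(e_hat), "x": x.copy()})
    return records

# Toy usage with a stand-in estimator: constant friction estimate, 5 cm forward per step.
fake_estimator = lambda obs: (0.8, (0.05, 0.0))
data = log_traversal(fake_estimator, frames=[None] * 3, observations=[None] * 3)
print(asmp_reward(1.0, e_true=0.9, e_hat=0.8), data[-1]["x"])
```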
The integrated estimates xwill drift over time, but wewill only rely on them over a short time window. This alleviates the need for a separate odometryalgorithm to estimate the robot’s state.Using the camera intrinsic and extrinsic transform, we project the relative positions of the robot inthe past and future into each camera image frame. We restrict the positions to those between 1 mand5 mfrom the robot along the traversal path so that they are neither too far away to see nor soclose as to be obstructed from view by the robot’s own body. We label each of the projected robotpositions with the estimated dynamics parameters ˆethat the robot felt when it walked there. Thisyields a corresponding label image Ietfor each color image Iwhere the traversed pixels are labeledwith their measured dynamics.For each color frame It, we use the pretrained convolutional backbone [20] to compute a densefeature map. Similar to the procedure that Oquab et al. [21] used for depth estimation, we discretizethe labels ˆetinto20bins and train a single linear layer with cross-entropy loss where the inputs arethe features of one patch and the outputs are the logits of the patch’s ˆetlabel from proprioception.32.3 Cost Function Learning: Connecting Physics Parameters to AffordancesFriction Coefficient Mean VelocityDragging PayloadLocomotionOperating ModesAffordance MeasurementsFigure 3: Locomotion affordances. We mea-sure the dependence of locomotion performance(1 m/s) on terrain friction in two different oper-ating modes. In free locomotion, the controllermaintains the target velocity across a range of fric-tion coefficients, except for the lowest friction. Incontrast, when dragging a weighted box, the robotslows down as the terrain friction increases.The impact of terrain properties on robot per-formance is task-dependent: for example, arobot dragging an object may face distinct con-straints that inhibit its traversal on some ter-rains, compared to a robot without any payload.To use our vision module for planning, we mustestablish a mapping between terrain propertiesand robot performance for each task. We pro-pose a simple procedure for extracting a taskcost function from simulated data to demon-strate that our perception module can be usefulin planning for multiple tasks, which we referto as “operating modes”. We sample simulatedterrains with a variety of terrain properties etand command a locomotion policy from priorwork [19] to walk forward at 1 m/s. We recordthe actual resulting velocity achieved on eachterrain. We evaluate the mean realized velocityfor multiple operating modes: (1) locomotion, (2) payload dragging. We construct a cost functionfor each operating mode as the average time spent traversing one meter of a given terrain. Mini-mizing this cost function during path planning will yield an estimated shortest-time path. While wefocus on time-optimal payload dragging as an example, (1, 2) could be any combination of task andmetric as long as their relation to terrain properties can be evaluated in simulation.2.4 Integrated Dynamics-Aware Path Planning from VisionOur perception module (Section 2.2) runs in real-time ( 2 Hz) using onboard compute. Although itwas trained using images from a 360-degree camera, the resulting pixel-wise friction estimator canbe evaluated in images from other cameras including the robot’s onboard fisheye camera and anoverhead drone. 
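The linear-probe step described above can be sketched as follows; the feature dimension, the binning range (taken here to match the μ ∈ [0.25, 3.0] range used elsewhere in the paper), the expected-value decoding, and the random stand-ins for backbone features and proprioceptive labels are assumptions for illustration, not the authors' exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, N_BINS = 256, 20
MU_MIN, MU_MAX = 0.25, 3.0                                     # assumed friction range for binning
bin_edges = torch.linspace(MU_MIN, MU_MAX, N_BINS + 1)[1:-1]   # 19 interior bin boundaries

head = nn.Linear(FEAT_DIM, N_BINS)                             # single linear layer on frozen features
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

def train_step(patch_features, friction_labels):
    """patch_features: (N, FEAT_DIM) frozen backbone features at traversed patches.
    friction_labels: (N,) proprioceptive friction estimates e_hat at those patches."""
    targets = torch.bucketize(friction_labels, bin_edges)      # discretize into 20 bins
    logits = head(patch_features)
    loss = F.cross_entropy(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def predict_friction(patch_features):
    """Expected friction per patch from the bin posterior (works at any patch, traversed or not)."""
    probs = head(patch_features).softmax(dim=-1)               # (N, N_BINS)
    bin_centers = MU_MIN + (torch.arange(N_BINS) + 0.5) * (MU_MAX - MU_MIN) / N_BINS
    return probs @ bin_centers                                  # (N,)

# Toy usage with random tensors standing in for backbone features and labels.
feats = torch.randn(64, FEAT_DIM)
labels = torch.rand(64) * (MU_MAX - MU_MIN) + MU_MIN
print(train_step(feats, labels), predict_friction(feats[:2]))
```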
Such cross-camera evaluation matters because it allows the perception module to remain useful when deployed on a new robot or evaluated from a new viewpoint.

One possible scenario for carrying ground objects across a long distance is that of a drone-quadruped team. In these cases, we can directly evaluate our grounded vision module in overhead images to obtain a pixel-wise friction mask. Then, considering the robot's operating mode, we compute the cost associated with each pixel using the corresponding cost function determined from simulation (Section 2.3). Given this overhead cost map, we use the A* search algorithm [22] to compute the minimum-cost traversal path for the current operating state.

2.5 System Setup

Robot: We use the Unitree Go1 robot, a 12-motor quadruped that stands 40 cm tall. It has an NVIDIA Jetson Xavier NX processor, which runs the control policy and the vision module. For payload dragging experiments, the robot's body is connected to an empty suitcase using a rope.

360 Camera: We use an Insta360 X3 action camera mounted on the robot to collect images for training the perception module. This camera provides a 360° field of view. Before the image data is used for training, we use the Insta360 app to perform image stabilization, which takes about two minutes for data collected from a ten-minute run.

Training Compute: We perform policy training, video postprocessing, and vision model training on a desktop computer equipped with an NVIDIA RTX 2080 GPU.

Drone Camera: For planning from overhead images, we record terrain videos using a DJI Mini 3, a consumer camera drone.

[Figure 4 plots: (a) performance and estimate quality during training; (b) distribution of friction estimates at convergence, showing densities of predicted vs. ground-truth friction and of the friction estimation error for passive and active estimation.]
Figure 4: Learning active estimation. Active Sensing Motor Policies (Active-SE) automatically learn motor skills (e.g., dragging the feet) that improve observability of the environment properties.

3 Results

3.1 Interaction among Estimation, Adaptation, and Performance

Observing supervised internal state estimates improves proprioceptive locomotion. Affirming the results of Ji et al. [4], we train a state estimation network using supervised learning to predict privileged information (the ground friction coefficient and terrain roughness parameter) from the history of sensory observations. When the policy is allowed to observe the output of this state estimation network (Passive-SE), the policy training is more stable and results in a more performant final policy than when the state estimate is not observed (No-SE) (Figure 4).

Observing passive state estimates can degrade the state observability. We analyze the error distribution of the learned state estimator in Passive-SE and No-SE policies (Figure 4). It may be surprising that the friction estimation error of the more-performant Passive-SE policy is worse than that of the less-performant No-SE policy. We suggest a mechanistic explanation for this behavior: supposing some irreducible sensor noise, two terrains of different frictions will only be distinguishable if they make the robot slip in sufficiently different ways.
However, a control policy with a better adaptive facility is more likely to avoid slipping across a wide range of ground frictions. Therefore, in the more adaptive policy, slip will occur less intensely, and as a result, the observability of the ground friction coefficient will degrade.

Our method, ASMP, produces the best privileged state observability. We train an active sensing motor policy (Active-SE) to intentionally measure the friction as described in Section 2.1. (The full reward function for each policy we trained is provided in the appendix.) We find that the Active-SE policy provides the most accurate friction estimates among the three architectures (Figure 4). Therefore, as we will further show, it is the superior policy for supervising a task-agnostic physical grounding for vision.

3.2 Learning to See Friction

Evaluation in Simulated Environment. We collect five minutes of simulated data on four terrains: ice, gravel, brick, and grass, assigning them arbitrary friction coefficients of μ = {0.25, 1.17, 2.08, 3.0}, respectively. Figure 10 compares the resulting visual perception module learned from the policies performing passive vs. active estimation. Qualitatively, the vision module learned from passive data learns to see ice but fails to distinguish between higher-friction terrains (gravel, brick, and grass). This makes sense, as Figure 3 shows that frictions in this range have less influence on the performance of locomotion. In contrast, the vision module trained on data from our Active Sensing Motor Policy correctly learns to distinguish all four terrains. Quantitatively, ASMP results in lower dense prediction loss on images from a held-out test trajectory (Figure 10, Appendix).

Real-world Training. We collect 15 minutes of real-world traversal data spanning diverse terrains: grass, gravel, dirt, and pavement. Following the procedure in Section 2.2, we project the traversed points into the corresponding camera images and train a linear head on top of a convolutional backbone trained for segmentation [20] to predict the terrain friction estimate for each traversed patch. To evaluate estimation performance in the real world, we manually label image segments in a subset of train and test images containing grass, pavement, dirt, or gravel and compute the distribution of proprioceptive and visual friction predictions for each (Figure 5). To obtain a ground-truth friction value for comparison, we use a dynamometer to measure the weight of a payload made of robot foot material and its drag force across each terrain. The proprioceptive estimates from ASMP are much closer to the dynamometer measurements than the estimates from the passive baseline.

[Figure 5 plot: friction coefficient μ (1–3) for grass, pavement, dirt, and gravel under Measured, ASMP (Ours), Passive (Baseline), Vision (Train), and Vision (Test).]
Figure 5: Real-world friction sensing performance with proprioception and vision. Measured values are obtained directly with a dynamometer. The predictions from our proposed ASMP (Ours) agree more strongly with the dynamometer measurements than the baseline Passive (Baseline). Vision (Train) shows the generalization of visual prediction to un-traversed patches in the training images from the onboard camera; Vision (Test) shows the generalization to unseen patches and viewpoints by evaluating on drone footage. We use manual segmentation maps (Appendix Figure 9) to match pixel predictions to terrains. Error bars indicate one standard deviation.
The ASMP estimates do not match the dynamometer measurements perfectly, suggesting a small but measurable sim-to-real gap in the robot dynamics or terrain modeling. They do, however, agree with the dynamometer measurements on the ordering of terrains from most to least slippery. The grounded vision module is close to the distribution of proprioceptive estimates for both train and test images, with increased variance in test images.

3.3 Integrated Planning

Cost Function Evaluation. We define a cost metric for the locomotion policy from [19] as the distance traveled per second when commanded with a speed of 1.0 m/s. We evaluate this metric in simulation by averaging the performance of 50 agents simulated in parallel for 20 s on terrains with friction coefficients ranging from a lower limit of μ = 0.25 to an upper limit of μ = 3.0. This procedure is performed once with the robot in nominal locomotion and again with the robot dragging a 1.0 kg payload. Figure 3 shows the measured result; both tasks yield poor performance on extremely slippery terrain, but on higher-friction terrains, the robot dragging a payload slows down while the free-moving robot adapts to maintain velocity. Knowledge of the ground's physical properties motivates a difference in high-level navigation decisions between the two tasks.

Path Planning and Execution. We plan paths for locomotion and payload dragging and execute them via teleoperation to evaluate whether the predicted preferences hold true in the real world. We fly a drone over the same environment where the vision model was trained and choose a bird's-eye-view image that includes grass and pavement. We estimate the friction of each pixel, and from this we compute the associated cost for locomotion and payload dragging. Then we use A* search to compute optimal paths. The optimized paths and traversal result are shown in Figure 6. In agreement with the planning result, it is preferred to remain on the sidewalk while dragging the payload and to cut directly across the grass when in free locomotion.

Figure 6: Path planning in overhead images. (a) We use the learned vision module to plan navigation in overhead images of terrain. (b) The vision module is only trained using first-person views from the robot but can infer the terrain friction with a different camera model and viewing pose. (c) We teleoperate the robot across both planned paths in both locomotion modes. The preference among paths in the real world matches the planning result from our pipeline.

Operating Mode     Metric     Cross Grass   Stay on Sidewalk
Dragging Payload   Time (s)   48 ± 1        45 ± 1
Locomotion         Time (s)   23 ± 1        26 ± 0

4 Related Work

Self-supervised traversability estimation has been studied previously for the navigation of wheeled and legged robots. Some works have focused on the direct estimation of a traversability metric, a scalar value quantifying the cost of traversing a particular terrain [16, 17, 23]. These approaches are specialized to the robot's traversal capability at the time of data collection, implying that a change in the policy or task may necessitate repeated data collection to train a new vision module.

Other works have demonstrated self-supervised terrain segmentation from proprioceptive data [15, 24, 25]. Wu et al. [24] demonstrated that proprioceptive data from a C-shaped leg equipped with tactile sensors may be sufficient to classify different terrains. Wellhausen et al.
[15] took supervision from the dominant features of a six-axis force-torque foot sensor during traversal and trained a model to densely predict a "ground reaction score" from color images to be used for planning. Łysakowski et al. [25] also demonstrated that terrain classification from proprioceptive readings could be performed in an unsupervised manner on a full-scale quadruped and showed that this information could be used as an additional signal to improve localization. Our work differs from these in that (1) we do not use any dedicated sensor in the foot but predict the terrain properties using only standard sensors of the robot's ego-motion, and (2) thanks to our Active Sensing Motor Policies, we can directly predict the terrain properties instead of a proxy score, which allows us to compute the cost function in simulation for multiple scenarios as in Section 2.3.

Another possibility is directly predicting which locomotion skill to execute from visual information [18, 26]. Loquercio et al. [26] learned to predict the future latent state of the policy [2] from a front-facing camera image to improve low-level control performance in stair climbing. An advantage of their approach is that it does not require the choice of an explicit terrain parameterization, but this comes at the cost that its visual representation is specialized to the latent of a single motor policy, so it cannot be reused for new policies or operating states, and predicting the next latent is only meaningful for egocentric images, so it cannot be used for novel viewpoints, as in drone-quadruped teaming or planning from satellite imagery. Yang et al. [18] trained a semantic visual perception module for legged quadrupeds using human demonstrations. The resulting system imitated an operator's response to different terrains, controlling velocity and gait. This relies on a human operator to predict the terrain properties during the demonstration. Other work has learned general navigation through supervised learning on diverse robotic platforms, including legged robots [27, 28, 29]. These works train an omni-policy for all robots and environments, enabling interesting zero-shot generalization but not explicitly adjusting for embodiment, operating state, or camera viewpoint variation.

Figure 7: ASMP for multiple physical parameters. Friction and roughness estimates are improved by ASMP, even when both parameters are jointly targeted. We report estimation loss for passive estimation (None (Passive)), active estimation of each parameter separately (Friction, Roughness), and active estimation for both parameters in a single policy (Joint Fr.+Ro.). Variation in torque reflects that a change in motor strategy enabled the improved estimation.

Estimation Mode   Friction Loss   Roughness Loss   Torque Penalty
Passive           1.00            1.00             −0.34
Friction          0.47            1.06             −0.87
Roughness         0.99            0.72             −0.84
Joint Fr.+Ro.     0.49            0.80             −1.18

Several works on wheeled robots visually estimate the geometry or contact properties of the terrain through self-supervision or hand-designed criteria and then compute the traversal cost from these metrics [30, 31, 32, 33, 34, 35, 36, 37, 38]. Compared to legged robots, wheeled robots have a limited variety of traversal strategies. Consequently, the question of selecting a locomotion controller to gather the most informative self-supervision data has not been directly addressed. Active perception, in which a robot agent acts to increase environmental observability, suggests a solution.
This approach has been applied to vision systems [39, 40, 41] , and more re-cently has been extended to include physical interaction [42, 43, 44, 45]. This inspired our approachto the controller selection issue in labeling vision with proprioception for legged robots.5 Discussion and LimitationsOur work assumes a mapping between the estimated terrain properties and the robot’s performance.Friction affects the slip of the robot’s feet against the ground and the drag force of payloads andother objects, so it is an interesting source of performance variation for practical locomotion tasks.To account for other parameters besides friction that vary in the environment, our framework canpotentially be extended to include them. For example, Figure 7 shows that ASMP successfullyenhances the estimation of a terrain roughness parameter in addition to friction.In general, ASMP may be applied under two conditions: (1) a history of proprioceptive readingsis sufficient to infer the parameter of interest, and (2) the parameter of interest can be effectivelysimulated. If these conditions are not met, a different technique besides ASMP may be necessaryto collect training data. Additionally, to train our vision module, we assume terrains with differentproperties are visually different. If some parameters do not have an impact on the terrain’s visualappearance, it may be impossible to learn a vision module of the form we propose for those parame-ters. If the mapping varies quickly over time, future work could explore representing uncertainty orperforming fast online adaptation of the estimates to the current environment based on new proprio-ceptive information. Finally, our pipeline does not yet account for terrain geometry and occlusions.In the future, ASMP may also be useful to complement data from other sensors like LiDAR toachieve the most detailed perceptual representation that includes geometric information.6 ConclusionProprioceptive self-supervision is a promising data source for robots to learn about the relationshipbetween vision and physics. In this work, we exposed that the quality of proprioceptive supervisioncan be strongly influenced by the style of the motor policy acquired through reinforcement learning.We proposed a novel technique, Active Sensing Motor Policies, and show that it improves the pro-prioceptive estimation quality and the corresponding performance of a grounded vision module thatis reusable for new sensor configurations and physical tasks.8AcknowledgmentsWe thank the members of the Improbable AI lab for the helpful discussions and feedback on thepaper. We are grateful to MIT Supercloud and the Lincoln Laboratory Supercomputing Center forproviding HPC resources. This research was supported by the DARPA Machine Common SenseProgram, the MIT-IBM Watson AI Lab, and the National Science Foundation under CooperativeAgreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Inter-actions, http://iaifi.org/). We acknowledge support from ONR MURI under grant number N00014-22-1-2740. This research was also sponsored by the United States Air Force Research Laboratoryand the United States Air Force Artificial Intelligence Accelerator and was accomplished underCooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in thisdocument are those of the authors and should not be interpreted as representing the official policies,either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. 
Gov-ernment is authorized to reproduce and distribute reprints for Government purposes, notwithstandingany copyright notation herein.Author Contributions•Gabriel B. Margolis ideated, implemented, and evaluated Active Sensing Motor Policies andshared ideation and implementation of the vision module and overall experimental design.•Xiang Fu shared ideation and implementation of vision module and overall experimental design.•Yandong Ji contributed ideas and supported infrastructure development during the project.•Pulkit Agrawal advised the project and contributed to its conceptual development, experimentaldesign, positioning, and writing.References[1] J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning quadrupedal locomo-tion over challenging terrain. Science robotics , 5(47):eabc5986, 2020.[2] A. Kumar, Z. Fu, D. Pathak, and J. Malik. RMA: Rapid motor adaptation for legged robots.Robotics: Science and Systems , 2021.[3] G. B. Margolis, G. Yang, K. Paigwar, T. Chen, and P. Agrawal. Rapid locomotion via rein-forcement learning. Robotics: Science and Systems , 2022.[4] G. Ji, J. Mun, H. Kim, and J. Hwangbo. Concurrent training of a control policy and a stateestimator for dynamic and robust legged locomotion. IEEE Robotics and Automation Letters ,7(2):4630–4637, 2022.[5] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning robust per-ceptive locomotion for quadrupedal robots in the wild. Science Robotics , 7(62):eabk2822,2022.[6] Y . Ji, G. B. Margolis, and P. Agrawal. Dribblebot: Dynamic legged manipulation in the wild.arXiv preprint arXiv:2304.01159 , 2023.[7] A. Agarwal, A. Kumar, J. Malik, and D. Pathak. Legged locomotion in challenging terrainsusing egocentric vision. In Conference on Robot Learning , pages 403–415. PMLR, 2023.[8] S. Choi, G. Ji, J. Park, H. Kim, J. Mun, J. H. Lee, and J. Hwangbo. Learning quadrupedallocomotion on deformable terrain. Science Robotics , 8(74):eade2256, 2023.[9] D. Hoeller, L. Wellhausen, F. Farshidian, and M. Hutter. Learning a state representation andnavigation in cluttered and dynamic environments. IEEE Robotics and Automation Letters , 6(3):5081–5088, 2021.9[10] G. B. Margolis, T. Chen, K. Paigwar, X. Fu, D. Kim, S. Kim, and P. Agrawal. Learning tojump from pixels. Conference on Robot Learning , 2021.[11] R. Yang, M. Zhang, N. Hansen, H. Xu, and X. Wang. Learning vision-guided quadrupedal lo-comotion end-to-end with cross-modal transformers. arXiv preprint arXiv:2107.03996 , 2021.[12] I. M. A. Nahrendra, B. Yu, and H. Myung. Dreamwaq: Learning robust quadrupedal lo-comotion with implicit terrain imagination via deep reinforcement learning. In 2023 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 5078–5084. IEEE, 2023.[13] S. Kareer, N. Yokoyama, D. Batra, S. Ha, and J. Truong. Vinl: Visual navigation and loco-motion over obstacles. In 2023 IEEE International Conference on Robotics and Automation(ICRA) , pages 2018–2024. IEEE, 2023.[14] J. Truong, A. Zitkovich, S. Chernova, D. Batra, T. Zhang, J. Tan, and W. Yu. Indoorsim-to-outdoorreal: Learning to navigate outdoors without any outdoor experience. arXiv preprintarXiv:2305.01098 , 2023.[15] L. Wellhausen, A. Dosovitskiy, R. Ranftl, K. Walas, C. Cadena, and M. Hutter. Where shouldi walk? predicting terrain properties from images via self-supervised learning. IEEE Roboticsand Automation Letters , 4(2):1509–1516, 2019.[16] M. G. Castro, S. Triest, W. Wang, J. M. Gregory, F. Sanchez, J. G. Rogers III, and S. Scherer.How does it feel? 
self-supervised costmap learning for off-road vehicle traversability. arXivpreprint arXiv:2209.10788 , 2022.[17] J. Frey, M. Mattamala, N. Chebrolu, C. Cadena, M. Fallon, and M. Hutter. Fast traversabilityestimation for wild visual navigation. Robotics: Science and Systems , 2023.[18] Y . Yang, X. Meng, W. Yu, T. Zhang, J. Tan, and B. Boots. Learning semantics-aware locomo-tion skills from human demonstration. In Conference on Robot Learning , pages 2205–2214.PMLR, 2023.[19] G. B. Margolis and P. Agrawal. Walk these ways: Tuning robot control for generalization withmultiplicity of behavior. In Conference on Robot Learning , pages 22–31. PMLR, 2023.[20] Q. Yu, J. He, X. Deng, X. Shen, and L.-C. Chen. Convolutions die hard: Open-vocabularysegmentation with single frozen convolutional clip. arXiv preprint arXiv:2308.02487 , 2023.[21] M. Oquab, T. Darcet, T. Moutakanni, H. V o, M. Szafraniec, V . Khalidov, P. Fernandez, D. Haz-iza, F. Massa, A. El-Nouby, et al. Dinov2: Learning robust visual features without supervision.arXiv preprint arXiv:2304.07193 , 2023.[22] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination ofminimum cost paths. IEEE transactions on Systems Science and Cybernetics , 4(2):100–107,1968.[23] Z. Fu, A. Kumar, A. Agarwal, H. Qi, J. Malik, and D. Pathak. Coupling vision and propriocep-tion for navigation of legged robots. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 17273–17283, 2022.[24] X. A. Wu, T. M. Huh, R. Mukherjee, and M. Cutkosky. Integrated ground reaction forcesensing and terrain classification for small legged robots. IEEE Robotics and AutomationLetters , 1(2):1125–1132, 2016.[25] M. Łysakowski, M. R. Nowicki, R. Buchanan, M. Camurri, M. Fallon, and K. Walas. Un-supervised learning of terrain representations for haptic monte carlo localization. In 2022International Conference on Robotics and Automation (ICRA) , pages 4642–4648. IEEE, 2022.10[26] A. Loquercio, A. Kumar, and J. Malik. Learning visual locomotion with cross-modal supervi-sion. arXiv preprint arXiv:2211.03785 , 2022.[27] S. Levine and D. Shah. Learning robotic navigation from experience: principles, methods andrecent results. Philosophical Transactions of the Royal Society B , 378(1869):20210447, 2023.[28] D. Shah, A. Sridhar, A. Bhorkar, N. Hirose, and S. Levine. Gnm: A general navigationmodel to drive any robot. In 2023 IEEE International Conference on Robotics and Automation(ICRA) , pages 7226–7233. IEEE, 2023.[29] D. Shah, A. Sridhar, N. Dashora, K. Stachowicz, K. Black, N. Hirose, and S. Levine. Vint: Afoundation model for visual navigation. arXiv preprint arXiv:2306.14846 , 2023.[30] R. Hadsell, P. Sermanet, J. Ben, A. Erkan, J. Han, B. Flepp, U. Muller, and Y . LeCun. Onlinelearning for offroad robots: Using spatial label propagation to learn long-range traversability.InProc. of Robotics: Science and Systems (RSS) , volume 11, page 32. Citeseer, 2007.[31] R. Hadsell, P. Sermanet, J. Ben, A. Erkan, M. Scoffier, K. Kavukcuoglu, U. Muller, and Y . Le-Cun. Learning long-range vision for autonomous off-road driving. Journal of Field Robotics ,26(2):120–144, 2009.[32] D. Stavens and S. Thrun. A self-supervised terrain roughness estimator for off-road au-tonomous driving. arXiv preprint arXiv:1206.6872 , 2012.[33] S. Palazzo, D. C. Guastella, L. Cantelli, P. Spadaro, F. Rundo, G. Muscato, D. Giordano, andC. Spampinato. 
Domain adaptation for outdoor robot traversability estimation from rgb datawith safety-preserving loss. In 2020 IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , pages 10014–10021. IEEE, 2020.[34] H. Lee and W. Chung. A self-training approach-based traversability analysis for mobile robotsin urban environments. In 2021 IEEE International Conference on Robotics and Automation(ICRA) , pages 3389–3394. IEEE, 2021.[35] X. Xiao, J. Biswas, and P. Stone. Learning inverse kinodynamics for accurate high-speed off-road navigation on unstructured terrain. IEEE Robotics and Automation Letters , 6(3):6054–6060, 2021.[36] A. Shaban, X. Meng, J. Lee, B. Boots, and D. Fox. Semantic terrain classification for off-roadautonomous driving. In Conference on Robot Learning , pages 619–629. PMLR, 2022.[37] A. J. Sathyamoorthy, K. Weerakoon, T. Guan, J. Liang, and D. Manocha. Terrapn: Unstruc-tured terrain navigation using online self-supervised learning. In 2022 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 7197–7204. IEEE, 2022.[38] X. Meng, N. Hatch, A. Lambert, A. Li, N. Wagener, M. Schmittle, J. Lee, W. Yuan, Z. Chen,S. Deng, et al. Terrainnet: Visual modeling of complex terrain for high-speed, off-road navi-gation. arXiv preprint arXiv:2303.15771 , 2023.[39] R. Bajcsy. Active perception. Proceedings of the IEEE , 76(8):966–1005, 1988.[40] D. Jayaraman and K. Grauman. Look-ahead before you leap: end-to-end active recognition byforecasting the effect of motion. In Computer Vision–ECCV 2016: 14th European Conference,Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V 14 , pages 489–505.Springer, 2016.[41] S. K. Ramakrishnan, D. Jayaraman, and K. Grauman. Emergence of exploratory look-aroundbehaviors through active observation completion. Science Robotics , 4(30):eaaw6326, 2019.11[42] J. Bohg, K. Hausman, B. Sankaran, O. Brock, D. Kragic, S. Schaal, and G. S. Sukhatme. Inter-active perception: Leveraging action in perception and perception in action. IEEE Transactionson Robotics , 33(6):1273–1291, 2017.[43] H. Van Hoof, O. Kroemer, H. B. Amor, and J. Peters. Maximally informative interactionlearning for scene exploration. In 2012 IEEE/RSJ International Conference on IntelligentRobots and Systems , pages 5152–5158. IEEE, 2012.[44] V . Chu, I. McMahon, L. Riano, C. G. McDonald, Q. He, J. M. Perez-Tejada, M. Arrigo,T. Darrell, and K. J. Kuchenbecker. Robotic learning of haptic adjectives through physicalinteraction. Robotics and Autonomous Systems , 63:279–292, 2015.[45] D. Pathak, P. Mahmoudieh, G. Luo, P. Agrawal, D. Chen, Y . Shentu, E. Shelhamer, J. Malik,A. A. Efros, and T. Darrell. Zero-shot visual imitation. In Proceedings of the IEEE conferenceon computer vision and pattern recognition workshops , pages 2050–2053, 2018.12 |
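As a worked illustration of the planning pipeline in Sections 2.3–2.4 of the paper above (per-pixel friction estimate, then a cost map, then a minimum-cost path), here is a compact sketch; the friction-to-velocity table, the 4-connected grid, and the A* details are illustrative assumptions rather than the paper's measured values or released code.

```python
import heapq
import numpy as np

# Hypothetical simulation measurements for one operating mode (e.g. payload dragging):
# mean realized velocity (m/s) vs. terrain friction. Cost per meter = traversal time = 1 / v.
FRICTION_GRID = np.array([0.25, 1.0, 2.0, 3.0])
MEAN_VELOCITY = np.array([0.30, 0.95, 0.80, 0.65])

def cost_map_from_friction(friction_img):
    """Convert a per-pixel friction image into a per-pixel cost (seconds per meter)."""
    v = np.interp(friction_img, FRICTION_GRID, MEAN_VELOCITY)
    return 1.0 / np.clip(v, 1e-3, None)

def astar(cost, start, goal):
    """4-connected A* on a cost map; moving into a cell costs that cell's value."""
    h = lambda p: (abs(p[0] - goal[0]) + abs(p[1] - goal[1])) * cost.min()  # admissible heuristic
    counter = 0
    open_set = [(h(start), counter, 0.0, start, None)]
    came_from, closed = {}, set()
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in closed:
            continue
        came_from[node] = parent
        if node == goal:                          # reconstruct the path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        closed.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < cost.shape[0] and 0 <= c < cost.shape[1] and (r, c) not in closed:
                g2 = g + cost[r, c]
                counter += 1
                heapq.heappush(open_set, (g2 + h((r, c)), counter, g2, (r, c), node))
    return None

# Toy overhead friction image: a low-friction band crossing otherwise high-friction ground.
friction = np.full((20, 20), 2.5)
friction[:, 8:12] = 0.4
path = astar(cost_map_from_friction(friction), start=(0, 0), goal=(19, 19))
print(len(path), path[:3], path[-3:])
```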
fSmkKmWM5Ry | Stochastic Occupancy Grid Map Prediction inDynamic ScenesZhanteng XieDepartment of Mechanical EngineeringTemple University United Stateszhanteng.xie@temple.eduPhilip DamesDepartment of Mechanical EngineeringTemple University United Statespdames@temple.eduAbstract: This paper presents two variations of a novel stochastic prediction algo-rithm that enables mobile robots to accurately and robustly predict the future stateof complex dynamic scenes. The proposed algorithm uses a variational autoencoderto predict a range of possible future states of the environment. The algorithm takesfull advantage of the motion of the robot itself, the motion of dynamic objects, andthe geometry of static objects in the scene to improve prediction accuracy. Threesimulated and real-world datasets collected by different robot models are usedto demonstrate that the proposed algorithm is able to achieve more accurate androbust prediction performance than other prediction algorithms. Furthermore, apredictive uncertainty-aware planner is proposed to demonstrate the effectivenessof the proposed predictor in simulation and real-world navigation experiments. Im-plementations are open source at https://github.com/TempleRAIL/SOGMP .Keywords: Environment Prediction, Probabilistic Inference, Robot Learning1 IntroductionAutonomous mobile robots are beginning to enter people’s lives and try to help us provide differentlast-mile delivery services, such as moving goods in warehouses or hospitals and assisting groceryshoppers [ 1–3]. To realize this vision, robots are required to safely and efficiently navigate throughcomplex and dynamic environments that are full of people and/or other robots. The first prerequisitefor robots to navigate and perform tasks is to use their sensors to perceive the surrounding environment.This work focuses on the next step, which is to accurately and reliably predict how the surroundingenvironment will change based on these sensor data. This will allow mobile robots to proactively actto avoid potential future collisions, a key part of autonomous robot navigation.Environment prediction remains an open problem as the future state of the environment is unknown,complex, and stochastic. Many interesting works have focused on this prediction problem. Traditionalobject detection and tracking methods [ 4,5] use multi-stage procedures, hand-designed features,and explicitly detect and track objects. More recently, deep learning (DL)-based methods that aredetection and tracking-free have been able to obtain more accurate predictions [ 6–11]. Occupancygrid maps (OGMs), the most widely successful spatial representation in robotics, are the mostcommon environment representation in these DL-based methods. This transforms the complexenvironment prediction problem into an OGM prediction problem, outlined in Figure 1. Since OGMscan be treated as images (both are 2D arrays of data), the multi-step OGM prediction problem can bethought of as a video prediction task, a well-studied problem in machine learning.The most common technique for OGM prediction uses recurrent neural networks (RNNs), whichare widely used in video prediction [ 12–14]. For example, Ondruska and Posner [6]first propose anRNN-based deep tracking framework to directly track and predict unoccluded OGM states from rawsensor data. Itkina et al. [7]directly adapt PredNet [ 13] to predict the dynamic OGMs (DOGMas) inurban scenes. Furthermore, Toyungyernsub et al. 
[8] decouple the static and dynamic OGMs and propose a double-prong PredNet to predict occupancy states of the environment. Schreiber et al. [9, 10] embed the ConvLSTM units in the U-Net to capture spatiotemporal information of DOGMas and predict them in the stationary vehicle setting. Lange et al. [11] propose two attention-augmented ConvLSTM networks to capture long-range dependencies and predict future OGMs in the moving vehicle setting.

[Figure 1 graphic: a history of the local environment is converted into history occupancy grid maps (axes in meters, X/m and Y/m; binary occupancy values); OGM prediction maps the history OGMs to a predicted OGM, and mapping relates the predicted OGM to the future local environment.]
Figure 1: A simple illustration of the OGM prediction problem.

However, these image-based works only focus on improving network architectures and just treat the OGMs as images, assuming their network architectures can implicitly capture useful information from the kinematics and dynamics behind the environment with sufficiently good data.

There are other DL-based approaches that explicitly exploit the ego-motion and motion flow of the environment to improve OGM prediction accuracy. By using input placement and recurrent state shifting to compensate for ego-motion, Schreiber et al. [15] extend their previous image-based works [9, 10] to predict DOGMas in moving vehicle scenarios. By extending the deep tracking framework [6] with a spatial transformer module to compensate for ego-motion, Dequaire et al. [16] propose a gated recurrent unit (GRU)-based network to predict future states in moving vehicle settings. Song et al. [17] propose a GRU-based LiDAR-FlowNet to estimate the forward/backward motion flow between two OGMs and predict future OGMs. Thomas et al. [18] directly encode spatiotemporal information into the world coordinate frame and propose a 3D-2D feedforward architecture to predict futures. By considering the ego-motion and motion flow together, Mohajerin and Rohani [19] first use a geometric image transformation to compensate for ego-motion, and then propose a ConvLSTM-based difference learning architecture to extract the motion difference between consecutive OGMs. However, most of these motion-based works are designed for autonomous vehicles and cannot be directly deployed on mobile robots with limited resources. Furthermore, all the above-described works assume that the environmental state is deterministic and cannot estimate the uncertainty of future states, which we believe is key to helping robots robustly navigate in dynamic environments.

In this paper, we propose two versions of a variational autoencoder (VAE)-based stochastic OGM predictor for resource-limited mobile robots, namely SOGMP and SOGMP++, both of which predict a distribution of possible future states of dynamic scenes. The primary contribution of our approach is that we fully exploit the kinematics and dynamics of the robot and its surrounding objects to improve prediction performance. Specifically, we first develop a simple and effective ego-motion compensation mechanism for robot motion, then utilize a ConvLSTM unit to extract the spatiotemporal motion information of dynamic objects, and finally generate a local environment map to interpret static objects.
Another key contribution is that by relaxing the deterministic environment assumption, ourproposed approaches are able to provide uncertainty estimates and predict a distribution over futurestates of the environment with the help of variational inference techniques. We demonstrate theeffectiveness of our approaches by using both computer vision metrics and multi-target trackingmetrics on three simulated and real-world datasets with different robot models, showing that ourproposed predictors have a smaller absolute error, higher structural similarity, higher tracking accuracythan other state-of-the-art approaches, and can provide robust and diverse uncertainty estimates forfuture OGMs. Note that while all other published works only evaluate the image quality performanceof the predicted OGMs, to the best of our knowledge, we are the first to employ a multi-object trackingmetric, optimal subpattern assignment metric (OSPA) [ 20], to more fully evaluate OGM predictionperformance in terms of tracking accuracy. In addition, we propose a predictive uncertainty-awareplanner by integrating the predicted and uncertain costmaps from our predictor, and demonstrate itssuperior navigation performance in dynamic simulated and real-world environments.2Ego-Motion Compensation k(•)Robot Motion: ......Lidar Measurements: Yt-τ:tMapper g(•)Static Objects: Dynamic Objects: ...Input OGMs: Ot-τ:t ConvLSTM h(•) Predicted OGM Samples: Ot+1 N(0, I) μσζ ~ N(0, I) Resampling: Z = μ + ζ σ Conv2d Block Conv2d Block Residual Block Residual Block Conv2d Block Encoder: q(Z | Ō t+1, m, ø) Decoder: p(O t+1 | Z, Ō t+1, m,θ) Conv2d Block Residual Block Residual Block Conv2d Conv2d Conv2d Block VAE Predictor: Pθ(Ot+1 | Ō t+1, m)m Ōt+1Local Environment Map: mRobot States: { Xt-τ:t , Ut-τ:t }c(•)Figure 2: System architecture of the SOGMP++ predictor (SOGMP omits the Static Objects block).2 Stochastic Occupancy Grip Map Predictor2.1 Problem FormulationIn a complex dynamic environment with both static and dynamic obstacles, the robot is equipped witha lidar sensor to sense its surroundings and uses the OGM to represent the environment. We assumeeach grid cell in the OGM is either occupied or free, i.e.,a binary OGM. We assume that the robot isable to obtain relatively accurate estimates of its own pose and velocities from its odometry sensorsor other localization algorithms over short periods of time (on the order of 1 s). We denote the poseand control velocity of the robot at time tbyxt= [xtytθt]Tandut= [vtwt]Trespectively. Letyt= [rtbt]Tdenote the lidar measurements (range rand bearing b) at time t, andotdenote the OGMat time t. Giving a history of τlidar measurements yt−τ:tand robot states {xt−τ:t,ut−τ:t}, therobot needs to predict the future state ( i.e.,OGM) of the environment ot+1. Then, this environmentprediction problem can be formulated as a prediction model:pθ(ot+1|yt−τ:t,xt−τ:t,ut−τ:t), (1)where θare the model parameters. The goal is to find the optimal θto maximize (1).Note that in this paper, we set τ= 10 , the sampling rate is 10 Hz, and limit the physical size ofOGMs to [0,6.4]m along the x axis (forward) and [−3.2,3.2]m along the y axis (left). We use a cellsize of 0.1 m, resulting in 64×64OGMs. These settings are consistent with other works on mobilerobot navigation [21]. All data o,u,yare represented in the local coordinate frame of the robot.2.2 System OverviewBefore describing our proposed SOGMP and SOGMP++ methods, we first briefly discuss image-based prediction methods. 
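For concreteness, the settings above translate into the following sizes and shapes (a minimal NumPy sketch; the variable names and the beam count are illustrative and not taken from the released code):

import numpy as np

# Grid and history settings stated in Sec. 2.1.
CELL_SIZE = 0.1                              # m per cell
X_RANGE, Y_RANGE = (0.0, 6.4), (-3.2, 3.2)   # forward / left extents, m
GRID = round((X_RANGE[1] - X_RANGE[0]) / CELL_SIZE)   # 64 cells per side
TAU = 10                                     # history length at 10 Hz (about 1 s)

# One (x, u, y) tuple feeding the prediction model (1):
x_hist = np.zeros((TAU + 1, 3))              # poses [x, y, theta] for t-tau ... t
u_hist = np.zeros((TAU + 1, 2))              # control velocities [v, w]
y_hist = np.zeros((TAU + 1, 1081, 2))        # lidar [range, bearing]; beam count is sensor-dependent

# Binary OGM stack o_{t-tau:t} derived from these scans:
o_hist = np.zeros((TAU + 1, GRID, GRID), dtype=np.uint8)

Image-based baselines consume only the OGM stack o_{t-tau:t} from such a tuple, which is formalized next.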
The prediction model (1) of image-based approaches is rewritten aspθ(ot+1|yt−τ:t,xt−τ:t,ut−τ:t) =fθ(ot−τ:t), ot−τ:t=c(yt−τ:t), (2)where fθ(·)is the neural network model, and c(·)is the conversion function to convert the lidarmeasurements to the binary OGMs in the robot’s local coordinate frame. From this image-basedmodel (2), we can easily see that image-based methods explicitly ignore the kinematics and dynamicsof the robot and surrounding objects ( i.e.,assuming they can be implicitly captured by very powerfulnetwork architectures and enough good data), and fail to provide a range of possible and reliableOGM predictions ( i.e.,assuming a deterministic future).Based on these limitations, we argue that: 1) the future state of the environment explicitly depends onthe motion of the robot itself, the motion of dynamic objects, and the state of static objects withinthe environment; 2) the future state of the environment is stochastic and unknown, and a range ofpossible future states helps provide robust predictions. With these two assumptions, we fully andexplicitly exploit the kinematic and dynamic information of these three different types of objects,utilize the V AE-based network to provide stochastic predictions, and finally propose two novelstochastic OGM predictors ( i.e.,SOGMP and SOGMP++, shown in Figure 2) to predict the futurestate of the environment. Since the only difference between the SOGMP and SOGMP++ is that the3SOGMP++ considers the static objects and the SOGMP does not, we mainly describe the SOGMP++model. The prediction model (1) of our SOGMP++, outlined in Figure 2, can be rewritten aspθ(ot+1|yt−τ:t,xt−τ:t,ut−τ:t) =pθ(ot+1|ˆ ot+1,m), (3a)ˆ ot+1=h(ot−τ:t),ot−τ:t=c(yRt−τ:t), (3b)m=g(yRt−τ:t), (3c)yRt−τ:t=k(yt−τ:t,xt−τ:t,ut−τ:t). (3d)The V AE predictor module corresponds to (3a). The dynamic object’s module corresponds to (3b),where h(·)is the time series data processing function for dynamic objects, c(·)is the conversionfunction like (2), and Rdenotes the local coordinate frame of the robot at predicted time t+n. Thestatic object’s module corresponds to (3c), where g(·)is the occupancy grid mapping function forstatic objects. Finally, the robot motion module corresponds to (3d), where k(·)is the transformationfunction for robot motion compensation. Note that this prediction model (3)only predicts futurestates at the next time step t+ 1and is used for training. To predict a multi-step future states at timet+n, we can easily utilize the autoregressive mechanism and feed the next-step prediction back ot+1to our SOGMP/SOGMP++ network (3a)forn−1time steps to predict the future states at time stept+n. Note that the prediction horizon ncould theoretically be any time step.2.3 Robot MotionTo account for the robot motion in dynamic scenarios, we propose a simple and effective ego-motioncompensation mechanism k(·)to mitigate its dynamic effects and allow the environment dynamics tobe consistent in the robot’s local coordinate frame. To predict the future OGM state in the robot’slocal future view, we consider the robot’s future ego-motion and transform the observed OGMsot−τ:tto the robot’s local coordinate frame at prediction time step t+n. So, our proposed ego-motion compensation mechanism can be divided into two steps: robot pose prediction and coordinatetransformation. To account for the robot’s future ego motion, we first use a constant velocity motionmodel to predict the robot’s future pose. 
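A minimal version of this pose-prediction step (the constant-velocity rollout written out as Eq. (6) in Appendix A.1, with the noise terms omitted) is sketched below; function and variable names are our own, not the released implementation.

import numpy as np

def predict_future_pose(x_t, u_t, n, dt=0.1):
    """Constant-velocity prediction of the robot pose n steps ahead.
    x_t = [x, y, theta], u_t = [v, w], dt = sampling interval (10 Hz)."""
    x, y, theta = x_t
    v, w = u_t
    return np.array([x + v * np.cos(theta) * n * dt,
                     y + v * np.sin(theta) * n * dt,
                     theta + w * n * dt])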
Then, we use a homogeneous transformation matrix totransform the robot poses xt−τ:tand lidar measurements yt−τ:tto the robot’ local future coordinateframe Rand compensate for its ego-motion (see Appendix A.1 and Appendix A.2 for details).2.4 Dynamic ObjectsSince we use the OGM to represent the environmental state, we first need to implement a conversionfunction c(·)to convert the compensated lidar measurements yRt−τ:tto the corresponding OGMsot−τ:tby using coordinate to subscript conversion equations (see Appendix A.3 for details).Tracking and predicting dynamic objects such as pedestrians is the hardest part of environmentalprediction in complex dynamic scenes. It requires some techniques to process a set of time seriesdata to capture the motion information. While the traditional particle-based methods [ 5] requireexplicitly tracking objects and treat each grid cell as an independent state, recent learning-basedmethods [ 7–11,15–17,19] prefer to use RNNs to directly process the observed time series OGMs.Based on these trends, we choose the most popular ConvLSTM unit to process the spatiotemporalOGM sequences ot−τ:t. However, while other works [ 8,14] explicitly decouple the dynamic andstatic/unknown objects and use different networks to process them separately, we argue that themotion of dynamic objects is related to their surroundings, and that explicit disentangling may losesome useful contextual information. For example, pedestrians walking through a narrow corridorare less likely to collide with or pass through surrounding walls. To exploit the useful contextualinformation between dynamic objects and their surrounding, we directly feed the observed OGMsot−τ:tinto a ConvLSTM unit h(·)and implicitly predict the future state ˆ ot+1of dynamic objects.2.5 Static ObjectsWhile predicting dynamic objects plays a key role in environmental prediction, paying extra attentionto static objects is also important to improve prediction accuracy. The main reason is that the4area occupied by static objects is much larger than that of dynamic objects, as shown in Figure 2,where static objects such as walls are consistently clustered together, while dynamic objects such aspedestrians are sparse and scattered point clusters. Another reason is that static objects contribute tothe scene geometry and give a global view of the surroundings, where these static objects maintaintheir shape and position over time. To account for them, we utilize a local static environment map mas a prediction for future static objects, which is a key contribution of our work. We generate this localenvironment map musing the standard inverse sensor model [ 22]. This Bayesian approach generatesa robust estimate of the local map, where dynamic objects are treated as noise data and removed overtime. We speed up this step by implementing a GPU-accelerated OGM mapping algorithm g(·)thatparallelizes the independent cell state update operations (see Appendix A.4 for details).2.6 VAE PredictorIf we treat the dynamic prediction function (3b) and the static prediction function (3c)as the predictionstep of Bayes filters, then the V AE predictor (3a) is the update step of Bayes filters. It fuses thepredicted features of static objects from (3c) and dynamic objects from (3b), and finally predictsa distribution over future states of the environment. A range of possible future environment statesallows the robot to capture the uncertainty of the environment and enable risk-aware operationalbehavior in complex dynamic scenarios. 
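With the components of (3) now in place, their composition can be summarized in a short structural sketch; k, c, g, h, and vae_sample stand for the ego-motion compensation, conversion, mapping, ConvLSTM, and VAE-sampling steps described above and are passed in as callables. This is an illustration of the factorization, not the released implementation.

def sogmp_pp_predict(y_hist, x_hist, u_hist, n_steps, k, c, g, h, vae_sample):
    """Structural sketch of Eq. (3) with an autoregressive multi-step rollout."""
    y_R = k(y_hist, x_hist, u_hist)      # Eq. (3d): compensate robot ego-motion
    o_hist = c(y_R)                      # Eq. (3b): lidar points -> list of binary OGMs
    m = g(y_R)                           # Eq. (3c): local static map (SOGMP++ only)
    preds = []
    for _ in range(n_steps):
        o_hat = h(o_hist)                # spatiotemporal features of dynamic objects
        o_next = vae_sample(o_hat, m)    # Eq. (3a): one sample of the next OGM
        preds.append(o_next)
        o_hist = o_hist[1:] + [o_next]   # drop oldest frame, append newest prediction
    return preds

The remaining piece is the VAE head p_theta(o_{t+1} | o_hat_{t+1}, m), detailed next.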
To represent the stochasticity of environment states, weassume that environment states ot−τ:t+nare generated by some unobserved, random, latent variableszthat follow a prior distribution pθ(z). Then, our V AE predictor model (3a) can be rewritten aspθ(ot+1|ˆ ot+1,m) =Zpθ(z)pθ(ot+1|z,ˆ ot+1,m)dz. (4)Since we are unable to directly optimize this marginal likelihood and obtain optimal parameters θ, weuse a V AE network to parameterize our prediction model pθ(ot+1|ˆ ot+1,m), outlined in Figure 2,where the inference network (encoder) parameterized by φrefers to the variational approximationqφ(z|ˆ ot+1,m), the generative network (decoder) parameterized by θrefers to the likelihoodpθ(ot+1|z,ˆ ot+1,m), and the standard Gaussian distribution N(0,1)refers to the prior pθ(z). Then,from work [ 23], we can simply maximize the evidence lower bound (ELBO) loss L(θ, φ;ot+1)tooptimize this marginal likelihood and get optimal θ:L(θ, φ;ot+1) =Eqφ(z|ˆ ot+1,m)[logpθ(ot+1|z,ˆ ot+1,m)]−KLqφ(z|ˆ ot+1,m)∥pθ(z).(5)The first term on the right-hand side (RHS) is the expected generative error, describing how well thefuture environment states can be generated from the latent variable z. The second RHS term is theKullback–Leibler (KL) divergence, describing how close the variational approximation is to the prior.Finally, we use mini-batching, Monte-Carlo estimation, and reparameterization tricks to calculatethe stochastic gradients of the ELBO (5)[23], and obtain the optimized model parameters φandθ.Using them, our V AE predictor can integrate the predicted features {ˆ ot+1,m}of dynamic and staticobjects, and output a probabilistic estimate of the future OGM states with uncertainty awareness.3 Experiments and ResultsTo demonstrate the prediction performance of our proposed approaches, we first test our algorithmson a simulated dataset and two public real-world sub-datasets from the socially compliant navigationdataset (SCAND) [ 24]. Second, we characterize the uncertainty of our proposed predictors acrossdifferent sample sizes and numbers of objects. Finally, we propose a predictive uncertainty-awareplanner by using the prediction and uncertainty information from our predictor to demonstrate how itimproves robot navigation performance in crowded dynamic simulated/real-world environments. Seealso Appendix B.1 for experimental evaluation of the run time of different prediction algorithms.3.1 Prediction ResultsDataset: To train our networks and baselines, we collected an OGM dataset, called the OGM-Turtlebot2 dataset, using the 3D human-robot interaction simulator with a 0.8 m/s Turtlebot2 robot [ 25,5(a) WMSE: OGM-Turtlebot2 (b) WMSE: OGM-Jackal (c) WMSE: OGM-Spot(d) SSIM: OGM-Turtlebot2 (e) SSIM: OGM-Jackal (f) SSIM: OGM-Spot(g) OSPA: OGM-Turtlebot2 (h) OSPA: OGM-Jackal (i) OSPA: OGM-SpotFigure 3: Average WMSE, average SSIM, and average OSPA of 10 different prediction time steps forall tested methods on 3 different datasets. Note that the shadows for SOGMP NEMC, SOGMP andSOGMP++ approaches are plotted with 95% confidence interval over 32 samples. For all curves forWMSE and OSPA, lower is better. For all curves for SSIM, higher is better.26]. We collected a total of 94,891 (x,u,y)tuples, where 17,000 tuples are used for testing. 
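Returning briefly to the training objective, the ELBO in (5) with the reparameterization trick reduces to a few lines of PyTorch. The sketch below assumes an encoder returning (mu, logvar) of q_phi(z | o_hat_{t+1}, m) and a decoder returning per-cell occupancy logits; these module names and interfaces are illustrative.

import torch
import torch.nn.functional as F

def negative_elbo(encoder, decoder, o_hat, m, o_next):
    """Loss for one batch; o_next is the ground-truth future OGM with values in [0, 1]."""
    mu, logvar = encoder(o_hat, m)                             # q_phi(z | o_hat, m)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterization trick
    logits = decoder(z, o_hat, m)                              # p_theta(o_{t+1} | z, o_hat, m)

    # Expected generative error: Bernoulli likelihood over grid cells.
    recon = F.binary_cross_entropy_with_logits(logits, o_next, reduction='sum')
    # KL( q_phi || N(0, I) ) in closed form for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / o_next.size(0)                       # minimizing this maximizes the ELBO

Training and the evaluations below use the OGM-Turtlebot2 tuples described above.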
Inaddition, to fairly evaluate all networks and examine performance in the real world, we extracted tworeal-world sub-datasets ( i.e.,(x,u,y)tuples) from the public SCAND [ 24] dataset: OGM-Jackaldataset (collected by a 2m/s Jackal robot) and OGM-Spot dataset (collected by a 1.6m/s Spot robot).These two real-world datasets are only used for testing. See Appendix A.5 for more details.Baselines: We compare our proposed SOGMP and SOGMP++ algorithms with four DL-basedbaselines: ConvLSTM [ 12], DeepTracking [ 6], PhyDNet [ 14], and an ablation baseline without theego-motion compensation module ( i.e.,SOGMP NEMC). Note that all networks were implementedby PyTorch framework [27] and trained using the self-collected OGM-Turtlebot2 dataset.3.1.1 Quantitative ResultsWe evaluate all test OGM predictors on the three datasets from above by evaluating the absoluteerror, structural similarity, and tracking accuracy (see Appendix A.6 for details). Figure 3 showsthe detailed results obtained from these tests, from which we can observe four phenomena. First,our proposed SOGMP predictors with ego-motion compensation achieve significantly better averageWMSE, SSIM, and OSPA than the SOGMP NEMC baseline without ego-motion compensation in alltest datasets, which illustrates the effectiveness of ego-motion compensation. Second, the averageWMSE of our proposed SOGMP predictors at different prediction time steps is lower than the otherimage-based baselines in all three test datasets collected by different robots. This shows that the6(a) Entropy vs. Number of samples (b) Entropy vs. Number of objectsFigure 4: Average entropy of our SOGMP and SOGMP++ predictors at 5th prediction time step.proposed SOGMP predictors utilizing kinematic and dynamic information are able to predict moreaccurate OGMs ( i.e.,smaller absolute error) than the state-of-art image-based approaches. Third,while our SOGMP predictors achieve the highest average SSIMs in all test datasets, it is interestingthat the average SSIM of our SOGMP++ with a local environment map is significantly higher thanthat of SOGMP without a local environment map in long-term predictions over multiple time steps.This indicates that the local environment map, which accounts for static objects, helps for longer-termprediction. Finally, the average OSPA errors of our proposed SOGMP predictors are significantlylower than that of the other four baselines, especially in longer prediction time steps. This furtherdemonstrates the preferential performance of our proposed motion-based methods in tracking orpredicting environmental states ( i.e.,localization and cardinality) over other image-based methods.However, the average OSPA errors of our proposed SOGMP and SOGMP++ predictors are almostthe same. We believe that this is because static objects are more persistent than dynamic objects, sothe static map does not provide significant benefits here. See also Appendix B.2 for examples ofpredicted OGMs and qualitative analysis.3.1.2 Uncertainty CharacterizationOne of the biggest differences of our proposed methods compared to other previous baseline worksis that they can provide uncertainty estimates. To demonstrate the diversity and consistency ofuncertainty estimates, we run two experiments to characterize and analyze the output distribution ofour SOGMP/SOGMP++ predictors. 
One is to show how the entropy of the final probabilistic OGMchanges as the number of OGM samples increases, with the hypothesis that it will level off at somevalue well below that of the entropy of a uniform distribution. The other is to show how the entropyof the final probabilistic OGM changes as the number of objects in the OGM increases, with thehypothesis that it will increase with the number of objects in it. Figure 4 shows the entropy results ofthese experiments on the OGM-Turtlebot2 dataset, which validates our hypotheses and demonstratesthe consistency of our V AE-based predictors. See Appendix A.7 for experimental details.3.2 Navigation ResultsWe next test the applicability of our proposed methods to practical robotics applications. Figure 5shows our planner, where we add two new costmaps to the standard ROS [ 28] navigation stack usingthe outputs of our SOGMP predictor: one for the predicted scene and one for the uncertainty of thescene. See Appendix A.8 for additional details about the experimental setup.3.2.1 Simulation ResultsUsing the Turtlebot2 robot in a simulated lobby environment with different crowd densities, asin [25,26], we compare our proposed predictive uncertainty-aware planner ( i.e.,DWA-SOMGP-PU)with five navigation baselines: supervised-learning-based CNN [ 25], deep reinforcement learning-7SOGMP Predicted OGM Samples Prediction Hokuyo Lidar Uncertainty Master costmap Static layer Inflation layer Prediction layer Uncertainty layer Obstacle layer Global Planner Predictive Uncertainty-Aware Planner Local Planner VxWzControl Velocities Figure 5: System architecture of the predictive uncertainty-aware navigation planner.Table 1: Navigation results at different crowd densitiesEnvironment Method Success Rate Average Time (s) Average Length (m) Average Speed (m/s)Lobby world,35 pedestriansCNN [25] 0.81 14.30 5.40 0.38A1-RC [29] 0.77 16.81 6.89 0.41DWA [30] 0.82 14.18 5.15 0.36DWA-DeepTracking-P 0.84 13.93 5.10 0.37DWA-SOMGP-P 0.86 13.79 5.12 0.34DWA-SOMGP-PU 0.89 14.90 5.12 0.34Lobby world,45 pedestriansCNN [25] 0.79 16.65 5.62 0.34A1-RC [29] 0.77 14.65 6.28 0.43DWA [30] 0.77 15.39 5.16 0.34DWA-DeepTracking-P 0.78 15.23 5.14 0.34DWA-SOMGP-P 0.79 14.84 5.14 0.35DWA-SOMGP-PU 0.82 15.96 5.17 0.32based A1-RC [ 29], model-based DWA [ 30], and two ablation baselines without the uncertaintycostmap ( i.e.,DWA-DeepTracking-P and DWA-SOGMP-P). Table 1 summarizes these navigationresults. As can be seen, our DWA-SOGMP-PU policy has the highest success rate in each crowdsize, while having almost the shortest path length. This shows that the prediction costmap from ourSOGMP predictor is able to help the traditional DWA planner to provide safer and shorter paths,and combining it with its associated uncertainty costmap can achieve a much better navigationperformance than other baselines. It demonstrates the effectiveness of our proposed predictors inhelping design safe robot navigation policies in crowded dynamic scenes. See Appendix B.3 forexamples of paths planned by each approach and qualitative analysis.3.2.2 Hardware ResultsBesides the simulated experiments, we also conduct a real-world experiment to demonstrate theapplicability of our DWA-SOGMP-PU policy in the real world. Specifically, we deploy our SOGMPpredictor and DWA-SOGMP-PU control policy to a real Turtlebot2 robot with a 2D Hokuyo lidarand an Nvidia Jetson Xavier computer, and let it navigate through a crowded dynamic corridor in anatural condition. 
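In both the simulated and real-world trials, the prediction and uncertainty layers of Figure 5 are built from the predictor's samples: the per-cell mean gives the prediction map and the per-cell standard deviation gives the uncertainty map, each mapped to a soft (Gaussian-decayed) cost rather than a lethal one, as noted in Appendix A.8.2. A minimal sketch follows, with illustrative thresholds and scaling rather than the exact parameters used on the robot.

import numpy as np
from scipy.ndimage import gaussian_filter

def costmap_layers(ogm_samples, occ_thresh=0.3, max_cost=100, sigma=2.0):
    """ogm_samples: (N, H, W) binary OGM samples drawn from the predictor."""
    pred_map = ogm_samples.mean(axis=0)        # per-cell occupancy probability
    unc_map = ogm_samples.std(axis=0)          # per-cell uncertainty

    # Soft obstacle costs: predicted/uncertain cells receive a Gaussian-decayed cost
    # instead of a "lethal" value, since predictions are not real obstacle space.
    pred_cost = gaussian_filter((pred_map > occ_thresh).astype(float), sigma) * max_cost
    unc_cost = gaussian_filter(unc_map / max(unc_map.max(), 1e-6), sigma) * max_cost
    return pred_cost.astype(np.uint8), unc_cost.astype(np.uint8)

The same two layers run on the physical robot in the corridor experiment.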
From the attached Multimedia, it can be seen that the robot can safely navigate toits goal points among crowded pedestrians without any collision, which demonstrates the real-worldeffectiveness of our proposed SOGMP predictors and DWA-SOGMP-PU planner.4 Limitations and Future WorkIn this paper, we propose two versions of a novel V AE-based OGM prediction algorithm thatprovides mobile robots with the ability to accurately and robustly predict the future state of crowdeddynamic scenes. In addition, we integrate our predictors with the ROS navigation stack to proposea predictive uncertainty-aware planner and demonstrate its effectiveness on the problem of robotcrowded navigation. We found that the prediction accuracy of SOGMP predictors was poor when:1) the robot moves erratically or rotates rapidly; 2) the robot’s field of view is occluded by a largeobstacle. We also were not able to compare against any of the DL-based ego-motion compensationmethods [ 15–17,19] as they do not have open-source implementations and our attempts to recreatethe results were not successful. Our future work will leverage more accurate robot motion modelsand correct jump steering angles to overcome these limitations. We will continue to explore how tointegrate our uncertainty-aware predictors into learning-based control policies to further improverobot navigation performance in crowded dynamic scenes.8AcknowledgmentsThis work was funded by NSF grant 1830419 and Temple University. This research includescalculations carried out on HPC resources supported in part by the National Science Foundationthrough major research instrumentation grant number 1625061 and by the US Army ResearchLaboratory under contract number W911NF-16-2-0189.References[1]R. ED. Types and applications of autonomous mobile robots. https://www.conveyco.com/blog/types-and-applications-of-amrs , July 2022. (Accessed on 08/20/2022).[2]J.-u. Kim. Keimyung hospital demonstrates smart autonomous mobile robot. https://www.koreabiomed.com/news/articleView.html?idxno=10585 , Mar 2021. (Accessedon 08/20/2022).[3]SICK. Revolutionizing grocery shopping with mobile robots. https://sickusablog.com/revolutionizing-grocery-shopping-mobile-robots , Mar 2021. (Accessed on08/20/2022).[4]A. Ess, K. Schindler, B. Leibe, and L. Van Gool. Object detection and tracking for autonomousnavigation in dynamic environments. The International Journal of Robotics Research , 29(14):1707–1725, 2010.[5]D. Nuss, S. Reuter, M. Thom, T. Yuan, G. Krehl, M. Maile, A. Gern, and K. Dietmayer. Arandom finite set approach for dynamic occupancy grid maps with real-time application. TheInternational Journal of Robotics Research , 37(8):841–866, 2018.[6]P. Ondruska and I. Posner. Deep tracking: Seeing beyond seeing using recurrent neural networks.InThirtieth AAAI conference on artificial intelligence , 2016.[7]M. Itkina, K. Driggs-Campbell, and M. J. Kochenderfer. Dynamic environment prediction inurban scenes using recurrent representation learning. In 2019 IEEE Intelligent TransportationSystems Conference (ITSC) , pages 2052–2059. IEEE, 2019.[8]M. Toyungyernsub, M. Itkina, R. Senanayake, and M. J. Kochenderfer. Double-prong convlstmfor spatiotemporal occupancy prediction in dynamic environments. In 2021 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 13931–13937. IEEE, 2021.[9]M. Schreiber, S. Hoermann, and K. Dietmayer. Long-term occupancy grid prediction usingrecurrent neural networks. In 2019 International Conference on Robotics and Automation(ICRA) , pages 9299–9305. 
IEEE, 2019.[10] M. Schreiber, V . Belagiannis, C. Gl ̈aser, and K. Dietmayer. Motion estimation in occupancygrid maps in stationary settings using recurrent neural networks. In 2020 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 8587–8593. IEEE, 2020.[11] B. Lange, M. Itkina, and M. J. Kochenderfer. Attention augmented convlstm for environmentprediction. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 1346–1353. IEEE, 2021.[12] X. Shi, Z. Chen, H. Wang, D.-Y . Yeung, W.-K. Wong, and W.-c. Woo. Convolutional LSTMnetwork: A machine learning approach for precipitation nowcasting. In Advances in NeuralInformation Processing Systems , volume 28, 2015.[13] W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction andunsupervised learning. In International Conference on Learning Representations , 2016.9[14] V . L. Guen and N. Thome. Disentangling physical dynamics from unknown factors for unsuper-vised video prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , pages 11474–11484, 2020.[15] M. Schreiber, V . Belagiannis, C. Gl ̈aser, and K. Dietmayer. Dynamic occupancy grid mappingwith recurrent neural networks. In 2021 IEEE International Conference on Robotics andAutomation (ICRA) , pages 6717–6724. IEEE, 2021.[16] J. Dequaire, P. Ondr ́uˇska, D. Rao, D. Wang, and I. Posner. Deep tracking in the wild: End-to-endtracking using recurrent neural networks. The International Journal of Robotics Research , 37(4-5):492–512, 2018.[17] Y . Song, Y . Tian, G. Wang, and M. Li. 2d lidar map prediction via estimating motion flow withgru. In 2019 International Conference on Robotics and Automation (ICRA) , pages 6617–6623.IEEE, 2019.[18] H. Thomas, M. G. de Saint Aurin, J. Zhang, and T. D. Barfoot. Learning spatiotemporal occu-pancy grid maps for lifelong navigation in dynamic scenes. In 2022 International Conferenceon Robotics and Automation (ICRA) , pages 484–490. IEEE, 2022.[19] N. Mohajerin and M. Rohani. Multi-step prediction of occupancy grid maps with recurrentneural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 10600–10608, 2019.[20] D. Schuhmacher, B.-T. V o, and B.-N. V o. A consistent metric for performance evaluationof multi-object filters. IEEE Transactions on Signal Processing , 56(8):3447–3457, 2008.doi:10.1109/TSP.2008.920469.[21] K. Katyal, K. Popek, C. Paxton, P. Burlina, and G. D. Hager. Uncertainty-aware occupancy mapprediction using generative networks for robot navigation. In 2019 International Conference onRobotics and Automation (ICRA) , pages 5453–5459. IEEE, 2019.[22] S. Thrun. Learning occupancy grid maps with forward sensor models. Autonomous robots , 15(2):111–127, 2003.[23] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 ,2013.[24] H. Karnan, A. Nair, X. Xiao, G. Warnell, S. Pirk, A. Toshev, J. Hart, J. Biswas, and P. Stone.Socially compliant navigation dataset (scand): A large-scale dataset of demonstrations for socialnavigation. arXiv preprint arXiv:2203.15041 , 2022.[25] Z. Xie, P. Xin, and P. Dames. Towards safe navigation through crowded dynamic environments.In2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages4934–4940. IEEE, 2021.[26] Z. Xie and P. Dames. DRL-VO: Learning to navigate through crowded dynamic scenes usingvelocity obstacles. IEEE Transactions on Robotics , 39(4):2700–2719, 2023. 
doi:10.1109/TRO.2023.3257549.[27] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learninglibrary. In Advances in Neural Information Processing Systems , pages 8026–8037, 2019.[28] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y . Ng, et al. Ros:an open-source robot operating system. In ICRA workshop on open source software , volume 3,page 5. Kobe, Japan, 2009.10[29] R. Guldenring, M. G ̈orner, N. Hendrich, N. J. Jacobsen, and J. Zhang. Learning local planners forhuman-aware navigation in indoor environments. In 2020 IEEE/RSJ International Conferenceon Intelligent Robots and Systems (IROS) , pages 6053–6060. IEEE, 2020.[30] D. Fox, W. Burgard, and S. Thrun. The dynamic window approach to collision avoidance. IEEERobotics & Automation Magazine , 4(1):23–33, 1997.[31] N. L. Baisa. Derivation of a constant velocity motion model for visual tracking. arXiv preprintarXiv:2005.00844 , 2020.[32] C. Sch ̈oller, V . Aravantinos, F. Lay, and A. Knoll. What the constant velocity model can teachus about pedestrian motion prediction. IEEE Robotics and Automation Letters , 5(2):1696–1703,2020.[33] Z. Xie and P. Dames. Stochastic Occupancy Grid Map Prediction in Dynamic Scenes: Dataset.https://doi.org/10.5281/zenodo.7051560 .[34] D. Helbing and P. Molnar. Social force model for pedestrian dynamics. Physical Review E , 51(5):4282, 1995.[35] M. Moussa ̈ıd, D. Helbing, S. Garnier, A. Johansson, M. Combe, and G. Theraulaz. Experi-mental study of the behavioural mechanisms underlying self-organization in human crowds.Proceedings of the Royal Society B: Biological Sciences , 276(1668):2755–2762, 2009.[36] N. Ponomarenko, S. Krivenko, K. Egiazarian, V . Lukin, and J. Astola. Weighted mean squareerror for estimation of visual quality of image denoising methods. In CD ROM Proceedings ofVPQM , volume 5. Scottsdale USA, 2010.[37] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a commonmulti-scale convolutional architecture. In Proceedings of the IEEE international conference oncomputer vision , pages 2650–2658, 2015.[38] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from errorvisibility to structural similarity. IEEE transactions on image processing , 13(4):600–612, 2004.[39] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clustersin large spatial databases with noise. In Proceedings of the Second International Conference onKnowledge Discovery and Data Mining , KDD’96, page 226–231. AAAI Press, 1996.[40] C. E. Shannon. A mathematical theory of communication. ACM SIGMOBILE mobile computingand communications review , 5(1):3–55, 2001.11A Additional Implementation DetailsA.1 Robot Pose PredictionTo account for the robot’s future ego-motion, we need to predict the future pose of the robot atprediction time step t+n. Since the constant velocity motion model is the most widely used motionmodel for tracking [ 31] and often outperforms more state-of-the-art methods in general settings [ 32],we use it as our robot motion model and assume that the robot keeps constant motion in a relativelyshort period ( i.e.,less than 1 second). Note that other more suitable robot motion models specific toparticular robot models can be used to provide better robot pose predictions. 
Then, we can easilypredict the future pose of the robot xt+nat prediction time step t+nusing the robot’s current posextand velocity ut:"xt+nyt+nθt+n#="xtytθt#+"vtcos(θt)vtsin(θt)wt#n∆t+"σxσyσθ#, (6)where ∆tis the sampling interval, σ(·)is the Gaussian noise.A.2 Coordinate TransformationTo compensate for the ego-motion of the robot, we first use a homogeneous transformation matrix totransform the robot poses xt−τ:tto the robot’s local future coordinate frame R:xRt−τ:tyRt−τ:t1="cos(θt+n)−sin(θt+n)xt+nsin(θt+n) cos( θt+n)yt+n0 0 1#−1"xt−τ:tyt−τ:t1#, (7a)θRt−τ:t=θt−τ:t−θt+n. (7b)Then, by adding these ego-motion displacements, we convert the observed lidar measurements yt−τ:tfrom Polar to Cartesian coordinates:yRt−τ:t=xzt−τ:tyzt−τ:t=xRt−τ:tyRt−τ:t+rt−τ:tcos(bt−τ:t+θRt−τ:t)sin(bt−τ:t+θRt−τ:t). (8)Finally, we implement the transformation function (3d) and obtain a set of observed lidar measure-ments yRt−τ:tat the robot’s local future coordinate frame R. The benefit of ego-motion compensationis that we can treat these lidar measurements from a moving lidar sensor as observations from astationary lidar sensor at R. This significantly reduces the difficulty of OGM predictions and improvesaccuracy.A.3 GPU-accelerated Conversion Function c(·)Algorithm 1 shows the pseudo-code for the GPU-accelerated conversion function c(·)(i.e., (3b)) to convert the lidar measurements to the binary occupancy grid maps.Algorithm 1: Converting lidar points to OGMsInput: compensated lidar measurements yRt−τ:tInput: grid cell size sInput: the physical size of the OGMs SInput: the lower left corner of the OGMs (x0, y0)Output: OGMs ot−τ:t1:initialize: ot−τ:t= 02:for all parallel beams z∈yRt−τ:tdo3: i=⌊(xzt−τ:t−x0)/s⌋4: j=⌊(yzt−τ:t−y0)/s⌋5: ifi, j∈[0, S/s]then6: ot−τ:t(i, j) = 17: end if8:end for12A.4 GPU-accelerated OGM Mapping Algorithm g(·)Algorithm 2 shows the pseudo code for the GPU-accelerated and parallelized OGM mapping algo-rithm g(·)1(i.e.,(3c)) that parallelizes the independent cell state update operation.Note that liis the log odds representation of occupancy in the occupancy grid map m[22].Algorithm 2: GPU-accelerated OGM mappingInput: compensated lidar measurements yRt−τ:tOutput: local environment map m1:for all time steps nfrom t−τtotdo2: for all parallel grid cells miin the perceptual field of yRndo3: li=li+ logp(mi|yRn)1−p(mi|yRn)−logp(mi)1−p(mi)4: end for5:end forA.5 Dataset CollectionsWe collected three OGM datasets to evaluate our proposed prediction algorithm, one self-collectedfrom the simulator ( i.e.,OGM-Turtlebot2) and two extracted from the real-world dataset SCAND [ 24](i.e.,OGM-Jackal and OGM-Spot). Note that all three datasets are open source and availableonline [33].A.5.1 OGM-Turtlebot2 DatasetA simulated Turtlebot2 equipped with a 2D Hokuyo UTM-30LX lidar navigates around an indoorenvironment with 34 moving pedestrians using random start points and goal points, as shown inFigure 6. The Turtlebot2 uses the dynamic window approach (DWA) planner [ 30] and has a maximumspeed of 0.8 m/s. The Turtlebot2 robot was set up to navigate autonomously in the 3D simulated lobbyenvironment to collect the OGM-Turtlebot2 dataset. 
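For reference, a vectorized version of the compensation and conversion steps above (Eqs. (7)-(8) and Algorithm 1) is sketched below in NumPy, with the noise terms and GPU parallelization omitted; names are our own.

import numpy as np

def compensate_scans(ranges, bearings, x_hist, x_future):
    """Eqs. (7)-(8): express past scans in the robot's future frame R.
    ranges, bearings: (T, B) arrays; x_hist: (T, 3) poses; x_future: predicted pose."""
    xf, yf, thf = x_future
    c, s = np.cos(thf), np.sin(thf)
    dx, dy = x_hist[:, 0] - xf, x_hist[:, 1] - yf
    xR = c * dx + s * dy                  # inverse homogeneous transform, Eq. (7a)
    yR = -s * dx + c * dy
    thR = x_hist[:, 2] - thf              # Eq. (7b)
    px = xR[:, None] + ranges * np.cos(bearings + thR[:, None])   # Eq. (8)
    py = yR[:, None] + ranges * np.sin(bearings + thR[:, None])
    return px, py

def points_to_ogm(px, py, cell=0.1, size=6.4, x0=0.0, y0=-3.2):
    """Algorithm 1 for one time step: coordinate-to-subscript conversion into a binary OGM.
    Call once per row of px, py to build the stack o_{t-tau:t}."""
    n = round(size / cell)
    ogm = np.zeros((n, n), dtype=np.uint8)
    i = np.floor((px - x0) / cell).astype(int)
    j = np.floor((py - y0) / cell).astype(int)
    ok = (i >= 0) & (i < n) & (j >= 0) & (j < n)
    ogm[i[ok], j[ok]] = 1
    return ogm

These conversions are applied to every dataset described in this appendix, including the simulated OGM-Turtlebot2 data.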
The moving pedestrians in the human-robotinteraction Gazebo simulator [ 25,26] are driven by a microscopic pedestrian crowd simulation library,called the PEDSIM, which uses the social forces model [ 34,35] to guide the motion of individualpedestrians:Fp=Fdesp+Fobsp+Fperp+Frobp, (9)where Fpis the resultant force that determines the motion of a pedestrian; Fdesppulls a pedestriantowards a destination; Fobsppushes a pedestrian away from static obstacles; Fperpmodels interactionswith other pedestrians ( e.g., collision avoidance or grouping); and Frobppushes pedestrians awayfrom the robot, modeling the way people would naturally avoid collisions and thereby allowing ourcontrol policy to learn this behavior. More details can be found in Xie and Dames [26].We collected the robot states {x,u}and raw lidar measurements yat a sampling rate of 10 Hz.We collected a total of 94,891 (x,u,y)tuples, dividing this into three separate subsets for training(67,000 tuples), validation during training (10,891 tuples), and final testing (17,000 tuples).A.5.2 OGM-Jackal and OGM-Spot DatasetsFigure 7 shows the real-world outdoor environment at UT Austin and the Jackal robot used to collectthe raw SCAND dataset to construct the OGM-Jackal dataset. Figure 8 shows the real-world indoorenvironment at UT Austin and the Spot robot used to collect the raw SCAND dataset to construct theOGM-Spot dataset. Note that the SCAND dataset was collected by humans manually operating theJackal robot and the Spot robot around the indoor/outdoor environments at UT Austin. More detailscan be found in Karnan et al. [24].1https://github.com/TempleRAIL/occupancy_grid_mapping_torch13ZED Camera Hokuyo Lidar Wheel Odometry Figure 6: Gazebo simulated environment, where the Turtlebot2 robot was used to collect the OGM-Turtlebot2 dataset.RGB Camera Velodyne Lidar Wheel Odometry Stereo Camera Figure 7: Outdoor environment at UT Austin, where the Jackal robot was used to collect the OGM-Jackal dataset.A.6 Experiment Details for OGM PredictionA.6.1 Evaluation MetricsTo comprehensively evaluate the performance of OGM predictors, we define the predicted OGM as ̄oand the ground truth OGM as o, and use the following three metrics:•Weighted mean square error (WMSE) [36] :WMSE =PNi=1wi( ̄oi−oi)2PNi=1wi, (10)where Nis the number of cells in the OGM, and wiis the weight for the cell iin the OGM,calculated by the median frequency balancing method [ 37]. This metric is used to evaluatethe weighted absolute errors (balancing the imbalance in the percentage of occupied andfree cells) between the predicted OGM and its corresponding ground truth OGM, describingthe predicted quality of single OGM cell.14Grayscale Cameras Velodyne Lidar Visual Odometry Joint Angle RGB Camera Figure 8: Indoor environment at UT Austin, where the Spot robot was used to collect the OGM-Spotdataset.•Structural similarity index measure (SSIM) [38]:SSIM =(2μ ̄oμo+C1) (2δ ̄oo+C2)(μ2 ̄o+μ2o+C1) (δ2 ̄o+δ2o+C2), (11)where μ(·)andδ(·)denote the mean and variance/covariance, respectively, and C(·)denotesconstant parameters to avoid instability. We use C1= 1e−4andC2= 9e−4. This metric isused to evaluate the structural similarity between the predicted OGM and its correspondingground truth OGM, describing the predicted quality of the scene geometry.•Optimal subpattern assignment metric (OSPA) [20]:OSPA = 1nminπ∈ΠnmX1dc( ̄oi,oπ(i))p+cp(n−m)!1p, (12)where cis the cutoff distance, pis the norm associated to distance, dc( ̄o,o) = min( c,∥ ̄o−o∥), and Πnis the set of permutations of {1,2, ..., n}. 
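The two image-quality metrics, (10) and (11), can be reproduced directly; the sketch below uses one common form of median frequency balancing for the per-cell weights and the SSIM constants stated above (illustrative code, not the authors' evaluation scripts).

import numpy as np

def wmse(pred, gt):
    """Eq. (10): MSE with class-balanced weights (occupied vs. free cells)."""
    f_occ = max(gt.mean(), 1e-6)
    f_free = max(1.0 - gt.mean(), 1e-6)
    f_med = np.median([f_occ, f_free])
    w = np.where(gt > 0.5, f_med / f_occ, f_med / f_free)
    return float(np.sum(w * (pred - gt) ** 2) / np.sum(w))

def ssim(pred, gt, c1=1e-4, c2=9e-4):
    """Eq. (11): global structural similarity over the whole OGM."""
    mu_p, mu_g = pred.mean(), gt.mean()
    cov = ((pred - mu_p) * (gt - mu_g)).mean()
    return float(((2 * mu_p * mu_g + c1) * (2 * cov + c2)) /
                 ((mu_p ** 2 + mu_g ** 2 + c1) * (pred.var() + gt.var() + c2)))

The OSPA metric (12) additionally requires extracting targets from each OGM, using the parameters and pipeline given next.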
We use c= 10 (i.e.,1 m) and p= 1.This metric is used to evaluate the target tracking accuracy between the predicted OGMand its corresponding ground truth OGM, describing the predicted quality of multi-targetlocalization and assignment.It is worth noting that while other OGM prediction works [ 7–11,15–17,19] only use the computervision metrics ( e.g., MSE, F1 Score, and SSIM) to evaluate the quality of predicted OGMs, we arethe first to evaluate the predicted OGMs from the perspective of multi-target tracking ( i.e.,OSPA).We believe that since it takes into account multi-target localization error and cardinality error, it cangive a more accurate and comprehensive evaluation than only evaluating the image quality.A.6.2 Evaluation Pipeline for Calculating OSPA Error on OGMsThis evaluation pipeline about how to extract targets from OGMs to calculate their OSPA errors isshown in Figure 9. First, we binarize the predicted OGMs with an occupancy threshold pfree= 0.3,which is set by referencing the occthresh default parameter of 0.25 from the gmapping ROSpackage. Second, we use the density-based spatial clustering of applications with noise (DBSCAN)[39] algorithm to cluster the obstacle points in the OGMs. Finally, we use the mean position of eachcluster as the target to calculate the OSPA error (with cutoff distance 10 cells, or 1 m). Note, we getthe ground truth target by applying the same process on the ground truth OGMs.15Binarize DBSCAN Mean Ground Truth Binarized Clusters Targets Binarize DBSCAN Mean Prediction Binarized Clusters Targets OSPA Error Figure 9: Evaluation pipeline for calculating OSPA error on predicted OGMs.A.7 Experiment Details for Uncertainty CharacterizationA.7.1 Evaluation MetricsTo comprehensively characterize the uncertainty information of our SOGMP and SOGMP++ pre-dictors, we define the predicted OGM as ̄oand the ground truth OGM as oand use the Shannonentropy [40] as the metric:H( ̄o) =1NNXi=1[ ̄oilog ̄oi+ (1− ̄oi) log(1 − ̄oi)], (13)where Nis the number of cells in the predicted OGM ̄o, and ̄oiis the value of the cell iin thepredicted OGM ̄o.A.7.2 Experiment SetupSince our SOGMP/SOGMP++ network predicts a bunch of binary OGM samples ( i.e.,cell valueis 0 or 1) rather than a probabilistic OGM, we first combine these binary OGM samples to create asingle probabilistic OGM and then use Shannon entropy to characterize its uncertainty. Note that thenumber of samples we draw can be scaled according to the robot’s available computational resources.Before we conduct our uncertainty experiments, we first compute the number of objects in eachinput sequence of the OGM-Turtlebot test dataset at the 5th prediction time step using the evaluationpipeline shown in A.6.2, where we classify these input sequences into 12 categories according to thenumber of objects in them. Then, we randomly select 20 input sequences for each number of objects(i.e.,from 1 to 12) and generate a total of 1,024 OGM samples for each input sequence at the 5thprediction time step.To analyze the relationship between the entropy of the predicted OGM and its sample size, we use allselected test sequences and calculate the average entropy over the number of samples growing as anexponential power of 2. 
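The target-extraction and scoring pipeline of Figure 9, together with the entropy metric (13), can be reproduced with standard tools: threshold the OGM at p_free = 0.3, cluster occupied cells with DBSCAN, take cluster centroids as targets, and score the two target sets with OSPA (c = 10 cells, p = 1). A sketch using scikit-learn and SciPy follows; the DBSCAN hyperparameters are illustrative, and the entropy is written with the conventional negative sign.

import numpy as np
from sklearn.cluster import DBSCAN
from scipy.optimize import linear_sum_assignment

def extract_targets(ogm, occ_thresh=0.3, eps=1.5, min_samples=2):
    """Binarize, cluster occupied cells, and return cluster centroids as targets."""
    pts = np.argwhere(ogm > occ_thresh)
    if len(pts) == 0:
        return np.zeros((0, 2))
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(pts).labels_
    return np.array([pts[labels == k].mean(axis=0) for k in set(labels) if k != -1])

def ospa(pred_targets, gt_targets, c=10.0):
    """OSPA for p = 1 (Eq. (12)), using an optimal assignment of targets (distances in cells)."""
    m, n = len(pred_targets), len(gt_targets)
    if m > n:                              # OSPA assumes m <= n; swap sets if needed
        pred_targets, gt_targets, m, n = gt_targets, pred_targets, n, m
    if n == 0:
        return 0.0
    d = np.linalg.norm(pred_targets[:, None, :] - gt_targets[None, :, :], axis=-1)
    d = np.minimum(d, c)
    rows, cols = linear_sum_assignment(d)  # min-cost matching of the m smaller-set targets
    return float((d[rows, cols].sum() + c * (n - m)) / n)

def mean_entropy(prob_ogm, eps=1e-6):
    """Eq. (13): mean Shannon entropy of a probabilistic OGM built from binary samples."""
    p = np.clip(prob_ogm, eps, 1 - eps)
    return float(-np.mean(p * np.log(p) + (1 - p) * np.log(1 - p)))

This entropy is the quantity averaged in both analyses of Section A.7.2.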
To analyze the relationship between the entropy of the predicted OGM andthe number of objects in it, we first use 1,024 OGM samples for each input sequence to generate thefinal probabilistic OGM and then calculate the average entropy over the number of objects from 1 to12.A.8 Experiment Details for Robot NavigationA.8.1 Evaluation MetricsTo comprehensively evaluate the performance of navigation control policies, we use the followingfour metrics from [25, 26]:•Success rate : the fraction of collision-free trials.•Average time : the average travel time of trials.16•Average length : the average trajectory length of trials.•Average speed : the average speed during trials.A.8.2 Experiment SetupFor the robot navigation experiments, we use the Turtlebot2 robot with a maximum speed of 0.5 m/s,equipped with a Hokuyo UTM-30LX lidar and an NVIDIA Jetson A VG Xavier embedded computer.Considering the computational resources of the Turtlebot2 robot, we use the SOGMP predictor togenerate 8 predicted OGM samples at the 6th prediction time step ( i.e.,0.6 s). Based on these 8predicted OGM samples, we generate a prediction map (mean) and an uncertainty map (standarddeviation), which are used to generate the prediction costmap layer and uncertainty costmap layerrespectively for our predictive uncertainty-aware planner ( i.e.,DWA-SOGMP-PU), as shown inFigure 5. Note that each costmap grid cell has an initial constant cost, and we map each occupiedgrid cell of the prediction costmap and uncertainty costmap to a Gaussian obstacle value rather than a“lethal” obstacle value. This is because the predicted obstacles and uncertainty regions are not realobstacle spaces.17B Additional ResultsB.1 Inference Speed ResultsBefore we focus on quantitative results on the quality of these OGM predictions, we first talk aboutthe inference speed and model size of these predictors. This is because robots are resource-limited,and smaller model sizes and faster inference speeds mean robots have a faster reaction time to faceand handle dangerous situations in complex dynamic scenarios.Table 2 summarizes the inference speed and model size of six predictors tested on an NVIDIA JetsonTX2 embedded computer equipped with a 256-core NVIDIA Pascal @ 1300MHz GPU. We can seethat although DeepTracking [ 6] has the smallest model size, our proposed SOGMP models are about1.4 times smaller than the ConvLSTM [ 12] and 4 times smaller than the PhyDNet [ 14], and theirinference speed is the fastest (up to 24 FPS).Table 2: Inference Speed and Model SizeModels ConvLSTM [12] PhyDNet [14] DeepTracking [6] SOGMP NEMC SOGMP SOGMP++FPS 2.95 4.66 5.32 24.83 23.29 10.68# of Params 12.44 M 37.17 M 0.95 M 8.84 M 8.84 M 8.85 MB.2 Qualitative Prediction ResultsFigure 10, Figure 11, Figure 12, and the accompanying Multimedia illustrate the future OGM predic-tions generated by our proposed predictors and the baselines. We observe two interesting phenomena.First, the image-based baselines, especially the PhyDNet, generate blurry future predictions after5 time steps, with only blurred shapes of static objects ( i.e.,walls) and missing dynamic objects(i.e.,pedestrians). We believe that this is because these three baselines are deterministic modelsthat use less expressive network architectures, only treat time series OGMs as images/video, andcannot capture and utilize the kinematics and dynamics of the robot itself, dynamic objects, andstatic objects. 
Second, the SOGMP++ with a local environment map has a sharper and more accuratesurrounding scene geometry ( i.e.,right walls) than the SOGMP without a local environment map.This difference indicates that the local environment map for static objects is beneficial and plays akey role in predicting surrounding scene geometry.In addition, Figure 13 shows a diverse set of prediction samples from our SOGMP++ predictor. Thered bounding boxes highlight the ability of our SOMGP++ predictor to provide diverse and plausiblepotential future predictions for stochastic and dynamic environments.B.3 Qualitative Navigation ResultsFigure 14 shows the difference of nominal paths and costmaps generated by three different plannersin the simulated lobby environment. The default DWA planner [ 30] only cares about the current stateof the environment and generates a costmap based on the perceived obstacles. The predictive DWAplanner ( i.e.,DWA-SOGMP-P) using the prediction map of our proposed SOGMP predictor cangenerate a costmap with predicted obstacles. The predictive uncertainty-aware DWA planner ( i.e.,DWA-SOGMP-PU) using both the prediction map and uncertainty map of our proposed SOGMPpredictor can generate a safer costmap with predicted obstacles and uncertainty regions. Theseadditional predicted obstacles and uncertainty regions of our proposed DWA-SOGMP-PU plannerenable the robot to follow safer nominal paths and reduce collisions with obstacles, especially movingpedestrians. See the accompanying Multimedia for a detailed navigation demonstration.18SOGMP++ Ground truth PhyDNet SOGMP ConvLSTM n = 1 n = 2 n = 3 n = 4 n = 6 n = 5 n = 7 n = 8 n = 9 n = 10 SOGMP_NEMC DeepTracking Figure 10: A prediction showcase of the six predictors tested on the OGM-Turtlebot2 dataset overthe prediction horizon. The black area is the free space and the white area is the occupied space.SOGMP++ Ground truth PhyDNet SOGMP ConvLSTM n = 1 n = 2 n = 3 n = 4 n = 6 n = 5 n = 7 n = 8 n = 9 n = 10 SOGMP_NEMC DeepTracking Figure 11: A prediction showcase of the six predictors tested on the OGM-Turtlebot2 dataset overthe prediction horizon. The black area is the free space and the white area is the occupied space.19SOGMP++ Ground truth PhyDNet SOGMP ConvLSTM n = 1 n = 2 n = 3 n = 4 n = 6 n = 5 n = 7 n = 8 n = 9 n = 10 DeepTracking SOGMP_NEMC Figure 12: A prediction showcase of the six predictors tested on the OGM-Turtlebot2 dataset overthe prediction horizon. The black area is the free space and the white area is the occupied space.Sample 1 Sample 2 Sample 3 Sample 4 Figure 13: A diverse set of prediction samples of our SOGMP++ predictor tested on the OGM-Turtlebot2 dataset at the 5th prediction timestep. The black area is the free space and the white areais the occupied space. The red bounding boxes show multiple possible predictions.(a) DWA [30] (b) DWA-SOGMP-P (c) DWA-SOGMP-PUFigure 14: Robot reactions and their corresponding costmaps generated by different control policiesin the simulated lobby environment. The robot (black disk) is avoiding pedestrians (colorful squareboxes) and reaching the goal (red disk) according to the nominal path (green line) planned by thecostmap (square white map).20 |
ckeT8cMz_A | REBOOT: Reuse Data for Bootstrapping EfficientReal-World Dexterous ManipulationZheyuan Hu1∗, Aaron Rovinsky1∗, Jianlan Luo1, Vikash Kumar2, Abhishek Gupta3, Sergey Levine11UC Berkeley2Meta AI Research3University of WashingtonFigure 1: REBOOT achieves 2Xsample efficiency boost on learning a variety of contact-rich real-worlddexterous manipulation skills on three different objects autonomously by bootstrapping on prior data acrossdifferent objects and tasks with sample-efficient RL and imitation learning-based reset policies.Abstract: Dexterous manipulation tasks involving contact-rich interactions pose asignificant challenge for both model-based control systems and imitation learningalgorithms. The complexity arises from the need for multi-fingered robotic handsto dynamically establish and break contacts, balance non-prehensile forces, andcontrol large degrees of freedom. Reinforcement learning (RL) offers a promisingapproach due to its general applicability and capacity to autonomously acquireoptimal manipulation strategies. However, its real-world application is often hin-dered by the necessity to generate a large number of samples, reset the environ-ment, and obtain reward signals. In this work, we introduce an efficient systemfor learning dexterous manipulation skills with RL to alleviate these challenges.The main idea of our approach is the integration of recent advances in sample-efficient RL and replay buffer bootstrapping. This combination allows us to utilizedata from different tasks or objects as a starting point for training new tasks, sig-nificantly improving learning efficiency. Additionally, our system completes thereal-world training cycle by incorporating learned resets via an imitation-basedpickup policy as well as learned reward functions, eliminating the need for man-ual resets and reward engineering. We demonstrate the benefits of reusing pastdata as replay buffer initialization for new tasks, for instance, the fast acquisitionof intricate manipulation skills in the real world on a four-fingered robotic hand.(Videos: https://sites.google.com/view/reboot-dexterous)Keywords: Dexterous Manipulation, Reinforcement Learning, Sample-Efficient1 IntroductionDexterous manipulation tasks involving contact-rich interaction, specifically those involving multi-fingered robotic hands and underactuated objects, pose a significant challenge for both model-basedcontrol systems and imitation learning algorithms. The complexity arises from the need for multi-∗Both authors contributed equally7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 2: REBOOT System Overview: Our method learns various dexterous manipulation skills in the realworld using raw image observations. This is enabled by using sample-efficient RL and bootstrapping with datafrom other tasks and even other objects, with autonomous resets.fingered robotic hands to dynamically establish and break contacts, balance non-prehensile forces,and control a high number of degrees of freedom. Reinforcement learning (RL) offers a promisingsolution for such settings. In principle, RL enables a robot to refine its manipulation skills througha process of trial-and-error, alleviating the requirement for strong modeling assumptions. However,making RL methods practical for learning such complex behaviors directly in the real world presentsa number of obstacles. 
The main obstacle is sample efficiency: particularly for tasks that requirecomplex interactions with many possibilities for failure (e.g., in-hand reorientation where the robotmight drop the object), the number of trials needed for learning a skill with RL from scratch mightbe very high, requiring hours or even days of training. Additionally, real-world learning outside ofthe lab requires the robot to perform the entire training process using its own sensors and actuators,evaluating object state and rewards using camera observations, and resetting autonomously betweentrials. Because of these challenges, many prior works on RL for dexterous manipulation have ex-plored alternative solutions, such as sim-to-real transfer [1, 2, 3], imitation learning [4, 5, 6], or theuse of tools like motion capture [7, 2] or separately-engineered reset mechanisms [8, 9].In this paper, we instead propose a system that is designed to make direct RL in the real worldpractical without these alternatives, so as to take a step toward robots that could one day learn underassumptions that are sufficient for autonomously acquiring new skills in open-world settings, evenoutside the laboratory. This means that the entire learning process must be conducted using therobot’s own sensors and actuators, without simulation or additional instrumentation, and be efficientenough to learn skills quickly. We posit that a key enabling factor for this goal is to reuse data frompast skills, and we instantiate this with a simple buffer initialization method, where the replay bufferof each skill is initialized with data from other tasks or even other objects. In combination with avision-based method for learning reward functions from user-provided images and a learned resetprocedure to automatically pick up an object between trials, we demonstrate that our system enablesa robotic hand to learn in-hand reorientation skills in just a few hours of fully autonomous training,using only camera observations and joint encoder readings.Our main contribution is REBOOT , a system to Reuse Data for Boot strapping Real-World Dex-terous Manipulation, which we illustrate in Figure 2. By simply preloading the replay buffer usingprior data from other objects and tasks, our system avoids starting from scratch for every new task.By combining recent advances in sample-efficient online RL [10] with buffer initialization to boot-strap learning from prior tasks and objects, we show that in-hand manipulation behaviors can belearned in a few hours of autonomous practicing.We additionally use learned reset skills to make training autonomous, and extend adversariallylearned rewards to handle our buffer initialization method, allowing users to specify tasks with afew examples of desired object poses and without manual reward engineering. 
Some of the skills2learned by our system, shown in Figure 3, include in-hand reorientation of a three-pronged object,handling a T-shaped pipe, and manipulating a toy football.2 Related WorkA number of designs for anthropomorphic hands have been proposed in prior work [11, 12, 13].Prior learning-based methods to control such hands utilize trajectory optimization [14, 15], policysearch [16, 17, 18], demonstration-based learning [19, 20, 21, 22], simulation to real-world transfer[3, 23, 24, 25], reinforcement learning directly in the real world [26, 8, 27, 28, 29], or a combinationof these approaches [30].Most of the aforementioned works leveraged accurate simulations or engineered real-world state-estimation systems to provide compact state representations. In contrast, we seek to learn visuomo-tor policies autonomously and entirely in the real world without access to privileged state informa-tion, under assumptions that more closely reflect what robots might encounter when learning “onthe job” outside of laboratory settings. Prior work has explored learning these policies in simulation[31, 32], where autonomy is not of concern due to the ability to reset the simulated environment.Most real-world methods either rely on instrumentation for state estimation [28] or deal with sim-pler robots and tasks [27]. An important consideration in our system is the ability to specify a taskwithout manual reward engineering. Although task specification has been studied extensively, mostprior works make a variety of assumptions, ranges from having humans-provided demonstrationsfor enabling imitation learning [4, 33, 34], using inverse RL [35, 36, 37], active settings whereusers can provide corrections [38, 39, 40], or ranking-based preferences [41, 42]. Our in-hand RLtraining phase learns from raw high-dimensional pixel observations in an end-to-end fashion usingDrQ[43] and VICE[44], although our system could use any reward inference algorithm based uponsuccess examples [45]. With users defining the manipulation task by providing a small number ofimage goals instead of full demonstrations, our method not only removes the barrier to orchestratehigh-dimensional finger motions [46, 47] but also accelerates robot training progress by offering suf-ficient reward shaping for RL in real-world scenarios without per-task reward engineering. WhileA V AIL [29] also learns dexterous manipulation skills from raw images, we show in our comparisonthat our system is faster, and our buffer initialization approach significantly speeds up the acquisitionof in-hand manipulation skills compared to starting from scratch.Buffer initialization has also been employed by Smith et al. [48] in the context of transfer learningfor robotic locomotion, where a similar approach was used to create a curriculum for locomotionskills or adapt to walking on new terrains. Our method differs in several significant ways. First, ourmethod learns from raw image observations with learned reward functions defined through a fewexample images, whereas [48] uses hand-programmed rewards. Second, our focus is on learningintricate dexterous manipulation skills from scratch in the real world, whereas [48] uses initializa-tion in simulation. Although the methodology is closely related, our proposed system extends themethodology in significant ways, enabling the use of vision and learned rewards in a very differentdomain.Reset-free learning is essential for autonomous real-world training of dexterous skills (see [49] fora review of reset-free methods). 
Most of the prior works [27, 28, 29, 50, 51, 52, 53, 54] rely on "backward" policies to reset the environment so the "forward" policy can continue learning the task of interest. Similarly, we divide training into two phases due to different skills having unique demands for control complexities and user-provided supervision. Specifically, the skill needed to pick up objects in reset is better studied and developed for immediate usage through imitating user-provided demonstrations [55].

3 Robot Platform and Problem Overview

In this work, we use a custom-built, 4-finger, 16-DoF robot hand mounted to a 7-DoF Sawyer robotic arm for dexterous object manipulation tasks. Our platform is shown in Figure 3. Our focus is on learning in-hand reorientation skills with reinforcement learning. During the in-hand manipulation phase, the RL policy controls the 16 degrees of freedom of the hand, setting target positions at 10 Hz, with observations provided by the joint encoders in the finger motors and two RGB cameras, one overhead and another embedded in the palm of the hand. To facilitate autonomous training, we also use imitation learning to learn a reset policy to pick up the object from the table in between in-hand manipulation trials. This imitation policy uses a 19-dimensional action space, controlling the end-effector position of the wrist and the 16 finger joints to pick up the object from any location.

Figure 3: Depiction of our hardware platform and tasks: (a) custom-built 16-DoF robotic hand; (c) teleoperation using the 3D mouse, used to interact with the following objects in-hand: (b) blue football, (d) 3-pronged valve, (e) T-shaped pipe.

Our tasks are parameterized by images of desired object poses in the palm of the hand. Since the reset policy can grasp the object in a variety of poses, the in-hand policy must learn to rotate and translate the object carefully to achieve the goal pose. We train and evaluate our method entirely in the real world. In the following sections, we describe how data from different objects can be used to bootstrap new manipulation skills for more efficient learning.

4 Reinforcement Learning with Buffer Initialization

In this work, we propose a system for efficiently learning visuomotor policies for dexterous manipulation tasks via bootstrapping with prior data. We describe our learning method and real-world considerations for our system in the following subsections.

Problem setting. Our method leverages the framework of Markov decision processes for reinforcement learning as described in [56]. In RL, the aim is to learn a policy π(a_t|s_t) that obtains the maximum expected discounted sum of rewards R(s_t, a_t) under an initial state distribution ρ_0, dynamics P(s_{t+1}|s_t, a_t), and discount factor γ. The formal objective is as follows:

J(π) = E_{s_0 ∼ ρ_0, a_t ∼ π(a_t|s_t), s_{t+1} ∼ P(s_{t+1}|s_t, a_t)} [ Σ_{t=0}^{T} γ^t R(s_t, a_t) ]    (1)

The particular reinforcement learning algorithm that we build on in this work is RLPD [10], a sample-efficient RL method that combines a design based on soft actor-critic (SAC) [57] with a number of design decisions to enable fast training. This approach trains a value function Q^π(s_t, a_t) in combination with an actor or policy π(a_t|s_t), though in principle our system could be compatible with a variety of off-policy RL algorithms that utilize a replay buffer. For more details on RLPD, we refer readers to prior work [10].

Reinforcement learning with buffer initialization.
While using a sample-efficient RL algorithm such as RLPD to acquire in-hand manipulation skills can be feasible in the real world, the training process can take a very long time (see Section 5). A central component of our system design is to utilize data from other tasks or even other objects to bootstrap the acquisition of new in-hand manipulation skills. In our experiments, we will show that a very simple procedure can make this possible: for every RL update, we sample half the batch from the growing buffer for the current task, and half the batch from a buffer containing the experience from all of the prior tasks. Thus, if n−1 skills have been learned, to learn a new n-th skill we pre-load the replay buffer with trajectories from each of the n−1 prior skills and sample half of each training batch from prior data and the other half from the new agent's online experience. This 50-50 sampling method has been used in some prior works, including RLPD [10, 58], in order to initialize online RL with offline data from the same task. However, in our system, we adapt this procedure to bootstrap a behavior from other skills. Since all of the tasks use visual observations, the value function and policy networks can leverage their generalization ability to make use of this prior experience and assist in learning the new task. Note that it is not at all obvious that prior experience like this should be directly useful, as other tasks involve visiting very different states or manipulating different objects. However, if the networks are able to extract, for example, a general understanding of contacts or physical interactions, then we would expect this to accelerate the acquisition of the new task.

Demonstration-based reset-free learning. In between in-hand manipulation trials, the robot may drop the object and need to pick it back up in order to attempt the task again. To automate training, we must also acquire an autonomous pick-up policy to serve as a reset mechanism for the in-hand task, retrieving objects that may have fallen out of the hand during in-hand manipulation. We observe that the reset task is composed of essentially the same reaching, power grasping, and lifting skills across different objects. Unlike the complex manipulation tasks in the in-hand phase, a human operator can provide demonstrations for these skills more conveniently and effectively, overcoming the wide initial state distribution issue that arises because objects can fall anywhere in the environment. As shown in prior work [27], exploration is especially challenging for RL in such settings. Thus, we use behavioral cloning (BC) to train policies for the reset phase from simple demonstrations provided with a 3D mouse and a discrete finger-closing command. Note that no demonstrations are used for the actual in-hand reorientation skill (which is difficult to teleoperate), only for the comparatively simpler reset skill, which only requires picking up the object.

Reward learning via goal images with buffer initialization. Our aim is to enable our system to learn under assumptions that are reasonable outside of the laboratory: the robot should use the sensors and actuators available to it for all parts of the learning process, including using an autonomous reset policy and eschewing ground-truth state estimation (e.g., motion capture) in favor of visual observations that are used to train an end-to-end visuomotor policy.
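Before turning to reward learning, the listing below gives a minimal sketch of the 50/50 buffer-initialized sampling described earlier in this section. It is illustrative only: the buffer class, transition layout, and the `agent.update` call are hypothetical placeholders rather than the authors' implementation.

```python
import numpy as np

class ReplayBuffer:
    """Minimal transition store; each entry is an (obs, action, reward, next_obs, done) tuple."""
    def __init__(self):
        self.storage = []

    def add(self, transition):
        self.storage.append(transition)

    def sample(self, n):
        # Uniformly sample n transitions with replacement.
        idx = np.random.randint(0, len(self.storage), size=n)
        return [self.storage[i] for i in idx]

def preload_prior_data(prior_buffers):
    """Merge replay data saved from the n-1 previously learned skills into one offline buffer."""
    offline = ReplayBuffer()
    for buf in prior_buffers:
        for transition in buf.storage:
            offline.add(transition)
    return offline

def sample_5050_batch(online_buffer, offline_buffer, batch_size=256):
    """Draw half of each training batch from online experience and half from prior-skill data."""
    half = batch_size // 2
    return online_buffer.sample(half) + offline_buffer.sample(batch_size - half)

# Hypothetical usage inside the RL loop:
#   offline_buffer = preload_prior_data(saved_buffers_from_prior_skills)
#   batch = sample_5050_batch(online_buffer, offline_buffer)
#   agent.update(batch)
```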
Training an end-to-end visuomotor policy under these assumptions, however, requires us to be able to evaluate a reward function for the in-hand RL training process from visual observations as well, which is highly non-trivial. We therefore use an automated method that uses goal examples provided by a person (e.g., positioning the object into the desired pose and placing it on the hand) to learn a success classifier, which then provides a reward signal for RL. Thus, for each in-hand manipulation task T_i, we assume a set G_i consisting of a few goal images depicting the successful completion of the task. Naïvely training a classifier and using it as a reward signal is vulnerable to exploitation, as RL can learn to manipulate the object so as to fool the classifier [44]. We therefore adapt VICE [44] to address this challenge, which trains an adversarial discriminator with pre-defined goal images as positives (y = 1) and observation samples from the replay buffer as negatives (y = 0). However, it is necessary to adapt this method to handle our buffer initialization approach, since VICE is by design an on-policy method [44]. We first summarize the VICE algorithm and the regularization techniques we employ to make it practical for vision-based training, and then discuss how we adapt it to handle buffer initialization.

A common issue with adversarial methods such as VICE is instability and mode collapse. We found strong regularization techniques based on mixup [59] and gradient penalty [60] to be essential to stabilize VICE for learning image-based tasks, and these regularizers additionally aid the RL process by causing the classifier to produce a smoother, more shaped reward signal. The VICE classifier predicts log p_θ(g|o_t), the log probability that the observation o_t corresponds to the goal g, which can then be used as a reward signal for RL training. The VICE classifier D_θ, parameterized by θ, is then optimized by minimizing a regularized discriminator loss:

L(x; θ) = λ · L_λ(x; θ) + (1 − λ) · L_{1−λ}(x; θ) + α (‖∇_x D_θ(x)‖_2 − 1)^2    (2)

where the input x is a batch of evenly mixed user-defined goal images and observations collected during training, L_λ and L_{1−λ} are the binary cross-entropy (BCE) loss terms for the mixed-up samples and labels, and α = 10 is the weight for the gradient penalty loss.

Figure 4: Successful rollouts of in-hand object manipulation policies for the three objects: purple 3-pronged object (Pose B), black T-shaped pipe, and blue football. The boxes on the right (outlined in green) are representative user-provided success state examples for each task. Note that the autonomous pickup policy picks up the object in a variety of different poses across episodes, requiring the in-hand manipulation skill to reorient it into the target pose from many starting configurations.

Applying this method with buffer initialization, where prior data from other tasks and objects is included in the replay buffer, requires additional care. Naïvely, if we train a new VICE classifier with user-provided goal images for the current task as positives, then almost all previous experiences from other tasks and objects are likely to be assigned a negligible reward during training, which would not result in beneficial learning signals for the RL agent. Instead, for tasks from other objects in the prior dataset, rewards are labeled using the task-specific VICE classifier that was trained for its own task when that data was originally collected.
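As a concrete, simplified illustration of the regularized classifier objective in Eq. (2), the snippet below sketches a mixup- and gradient-penalty-regularized binary discriminator loss in PyTorch. The classifier interface, tensor shapes, and default hyperparameter values are assumptions for illustration, not the exact training code.

```python
import torch
import torch.nn.functional as F

def vice_classifier_loss(classifier, goal_images, replay_images,
                         gp_weight=10.0, mixup_alpha=1.0):
    """BCE on mixup-blended goal (label 1) and replay (label 0) images,
    plus a gradient penalty that pushes the input-gradient norm toward 1."""
    x = torch.cat([goal_images, replay_images], dim=0)
    y = torch.cat([torch.ones(len(goal_images)), torch.zeros(len(replay_images))]).to(x.device)

    # Mixup: blend pairs of inputs (and, correspondingly, their labels) with a Beta-sampled weight.
    lam = torch.distributions.Beta(mixup_alpha, mixup_alpha).sample().to(x.device)
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]

    x_mix.requires_grad_(True)
    logits = classifier(x_mix).squeeze(-1)  # assumes the classifier outputs one logit per image
    bce = lam * F.binary_cross_entropy_with_logits(logits, y) + \
          (1.0 - lam) * F.binary_cross_entropy_with_logits(logits, y[perm])

    # Gradient penalty on the (mixed) inputs, in the style of WGAN-GP.
    grads = torch.autograd.grad(logits.sum(), x_mix, create_graph=True)[0]
    gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    return bce + gp_weight * gp
```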
The rewards for this prior data are computed and saved before training a new skill, and they remain static throughout training, in contrast to the rewards for online data and offline data from the same object, which depend on the changing VICE classifier.

We hypothesize that initializing the buffer in this way with data from other objects, or from other tasks for the same object, will allow the RL algorithm to learn more quickly by transferring knowledge about finger-object interactions, actuation strategies for the hand, and other structural similarities between the tasks. Of course, the degree to which such transfer can happen depends on the degree of task similarity, but as we will show in the next section, we empirically observe improvement from prior data even when transferring to an entirely new object.

5 Experimental Results

In our experiments, we aim to study the following questions:
1. Can our system learn dexterous manipulation skills autonomously in the real world?
2. Can prior data from one object improve the sample efficiency of learning new skills with the same object?
3. Can data from different objects be transferred to enable faster acquisition of skills with new objects?

We perform experiments with 3 objects of various shapes and colors: a purple 3-pronged object, a black T-shaped pipe, and a blue football. For each manipulation task, we collected a set of 400 success example images, as described in Appendix E. We also provide demonstrations per object for the reset policy to enable in-hand training. We present details of demonstration collection, training procedure, and success rates in Appendix F. Each demonstration takes roughly 30 seconds to collect, totaling less than 2 hours to collect the necessary demonstrations. Please check our website https://sites.google.com/view/reboot-dexterous for videos and further experimental details.

Figure 5: Learning curves showing performance as a function of training time for reorienting the 3-prong object into different poses. Even though both our method and training from scratch eventually reach a success rate of 80%, our method gets there about two times faster.

Figure 6: Bar plot displaying the training time required for each object to reach its respective target performance. Buffer initialization leads to more than a 2x speedup across all of the objects compared to training from scratch.

Figure 7: Pose A for the 3-pronged object is approximately 60° offset from Pose B, with any leg pointing straight to the wall.

Task transfer. To answer Question 1, we evaluated our method on each of the 3 objects with varying amounts of prior data. We first trained a 3-prong object manipulation policy (for a goal pose we call Pose A, shown in Figure 5) without prior data in order to gather data to initialize training for subsequent objects/tasks. We then trained another 3-prong manipulation policy for a different goal pose (Pose B, shown in Figure 4) as well as a T-pipe manipulation policy, both using prior data from the first 3-prong experiment. Finally, we trained a football manipulation policy using the 3-prong and T-pipe experiments as prior data. Our method's success rate is shown in Figure 6, and film strips of various manipulation policy successes during training are shown in Figure 4. Our behavior-cloned reset policy was sufficient as a reset mechanism for in-hand training. Furthermore, our in-hand policies are able to successfully pose the 3-prong and T-pipe objects more than 50% of the time.

To answer Question 2, we consider the Pose B 3-prong experiment described previously.
Since reorienting to both Pose A and Pose B uses the same 3-prong object, we expect the task difficulty to be similar for both poses. A comparison between training Pose A from scratch and training Pose B with a pre-loaded replay buffer is shown in Figure 5. The Pose B experiment with our method outperforms the Pose A experiment trained from scratch in terms of training time. Our method reaches 80% success in around 6 hours, while training from scratch yields poor performance at that point. It takes more than 10 hours for learning from scratch to achieve a comparable success rate. This suggests that our method can significantly reduce training time when using prior data from the same object for a new manipulation task.

Object transfer. To answer Question 3, we consider the T-pipe and football experiments described above. We compare our method to learning from scratch without prior data and display the results in Figure 8. Our method with prior data from other objects is significantly faster than learning from scratch for both objects. For the T-pipe experiments, our method achieves a 60% success rate at 6 hours compared to 13 hours for training from scratch. Furthermore, the from-scratch runs have absolutely no success in evaluation prior to 5 hours of training, while our method achieves some initial success as early as 1 hour into training. The football task appears to be significantly more challenging than the 3-prong and T-pipe tasks, as shown in Figure 8, with no method performing above a 30% success rate. However, our method still outperforms learning from scratch, achieving a 30% success rate with 5 hours of training; the from-scratch runs required at least 16 hours of training to achieve a lower 20% success rate.

Figure 8: Learning curves showing performance as a function of training time for the T-pipe and football objects. In both cases, buffer initialization is about two times faster than learning from scratch, though the football object in particular is harder to reorient for all methods.

Ablation Studies. Finally, we conduct ablation experiments in both simulation and the real world to compare the effects of varying the initial buffer size, the order in which the buffer is initialized, transfer learning from a trained policy, and training for an extended period of time. Results and in-depth analysis are provided in Appendix C and Appendix D.

6 Discussion, Limitations, and Future Work

We presented a system for learning in-hand manipulation skills directly by training in the real world with RL, without simulation, and using only onboard sensing from encoders and cameras. Our system enables sample-efficient and autonomous learning by initializing the replay buffer of an efficient online RL method with data from other tasks and even other objects. We extend adversarially learned classifier-based rewards to this setting to make it possible for users to define tasks with a collection of goal images, and implement automated resets using an imitation-learned reset policy, providing a pipeline for fully autonomous training. The complete system avoids any strong instrumentation assumptions, using the robot's own sensors and actuators for every part of training, providing a proof-of-concept for an efficient real-world RL system that could operate outside of laboratory conditions.

Limitations: Our experimental evaluation does have a number of limitations.
Although we show that reusing data from one or two prior tasks improves learning efficiency, a more practical general-purpose robotic system might use data from tens, hundreds, or even thousands of skills. Evaluating the potential for such methods at scale might require additional technical innovations, as it is unclear whether buffer initialization with very large datasets will be as effective. Additionally, our evaluation is limited to in-hand reorientation skills. While such skills exercise the robot's dexterity and physical capabilities, many other behaviors that require forceful interaction with the environment, and other manipulation skills, could require a different reset process or a different method for reward specification (for example, to handle occlusions). Exploring these more diverse skills is an exciting direction for future work. The current manipulation setup involves training with fairly robust objects for which fragility or wear and tear are not major concerns. As we move to more dexterous tasks, a more directed approach may be required to handle fragile objects or perform tasks that require force-sensitive interaction. Studying how to integrate our system with tactile sensing is another exciting avenue to explore.

Acknowledgments

This research was partly supported by the Office of Naval Research (N00014-20-1-2383) and ARO W911NF-21-1-0097. We would like to thank Ilya Kostrikov for the initial versions of the simulator and codebase, and everyone at RAIL for their constructive feedback.

References

[1] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 23–30. IEEE, 2017.

[2] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3803–3810. IEEE, 2018.

[3] OpenAI, M. Andrychowicz, B. Baker, M. Chociej, R. Józefowicz, B. McGrew, J. W. Pachocki, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, and W. Zaremba. Learning dexterous in-hand manipulation. CoRR, abs/1808.00177, 2018. URL http://arxiv.org/abs/1808.00177.

[4] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.

[5] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learning methods. ACM Computing Surveys (CSUR), 50(2):1–35, 2017.

[6] T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel, and J. Peters. An algorithmic perspective on imitation learning. arXiv preprint arXiv:1811.06711, 2018.

[7] V. Kumar, A. Gupta, E. Todorov, and S. Levine. Learning dexterous manipulation policies from experience and imitation. arXiv preprint arXiv:1611.05095, 2016.

[8] H. Zhu, A. Gupta, A. Rajeswaran, S. Levine, and V. Kumar. Dexterous manipulation with deep reinforcement learning: Efficient, general, and low-cost. In 2019 International Conference on Robotics and Automation (ICRA), pages 3651–3657. IEEE, 2019.

[9] A. Nagabandi, K. Konolige, S. Levine, and V. Kumar. Deep dynamics models for learning dexterous manipulation. In Conference on Robot Learning, pages 1101–1112, 2020.

[10] P. J. Ball, L. Smith, I. Kostrikov, and S. Levine. Efficient online reinforcement learning with offline data, 2023.

[11] Z. Xu, V.
Kumar, and E. Todorov. A low-cost and modular, 20-dof anthropomorphic robotichand: Design, actuation and modeling. In 2013 13th IEEE-RAS International Conference onHumanoid Robots (Humanoids) , pages 368–375. IEEE, 2013.[12] R. Deimel and O. Brock. A novel type of compliant and underactuated robotic hand for dex-terous grasping. The International Journal of Robotics Research , 35(1-3):161–185, 2016.[13] R. Bhirangi, A. DeFranco, J. Adkins, C. Majidi, A. Gupta, T. Hellebrekers, and V . Kumar. Allthe feels: A dexterous hand with large area sensing, 2023.[14] I. Mordatch, Z. Popovi ́c, and E. Todorov. Contact-invariant optimization for hand manipula-tion. In Proceedings of the ACM SIGGRAPH/Eurographics symposium on computer anima-tion, pages 137–144, 2012.[15] V . Kumar, Y . Tassa, T. Erez, and E. Todorov. Real-time behaviour synthesis for dynamic hand-manipulation. In 2014 IEEE International Conference on Robotics and Automation (ICRA) ,pages 6808–6815. IEEE, 2014.9[16] J. Kober and J. Peters. Policy search for motor primitives in robotics. Advances in neuralinformation processing systems , 21, 2008.[17] M. Posa, C. Cantu, and R. Tedrake. A direct method for trajectory optimization of rigid bodiesthrough contact. The International Journal of Robotics Research , 33(1):69–81, 2014.[18] A. Rajeswaran, V . Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine.Learning complex dexterous manipulation with deep reinforcement learning and demonstra-tions. arXiv preprint arXiv:1709.10087 , 2017.[19] D. Jain, A. Li, S. Singhal, A. Rajeswaran, V . Kumar, and E. Todorov. Learning deep visuomo-tor policies for dexterous hand manipulation. In 2019 International Conference on Roboticsand Automation (ICRA) , pages 3636–3643. IEEE, 2019.[20] C. Zeng, S. Li, Y . Jiang, Q. Li, Z. Chen, C. Yang, and J. Zhang. Learning compliant graspingand manipulation by teleoperation with adaptive force control, 2021.[21] C. Zeng, S. Li, Z. Chen, C. Yang, F. Sun, and J. Zhang. Multifingered robot hand compliantmanipulation based on vision-based demonstration and adaptive force control. IEEE Trans-actions on Neural Networks and Learning Systems , pages 1–12, 2022. doi:10.1109/TNNLS.2022.3184258.[22] S. P. Arunachalam, I. G ̈uzey, S. Chintala, and L. Pinto. Holo-dex: Teaching dexterity withimmersive mixed reality, 2022.[23] K. Lowrey, S. Kolev, J. Dao, A. Rajeswaran, and E. Todorov. Reinforcement learning fornon-prehensile manipulation: Transfer from simulation to physical system. In 2018 IEEEInternational Conference on Simulation, Modeling, and Programming for Autonomous Robots(SIMPAR) , pages 35–42. IEEE, 2018.[24] B. Wu, I. Akinola, J. Varley, and P. Allen. Mat: Multi-fingered adaptive tactile grasping viadeep reinforcement learning, 2019.[25] A. Allshire, M. Mittal, V . Lodaya, V . Makoviychuk, D. Makoviichuk, F. Widmaier,M. W ̈uthrich, S. Bauer, A. Handa, and A. Garg. Transferring dexterous manipulation fromgpu simulation to a remote real-world trifinger. arXiv preprint arXiv:2108.09779 , 2021.[26] H. Van Hoof, T. Hermans, G. Neumann, and J. Peters. Learning robot in-hand manipulationwith tactile features. In 2015 IEEE-RAS 15th International Conference on Humanoid Robots(Humanoids) , pages 121–127. IEEE, 2015.[27] H. Zhu, J. Yu, A. Gupta, D. Shah, K. Hartikainen, A. Singh, V . Kumar, and S. Levine. Theingredients of real-world robotic reinforcement learning. arXiv preprint arXiv:2004.12570 ,2020.[28] A. Gupta, J. Yu, T. Z. Zhao, V . Kumar, A. Rovinsky, K. Xu, T. Devlin, and S. Levine. 
Reset-freereinforcement learning via multi-task learning: Learning dexterous manipulation behaviorswithout human intervention. arXiv preprint arXiv:2104.11203 , 2021.[29] K. Xu, Z. Hu, R. Doshi, A. Rovinsky, V . Kumar, A. Gupta, and S. Levine. Dexterous manipu-lation from images: Autonomous real-world rl via substep guidance, 2022.[30] A. Gupta, C. Eppner, S. Levine, and P. Abbeel. Learning dexterous manipulation for a softrobotic hand from human demonstrations. In 2016 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 3786–3793. IEEE, 2016.[31] P. Mandikal and K. Grauman. Learning dexterous grasping with object-centric visual affor-dances. arXiv preprint arXiv:2009.01439 , 2020.10[32] I. Akinola, J. Varley, and D. Kalashnikov. Learning precise 3d manipulation from multipleuncalibrated cameras. In 2020 IEEE International Conference on Robotics and Automation(ICRA) , pages 4616–4622. IEEE, 2020.[33] S. Ross, N. Melik-Barkhudarov, K. S. Shankar, A. Wendel, D. Dey, J. A. Bagnell, andM. Hebert. Learning monocular reactive UA V control in cluttered natural environments. In2013 IEEE International Conference on Robotics and Automation , 2013. doi:10.1109/ICRA.2013.6630809.[34] S. Reddy, A. D. Dragan, and S. Levine. Sqil: Imitation learning via reinforcement learningwith sparse rewards. arXiv preprint arXiv:1905.11108 , 2019.[35] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforce-ment learning. In AAAI . AAAI Press, 2008. ISBN 978-1-57735-368-3.[36] M. Wulfmeier, P. Ondruska, and I. Posner. Maximum entropy deep inverse reinforcementlearning. arXiv preprint arXiv:1507.04888 , 2015.[37] N. D. Ratliff, J. A. Bagnell, and M. Zinkevich. Maximum margin planning. In MachineLearning, Proceedings of the Twenty-Third International Conference ICML , 2006. doi:10.1145/1143844.1143936.[38] D. P. Losey and M. K. O’Malley. Including uncertainty when learning from human corrections.InConference on Robot Learning , pages 123–132. PMLR, 2018.[39] Y . Cui and S. Niekum. Active reward learning from critiques. In 2018 IEEE internationalconference on robotics and automation (ICRA) , pages 6907–6914. IEEE, 2018.[40] J. D. Co-Reyes, A. Gupta, S. Sanjeev, N. Altieri, J. DeNero, P. Abbeel, and S. Levine.Guiding policies with language via meta-learning. CoRR , abs/1811.07882, 2018. URLhttp://arxiv.org/abs/1811.07882 .[41] V . Myers, E. Biyik, N. Anari, and D. Sadigh. Learning multimodal rewards from rankings. InConference on Robot Learning , pages 342–352. PMLR, 2022.[42] D. S. Brown, W. Goo, and S. Niekum. Better-than-demonstrator imitation learning viaautomatically-ranked demonstrations. In Conference on robot learning , pages 330–359.PMLR, 2020.[43] I. Kostrikov, D. Yarats, and R. Fergus. Image augmentation is all you need: Regularizing deepreinforcement learning from pixels. arXiv preprint arXiv:2004.13649 , 2020.[44] J. Fu, A. Singh, D. Ghosh, L. Yang, and S. Levine. Variational inverse control with events: Ageneral framework for data-driven reward definition. arXiv preprint arXiv:1805.11686 , 2018.[45] K. Zolna, A. Novikov, K. Konyushkova, C. Gulcehre, Z. Wang, Y . Aytar, M. Denil, N. de Fre-itas, and S. Reed. Offline learning from demonstrations and unlabeled experience. arXivpreprint arXiv:2011.13885 , 2020.[46] B. Akgun, M. Cakmak, J. W. Yoo, and A. L. Thomaz. Trajectories and keyframes for kines-thetic teaching: A human-robot interaction perspective. 
In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 391–398, 2012.

[47] V. Villani, F. Pini, F. Leali, and C. Secchi. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics, 55:248–266, 2018.

[48] L. Smith, J. C. Kew, T. Li, L. Luu, X. B. Peng, S. Ha, J. Tan, and S. Levine. Learning and adapting agile locomotion skills by transferring experience, 2023.

[49] A. Sharma, K. Xu, N. Sardana, A. Gupta, K. Hausman, S. Levine, and C. Finn. Autonomous reinforcement learning: Formalism and benchmarking, 2022.

[50] B. Eysenbach, S. Gu, J. Ibarz, and S. Levine. Leave no trace: Learning to reset for safe and autonomous reinforcement learning. arXiv preprint arXiv:1711.06782, 2017.

[51] K. Xu, S. Verma, C. Finn, and S. Levine. Continual learning of control primitives: Skill discovery via reset-games. arXiv preprint arXiv:2011.05286, 2020.

[52] A. Sharma, A. Gupta, S. Levine, K. Hausman, and C. Finn. Autonomous reinforcement learning via subgoal curricula, 2021.

[53] W. Han, S. Levine, and P. Abbeel. Learning compound multi-step controllers under unknown dynamics. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6435–6442. IEEE, 2015.

[54] A. Sharma, A. M. Ahmed, R. Ahmad, and C. Finn. Self-improving robots: End-to-end autonomous visuomotor reinforcement learning, 2023.

[55] A. Mandlekar, D. Xu, R. Martín-Martín, Y. Zhu, L. Fei-Fei, and S. Savarese. Human-in-the-loop imitation learning using remote teleoperation, 2020.

[56] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. 2018.

[57] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.

[58] Y. Song, Y. Zhou, A. Sekhari, J. A. Bagnell, A. Krishnamurthy, and W. Sun. Hybrid RL: Using both offline and online data can make RL efficient. arXiv preprint arXiv:2210.06718, 2022.

[59] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization, 2018.

[60] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. Improved training of Wasserstein GANs, 2017.

Appendix

A Evaluation Success Criteria

We evaluated the trained policy success rate every 12000 steps for the three real-world in-hand manipulation tasks considered in the paper. The success criteria are defined as follows, to stay consistent with the goal images collected to learn the VICE classifiers:

Task                         Success criterion
3-Prong Object, Pose A & B   1{ |θ_{any leg} − θ_{goal pose}| ≤ 5° }
T-Shaped Pipe                1{ |θ_{vertical leg} − θ_{goal pose}| ≤ 5° }
Football                     1{ |θ_{long axis} − θ_{goal pose}| ≤ 5° }

1. 3-Pronged Object:
   • Pose A is successful if any leg is pointing straight forward (to the wall) with less than or equal to 5 degrees deviation.
   • Pose B is successful if any leg is pointing straight backward (to the robot) with less than or equal to 5 degrees deviation.
2. T-Pipe: The T-Pipe is successful if the vertical pipe is pointing straight backward (to the robot) with less than or equal to 5 degrees deviation.
3.
Toy Football: The toy football is successful if its long axis is pointing straight to both the wall and the robot with less than or equal to 5 degrees deviation.

For the reset policies, the success criteria are intuitively defined as whether the hand grasps and picks up the object in a ready-to-manipulate pose, such that in-hand training can begin without the object falling out of the palm immediately.

B Algorithm Details

In this section, we describe details related to our RL learning algorithms and our imitation learning algorithm, and also provide the hyperparameters used in experiments for each method.

Algorithm 1: REBOOT (REuse Data For BOOTstrapping Efficient Real-World Dexterous Manipulation)
1: Given: A replay buffer D with prior data, a set of reset demos D_reset, a set of goal images G, and a start state s_0.
2: Initialize an empty replay buffer B, RLPD (SAC) [10] with policy π_ψ and value function Q_ψ, a reset policy π_φ, and VICE classifier [44] D_θ.
3: Train the reset policy π_φ using D_reset via behavior cloning.
4: for iteration j = 1, 2, ..., T do
5:   Execute π_φ to perform a reset.
6:   Execute π_ψ in the environment, storing data in the online replay buffer B.
7:   Update RLPD's policy and value functions π_ψ, Q_ψ using a 50/50 batch of samples from B and D, assigning rewards based on D_θ, using SAC [57].
8:   Update the VICE classifier D_θ using samples from B and goal images from G, using Eq. 2.
9: end for

Shared RL hyperparameters:
- Shared image encoder for DrQ: MobileNetV3-Small-100 with ImageNet-1K weights, learned spatial embedding
- Actor architecture: FC(256, 256) → FC(256, 19)
- Critic architecture: REDQ with 10 ensembles, FC(256, 256) → FC(256, 1)
- Optimizer: Adam
- Learning rate: 3e-4
- Discount γ: 0.99
- REBOOT UTD: 4
- AVAIL UTD: 1
- Target update frequency: 1
- Actor update frequency: 1
- Batch size: 256
- VICE batch size: 512 (256 goals + 256 replay samples)

VICE classifier hyperparameters:
- Optimizer: Adam
- Learning rate: 3e-4
- Classifier steps per iteration: 1
- Mixup augmentation α: 1
- Label smoothing α: 0.2
- Gradient penalty weight λ: 10
- VICE update interval: per episode
- Classifier architecture: MobileNetV3-Small-100 with ImageNet-1K weights, learned spatial embedding → Dropout(0.5) → FC(256, 256) → LeakyReLU() → Dropout(0.1) → FC(1)

C Ablation Studies

Order of tasks to initialize the buffer. To investigate whether the ordering of tasks used to initialize the replay buffer impacts the learning performance of our method, i.e., whether bootstrapping on one object's or task's data leads to better performance than others, we designed and ran two additional experiments. The experimental setup follows our method in Figure 5a, where the robot autonomously learns to reorient the 3-prong object to Pose B. In the paper, we experimented with and reported the performance using replay experiences from Pose A's training to bootstrap Pose B. Then we bootstrapped the learning of the T-pipe and football using the 3-prong object's data.

We now consider bootstrapping the 3-prong object's Pose B learning with replay data from the T-pipe and football training, while keeping the amount of prior data, replay sampling ratio, and UTD the same. We report the evaluation success rate every 12000 steps.

Figure 9: Ablation studying the effect of initializing the replay buffer with prior experience from different objects.
Initializing with experience from the same object results in the best performance, but initializing using football experience provides a similar benefit.

For the same task (reorienting the 3-prong object) under the same training hours, bootstrapping from the same object but different task data yields the best performance, initializing with football task data achieves similar results, T-pipe data's performance follows, and no buffer initialization performed the worst. We note two potentially significant differences between these 3 objects:
1. The T-pipe is fully black colored, while the 3-prong object and football are more vividly colored.
2. The in-hand dexterous motions required to solve the tasks are similar between the 3-prong object and the football (planar rotation) but different from the T-pipe (vertical flipping). This can be visualized better on the project website (https://sites.google.com/view/reboot-dexterous).

Initial buffer size. To investigate how different initial replay buffer sizes affect performance, we performed additional real-world experiments for the 3-prong object reorientation task. The experimental setup follows our method in Figure 5a, where the robot is tasked with autonomously learning to reorient the 3-prong object to Pose B using buffer initialization from reorienting to Pose A. We compare initializing the buffer with 60k vs. 30k randomly selected transitions and apply our method.

Figure 10: Ablation studying the effect of reducing the amount of data used for buffer initialization (30k vs. 60k transitions pre-loaded into the replay buffer). Our result demonstrates that there is some benefit to pre-loading with less data, but the 60k setting still learns considerably faster.

The 30k and 60k transitions used to initialize the replay buffers for these experiments were both sampled uniformly from the same Pose A replay buffer (168k transitions), and both experiments use a 50/50 sampling ratio between prior and new data. However, the run initialized with 60k transitions contains more diverse replay experiences, accelerating the online sample efficiency while achieving a higher success rate under the same training time.

Comparison to transfer learning. To compare whether our method is more effective at learning real-world dexterous tasks than alternative approaches such as transfer learning, we ran an additional experiment with the 3-prong object task (Pose B) by transferring a baseline policy (Pose A, UTD=1, no initialization) that was trained for a longer period of time (21 hours, 70% evaluation success rate). We initialized the training by reloading the actor and critic network parameters with the trained checkpoints from the baseline policy and finetuned with the same experimental setup and no replay buffer initialization. We report the evaluation success rate here.

Figure 11: Ablation studying the effect of fine-tuning a previously trained policy for a different goal pose with the same object, rather than pre-loading the replay buffer. We find that pre-loading the replay buffer improves sample efficiency significantly more than fine-tuning an existing policy.

While the policy transfer + finetuning approach outperformed the baseline that learns from scratch under the same training time, our method with buffer initialization still achieves the highest success rate.

Comparison of all ablations. Here we visualize a summary of all ablations and comparisons in one plot.
Our method is the most sample-efficient among all experiments on reorienting the 3-prong object.

Figure 12: Evaluation plots showing the performance of checkpoints at different points in training for a number of ablation experiments, all learning to reorient the 3-pronged object into Pose B. This figure compares the initial replay buffer size, the data used to initialize the replay buffer, and policy initialization ablations against our method. Our method, initialized with 60k transitions from a previous experiment with the same object, clearly learns faster than the ablations.

D Longer Training in Simulation

Simulation Environment: For testing and iterating on our algorithms, we developed a simulation replica of our real robot setup using MuJoCo and dm-control. This simulation model consists of the same 16-DoF 4-fingered DHand attached to a 6-DoF Sawyer robot arm as the one built in the real world.

The simulation task considered here is to reposition the 3-pronged object from anywhere on the tabletop back to the center. In this environment, the robot correctly solving the task corresponds to a ground-truth episode reward of -20.

Figure 13: Ground-truth reward vs. training steps for our method and a baseline without buffer initialization in simulation, demonstrating that the performance of our method remains stable after training for a long period of time.

Results: Results for simulated experiments are shown in Figure 13. The red line represents the average eval performance of our method across 4 seeds using buffer initialization (UTD=4, 60k-transition initialization, same as the real world), while the brown line represents the average eval performance of the baseline method (UTD=1) without buffer initialization across 4 seeds. Both lines are smoothed using an EMA of 0.9. Our method is notably more sample-efficient at solving the task than the baseline method and is more stable at convergence than the baseline when trained up to 500k steps.

E Goal Images Collection Procedure

For each task considered in the experiments section, we collect a set of 400 goal images by placing the object in the palm of the robot hand in the desired pose, closing the fingers for 1 second, and executing random actions for 1.5 seconds. We repeat this procedure multiple times, collecting 25 goal images per iteration until we reach 400 total images.

F Behavior-Cloned Reset Policy Details

We began demonstration collection with the 3-prong object, for which we collected 160 demonstrations for the reset policy. We provided only 30 additional demonstrations per new object, for a total of 220 reset demonstrations across all objects. This was sufficient to train a universal reset policy for all objects with a high enough success rate to enable in-hand training.

In most cases, our behavior-cloned reset policy is capable of resetting the environment, or at least of making contact with the object, but there are a few states where the policy is unable to pick up or perturb the object in any way. In order to avoid getting stuck attempting unsuccessful resets in these states, we train two different reset policies. One is trained with reset demonstrations for multiple objects, while the other is trained with demonstrations for only the current experiment's object. For example, when running an experiment with the football, one policy is trained using reset demonstrations for the 3-pronged object, the T-shaped pipe, and the football, while the other is trained only with demonstrations for the football.
At the start of each training episode, we select the multi-object reset policy with an 80% probability and the single-object reset policy with a 20% probability. Since the policies behave differently due to being trained on different data, states in which one policy might get stuck are unlikely to cause the same issue for the other policy, which enables training to continue even if one of the two policies is sub-optimal.

Here we report the success rate of each reset policy, measured when performing evaluations for the in-hand policies.

Table 1: Success rate of the reset policies.
Object         3-Pronged Object   T-Pipe   Football
Success Rate   0.608              0.667    0.367

The poor success rate on the toy football could be attributed to its reset success rate. While the 3-pronged object and the T-pipe are more challenging to reorient in-hand than the toy football, due to more complex geometries and more contacts during manipulation, the toy football is harder for the robot hand to pick up due to its slim and small shape. With nearly half the reset success rate of the other two objects, the football in-hand training had fewer opportunities to practice meaningfully. Hence, the football experiment with our method is able to achieve a 30% success rate, compared to the near-zero success of runs without prior data initialization under the same amount of training time.
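As a small illustration of the two-policy reset scheme described above, the following sketch shows the per-episode selection step; the function and policy names are hypothetical.

```python
import random

def select_reset_policy(multi_object_policy, single_object_policy, p_multi=0.8):
    """Pick the multi-object BC reset policy with probability p_multi (here 80%) and the
    object-specific one otherwise, so that a state that stalls one policy is unlikely to
    stall the other across consecutive reset attempts."""
    return multi_object_policy if random.random() < p_multi else single_object_policy

# Hypothetical use at the start of each training episode:
#   reset_policy = select_reset_policy(bc_reset_multi, bc_reset_single)
#   execute_reset(reset_policy)   # placeholder for running the BC pickup behavior
```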
PlayFusion: Skill Acquisition via Diffusion from Language-Annotated Play

Lili Chen* Shikhar Bahl* Deepak Pathak
Carnegie Mellon University

Abstract: Learning from unstructured and uncurated data has become the dominant paradigm for generative approaches in language and vision. Such unstructured and unguided behavior data, commonly known as play, is also easier to collect in robotics but much more difficult to learn from due to its inherently multimodal, noisy, and suboptimal nature. In this paper, we study this problem of learning goal-directed skill policies from unstructured play data which is labeled with language in hindsight. Specifically, we leverage advances in diffusion models to learn a multi-task diffusion model to extract robotic skills from play data. Using a conditional denoising diffusion process in the space of states and actions, we can gracefully handle the complexity and multimodality of play data and generate diverse and interesting robot behaviors. To make diffusion models more useful for skill learning, we encourage robotic agents to acquire a vocabulary of skills by introducing discrete bottlenecks into the conditional behavior generation process. In our experiments, we demonstrate the effectiveness of our approach across a wide variety of environments in both simulation and the real world. Video results are available at https://play-fusion.github.io.

Keywords: Diffusion Models, Learning from Play, Language-Driven Robotics

1 Introduction

Humans reuse past experience via a broad repertoire of skills learned through experience that allows us to quickly solve new tasks and adapt across environments. For example, if one knows how to operate and load a dishwasher, many of the skills (e.g., opening the articulated door, adjusting the rack, putting objects in) will transfer seamlessly. How to learn such skills for robots, and from what kind of data, is a long-standing research question. Robotic skill abstraction has been studied as a way to transfer knowledge between environments and tasks [1, 2, 3]. It has been common to use primitives as actions in the options framework [4, 5], which are often hand-engineered [6, 7, 8, 9, 10, 11] or learned via imitation [12, 13, 14]. These allow for much more sample-efficient control but require knowledge of the task and need to be tuned for new settings. On the other hand, there have been efforts to automatically discover skills using latent variable models [15, 16, 17, 18, 19, 20, 21, 22]. While they can work in any setting, such models are often extremely data-hungry and have difficulty scaling to the real world due to the data quality at hand.

As a result, real-world paradigms are based on imitation or offline reinforcement learning (RL), but both of these require several assumptions about the datasets. In imitation learning, human teleoperators must perform tasks near-perfectly, reset the robot to some initial state, perform the task near-perfectly again, and repeat several times. In offline RL, data is assumed to contain reward labels, which is impractical in many real-world setups where reward engineering is cumbersome. In contrast, it is much easier to collect uncurated data from human teleoperators if they are instructed only to explore, resulting in play data [21, 22, 23]. Learning from play (LfP) has emerged as a viable alternative to traditional data collection methods for behavior generation.
It offers several advantages: (1) it is efficient, because large datasets of play can be collected without the need for setting up and executing perfect demonstrations, and (2) the data collected is rich and diverse, because it contains a broad distribution of behavior ranging from completions of complex tasks to random meandering around the environment. An important quality of such data is that it is grounded in some semantic goal that the "player" is aiming to achieve. We believe a simple abstraction for this is language instructions, which can describe almost any play trajectory.

*Equal contribution. 7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Figure 1: Across multiple real-world and simulated robotic settings, we show that our model can extract semantically meaningful skills (e.g., "use spatula", "pour", "open", "place toast", "pick up", "open the BBQ grill", "remove bread from toaster", "pick up the knife") from language-annotated play data. Such data is highly multimodal and offers no optimality guarantees. Video results of PlayFusion are available at https://play-fusion.github.io.

A major challenge in learning from play is that the data is highly multimodal, i.e., there are many different ways to achieve a specific goal, and given a sample from the play data, there are many different goals that could have generated it. One popular way to handle highly multimodal data is by modeling the full distribution via generative models. In recent years, there has been remarkable progress in large generative models [24, 25, 26, 27], especially in the class of diffusion models [28, 29], which have been shown to generate high-resolution images – a property well suited for vision-based robotic control. In fact, diffusion models have been shown to be effective in capturing complex, continuous actions [30, 31, 29, 32, 33] in the context of robotics. However, these diffusion-model-based approaches have not yet been empirically shown to work on unstructured data. We argue that the ability of diffusion models to fully capture complex data, paired with their potential for text-driven generation, makes them good candidates for learning from language-annotated play data.

One additional consideration is that, in reality, humans only deal with a few skills. Almost every manipulation task involves some grasping and some post-grasp movement. We believe that learning discrete skills will not only make the whole process more efficient but will also allow interpolation between skills and generalization to new tasks. To address this, we propose PlayFusion, a diffusion model which can learn from language-annotated play data via discrete bottlenecks. We maintain the multimodal properties of our current system while allowing for a more discrete representation of skills. Empirically, we show that our method outperforms state-of-the-art approaches on six different environments: three challenging real-world manipulation settings as well as the CALVIN [34], Franka Kitchen [22], and Ravens [35, 36] simulation benchmarks.

2 Related Work

Goal and Language Conditioned Skill Learning One method of specifying the task is via goal-conditioned learning, often by using the actual achieved last state as the goal [37, 38, 39, 40]. There is also recent work on using rewards to condition robot behavior [41], but this requires a reward-labeled dataset, which makes stronger assumptions than play data. Furthermore, there is a large body of work on language-conditioned learning [42, 36, 43, 44, 45, 46, 47], which specifies the task through language instructions.
Instead of conditioning the policy on fully labeled and curated data, we take advantage of unstructured play data which is annotated with language in hindsight.

Figure 2: Overview of how PlayFusion extracts useful skills from language-annotated play by leveraging discrete bottlenecks in both the language embedding and the diffusion model U-Net. We generate robot trajectories via an iterative denoising process conditioned on language and current state.

Learning from Play Unlike demonstrations, play data is not assumed to be optimal for any specific task, as it is collected by human teleoperators who are instructed only to explore. Play-LMP and MCIL [21, 48] generate behaviors by learning motor primitives from play data using a VAE [49, 50]. RIL [22] is a hierarchical imitation learning approach, and C-BeT [23] generates behaviors using a transformer-based policy and leverages action discretization to handle multimodality. LAD [51] incorporates diffusion for learning from play, but keeps several components of VAE-based approaches for encoding latent plans; we forgo those elements completely.

Behavior Modeling with Generative Models A promising architecture for behavior modeling with generative models is the diffusion probabilistic model [28, 52, 53, 54]. Diffuser [30], Decision Diffuser [31], Diffusion-QL [29] and IDQL [32] apply diffusion models to the offline reinforcement learning (RL) problem. In real-world robotic applications, Diffusion Policy [33] demonstrated strong results in visuomotor policy learning from demonstrations. Different from these works, we learn from play data containing semantic labels instead of offline RL datasets or expert demonstrations. Some approaches [55, 56] incorporate diffusion in robotics but not for generating low-level actions.

Discrete control A key challenge in robot learning is the exponentially large, continuous action space. Option- or skill-based learning is appealing as it can circumvent this problem and allow the agent to learn in a structured, countable action space [57, 58, 59, 60]. Learned action discretization [52, 23] has allowed approaches to scale to complex tasks. C-BeT [23] applies real-world robotic control with transformers [41, 61, 62] to the goal-conditioned setting; [63] train a dynamics model over discrete latent states. We leverage the discrete properties of VQ-VAEs and their natural connection to language-labeled skills.

3 Background

Denoising Diffusion Probabilistic Models (DDPMs) DDPMs [28] model the output generation process as a denoising process, which is often referred to as Stochastic Langevin Dynamics. To generate the output, the DDPM starts by sampling x_K from a Gaussian noise distribution. It then performs a series of denoising iterations, totaling K iterations, to generate a sequence of intermediate outputs, x_k, x_{k−1}, ..., x_0. This iterative process continues until a noise-free output x_0 is produced. The denoising process is governed by the following equation:

x_{k−1} = α( x_k − γ ε_θ(x_k, k) + N(0, σ^2 I) )    (1)

Here, ε_θ represents the noise prediction network with learnable parameters θ, and N(0, σ^2 I) denotes the Gaussian noise added at each iteration. This equation is used to generate intermediate outputs with gradually decreasing noise levels until a noise-free output is obtained. To train the DDPM, the process begins by randomly selecting x_0 from the training dataset.
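A minimal sketch of this iterative denoising loop is shown below. The noise-prediction network interface is a placeholder, and in practice α, γ, and σ are per-iteration quantities determined by a noise schedule rather than constants.

```python
import torch

@torch.no_grad()
def ddpm_sample(eps_model, shape, K, alpha, gamma, sigma):
    """Iteratively denoise a Gaussian sample following Eq. (1):
    x_{k-1} = alpha * (x_k - gamma * eps_model(x_k, k) + N(0, sigma^2 I))."""
    x = torch.randn(shape)  # x_K drawn from a Gaussian noise distribution
    for k in reversed(range(1, K + 1)):
        # Additive noise term; commonly omitted at the final step.
        noise = sigma * torch.randn(shape) if k > 1 else torch.zeros(shape)
        x = alpha * (x - gamma * eps_model(x, k) + noise)
    return x  # approximately noise-free x_0
```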
For each sampled x_0, a denoising iteration k is randomly chosen, and a noise ε_k is sampled with the appropriate variance for the selected iteration. The noise prediction network is then trained to predict the noise by minimizing:

L = ||ε_k − ε_θ(x_0 + ε_k, k)||^2    (2)

Discrete Representations We utilize VQ-VAE [64]-inspired models in PlayFusion, as they can provide a way to discretize the skill space. Given an input x, a VQ-VAE trains an encoder E to predict a latent E(x) = z and maintains a codebook of discrete latent codes e. The VQ layer selects j as arg min_i ||z − e_i||, finding the closest code to the embedding, which is used to reconstruct x. The training loss is

L_VQVAE = L_recon(x, D(e_j)) + ||z − sg(e_j)||^2 + ||sg(z) − e_j||^2    (3)

where D is the VQ-VAE decoder and sg(·) denotes the stop-gradient operator. The reconstruction loss is augmented with a quantization loss, bringing the chosen codebook embedding vectors e_j toward the encoder outputs in order to train the codebook, as well as a loss to encourage the encoder to "commit" to one of the embeddings.

Learning from Play Data (LfP) In the LfP setting, we are given a dataset {(s, a)} ∈ S × A. There are no assumptions about the tasks performed in these sequences or the optimality of the data collection method. Similar to the formulation of [23], the goal is to learn a policy π : S × S → A whose input is the current state s_t and a goal g = s_T. In some cases (including ours), the goals are instead described via language annotations.

4 PlayFusion: Discrete Diffusion for Language-Annotated Play

Humans do not think about low-level control when performing everyday tasks. Our understanding of skills like door opening or picking up objects has already been grounded in countless prior experiences, and we can comfortably perform these in new settings. Skills are acquired through our prior experiences – successes, failures, and everything in between. PlayFusion focuses on learning these skills through language-annotated play data.

However, learning from play data is still difficult, as continuous control skills are not easy to identify, due to several challenges: (1) data can come from multiple modalities, as there are many actions that the robot could have taken at any point, (2) we want the model to acquire a vocabulary of meaningful skills, and (3) we want to generalize beyond the training data and have the model transfer skills to new settings. To address these challenges, we leverage recent advances in large-scale text-to-image diffusion models. Such models [33, 30, 29] can inherently model multimodality via their iterative denoising process. To effectively transfer skills to new settings, we propose a modified diffusion model with the ability to discretize learned behavior from language-annotated data. Figure 2 shows an overview of our method.

4.1 Language Conditioned Play Data

Our setup consists of language-conditioned play data [21], D_play = {(s_t^(i), a_t^(i))}_{i=1}^{N}: long sequences of robot behavior data containing many kinds of behaviors, collected by human operators instructed to perform interesting tasks. In this setting, we assume that there is some optimality to the data, i.e., a_t ∼ F(s_t, z_g), where z_g is a latent variable that models the intention of the operator. We thus leverage language labels to estimate z_g. Given a sequence τ = {s_i, a_i}_{t=k}^{H}, we label τ with an instruction l, which is passed into a language model [65] g_lang; we refer to the resulting embedding as z_l throughout the paper. One can also use goal images, but we might not have access to these at test time.
While ourmethod can use any zgas conditioning, assume that the play data has access to language annotations l.Our policy π(at|st, zl)contains a few simple components. We use a ResNet[ 66]-based visual encoderφvto encode st(a sequence of images) and an MLP based langauge encoder φlto downproject thelanguage embedding zl. The policy uses g= [φl(zl), φv(st)]as conditioning to the action decoderfact. Previous approaches [ 21,34] use latent variable models to deal with multimodality. We find thatmodelling factas a diffusion process enables us to circumvent this.4CookingDining TableSinkCALVINFranka KitchenRavensFigure 3: Simulated (top row) and real-world (bottom row) environments used for our evaluations. In eachreal-world setup, the robot is tasked with picking up one of the objects (e.g., plate, cup, carrot, bread, corn) andrelocating it to a specified location (e.g., drying rack, plate, toaster, grill, pot).4.2 Multi-modal Behavior Generation via DiffusionWith fact, we aim to predict robot actions given the current state, using a DDPM to approximate theconditional distribution P(at|st). In our setting, we additionally condition on the goal g. Formally,we train the model to generate robot actions atconditioned on goal gand current state st, so wemodify Equations 1 and 6 to obtain:ak−1t=α(akt−γεθ(g, st, akt, k) +N(0, σ2I)) (4)L=||εk−εθ(g, st, a0t+εk, k)||2 (5)We use the notation above for simplicity, but in practice, we predict a sequence of Tafuture actionsat,···, at+Tainstead of only the most immediate action, a technique known as action chunking .This is done in some recent works [33, 67] and is shown to improve temporal consistency.4.3 Discrete Diffusion for ControlMoreover, humans often break down tasks into smaller skills, which are often repeatable. In fact,most tasks can be achieved with a relatively small set. On the other hand, both the latent goals that welearn as well as the action diffusion process are continuous. Making sure learnt skills are discrete cannot only allow for better performance but also better generalization to new settings. However, naivelyenforcing discretization can lead to suboptimal behavior. We want to ensure that conditioned on alatent goal, g, action predictions from factare both multimodal and yet only represent a few modes.Thus, we propose a discrete bottleneck instead.For the action generation process to represent a useful skill space, we want to enforce discretenesswhere the actions interact with latent goal. PlayFusion adds a vector quantization bottleneck inthe diffusion process, specifically in the network εθ(x) =εθ(g, st, a0t+εk, k).εθis U-Net whichfuses the language conditioning into the action denoising. We modify the U-Net architecture witha codebook of discrete latent codes eu, a discrete bottleneck for the diffusion model. Given aninput xthe U-Net encoder produces a latent ψε(x), which is passed into the decoder to produceεθ(x) =γε(ψε(x)). This bottleneck layer selects jasarg min i||ψε(x)−ei||, finding the closestcode to the embedding, which is used to reconstruct x. To account for this, we augment the trainingprocedure with the quantization and commitment losses, similar to VQ-V AE.Generalization via discrete language conditioning Consider an agent that has learnt skills formedfrom the atomic units A,B,CandC, of the form A + B ,B + C andC + D . 
Generalization via discrete language conditioning  Consider an agent that has learned skills formed from the atomic units A, B, C, and D, of the form A + B, B + C, and C + D. To truly extend its capabilities beyond the initial training data, the agent must learn to interpolate and extrapolate from these existing skills, i.e., to perform tasks like A + D that it has not explicitly been trained on. Given that the action generation in the diffusion process is already quantized, our hypothesis is that a discrete goal space will be synergistic and allow the policy to compose skills better. Thus, we maintain a codebook of discrete latent codes e_l for the language embeddings output by the language goal network φ_l(z_l), selecting the code e_{l,j} closest to φ_l(z_l). The full loss function used to train PlayFusion is:

L_PlayFusion = ||ε_k − ε_θ(x_0 + ε_k, k)||^2
             + β_1 ||sg(ψ_ε(x)) − e_{u,j}||^2   (U-Net quantization loss)
             + β_1 ||ψ_ε(x) − sg(e_{u,j})||^2   (U-Net commitment loss)
             + β_2 ||sg(φ_l(z_l)) − e_{l,j}||^2   (language quantization loss)
             + β_2 ||φ_l(z_l) − sg(e_{l,j})||^2   (language commitment loss)    (6)

where β_1 and β_2 are coefficients that trade off covering a diversity of possible behaviors against encouraging behaviors belonging to similar skills to be brought close to each other.

Sampling from PlayFusion  Given a novel language instruction z′ at test time, we obtain the quantized encoding φ_l(z′) and combine it with the visual encoding to get the conditioning g′. We sample a set of actions a_{t:t+k} ∼ N(0, 1) and pass them through the discrete denoising process in Equation 4.
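For concreteness, the sketch below (PyTorch assumed, reusing the VectorQuantizer, DiscreteDenoiser, and denoising_step_loss helpers from the earlier sketches) shows how the terms of Eq. (6) could be assembled and how the reverse process of Eq. (4) could be run at test time. The default coefficients and the scalar schedules alpha, gamma, and sigma are illustrative; in practice they come from the noise scheduler.

```python
# Minimal sketch of the loss of Eq. (6) and the sampling loop of Eq. (4).
import torch


def playfusion_loss(model, lang_vq, phi_l_zl, phi_v_st, a0, stds,
                    beta1: float = 0.5, beta2: float = 0.5):
    # Quantize the language goal embedding with its own codebook e_l.
    zl_q, lang_vq_loss = lang_vq(phi_l_zl)
    g = torch.cat([zl_q, phi_v_st], dim=-1)
    diff_loss, unet_vq_loss = denoising_step_loss(model, g, a0, stds)
    # Eq. (6): denoising term + beta1 * U-Net VQ terms + beta2 * language VQ terms.
    return diff_loss + beta1 * unet_vq_loss + beta2 * lang_vq_loss


@torch.no_grad()
def sample_actions(model, g, act_dim, alpha, gamma, sigma, K: int = 50):
    """Eq. (4): start from a^K ~ N(0, I) and iteratively denoise to a^0."""
    a = torch.randn(g.shape[0], act_dim, device=g.device)
    for k in reversed(range(K)):
        k_idx = torch.full((g.shape[0],), k, dtype=torch.long, device=g.device)
        eps_hat, _ = model(g, a, k_idx)
        noise = torch.randn_like(a) if k > 0 else torch.zeros_like(a)
        a = alpha[k] * (a - gamma[k] * eps_hat + sigma[k] * noise)
    return a
```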
5 Experiments

In this section, we investigate PlayFusion and its ability to scale to complex tasks, as well as its generalization to new settings. We ask the following questions: (1) Can PlayFusion learn complex manipulation tasks from language-annotated play data? (2) Can our method perform efficiently in real-world setups beyond the simulated environments? (3) How well does PlayFusion generalize to out-of-distribution settings? (4) Does PlayFusion in fact learn discrete skills? (5) How do various design choices, such as quantization and language conditioning, affect PlayFusion? We aim to answer these questions through experiments in three simulated and three real-world settings.

Environmental Setup  We test our approach across a wide variety of environments in both simulation and the real world. In simulation, we evaluate on three benchmarks: (a) CALVIN [34], (b) Franka Kitchen [22], and (c) Language-Conditioned Ravens [35, 36]. For the real-world setup, we create three different environments: cooking, dining table, and sink, shown in Figure 3. More details of the environment setup are in the supplementary material.

Baselines  We handle task conditioning in the same way for our method and all baselines, using the same visual and language encoders. We compare our method with the following baselines: (a) Learning Motor Primitives from Play (Play-LMP): Play-LMP [21] generates behaviors by learning motor primitives from play data using a VAE, which encodes action sequences into latents and then decodes them into actions. (b) Conditional Behavior Transformer (C-BeT): C-BeT [23] generates behaviors using a transformer-based policy and leverages action discretization to handle multimodality. (c) Goal-Conditioned Behavior Cloning (GCBC): GCBC [21, 68] is conditional behavior cloning.

5.1 Results in Simulation and Real World

PlayFusion in simulation  Table 1 shows success rates for PlayFusion, Play-LMP, C-BeT, and GCBC on the simulation benchmarks. On both CALVIN setups, we outperform the baselines by a wide margin, which demonstrates the effectiveness of our method for large-scale language-conditioned policy learning from complex, multimodal play data. The baselines perform comparatively better on the Franka Kitchen environments, where the training datasets are smaller, the data covers a narrower behavior distribution, and the benefit of handling multimodality is smaller; however, PlayFusion still outperforms or matches all baselines. PlayFusion also achieves a significantly higher success rate than the baselines on Ravens (see the appendix for per-task results), which is not as large-scale as CALVIN but covers a large portion of the state space due to the diversity of instructions.

Table 1: Success rates for PlayFusion and the baselines in simulation and real-world settings. PlayFusion consistently outperforms all of the baselines.

                   Simulation                                              Real World
               CALVIN A    CALVIN B    Kitchen A   Kitchen B   Ravens     Dining Table  Cooking  Sink
C-BeT [23]     26.3±0.8    23.4±0.9    45.6±2.3    24.4±2.3    13.4       20.0          0.0      10.0
Play-LMP [21]  19.9±1.0    22.0±0.4     1.9±1.5     0.0±0.0     0.2        0.0          0.0       0.0
GCBC [21]      23.2±2.0    30.4±1.4    38.0±3.3    15.5±4.5     1.6        5.0          0.0       5.0
Ours           45.2±1.2    58.7±0.7    47.5±2.0    27.7±0.9    35.8       45.0         30.0      20.0

Long horizon tasks  Using the Long Horizon CALVIN evaluation suite, we test the ability of agents to stitch together different tasks, with transitions between tasks being particularly difficult. One such long-horizon chain might be "turn on the LED" → "open drawer" → "push the blue block" → "pick up the blue block" → "place in slider". We roll out 128 different long-horizon chains containing five instructions each and record the number of instructions successfully completed. As shown in Table 2, PlayFusion significantly outperforms the baselines on both CALVIN A and CALVIN B. The diffusion process gracefully handles the multimodality not only of each individual task in the chain but also of the highly varied data the agent has seen of transitions between tasks.

Table 2: Average sequence length on Long Horizon CALVIN and success rate for the n-th instruction in the chain.

                      Avg. Seq. Len.    1      2     3     4     5
CALVIN A:  C-BeT          0.262         25.2   1.0   0.0   0.0   0.0
           Play-LMP       0.175         16.5   1.0   0.0   0.0   0.0
           GCBC           0.194         19.4   0.0   0.0   0.0   0.0
CALVIN B:  C-BeT          0.272         27.2   0.0   0.0   0.0   0.0
           Play-LMP       0.117         11.7   0.0   0.0   0.0   0.0
           GCBC           0.291         27.2   1.9   0.0   0.0   0.0
Ours (A)                  0.417         37.1   2.9   1.0   0.0   0.0
Ours (B)                  0.611         54.4   6.0   0.0   0.0   0.0

Generalization in the real world  Table 1 also shows results for PlayFusion and the baselines in our real-world evaluation setups. These setups are particularly challenging for two reasons: (1) the inherent difficulties of real-world robotics, such as noisier data and constantly changing environment conditions like lighting, and (2) they are designed to test skill-level compositional generalization. Specifically, the agents are required to compose skills A + B and C + D into A + D; for example, they might be trained on "pick up the carrot and place it in the pan" and "pick up the bread and put it in the toaster" and must generalize to "pick up the carrot and put it in the toaster". Our method significantly outperforms the baselines in these settings, showcasing the ability of the diffusion model to capture complex distributions and the emergence of learned skills via the discrete bottleneck. Video results are available at https://play-fusion.github.io.

5.2 Analysis of Discrete Representations

Figure 4: Visualization of the codebook embeddings for various real-world skills (bread in pan, pineapple in pan, carrot in oven, carrot in grill).

Learning discrete skills  Table 3 studies the impact of our discrete bottlenecks (for Ravens results, see the appendix).
The success rate is, on average, worse when either the U-Net discretization or the language embedding discretization is removed. We also qualitatively study whether semantically similar skills are mapped to similar areas of the latent space and should therefore be brought together by the discrete bottleneck. Figure 4 shows that skills involving similar locations (e.g., the pan) or objects (e.g., the carrot) are encoded into similar embeddings. It also shows the embeddings of different trajectories: the top two rows share the first skill (removing the lid from the pan) and place an object in the pan, while the bottom two rows share the second skill (grasping the carrot). Embeddings that contain the same skill have a similar pattern, which further indicates that the learned latent skill space is somewhat discretized.

Table 3: Effect of discrete bottlenecks.

Methods                  CALVIN A    CALVIN B
Ours                     45.2±1.2    58.7±0.7
No U-Net discretiz.      45.3±2.1    55.1±1.4
No lang. discretiz.      40.3±1.6    54.1±1.2

Balancing the discrete bottlenecks  In Table 4, we study the effect of different β_1 and β_2 values on CALVIN A performance, i.e., the relative weightings of the additional loss terms corresponding to the U-Net discretization and the language embedding discretization. We find that β_1 = β_2 = 0.5 gives the best performance. In general, weighting the four additional losses equally (two for the U-Net and two for language) leads to better performance than imbalanced weightings. β_1 = β_2 = 0.5 is also better than β_1 = β_2 = 1, indicating that over-incentivizing discretization can be detrimental to diffusion model learning. Further analyses can be found in the appendix.

5.3 Ablations of Design Choices

Table 4: Effects of conditioning, language model, and loss weights on CALVIN A (success rate).

Effect of conditioning:
  Global                 54.1
  Conditional Noise      40.2
  Visual Pre-training    38.1
Effect of language model:
  all-MiniLM-L6-v2       47.1
  all-distilroberta-v1   48.4
  all-mpnet-base-v2      48.8
  BERT                   48.8
  CLIP (ResNet50)        35.2
  CLIP (ViT-B/32)        43.9
Loss weights (U-Net & Language):
  0.5 & 0.5              47.1
  1 & 1                  45.1
  0.1 & 1                45.5
  1 & 0.1                43.4
  0.25 & 0.75            37.7
  0.75 & 0.25            43.4

Effect of language model  Although our method is orthogonal to the language model used, we test its sensitivity to this choice. As shown in Table 4, common models such as MiniLM [65], DistilRoBERTa [69], MPNet [70], and BERT [71] perform similarly, showing that PlayFusion is largely robust to this design choice. We hypothesize that the discrete bottleneck applied to the language embeddings helps achieve this robustness. CLIP [72] embeddings result in much lower success rates, most likely because internet image captions do not contain instructions similar to those in "play data".

Effect of conditioning  Table 4 also studies different ways of conditioning the diffusion model generations on language and vision in CALVIN A. When working with diffusion models, there are multiple ways to feed in goals, images of the scene, and so on. We found that PlayFusion is mostly robust to this choice, with global conditioning providing benefits for smaller models (such as those used in the real world). We also attempted to condition the diffusion model noise on the goal but found that this negatively impacted performance.
For the visual conditioning, we studied theeffect of initializing the image encoder with large-scale pre-trained models [ 73]), finding that it doesnot help, and PlayFusion can learn the visual encoder end-to-end from scratch.For data scaling curves and more analyses on design choices, see the appendix.6 Limitations and DiscussionIn this paper, we introduced a novel approach for learning a multi-task robotic control policy usinga denoising diffusion process on trajectories, conditioned on language instructions. Our methodexploits the effectiveness of diffusion models in handling multimodality and introduces two discretebottlenecks in the diffusion model in order to incentivize the model to learn semantically meaningfulskills. PlayFusion does require the collection of teleoperated play data paired with after-the-factlanguage annotations, which still require human effort despite being already less expensive andtime-consuming to collect than demonstrations. It would be interesting to label the play data with acaptioning model or other autonomous method. Furthermore, there is room for improvement in ourperformance on our real-world setups. Additionally, our real-world experiments could be expandedto even more complex household settings such as study rooms, bed rooms, and living rooms. Overall,our approach can significantly enhance the ability of robots to operate autonomously in complex anddynamic environments, making them more useful in a wide range of applications.8References[1]R. S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework for temporalabstraction in reinforcement learning. Artificial intelligence , 112(1-2):181–211, 1999.[2]S. Thrun, C. Faloutsos, T. Mitchell, and L. Wasserman. Automated learning and discoverystate-of-the-art and research topics in a rapidly growing field. Ai Magazine , 20(3):78–78, 1999.[3]M. Pickett and A. G. Barto. Policyblocks: An algorithm for creating useful macro-actions inreinforcement learning. In ICML , volume 19, pages 506–513, 2002.[4] P.-L. Bacon, J. Harb, and D. Precup. The option-critic architecture. In AAAI , 2017.[5]R. S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework for temporalabstraction in reinforcement learning. Artificial Intelligence , 1999.[6]C. Daniel, G. Neumann, O. Kroemer, and J. Peters. Hierarchical relative entropy policy search.Journal of Machine Learning Research , 2016.[7]F. Stulp, E. A. Theodorou, and S. Schaal. Reinforcement learning with sequences of motionprimitives for robust manipulation. Transactions on Robotics , 2012.[8] J. Kober and J. Peters. Learning motor primitives for robotics. In ICRA , 2009.[9]P. Pastor, M. Kalakrishnan, S. Chitta, E. Theodorou, and S. Schaal. Skill learning and taskoutcome prediction for manipulation. In ICRA , 2011.[10] M. Dalal, D. Pathak, and R. R. Salakhutdinov. Accelerating robotic reinforcement learning viaparameterized action primitives. NeurIPS , 2021.[11] S. Nasiriany, H. Liu, and Y . Zhu. Augmenting reinforcement learning with behavior primitivesfor diverse manipulation tasks. In ICRA , 2022.[12] K. Pertsch, Y . Lee, and J. Lim. Accelerating reinforcement learning with learned skill priors. InConference on robot learning , pages 188–204. PMLR, 2021.[13] S. Bahl, A. Gupta, and D. Pathak. Hierarchical neural dynamic policies. RSS, 2021.[14] K. Pertsch, R. Desai, V . Kumar, F. Meier, J. J. Lim, D. Batra, and A. Rai. Cross-domain transfervia semantic skill imitation. arXiv preprint arXiv:2212.07407 , 2022.[15] B. Eysenbach, A. Gupta, J. 
Ibarz, and S. Levine. Diversity is all you need: learning skillswithout a reward function. arXiv preprint arXiv:1802.06070 , 2018.[16] A. Sharma, S. Gu, S. Levine, V . Kumar, and K. Hausman. Dynamics-aware unsuperviseddiscovery of skills. arXiv preprint arXiv:1907.01657 , 2019.[17] J. Merel, S. Tunyasuvunakool, A. Ahuja, Y . Tassa, L. Hasenclever, V . Pham, T. Erez, G. Wayne,and N. Heess. Catch & carry: reusable neural controllers for vision-guided whole-body tasks.ACM Transactions on Graphics (TOG) , 39(4):39–1, 2020.[18] T. Shankar and A. Gupta. Learning robot skills with temporal variational inference. InInternational Conference on Machine Learning , pages 8624–8633. PMLR, 2020.[19] T. Kipf, Y . Li, H. Dai, V . Zambaldi, A. Sanchez-Gonzalez, E. Grefenstette, P. Kohli, andP. Battaglia. Compile: Compositional imitation learning and execution. In InternationalConference on Machine Learning , pages 3418–3428. PMLR, 2019.[20] W. Whitney, R. Agarwal, K. Cho, and A. Gupta. Dynamics-aware embeddings. arXiv preprintarXiv:1908.09357 , 2019.9[21] C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and P. Sermanet. Learninglatent plans from play. arXiv preprint arXiv:1903.01973 , 2019.[22] A. Gupta, V . Kumar, C. Lynch, S. Levine, and K. Hausman. Relay policy learning: Solvinglong-horizon tasks via imitation and reinforcement learning. arXiv preprint arXiv:1910.11956 ,2019.[23] Z. J. Cui, Y . Wang, N. Muhammad, L. Pinto, et al. From play to policy: Conditional behaviorgeneration from uncurated robot data. arXiv preprint arXiv:2210.10047 , 2022.[24] A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. V oss, A. Radford, M. Chen, and I. Sutskever.Zero-shot text-to-image generation. In International Conference on Machine Learning , pages8821–8831. PMLR, 2021.[25] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen. Hierarchical text-conditional imagegeneration with clip latents. arXiv preprint arXiv:2204.06125 , 2022.[26] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesiswith latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Visionand Pattern Recognition , pages 10684–10695, 2022.[27] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neuralinformation processing systems , 33:1877–1901, 2020.[28] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. Advances in NeuralInformation Processing Systems , 33:6840–6851, 2020.[29] Z. Wang, J. J. Hunt, and M. Zhou. Diffusion policies as an expressive policy class for offlinereinforcement learning. arXiv preprint arXiv:2208.06193 , 2022.[30] M. Janner, Y . Du, J. B. Tenenbaum, and S. Levine. Planning with diffusion for flexible behaviorsynthesis. arXiv preprint arXiv:2205.09991 , 2022.[31] A. Ajay, Y . Du, A. Gupta, J. Tenenbaum, T. Jaakkola, and P. Agrawal. Is conditional generativemodeling all you need for decision-making? arXiv preprint arXiv:2211.15657 , 2022.[32] P. Hansen-Estruch, I. Kostrikov, M. Janner, J. G. Kuba, and S. Levine. Idql: Implicit q-learningas an actor-critic method with diffusion policies. arXiv preprint arXiv:2304.10573 , 2023.[33] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy:Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137 , 2023.[34] O. Mees, L. Hermann, E. Rosete-Beas, and W. Burgard. 
Calvin: A benchmark for language-conditioned policy learning for long-horizon robot manipulation tasks. IEEE Robotics andAutomation Letters , 7(3):7327–7334, 2022.[35] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, and J. Lee. Transporter networks: Rearranging the visual world forrobotic manipulation. CoRL , 2020.[36] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipula-tion. In Conference on Robot Learning , pages 894–906. PMLR, 2022.[37] L. P. Kaelbling. Learning to achieve goals. In IJCAI , volume 2, pages 1094–8. Citeseer, 1993.[38] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin,O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. Advances in neural informationprocessing systems , 30, 2017.10[39] D. Ghosh, A. Gupta, A. Reddy, J. Fu, C. Devin, B. Eysenbach, and S. Levine. Learning to reachgoals via iterated supervised learning. arXiv preprint arXiv:1912.06088 , 2019.[40] A. Goyal, A. Friesen, A. Banino, T. Weber, N. R. Ke, A. P. Badia, A. Guez, M. Mirza, P. C.Humphreys, K. Konyushova, et al. Retrieval-augmented reinforcement learning. In InternationalConference on Machine Learning , pages 7740–7765. PMLR, 2022.[41] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, andI. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advancesin neural information processing systems , 34:15084–15097, 2021.[42] C. Lynch and P. Sermanet. Grounding language in play. arXiv preprint arXiv:2005.07648 , 40:105, 2020.[43] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning ,pages 991–1002. PMLR, 2022.[44] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in roboticaffordances. arXiv preprint arXiv:2204.01691 , 2022.[45] S. Nair, E. Mitchell, K. Chen, S. Savarese, C. Finn, et al. Learning language-conditioned robotbehavior from offline data and crowd-sourced annotation. In Conference on Robot Learning ,pages 1303–1315. PMLR, 2022.[46] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitationlearning over unstructured data. IEEE Robotics and Automation Letters , 7(4):11205–11212,2022.[47] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In Conference on Robot Learning , pages 785–799. PMLR, 2023.[48] C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data.arXiv preprint arXiv:2005.07648 , 2020.[49] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 ,2013.[50] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximateinference in deep generative models. arXiv preprint arXiv:1401.4082 , 2014.[51] E. Zhang, Y . Lu, W. Wang, and A. Zhang. Lad: Language augmented diffusion for reinforcementlearning. arXiv preprint arXiv:2210.15629 , 2022.[52] N. M. Shafiullah, Z. Cui, A. A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodes with one stone. Advances in neural information processing systems , 35:22955–22968,2022.[53] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latentdynamics for planning from pixels. 
arXiv preprint arXiv:1811.04551 , 2018.[54] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latentimagination. arXiv preprint arXiv:1912.01603 , 2019.[55] Y . Dai, M. Yang, B. Dai, H. Dai, O. Nachum, J. Tenenbaum, D. Schuurmans, and P. Abbeel.Learning universal policies via text-guided video generation. arXiv preprint arXiv:2302.00111 ,2023.11[56] W. Liu, Y . Du, T. Hermans, S. Chernova, and C. Paxton. Structdiffusion: Language-guided 304creation of physically-valid structures using unseen objects. arXiv preprint arXiv:2211.04604 ,305:2, 2022.[57] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker,M. Lai, A. Bolton, Y . Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, andD. Hassabis. Mastering the game of go without human knowledge. Nature , 2017.[58] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser,I. Antonoglou, V . Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neuralnetworks and tree search. nature , 529(7587):484, 2016.[59] V . Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves,M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou,H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control throughdeep reinforcement learning. Nature , 518(7540):529–533, Feb. 2015.[60] M. G. Bellemare, Y . Naddaf, J. Veness, and M. Bowling. The arcade learning environment: Anevaluation platform for general agents. Journal of Artificial Intelligence Research , 47:253–279,jun 2013.[61] M. Janner, Q. Li, and S. Levine. Offline reinforcement learning as one big sequence modelingproblem. Advances in neural information processing systems , 34:1273–1286, 2021.[62] P. Wu, A. Majumdar, K. Stone, Y . Lin, I. Mordatch, P. Abbeel, and A. Rajeswaran. Maskedtrajectory models for prediction, representation, and control. arXiv preprint arXiv:2305.02968 ,2023.[63] S. Ozair, Y . Li, A. Razavi, I. Antonoglou, A. Van Den Oord, and O. Vinyals. Vector quantizedmodels for planning. In International Conference on Machine Learning , pages 8302–8313.PMLR, 2021.[64] A. van den Oord, O. Vinyals, et al. Neural discrete representation learning. In NeurIPS , pages6309–6318, 2017.[65] N. Reimers and I. Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks.arXiv preprint arXiv:1908.10084 , 2019.[66] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR ,abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385 .[67] T. Z. Zhao, V . Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulationwith low-cost hardware. arXiv preprint arXiv:2304.13705 , 2023.[68] Y . Ding, C. Florensa, M. Phielipp, and P. Abbeel. Goal-conditioned imitation learning. arXivpreprint arXiv:1906.05838 , 2019.[69] Y . Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer,and V . Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprintarXiv:1907.11692 , 2019.[70] K. Song, X. Tan, T. Qin, J. Lu, and T.-Y . Liu. Mpnet: Masked and permuted pre-training forlanguage understanding. Advances in Neural Information Processing Systems , 33:16857–16867,2020.[71] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectionaltransformers for language understanding. 2018.12[72] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. 
Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision.InInternational conference on machine learning , pages 8748–8763. PMLR, 2021.[73] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual representationfor robot manipulation. arXiv preprint arXiv:2203.12601 , 2022.[74] Calvin. https://github.com/mees/calvin/ .[75] Relay policy learning environments. https://github.com/google-research/relay-policy-learning/ .[76] Cliport. https://github.com/cliport/cliport .[77] From play to policy: Conditional behavior generation from uncurated robot data. https://github.com/jeffacce/play-to-policy .13A WebsiteVideo results are available at https://play-fusion.github.io .B Experimental SetupWe evaluate our method on three simulated environments. Below, we provide their details.CALVIN [34]. The CALVIN benchmark tests a robotic agent’s ability to follow language instruc-tions. CALVIN contains four manipulation environments, each of which include a desk with asliding door and a drawer that can be opened and closed, as well as a 7-DOF Franka Emika Pandarobot arm with a parallel gripper. The four environments differ from each other in both their spatialcomposition (e.g., positions of drawers, doors, and objects) and visual features. The training data foreach environment contains around 200K trajectories, from which we sample a sequence of transitionsfor each element of the minibatch. A portion of the dataset contains language annotations; we usethis subset to train our language-conditioned model. Each transition consists of the RGB imageobservation, proprioceptive state information, and the 7-dimensional action. The agent is evaluatedon its success rate in completing 34 tasks, which include variations of rotation, sliding, open/close,and lifting. These are specified by language instructions that are unseen during training in orderto test the generalization ability of the agent. We evaluate on two setups: (1) CALVIN A, wherethe model is trained and tested on the same environment (called D →D in the benchmark) and (2)CALVIN B, where the model is trained on three of the four environments and tested on the fourth(called ABC →D in the benchmark).Franka Kitchen [ 22].Franka Kitchen is a simulated kitchen environment with a Franka Pandarobot. It contains seven possible tasks: opening a sliding cabinet, opening a hinge cabinet, slidinga kettle, turning on a switch, turning on the bottom burner, turning on the top burner, and openinga microwave door. The dataset contains 566 VR demonstrations of humans performing four of theseven tasks in sequence. Each transition consists of the RGB image observation, proprioceptive stateinformation, and the 9-dimensional action. We split each of these demonstrations into their four tasksand annotate them with diverse natural language to create a language-annotated play dataset. In ourexperiments, we evaluate agents on two setups within this environment, which we denote as KitchenA and Kitchen B. In Kitchen A, we evaluate an agent’s language generalization ability at test-timeby prompting it with unseen instructions asking it to perform one of the seven tasks. This requiresthe model to identify the desired task and successfully execute it. Kitchen B is a more challengingevaluation setting, where the agent must perform two of the desired seven tasks in sequence given anunseen language instruction. 
In this setting, the agent must exhibit long-horizon reasoning capabilitiesand perform temporally consistent actions, in addition to the language generalization required inKitchen A.Language-Conditioned Ravens [ 35,36].Ravens is a tabletop manipulation environment with aFranka Panda arm. We evaluate on three tasks in the Ravens benchmark: putting blocks in bowls,stacking blocks to form a pyramid, and packing blocks into boxes. The dataset consists of 1000demonstrations collected by an expert policy. Although the dataset proposed in [ 36] contains languageinstructions denoting which color block to move and the desired final location, they are not diverselike human natural language annotations would be. In order to study our model’s performance on aplay-like language-annotated dataset, we instead annotate the demonstrations with diverse naturallanguage. At test-time, we prompt the agent with an unseen language instruction, similar to our othersetups.B.1 Real World SetupWe create multiple play environments in the real world as well. We use a 7-DOF Franka EmikaPanda robot arm with a parallel gripper, operating in joint action space. We have three different14environments cooking ,dining table andsink . All of these tasks are multi-step, i.e., in each therobot has to at least grab one object and put it in another, i.e. grab a carrot and put it inside the oven.Incooking , we test how the robot can handle articulated objects. It has to first open the oven, grill orpot, and then place an object properly inside. All of these objects have different articulations. Eachof the placed objects (bread, carrot, knife, steak, spoon, etc.) have unique and different ways of beinginteracted with. In the sink , we test very precise manipulation skills, where the robot has to placeobjects in the narrow dish rack or hang objects (like mugs). In all of these settings, we test unseengoals (a combination of objects) that has never been seen before, as well as an instruction that hasnever been seen before. We provide more details in the Appendix.B.2 Additional Analysis on Discretization BottleneckDiscretization ablation in Ravens. Table 5 studies the impact of our discrete bottlenecks on theRavens benchmark. The success rate is, on average, worsened with the removal of either the U-Netdiscretization and the language embedding discretization.Discretizing a portion of the latent. It is possible to quantize only a portion of the U-Net latentrepresentation. Table 6 shows results of discretizing only a portion (25% or 50%) of the latent. Wefind that discretizing 25% of the latent resulted in better performance. Discretizing the entire latentstill works well, but discretizing a portion is a great way to balance encouraging skill learning andaccurate denoising.Table 5: Effect of discrete bottlenecks on Ravens tasks.Methods put-block-in-bowl stack-block-pyramid packing-box-pairsOurs 63.6±2.5 20.0 ±0.0 24.0 ±1.8No U-Net discretization 65.5±3.3 5.0±2.3 18 .5±0.0No lang discretization 4.1±0.6 3 .3±2.7 7 .5±2.5Table 6: Effect of discretizing different fractions of the U-Net representation.Methods Success RateDiscretize 100% of latent 45.2±1.2Discretize 50% of latent 44.8±0.1Discretize 25% of latent 48.7±0.8B.3 Data Scaling CurvesFigure 5 shows data scaling curves.Effect of discrete bottlenecks. Our method scales well with more data and performs very well evenat 100K trajectories, which is half the size of the CALVIN A training dataset. 
The removal of thelanguage discretization results in lower success rates across almost all dataset sizes. The removal ofU-Net discretization is not as critical and can actually improve performance for very small datasets,but is on average harmful for larger datasets.Comparison to baselines. Our method scales well with more data while C-BeT, Play-LMP, andGCBC perform poorly for all dataset sizes.B.4 Dataset DetailsReal-world experiments. For each environment we collected 250 episodes. This translates to around15 hours of data collection. We augmented the dataset by adding 3 or 4 variations for each language15Figure 5: Data scaling curves. Left: effect of discrete bottlenecks. Right: comparison to baselines.instruction (making the training dataset 750-1K episodes). The episodes were not broken into smallerannotated instructions.Simulation experiments. We directly use the language-annotated dataset from CALVIN [ 34] anddata generation script from CLIPort [ 36]. For Kitchen experiments, we used the dataset from RelayPolicy Learning [ 22] and performed some processing and annotation to create language-annotateddatasets. We provide some information in Table 7, but note that some of the numbers are estimatesdue to data processing procedures and refer the reader to the papers [34, 36, 22] for full details.Table 7: Dataset details for simulation experiments.How was play datacollected?Hours Eps.lengthNo. of lang. anno-tated eps.Is a single eps. broken intosmaller instructions?CALVINATeleoperators are in-structed only to ex-plore. Processing intoepisodes and annotat-ing with language aredone after-the-fact.2.5 64 5K (instructionsare repeated to cre-ate 200K trainingepisodes)No. Training trajectoriesare length-16 sub-episodesof the length-64 episodes. In-structions are repeated for allsub-episodes to create a totalof 200K language-annotatedtraining trajectories. (How-ever, the length-64 windowwas sampled from a longstream of play data).CALVINBSame as CALVIN A,but for three differentenvironments.7.5 64 15K (instructionsare repeated to cre-ate 600K trainingepisodes)No. Same as CALVIN A,but for three different envi-ronments, for a total of 600Ktraining trajectories.KitchenATeleoperators are in-structed to perform 4out of 7 possible tasksfor each episode.1.5 200 566 (split to cre-ate 2.2K trainingepisodes)Yes. We split each episodeinto the four training trajec-tories and label each of themwith language.KitchenBSame as Kitchen A. 1.5 200 566 (split to cre-ate 1.6K trainingepisodes)Yes. We split each episodeinto three training trajecto-ries (one for each pair of con-secutive tasks) and label eachof them with language.Ravens Data is generated byrolling out an expertpolicy.3 Up to 20 1000 Depends on the task. If itis sequential then the instruc-tion changes throughout theepisode and if it is single-step then there is one instruc-tion for the episode.164 8 16 32Action Horizon0102030405060Success RatePlayFusion512 1024 2048 4096Codebook Size0102030405060SuccessPlayFusionFigure 6: Effect of model design choices.B.5 Model Design ChoicesFigure 6 studies the impact of action horizon and codebook size in CALVIN A. PlayFusion is mostlyrobust to the action horizon Ta. We empirically found taof around 20% of the overall horizon workedthe best. We find that PlayFusion is relatively robust to the discrete latent codebook sizes.Note that asymptotically, increasing the codebook size would remove the discrete bottleneck, inprinciple. 
To study whether this happens in practice, we further increased the codebook size and showCALVIN A results in Table 8. As expected, performance drops when codebook size gets very large.Table 9 shows the effect of number of diffusion timesteps in CALVIN A. We found that using 25timesteps works slightly better but our method is generally robust to this hyperparameter.Table 8: Effect of further increasing the codebook size.Codebook Size Success Rate2048 45.2±1.28192 46.0±1.216384 41.1±0.1Table 9: Effect of diffusion timesteps.Timesteps Success Rate50 45.2±1.2100 39.9±1.325 47.4±0.8B.6 Generalization to Unseen SkillsWe performed an experiment where we removed one skill from the CALVIN training data. Specif-ically, we removed lift-red-block-slider from the training data and tested the model’s ability tointerpolate between (1) lifting other blocks from the slider (e.g., lift-blue-block-slider, lift-pink-block-slider) and (2) lifting red blocks in other scenarios (e.g., lift-red-block-drawer, lift-red-block-table).We also repeated this experiment for lift-blue-block-table. We find that the removal of the discretebottlenecks results in generally worse performance in this challenging setup (see Table 10). Al-though confidence intervals do overlap a bit, we find that our method is on average the best for bothlift-red-block-slider and lift-blue-block-table.17Table 10: Performance on unseen skills.Models lift-red-block-slider lift-blue-block-tableOurs 20.0±8.1 16 .6±7.2No U-Net discretization 10.0±4.6 3 .3±2.7No lang discretization 13.3±2.7 13 .3±5.4B.7 Ravens ExperimentsTable 11 shows per-task success rates for Ravens.Table 11: Per-task success rates for Ravens.put-block-in-bowl stack-block-pyramid packing-box-pairsC-BeT 17.2±1.1 15 .0±2.3 8 .1±1.5Play-LMP 0.0±0.0 0 .0±0.0 0 .8±0.2GCBC 0.0±0.0 3 .3±2.7 1 .7±0.7Ours 63.6±2.5 20.0 ±0.0 24.0 ±1.8B.8 Implementation DetailsTable 12 shows the main hyperparameters of our model in our simulation and real world experiments.We build off of the implementation of MCIL from CALVIN [ 74]. For Franka Kitchen and Ravensdataset and environment processing, we use implementations from [ 75] and [ 76], respectively. Forimplementations of the baselines, we modify [ 77] for C-BeT and [ 74] for Play-LMP and GCBC.Where possible, we use the same hyperparameters for PlayFusion and the baselines.Table 12: Hyperparameters of PlayFusion in our simulation and real-world experiments.Hyperparameter CALVIN Franka Kitchen Ravens Real WorldBatch size 32 32 128 12Codebook size 2048 2048 2048 2048U-Net discretiz. wgt 0.5 0.5 0.5 0.5Lang. discretiz. wgt 0.5 0.5 0.5 0.5Action horizon Ta 16 64 2 32Context length To 2 1 1 1Language features 384 384 384 384Learning rate 1e-4 2.5e-4 2.5e-4 2.5e-4Diffusion timsteps 50 50 50 50Beta scheduler squaredcos capv2 squaredcos capv2 squaredcos capv2 squaredcos capv2Timestep embed dim 256 256 128 25618 |
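As a small illustrative example tying the hyperparameters of Table 12 to code, the 50-timestep diffusion process with the listed "squaredcos capv2" beta schedule maps directly onto the HuggingFace diffusers DDPM scheduler (an assumed dependency; PlayFusion's own training code may differ). The batch, action-horizon, and action dimensions below are made-up example values.

```python
# Illustrative only: instantiating a noise scheduler with Table 12 settings.
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(
    num_train_timesteps=50,             # "Diffusion timesteps" row of Table 12
    beta_schedule="squaredcos_cap_v2",  # "Beta scheduler" row of Table 12
)

# Adding noise to a batch of action chunks during training.
actions = torch.randn(8, 16, 7)         # (batch, action horizon, action dim)
noise = torch.randn_like(actions)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (8,))
noisy_actions = scheduler.add_noise(actions, noise, timesteps)
```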
wH23nZpVTF6 | DEFT: Dexterous Fine-Tuning for Hand PoliciesAditya Kannan∗Kenneth Shaw∗Shikhar BahlPragna Mannam Deepak PathakCarnegie Mellon UniversityFigure 1: We present DEFT, a novel approach that can learn complex, dexterous tasks in the real world in anefficient manner. DEFT manipulates tools and soft objects without any robot demonstrations.Abstract: Dexterity is often seen as a cornerstone of complex manipulation. Hu-mans are able to perform a host of skills with their hands, from making food to op-erating tools. In this paper, we investigate these challenges, especially in the case ofsoft, deformable objects as well as complex, relatively long-horizon tasks. However,learning such behaviors from scratch can be data inefficient. To circumvent this, wepropose a novel approach, DEFT ( DExterous Fine-Tuning for Hand Policies), thatleverages human-driven priors, which are executed directly in the real world. Inorder to improve upon these priors, DEFT involves an efficient online optimizationprocedure. With the integration of human-based learning and online fine-tuning,coupled with a soft robotic hand, DEFT demonstrates success across various tasks,establishing a robust, data-efficient pathway toward general dexterous manipula-tion. Please see our website at https://dexterous-finetuning.github.iofor video results.Keywords: Dexterous Manipulation, Learning from Videos1 IntroductionThe longstanding goal of robot learning is to build robust agents that can perform long-horizon tasksautonomously. This could for example mean a self-improving robot that can build furniture or anagent that can cook for us. A key aspect of most tasks that humans would like to perform is that theyrequire complex motions that are often only achievable by hands, such as hammering a nail or usinga screwdriver. Therefore, we investigate dexterous manipulation and its challenges in the real world.A key challenge in deploying policies in the real world, especially with robotic hands, is that thereexist many failure modes. Controlling a dexterous hand is much harder than end-effectors due tolarger action spaces and complex dynamics. To address this, one option is to improve directly inthe real world via practice . Traditionally, reinforcement learning (RL) and imitation learning (IL)techniques have been used to deploy hands-on tasks such as in-hand rotation or grasping. This is∗Equal contribution, order decided by coin flip.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 2: Left: DEFT consists of two phases: an affordance model that predicts grasp parameters followed byonline fine-tuning with CEM. Right: Our affordance prediction setup predicts grasp location and pose.the case as setups are often built so that it is either easy to simulate in the real world or robust topractice. However, the real world contains tasks that one cannot simulate (such as manipulation ofsoft objects like food) or difficult settings in which the robot cannot practice (sparse long-horizontasks like assembly). How can we build an approach that can scale to such tasks?There are several issues with current approaches for practice and improvement in the real world.Robot hardware often breaks, especially with the amount of contact to learn dexterous tasks likeoperating tools. We thus investigate using a soft anthropomorphic hand [1], which can easily runin the real world without failures or breaking. This soft anthropomorphic hand is well-suited to ourapproach as it is flexible and can gently handle object interactions. 
The hand does not get damagedby the environment and is robust to continuous data collection. Due to its human-like proportionsand morphology, retargeting human hand grasps to robot hand grasps is made simpler.Unfortunately, this hand is difficult to simulate due to its softness. Directly learning from scratch isalso difficult as we would like to build generalizable policies , and not practice for every new setting.To achieve efficient real-world learning, we must learn a prior for reasonable behavior to exploreusing useful actions. Due to recent advances in computer vision, we propose leveraging human datato learn priors for dexterous tasks, and improving on such priors in the real world. We aim to use thevast corpus of internet data to define this prior. What is the best way to combine human priors withonline practice, especially for hand-based tasks? When manipulating an object, the first thing onethinks about is where on the object to make contact, and how to make this contact. Then, we thinkabout how to move our hands after the contact . In fact, this type of prior has been studied in computervision and robotics literature as visual affordances [2,3,4,5,6,7,8,9]. Our approach, DEFT, buildsgrasp affordances that predict the contact point, hand pose at contact, and post contact trajectory. Toimprove upon these, we introduce a sampling-based approach similar to the Cross-Entropy Method(CEM) to fine-tune the grasp parameters in the real world for a variety of tasks. By learning a residualpolicy [10, 11], CEM enables iterative real-world improvement in less than an hour.In summary, our approach (DEFT) executes real-world learning on a soft robot hand with only a fewtrials in the real world. To facilitate this efficiently, we train priors on human motion from internetvideos. We introduce 9 challenging tasks (as seen in Figure 1) that are difficult even for trainedoperators to perform. While our method begins to show good success on these tasks with real-worldfine-tuning, more investigation is required to complete these tasks more effectively.2 Related WorkReal-world robot learning Real-world manipulation tasks can involve a blend of classical andlearning-based methods. Classical approaches like control methods or path planning often usehand-crafted features or objectives and can often lack flexibility in unstructured settings [ 12,13,14].On the other hand, data-driven approaches such as deep reinforcement learning (RL) can facilitatecomplex behaviors in various settings, but these methods frequently rely on lots of data, privileged2reward information and struggle with sample efficiency [ 15,16,17,18,19]. Efforts have been madeto scale end-to-end RL [ 20,21,22,23,24,25] to the real world, but their approaches are not yetefficient enough for more complex tasks and action spaces and are reduced to mostly simple taskseven after a lot of real-world learning. Many approaches try to improve this efficiency such as byusing different action spaces [ 26], goal relabeling [ 27], trajectory guidance [ 28], visual imaginedgoals [ 21], or curiosity-driven exploration [ 29]. Our work focuses on learning a prior from humanvideos in order to learn efficiently in the real world.Learning from Human Motion The field of computer vision has seen much recent success inhuman and object interaction with deep neural networks. The human hand is often parametrized withMANO, a 45-dimensional vector [ 30] of axes aligned with the wrist and a 10-dimensional shapevector. 
MANOtorch from [ 31] aligns it with the anatomical joints. Many recent works detect MANOin monocular video [ 32,33,34]. Some also detect objects as well as the hand together [ 6,35]. Weuse FrankMocap to detect the hand for this work. There are many recent datasets including the CMUMocap Database [ 36] and Human3.6M [ 37] for human pose estimation, 100 Days of Hands [ 6] forhand-object interactions, FreiHand [ 38] for hand poses, Something-Something [ 39] for semanticinteractions. ActivityNet datasets [ 40], or YouCook [ 41] are action-driven datasets that focus ondexterous manipulation. We use these three datasets: [ 42] is a large-scale dataset with human-objectinteractions, [ 43] for curated human-object interactions, and [ 44] which has many household kitchentasks. In addition to learning exact human motion, many others focus on learning priors from humanmotion. [45, 46] learn general priors using contrastive learning on human datasets.Learning for Dexterous Manipulation With recent data-driven machine learning methods, roboti-cists are now beginning to learn dexterous policies from human data as well. Using the motion of ahuman can be directly used to control robots [ 47,48,49]. Moving further, human motion in internetdatasets can be retargeted and used directly to pre-train robotic policies [ 50,51]. Additionally, usinghuman motion as a prior for RL can help with learning skills that are human-like [ 52,53,54]. Withoutusing human data as priors, object reorientation using RL has been recently successful in a variety ofsettings [ 55,56]. Similar to work in robot dogs which do not have an easy human analog to learnfrom, these methods rely on significant training data from simulation with zero-shot transfer [ 57,58].Soft Object Manipulation Manipulating soft and delicate objects in a robot’s environment has beena long-standing problem. Using the torque output on motors, either by measuring current or throughtorque sensors, is useful feedback to find out how much force a robot is applying [ 59,60]. Coupledwith dynamics controllers, these robots can learn not to apply too much torque to the environmentaround them [ 61,62,63]. A variety of touch sensors [ 64,65,66,67] have also been developed tofeel the environment around it and can be used as control feedback. Our work does not rely on touchsensors. Instead, we practice in the real world to learn stable and precise grasps.3 Fine-Tuning Affordance for DexterityThe goal of DEFT is to learn useful, dexterous manipulation in the real world that can generalizeto many objects and scenarios. DEFT learns in the real world and fine-tunes robot hand-to-objectinteraction in the real world using only a few samples. However, without any priors on usefulbehavior, the robot will explore inefficiently. Especially with a high-dimensional robotic hand, weneed a strong prior to effectively explore the real world. We thus train an affordance model on humanvideos that leverages human behavior to learn reasonable behaviors the robot should perform.3.1 Learning grasping affordancesTo learn from dexterous interaction in a sample efficient way, we use human hand motion as a priorfor robot hand motion. We aim to answer the following: (1) What useful, actionable information canwe extract from the human videos? (2) How can human motion be translated to the robot embodimentto guide the robot? In internet videos, humans frequently interact with a wide variety of objects. 
This3Figure 3: We produce three priors from human videos: the contact location ( top row ) and grasp pose ( middlerow) from the affordance prior; the post-grasp trajectory ( bottom row ) from a human demonstration of the task.data is especially useful in learning object affordances. Furthermore, one of the major obstacles inmanipulating objects with few samples is accurately grasping the object. A model that can performa strong grasp must learn where andhow to grasp. Additionally, the task objective is important indetermining object affordances–humans often grasp objects in different ways depending on their goal.Therefore, we extract three items from human videos: the grasp location, human grasp pose, and task.Given a video clip V={v1, v2, . . . , v T}, the first frame vtwhere the hand touches the objectis found using an off-the-shelf hand-object detection model [ 6]. Similar to previous approaches[3,4,7,5], a set of contact points are extracted to fit a Gaussian Mixture Model (GMM) with centersμ={μ1, μ2, . . . , μ k}. Detic [ 68] is used to obtain a cropped image v′1containing just the object inthe initial frame v1to condition the model. We use Frankmocap [ 34] to extract the hand grasp posePin the contact frame vtas MANO parameters. We also obtain the wrist orientation θwristin thecamera frame. This guides our prior to output wrist rotations and hand joint angles that produce astable grasp. Finally, we acquire a text description Tdescribing the action occurring in V.We extract affordances from three large-scale, egocentric datasets: Ego4D [ 42] for its large scale andthe variety of different scenarios depicted, HOI4D [ 8] for high-quality human-object interactions, andEPIC Kitchens [ 44] for its focus on kitchen tasks similar to our robot’s. We learn a task-conditionedaffordance model fthat produces (ˆμ,ˆθwrist,ˆP) =f(v′1, T). We predict ˆμin similar fashion to [ 3].First, we use a pre-trained visual model [ 69] to encode v′1into a latent vector zv. Then we pass zvthrough a set of deconvolutional layers to get a heatmap and use a spatial softmax to estimate ˆμ.Parameter Dimensions Descriptionμ 3 XYZ grasp location in workspaceθwrist 3 Wrist grasp rotation (euler angles)P 16 Finger joint angles in soft handTable 1: Parameters that are fine-tuned in the real world. The affor-dance model predicts a 45-dimensional hand joint pose for P, which isretargeted to a 16-dimensional soft hand pose.To determine ˆθwristandˆP, we usezvand an embedding of the textdescription zT=g(T), wheregis the CLIP text encoder [ 70].Because transformers have seensuccess in encoding various mul-tiple modes of input, we use atransformer encoder Tto predictˆθwrist,ˆP=T(zv, zT). Overall,we train our model to optimizeL=λμ||μ−ˆμ||2+λθ||θwrist−ˆθwrist||2+λP||P−ˆP||2 (1)At test time, we generate a crop of the object using Segment-Anything [ 71] and give our model atask description. The model generates contact points on the object, and we take the average as ourcontact point. Using a depth camera, we can determine the 3D contact point to navigate to. While themodel outputs MANO parameters [ 30] that are designed to describe human hand joints, we retarget4Figure 4: Left: Workspace Setup. We place an Intel RealSense camera above the robot to maintain an egocentricviewpoint, consistent with the affordance model’s training data. Right : Thirteen objects used in our experiments.these values to produce similar grasping poses on our robot hand in a similar manner to previousapproaches [72, 48]. 
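To make the affordance prior above concrete, below is a minimal sketch (PyTorch assumed) of a task-conditioned affordance head and the weighted objective of Eq. (1). The visual latent z_v and the text embedding z_T are assumed to be precomputed by frozen pre-trained encoders (R3M and CLIP in the paper), and a single linear layer stands in for the deconvolution-plus-spatial-softmax contact-point heatmap; module sizes, names, and loss weights are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (PyTorch assumed) of the affordance head and Eq. (1).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AffordanceHead(nn.Module):
    def __init__(self, d_v: int = 512, d_t: int = 512, d: int = 256):
        super().__init__()
        self.proj_v = nn.Linear(d_v, d)
        self.proj_t = nn.Linear(d_t, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.wrist_head = nn.Linear(d, 3)      # wrist rotation (euler angles)
        self.pose_head = nn.Linear(d, 45)      # MANO hand-joint parameters
        self.contact_head = nn.Linear(d_v, 2)  # 2D contact point, lifted to 3D with depth

    def forward(self, z_v, z_t):
        tokens = torch.stack([self.proj_v(z_v), self.proj_t(z_t)], dim=1)
        h = self.encoder(tokens).mean(dim=1)   # fuse vision and language tokens
        return self.contact_head(z_v), self.wrist_head(h), self.pose_head(h)


def affordance_loss(pred, target, lam_mu=1.0, lam_theta=1.0, lam_p=1.0):
    """Eq. (1): weighted L2 on contact point, wrist rotation, and hand pose."""
    (mu_hat, theta_hat, p_hat), (mu, theta, p) = pred, target
    return (lam_mu * F.mse_loss(mu_hat, mu)
            + lam_theta * F.mse_loss(theta_hat, theta)
            + lam_p * F.mse_loss(p_hat, p))
```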
For more details, we refer readers to the appendix.In addition to these grasp priors, we need a task-specific post-contact trajectory to successfullyexecute a task. Because it is challenging to learn complex and high-frequency action informationfrom purely offline videos, we collect one demo of the human doing the robot task (Figure 3) separatefrom the affordance model f. We extract the task-specific wrist trajectory after the grasp using [ 34].We compute the change in wrist pose between adjacent timesteps for the first 40 timesteps. Whendeployed for fine-tuning, we execute these displacements for the post-grasp trajectory. Once we havethis prior, how can the robot improve upon it?3.2 Fine-tuning via InteractionAlgorithm 1 Fine-Tuning Procedure for DEFTRequire: Task-conditioned affordance model f, task descriptionT, post-grasp trajectory τ, parameter distribution D, residualcV AE policy π.Enumber of elites, Mnumber of warm-upepisodes, Ntotal iterations.D ← N (0, σ2)fork= 1. . . N doIk,0←initial imageξk←f(Ik,0, T)Sample εk∼DExecute grasp from ξk+εk, then trajectory τCollect reward Rk; reset environmentifk > M thenOrder traj indices i1, i2, . . . , i kbased on rewardsΩ← {εi1, εi2, . . . , ε iE}FitDto distribution of residuals in Ωend ifend forFitπ(.)as a V AE to ΩThe affordance prior allows the robotto narrow down its learning behaviorto a small subset of all possible behav-iors. However, these affordances arenot perfect and the robot will often-times still not complete the task. Thisis partially due to morphology differ-ences between the human and robothands, inaccurate detections of the hu-man hands, or differences in the tasksetup. To improve upon the prior, wepractice learning a residual policy forthe grasp parameters in Table 1.Residual policies have been used pre-viously to efficiently explore in thereal world [ 11,73]. They use the prioras a starting point and explore nearby.Let the grasp location, wrist rotation,grasp pose, and trajectory from our af-fordance prior be ξ. During training we sample noise ε∼ D where Dis initialized to N(0, σ2)(for a small σ). We rollout a trajectory parameterized by ξ+ε. We collect Ri, the reward for eachξi=f(vi) +εiwhere viis the image. First, we execute an initial number of Mwarmup episodeswith actions sampled from D, recording a reward Ribased on how well the trajectory completesthe task. For each episode afterward, we rank the prior episodes based on the reward Riand extractthe sampled noise from the episodes with the highest reward (the ‘elites’ Ω). We fit Dto the eliteepisodes to improve the sampled noise. Then we sample actions from D, execute the episode, and5Figure 5: Qualitative results showing the finetuning procedure for DEFT. The model learns to hold the spatulaand flip the bagel after 30 CEM iterations.record the reward. By repeating this process we can gradually narrow the distribution around thedesired values. In practice, we use M= 10 warmup episodes and a total of N= 30 episodes totalfor each task. This procedure is shown in Algorithm 1. See Table 4 for more information.At test time, we could take the mean values of the top Ntrajectories for the rollout policy. However,this does not account for the appearance of different objects, previously unseen object configurations,or other properties in the environment. To generalize to different initializations, we train a V AE[74,75,76,77] to output residuals δjconditioned on an encoding of the initial image φ(Ij,0)andaffordance model outputs ξjfrom the top ten trajectories. 
We train an encoder q(z|δj, cj)wherecj= (φ(Ij,0), ξj), as well as a decoder p(δj|z, cj)that learns to reconstruct residuals δj. At test time,our residual policy π(I0, ξ)samples the latent z∼ N(0,I)and predicts ˆδ=p(z,(I0, ξ)). Then werollout the trajectory determined by the parameters ξ+ˆδ. Because the V AE is conditioned on theinitial image, we generalize to different locations and configurations of the object.4 Experiment SetupWe perform a variety of experiments to answer the following: 1) How well can DEFT learn andimprove in the real world? 2) How good is our affordance model? 3) How can the experiencecollected by DEFT be distilled into a policy? 4) How can DEFT be used for complex, soft objectmanipulation? Please see our website at http://dexterous-finetuning.github.io for videos.Task Setup We introduce 9 tabletop tasks, Pick Cup ,Pour Cup ,Open Drawer ,Pick Spoon ,ScoopGrape ,Stir Spoon ,Pick Grape ,Flip Bagel ,Squeeze Lemon . Robotic hands are especially well-suitedfor these tasks because most of them require holding curved objects or manipulating objects withtools to succeed. For all tasks, we randomize the position of the object on the table, as well as usetrain and test objects with different shapes and appearances to test for generalization. To achievereal-world learning with the soft robot hand, we pretrain an internet affordance model as a prior forrobot behavior. As explained in Section 3, we train one language-conditioned model on all data.At test time, we use this as initialization for our real-world fine-tuning. The fine-tuning is donepurely in the real world. An operator runs 10 warmup episodes of CEM, followed by 20 episodesthat continually update the noise distribution, improving the policy. After this stage, we train aresidual V AE policy that trains on the top ten CEM episodes to predict the noise given the image andaffordance outputs. We evaluate how effectively the V AE predicts the residuals on each of the tasksby averaging over 10 trials. Because it takes less than an hour to fine-tune for one task, we are able tothoroughly evaluate our method on 9 tasks, involving over 100 hours of real-world data collection.Hardware Setup We use a 6-DOF UFactory xArm6 robot arm for all our experiments. We attachit to a 16-DOF Soft Hand using a custom, 3D-printed base. We use a single, egocentric RGBDcamera to capture the 3D location of the object in the camera frame. We calibrate the camera so thatthe predictions of the affordance model can be converted to and executed in the robot frame. Theflexibility of the robot hand also makes it robust to collisions with objects or unexpected contact withthe environment. For the arm, we ensure that it stays above the tabletop. 
The job will be terminated ifthe arm’s dynamics controller senses that the arm collided aggressively with the environment.6Method Pick cup Pour cup Open drawer Pick spoon Scoop Grape Stir Spoontrain test train test train test train test train test train testReal-World Only 0.0 0.1 0.2 0.1 0.1 0.0 0.7 0.3 0.0 0.0 0.3 0.0Affordance Model Only 0.1 0.4 0.5 0.5 0.0 0.3DEFT 0.8 0.8 0.8 0.9 0.5 0.4 0.8 0.6 0.7 0.3 0.8 0.5Table 2: We present the results of our method as well as compare them to other baselines: Real-world learningwithout internet priors used as guidance and the affordance model outputs without real-world learning.Weevaluate the success of the methods on the tasks over 10 trials.10 15 20 25 30CEM Rollouts0.00.20.40.60.8Success RatePick Cup10 15 20 25 30CEM Rollouts0.00.20.40.60.8Success RatePourOurs No Fine-Tuning No Affordance Model10 15 20 25 30CEM Rollouts0.00.20.40.60.8Success RateOpen Drawer10 15 20 25 30CEM Rollouts0.00.20.40.60.8Success RatePick Spoon10 15 20 25 30CEM Rollouts0.00.20.40.60.8Success RateScoop10 15 20 25 30CEM Rollouts0.00.20.40.60.8Success RateStirFigure 6: Improvement results for 6 tasks: pick cup, pour, open drawer, pick spoon, scoop, and stir. We see asteady improvement in our method as more CEM episodes are collected.5 ResultsEffect of affordance model We investigate the role of the affordance model and real-world fine-tuning (Table 2 and Figure 6). In the real-world only model, we provide a few heuristics in place ofthe affordance prior. We detect the object in the scene using a popular object detection model [ 71]and let the contact location prior be the center of the bounding box. We randomly sample the rotationangle and use a half-closed hand as the grasp pose prior. With these manually provided priors, therobot has difficulty finding stable grasps. The main challenge was finding the correct rotation anglefor the hand. Hand rotation is very important for many tool manipulation tasks because it requiresnot only picking the tool but also grasping in a stable manner.Zero-shot model execution We explore the zero-shot performance of our prior. Without applyingany online fine-tuning to our affordance model, we rollout the trajectory parameterized by the prior.While our model is decent on simpler tasks, the model struggles on tasks like stir and scoop thatrequire strong power grasps (shown in Table 2). In these tasks, the spoon collides with other objects,so fine-tuning the prior to hold the back of the spoon is important in maintaining a reliable gripthroughout the post-grasp motion. Because DEFT incorporates real-world experience with the prior,it is able to sample contact locations and grasp rotations that can better execute the task.Human and automated rewards We ablate the reward function used to evaluate episodes. Ourmethod queries the operator during the task reset process to assign a continuous score from 0to1forthe grasp. Because the reset process requires a human-in-the-loop regardless, this adds little marginalcost for the operator. But what if we would like these rewards to be calculated autonomously? Weuse the final image collected in the single post-grasp human demonstration from Section 3 as the goalimage. We define the reward to be the negative embedding distance between the final image of theepisode and the goal image with either an R3M [ 69] or a ResNet [ 78] encoder. 
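A minimal sketch of this automated reward is given below, assuming a frozen image encoder (e.g., R3M or an ImageNet-pretrained ResNet-18 with its classification head removed) has been constructed elsewhere.

```python
import torch

@torch.no_grad()
def embedding_distance_reward(encoder, final_image, goal_image):
    """Reward = negative L2 distance between frozen visual embeddings.

    `encoder` is any frozen image encoder mapping a (B, 3, H, W) tensor to a
    (B, D) feature vector; final_image and goal_image are single (3, H, W) frames.
    """
    z_final = encoder(final_image.unsqueeze(0))   # embedding of the episode's last frame
    z_goal = encoder(goal_image.unsqueeze(0))     # embedding of the human-demo goal frame
    return -torch.norm(z_final - z_goal, p=2, dim=-1).item()
```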
The model learnedfrom ranking trajectories with R3M reward is competitive with DEFT in all but one task, indicatingthat using a visual reward model can provide reasonable results compared to human rewards.7Method Pour Cup Open Drawer Pick Spoontrain test train test train testReward Function:R3M Reward 0.0 0.0 0.4 0.5 0.5 0.4Resnet18 Imagenet Reward 0.1 0.2 0.3 0.1 0.4 0.2Policy Ablation:DEFT w/ MLP 0.0 0.0 0.5 0.0 0.6 0.5DEFT w/ Transformer 0.4 0.5 0.6 0.1 0.4 0.5DEFT w/ Direct Parameter est. 0.1 0.1 0.1 0.0 0.3 0.0DEFT 0.8 0.9 0.5 0.4 0.8 0.6Table 3: Ablations for (1) reward function type, (2) model architecture, and (3) parameter estimation.10 15 20 25 30CEM Rollouts0.00.20.4Success RatePick Grape10 15 20 25 30CEM Rollouts0.00.20.40.6Success RateFlip BagelOurs No Fine-Tuning10 15 20 25 30CEM Rollouts0.00.20.40.6Success RateSqueeze LemonFigure 7: We evaluate DEFT on three difficult manipulation tasks.Model Architecture Weinvestigate different modelsand training architecturesfor the policy trained on therollouts (Table 3). When wereplace the conditional V AEwith an MLP that predictsresiduals, the model has dif-ficulty learning the grasp rotation to effectively pour a cup. We find that the MLP cannot learn themulti-modality of the successful data properly. Our transformer ablation is an offline method similarto [79] where in addition to the image and affordance model outputs, we condition on the rewardoutputs and train a transformer to predict the residual. At test time the maximum reward is queriedand the output is used in the rollout. While this method performs well, we hypothesize that thetransformer needs more data to match DEFT. Finally, we train a V AE to directly estimate ξinstead ofthe residual. This does not effectively distill the information from the affordance prior without thetraining time allotted. As a result, it often makes predictions that are far from the correct grasp pose.Performance on complex tasks and soft object manipulation We investigate the performance ofDEFT on more challenging tasks. Tasks involving soft objects cannot be simulated accurately, whileour method is able to perform reasonably on food manipulation tasks as shown in Figure 7.Of the three tasks, our method has the most difficulty with the Pick Grape task. Because grapes aresmall, the fingers must curl fully to maintain a stable grasp. A limitation of our hand is that the rangeof its joints does not allow it to close the grasp fully and as a result, it has difficulty in consistentlypicking small objects. This also makes it challenging to hold heavy objects like the spatula in FlipBagel, but with practice DEFT learns to maintain a stable grasp of the spatula. For Squeeze Lemon,DEFT develops a grasp that allows it to apply sufficient pressure above the juicer. Specifically, ourmethod takes advantage of the additional fingers available for support in hands.6 Discussion and LimitationsIn this paper, we investigate how to learn dexterous manipulation in complex setups. DEFT aims tolearn directly in the real world. In order to accelerate real-world fine-tuning, we build an affordanceprior learned from human videos. We are able to efficiently practice and improve in the real worldvia our online fine-tuning approach with a soft anthropomorphic hand, performing a variety of tasks(involving both rigid and soft objects). While our method shows some success on these tasks, thereare some limitations to DEFT that hinder its efficacy. 
Although we are able to learn policies for thehigh-dimensional robot hand, the grasps learned are not very multi-modal and do not capture allof the different grasps humans are able to perform. This is mainly due to noisy hand detections inaffordance pretraining. As detection models improve, we hope to be able to learn a more diverse setof hand grasps. Second, during finetuning, resets require human input and intervention. This limitsthe real-world learning we can do, as the human has to be constantly in the loop to reset the objects.Lastly, the hand’s fingers cannot curl fully. This physical limitation makes it difficult to hold thinobjects tightly. Future iterations of the soft hand can be designed to grip such objects strongly.8AcknowledgmentsWe thank Ananye Agarwal and Shagun Uppal for fruitful discussions. KS is supported by the NSFGraduate Research Fellowship under Grant No. DGE2140739. The work is supported in part by ONRN00014-22-1-2096, ONR MURI N00014-22-1-2773, and Air Force Office of Scientific Research(AFOSR) FA9550-23-1-0747.References[1]P. Mannam, K. Shaw, D. Bauer, J. Oh, D. Pathak, and N. Pollard. A framework for designinganthropomorphic soft hands through interaction, 2023.[2]D. F. Fouhey, X. Wang, and A. Gupta. In defense of the direct perception of affordances. arXivpreprint arXiv:1505.01085 , 2015.[3]S. Bahl, R. Mendonca, L. Chen, U. Jain, and D. Pathak. Affordances from human videos as aversatile representation for robotics. 2023.[4]M. Goyal, S. Modi, R. Goyal, and S. Gupta. Human hands as probes for interactive objectunderstanding. In CVPR , 2022.[5]T. Nagarajan, C. Feichtenhofer, and K. Grauman. Grounded human-object interaction hotspotsfrom video. In ICCV , 2019.[6]D. Shan, J. Geng, M. Shu, and D. F. Fouhey. Understanding human hands in contact at internetscale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 9869–9878, 2020.[7]S. Liu, S. Tripathi, S. Majumdar, and X. Wang. Joint hand motion and interaction hotspotsprediction from egocentric videos. In CVPR , 2022.[8]Y . Liu, Y . Liu, C. Jiang, K. Lyu, W. Wan, H. Shen, B. Liang, Z. Fu, H. Wang, and L. Yi.Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages21013–21022, June 2022.[9]X. Wang, R. Girdhar, and A. Gupta. Binge watching: Scaling affordance learning from sitcoms.InCVPR , 2017.[10] T. Davchev, K. S. Luck, M. Burke, F. Meier, S. Schaal, and S. Ramamoorthy. Residual learningfrom demonstration. arXiv preprint arXiv:2008.07682 , 2020.[11] T. Johannink, S. Bahl, A. Nair, J. Luo, A. Kumar, M. Loskyll, J. A. Ojea, E. Solowjow, andS. Levine. Residual reinforcement learning for robot control. In ICRA , 2019.[12] S. Karaman, M. R. Walter, A. Perez, E. Frazzoli, and S. Teller. Anytime motion planning usingthe rrt. In ICRA , 2011.[13] J. J. Kuffner and S. M. LaValle. Rrt-connect: An efficient approach to single-query pathplanning. In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conferenceon Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065) , volume 2, pages995–1001. IEEE, 2000.[14] M. Mukadam, X. Yan, and B. Boots. Gaussian process motion planning. In 2016 IEEEinternational conference on robotics and automation (ICRA) , pages 9–15. IEEE, 2016.[15] J. Kober and J. Peters. Policy search for motor primitives in robotics. Advances in neuralinformation processing systems , 21, 2008.9[16] J. Peters, K. Mulling, and Y . Altun. 
Relative entropy policy search. In Twenty-Fourth AAAIConference on Artificial Intelligence , 2010.[17] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y . Tassa, D. Silver, and D. Wierstra.Continuous control with deep reinforcement learning. ICLR , 2016.[18] I. Popov, N. Heess, T. Lillicrap, R. Hafner, G. Barth-Maron, M. Vecerik, T. Lampe, Y . Tassa,T. Erez, and M. Riedmiller. Data-efficient deep reinforcement learning for dexterous manipula-tion. arXiv preprint arXiv:1704.03073 , 2017.[19] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In ICML , 2017.[20] S. Levine, P. Pastor, A. Krizhevsky, and D. Quillen. Learning hand-eye coordination for roboticgrasping with large-scale data collection. In ISER , 2016.[21] A. V . Nair, V . Pong, M. Dalal, S. Bahl, S. Lin, and S. Levine. Visual reinforcement learningwith imagined goals. In NeurIPS , pages 9191–9200, 2018.[22] P. Agrawal, A. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experientiallearning of intuitive physics. NIPS , 2016.[23] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine. Reinforcement learning with deep energy-basedpolicies. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 ,pages 1352–1361. JMLR. org, 2017.[24] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakr-ishnan, V . Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-basedrobotic manipulation. arXiv preprint arXiv:1806.10293 , 2018.[25] D. Kalashnikov, J. Varley, Y . Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine, andK. Hausman. Mt-opt: Continuous multi-task robotic reinforcement learning at scale. arXivpreprint arXiv:2104.08212 , 2021.[26] R. Martin-Martin, M. A. Lee, R. Gardner, S. Savarese, J. Bohg, and A. Garg. Variable impedancecontrol in end-effector space: An action space for reinforcement learning in contact-rich tasks.IROS , 2019.[27] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin,P. Abbeel, and W. Zaremba. Hindsight experience replay. NeurIPS , 2017.[28] S. Levine and V . Koltun. Guided policy search. In ICML , 2013.[29] R. Mendonca, S. Bahl, and D. Pathak. Alan: Autonomously exploring robotic agents in the realworld. arXiv preprint arXiv:2302.06604 , 2023.[30] J. Romero, D. Tzionas, and M. J. Black. Embodied hands: Modeling and capturing hands andbodies together. ACM Transactions on Graphics, (Proc. SIGGRAPH Asia) , 36(6), Nov. 2017.[31] L. Yang, X. Zhan, K. Li, W. Xu, J. Li, and C. Lu. CPF: Learning a contact potential field tomodel the hand-object interaction. In ICCV , 2021.[32] J. Wang, F. Mueller, F. Bernard, S. Sorli, O. Sotnychenko, N. Qian, M. A. Otaduy, D. Casas,and C. Theobalt. Rgb2hands: real-time tracking of 3d hand interactions from monocular rgbvideo. ACM Transactions on Graphics (TOG) , 39(6):1–16, 2020.[33] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end recovery of human shapeand pose. CoRR , abs/1712.06584, 2017. URL http://arxiv.org/abs/1712.06584 .10[34] Y . Rong, T. Shiratori, and H. Joo. Frankmocap: A monocular 3d whole-body pose estimationsystem via regression and integration. In Proceedings of the IEEE/CVF International Conferenceon Computer Vision (ICCV) Workshops , pages 1749–1759, October 2021.[35] Y . Ye, A. Gupta, and S. Tulsiani. What’s in your hands? 3d reconstruction of generic objectsin hands. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 3895–3905, 2022.[36] Cmu graphics lab motion capture database. http://mocap.cs.cmu.edu/ .[37] C. Ionescu, D. Papava, V . Olaru, and C. Sminchisescu. Human3. 6m: Large scale datasets andpredictive methods for 3d human sensing in natural environments. IEEE transactions on patternanalysis and machine intelligence , 36(7):1325–1339, 2013.[38] C. Zimmermann, D. Ceylan, J. Yang, B. Russell, M. Argus, and T. Brox. Freihand: A datasetfor markerless capture of hand pose and shape from single rgb images. In Proceedings of theIEEE/CVF International Conference on Computer Vision , pages 813–822, 2019.[39] R. Goyal, S. Ebrahimi Kahou, V . Michalski, J. Materzynska, S. Westphal, H. Kim, V . Haenel,I. Fruend, P. Yianilos, M. Mueller-Freitag, F. Hoppe, C. Thurau, I. Bax, and R. Memisevic. The”something something” video database for learning and evaluating visual common sense. InProceedings of the IEEE International Conference on Computer Vision (ICCV) , Oct 2017.[40] B. G. Fabian Caba Heilbron, Victor Escorcia and J. C. Niebles. Activitynet: A large-scale videobenchmark for human activity understanding. In CVPR , pages 961–970, 2015.[41] P. Das, C. Xu, R. F. Doell, and J. J. Corso. A thousand frames in just a few words: Lingualdescription of videos through latent topics and sparse object stitching. In Proceedings of theIEEE conference on computer vision and pattern recognition , pages 2634–2641, 2013.[42] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang,M. Liu, X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages18995–19012, 2022.[43] Y . Liu, Y . Liu, C. Jiang, K. Lyu, W. Wan, H. Shen, B. Liang, Z. Fu, H. Wang, and L. Yi.Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , pages21013–21022, June 2022.[44] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti,J. Munro, T. Perrett, W. Price, and M. Wray. Scaling egocentric vision: The epic-kitchensdataset. In European Conference on Computer Vision (ECCV) , 2018.[45] Y . J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V . Kumar, and A. Zhang. Vip: Towardsuniversal visual reward and representation via value-implicit pre-training. arXiv preprintarXiv:2210.00030 , 2022.[46] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual representationfor robot manipulation. arXiv preprint arXiv:2203.12601 , 2022.[47] A. Handa, K. Van Wyk, W. Yang, J. Liang, Y .-W. Chao, Q. Wan, S. Birchfield, N. Ratliff, andD. Fox. Dexpilot: Vision-based teleoperation of dexterous robotic hand-arm system. In ICRA ,2020.[48] A. Sivakumar, K. Shaw, and D. Pathak. Robotic telekinesis: learning a robotic hand imitator bywatching humans on youtube. arXiv preprint arXiv:2202.10448 , 2022.[49] F. O. H. to Multiple Hands: Imitation Learning for Dexterous Manipulation from Single-Camera Teleoperation. Qin, yuzhe and su, hao and wang, xiaolong, 2022.11[50] K. Shaw, S. Bahl, and D. Pathak. Videodex: Learning dexterity from internet videos. In CoRL ,2022.[51] P. Mandikal and K. Grauman. Dexvip: Learning dexterous grasping with human hand posepriors from video. In Conference on Robot Learning , pages 651–661. PMLR, 2022.[52] A. Rajeswaran, V . Kumar, A. Gupta, G. Vezzani, J. 
Schulman, E. Todorov, and S. Levine.Learning complex dexterous manipulation with deep reinforcement learning and demonstrations.arXiv preprint arXiv:1709.10087 , 2017.[53] X. B. Peng, P. Abbeel, S. Levine, and M. Van de Panne. Deepmimic: Example-guided deepreinforcement learning of physics-based character skills. ACM Transactions On Graphics(TOG) , 37(4):1–14, 2018.[54] P. Mandikal and K. Grauman. Dexvip: Learning dexterous grasping with human hand posepriors from video. In Conference on Robot Learning (CoRL) , 2021.[55] O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron,M. Plappert, G. Powell, A. Ray, et al. Learning dexterous in-hand manipulation. IJRR , 2020.[56] T. Chen, M. Tippur, S. Wu, V . Kumar, E. Adelson, and P. Agrawal. Visual dexterity: In-handdexterous manipulation from depth. arXiv preprint arXiv:2211.11744 , 2022.[57] A. Agarwal, A. Kumar, J. Malik, and D. Pathak. Legged locomotion in challenging terrainsusing egocentric vision. In Conference on Robot Learning , pages 403–415. PMLR, 2023.[58] G. B. Margolis, G. Yang, K. Paigwar, T. Chen, and P. Agrawal. Rapid locomotion via reinforce-ment learning. arXiv preprint arXiv:2205.02824 , 2022.[59] T. Yoshikawa. Dynamic manipulability of robot manipulators. Transactions of the Society ofInstrument and Control Engineers , 21(9):970–975, 1985.[60] H. Asada and J.-J. Slotine. Robot analysis and control . John Wiley & Sons, 1991.[61] K. M. Lynch and F. C. Park. Modern robotics . Cambridge University Press, 2017.[62] C. Liu and M. Tomizuka. Designing the robot behavior for safe human robot interactions.InTrends in Control and Decision-Making for Human-Robot Collaboration Systems , pages241–270. Springer, 2017.[63] O. Khatib. A unified approach for motion and force control of robot manipulators: Theoperational space formulation. IEEE Journal on Robotics and Automation , 3(1):43–53, 1987.[64] Z. Si, T. C. Yu, K. Morozov, J. McCann, and W. Yuan. Robotsweater: Scalable, generalizable,and customizable machine-knitted tactile skins for robots. arXiv preprint arXiv:2303.02858 ,2023.[65] W. Yuan, S. Dong, and E. H. Adelson. Gelsight: High-resolution robot tactile sensors forestimating geometry and force. Sensors , 17(12):2762, 2017.[66] R. Bhirangi, T. Hellebrekers, C. Majidi, and A. Gupta. Reskin: versatile, replaceable, lastingtactile skins. arXiv preprint arXiv:2111.00071 , 2021.[67] S. Sundaram, P. Kellnhofer, Y . Li, J.-Y . Zhu, A. Torralba, and W. Matusik. Learning thesignatures of the human grasp using a scalable tactile glove. Nature , 569(7758), 2019. doi:10.1038/s41586-019-1234-z.[68] X. Zhou, R. Girdhar, A. Joulin, P. Kr ̈ahenb ̈uhl, and I. Misra. Detecting twenty-thousand classesusing image-level supervision. arXiv preprint arXiv:2201.02605 , 2022.12[69] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual representationfor robot manipulation. arXiv preprint arXiv:2203.12601 , 2022.[70] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models fromnatural language supervision. CoRR , abs/2103.00020, 2021. URL https://arxiv.org/abs/2103.00020 .[71] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C.Berg, W.-Y . Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643 , 2023.[72] A. Handa, K. Van Wyk, W. Yang, J. Liang, Y .-W. Chao, Q. Wan, S. Birchfield, N. Ratliff, andD. Fox. 
Dexpilot: Vision-based teleoperation of dexterous robotic hand-arm system. In 2020IEEE International Conference on Robotics and Automation (ICRA) , pages 9164–9170. IEEE,2020.[73] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. In RSS, 2022.[74] K. Sohn, X. Yan, and H. Lee. Learning Structured Output Representation using Deep Condi-tional Generative Models. In NeurIPS , 2015.[75] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximateinference in deep generative models. In International conference on machine learning , pages1278–1286. PMLR, 2014.[76] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximateinference in deep generative models. arXiv preprint arXiv:1401.4082 , 2014.[77] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 ,2013.[78] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR ,abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385 .[79] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, andI. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. arXivpreprint arXiv:2106.01345 , 2021.[80] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.13A Video DemoWe provide video demos of our system at https://dexterous-finetuning.github.io .B DASH: Dexterous Anthropomorphic Soft HandRecently introduced, DASH (Dexterous Anthropomorphic Soft Hand) [ 1] is a four-fingered anthro-pomorphic soft robotic hand well-suited for machine learning research use. Its human-like size andform factor allow us to retarget human hand grasps to robot hand grasps easily as well as performhuman-like grasps. Each finger is actuated by 3 motors connected to string-like tendons, whichdeform the joints closest to the fingertip (DIP joint), the middle joint (PIP joint), and the joint at thebase of the finger (MCP joint). There is one motor for the finger to move side-to-side at the MCPjoint, one for the finger to move forward at the MCP joint, and one for PIP and DIP joints. The PIPand DIP joints are coupled to one motor and move dependently. While the motors do not know theend-effector positions of the fingers, we learn a mapping function from pairs of motor angles andvisually observed open-loop finger joint angles. These models are used to command the finger jointpositions learned from human grasps.C MANO RetargetingFor MANO parameters, the axis of each of the joints is rotation aligned with the wrist joint andtranslated across the hand. However, our robot hand operates on forward and side-to-side joint angles.To translate the MANO parameters to the robot fingers we extract the anatomical consistent axes ofMANO using MANOTorch. Once these axes are extracted, each axis rotation represents twisting (notpossible for human hands), bending, and spreading. We then match these axes to the robot hand. Thespreading of the human hand’s fingers (side-to-side motion at the MCP joint) maps to the side-to-sidemotion at the robot hand’s base joint. The forward folding at the base of the human hand (forwardmotion at the MCP joint) maps to the forward motion at the base of the robot hand’s finger. Finally,the bending of the other two finger joints on the human hand, PIP and DIP, map to the robot hand’sPIP and DIP joints. While the thumb does not have anatomically the same structure, we map the axesin the same way. 
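A simplified sketch of this axis mapping is shown below. It assumes the per-finger anatomical angles (MCP spread, MCP bend, PIP bend, DIP bend) have already been extracted via MANOTorch's axis decomposition; the averaging used for the coupled PIP/DIP tendon and the finger ordering are illustrative stand-ins rather than the calibrated mapping.

```python
import numpy as np

FINGERS = ["index", "middle", "ring", "thumb"]   # illustrative ordering

def retarget_finger(spread_mcp, bend_mcp, bend_pip, bend_dip):
    """Map one finger's anatomical angles to the three commanded robot values.

    Per finger the robot exposes a side-to-side MCP command, a forward MCP
    command, and one command for the coupled PIP+DIP tendon.
    """
    side_to_side = spread_mcp                    # MCP spreading -> base side-to-side
    forward = bend_mcp                           # MCP forward folding -> base forward
    curl = 0.5 * (bend_pip + bend_dip)           # coupled joints: single value (stand-in)
    return np.array([side_to_side, forward, curl])

def retarget_hand(anatomical_angles):
    """anatomical_angles: finger -> (spread_mcp, bend_mcp, bend_pip, bend_dip) in radians."""
    return {f: retarget_finger(*anatomical_angles[f]) for f in FINGERS}
```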
Other approaches rely on creating an energy function to map the human hand to therobot hand. However, because the soft hand is similar in anatomy and size to a human hand, it doesnot require energy functions for accurate retargeting.D Affordance Model TrainingWe use data from Ego4D [ 42], EpicKitchens-100 [ 44], and HOI4D [ 8]. After filtering for clips ofsufficient length, clips that involve grasping objects with the right hand, and clips that have languageannotations, we used 64666 clips from Ego4D, 9144 clips from EpicKitchens, and 2707 clips fromHOI4D. In total, we use a dataset of 76517 samples for training our model.For our contact location model, we use the visual encoder from [ 69] to encode the image as a512-dimensional vector. We use the spatial features of the encoder to upsample the latent beforeapplying a spatial softmax to return the contact heatmap. This consists of three deconvolutional layerswith 512, 256, and 64 channels in that order.To predict wrist rotation and grasp pose, we use the language encoder from [ 70] to compress thelanguage instruction to a 512-dimensional vector. We concatenate the visual and language latents andpass them through a transformer with eight heads and six self-attention layers. We pass the resultof the transformer through an MLP with hidden size 576, and predict a vector of size 48: the first 3dimensions are the axis-angle rotations; the last 45 dimensions are the joint angles of the hand. Thesecorrespond to the parameters output by Frankmocap [ 34], which we used to get ground truth handpose in all the datasets. The images used from the training datasets as well as the ground truth labelsare released here.14We jointly optimize the L2 loss of the contact location μ, the wrist rotation θwristand grasp pose P.The weights we used for the losses are λμ= 1.0, λθ= 0.1, λP= 0.1. We train for 70 epochs withan initial learning rate of 0.0002, and a batch size of 224. We used the Adam optimizer [ 80] withcosine learning rate scheduler. We trained on a single NVIDIA RTX A6000 with 48GB RAM.E Fine-Tuning ParametersBelow are the values of the parameters used for the CEM phase of DEFT.Parameter Value DescriptionE 10 Number of elites for CEMM 10 Number of warm-up episodesN 30 Total number of CEM episodesσμ 0.02 Initial contact location Standard Deviation (meters)σθwrist 0.2 Initial wrist rotation Standard Deviation (euler angle radians)σP 0.05 Initial soft hand joints Standard DeviationTable 4: Values for fixed parameters in fine-tuning Algorithm 1.F Success Criteria for TasksWe define the criteria for success in each of our 9 tasks as follows:• Pick Cup: Cup must leave table surface and stay grasped throughout trial.•Pour Cup: Cup must be grasped throughout trial and also rotate so that the top of the cup isat a lower height than the base.•Open Drawer: Drawer is initially slightly open so that it can be grasped. By the end of theepisodes, the drawer should be at least 1 centimeter more open than it was at the beginning.• Pick Spoon: The spoon must not be in contact with the table at the end of the trial.•Stir Spoon: The spoon base must rotate around the jar/pot at least 180 degrees while grasped.•Scoop Grape: The spoon must hold a grape at the end of the trial while being held by thesoft hand.•Pick Grape: All grapes must be held by the hand above the table surface. 
In particular, if any single grape falls due to a weak stem, this is considered a failure.
• Flip Bagel: The side of the bagel that is facing up at the end of the trial should be opposite the side facing up at the beginning.
• Squeeze Lemon: The lemon should be grasped securely on top of the juicer.
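Returning to the affordance model of Appendix D, the sketch below outlines its forward pass with the stated layer sizes (three deconvolution layers with 512/256/64 channels feeding a spatial softmax, an 8-head/6-layer transformer over the concatenated 512-dimensional visual and language latents, and an MLP with hidden size 576 producing the 48-dimensional output). The frozen R3M-style and CLIP-style encoders, the spatial resolutions, and the feature plumbing are assumptions of this sketch rather than the released implementation.

```python
import torch
import torch.nn as nn

class AffordancePriorSketch(nn.Module):
    """Rough sketch of the Appendix D affordance model (shapes approximate)."""

    def __init__(self, visual_encoder, language_encoder, latent_dim=512):
        super().__init__()
        self.visual_encoder = visual_encoder      # frozen R3M-style image encoder -> (B, 512)
        self.language_encoder = language_encoder  # frozen CLIP-style text encoder -> (B, 512)
        # Contact-location head: upsample spatial features, then spatial softmax.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 512, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1),
        )
        self.heatmap_logits = nn.Conv2d(64, 1, kernel_size=1)
        # Grasp head: transformer over the fused image/language latent.
        layer = nn.TransformerEncoderLayer(d_model=1024, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=6)
        self.grasp_mlp = nn.Sequential(nn.Linear(1024, 576), nn.ReLU(),
                                       nn.Linear(576, 48))   # 3 wrist axis-angle + 45 joints

    def forward(self, image, spatial_feats, text_tokens):
        # spatial_feats: intermediate (B, 512, H', W') features from the visual encoder.
        img_latent = self.visual_encoder(image)               # (B, 512)
        txt_latent = self.language_encoder(text_tokens)       # (B, 512)
        logits = self.heatmap_logits(self.deconv(spatial_feats))
        b, _, h, w = logits.shape
        contact_heatmap = torch.softmax(logits.view(b, -1), dim=-1).view(b, h, w)
        fused = torch.cat([img_latent, txt_latent], dim=-1).unsqueeze(1)   # (B, 1, 1024)
        grasp = self.grasp_mlp(self.transformer(fused).squeeze(1))         # (B, 48)
        return contact_heatmap, grasp[:, :3], grasp[:, 3:]    # heatmap, wrist rot, hand pose
```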
rvh0vkwKUM | Predicting Routine Object Usage forProactive Robot AssistanceMaithili PatelGeorgia Institute of Technologymaithili@gatech.eduAswin PrakashGeorgia Institute of Technologyaprakash88@gatech.eduSonia ChernovaGeorgia Institute of Technologychernova@gatech.eduAbstract: Proactivity in robot assistance refers to the robot’s ability to anticipateuser needs and perform assistive actions without explicit requests. This requiresunderstanding user routines, predicting consistent activities, and actively seekinginformation to predict inconsistent behaviors. We propose SLaTe-PRO (Sequen-tial Latent Temporal model for Predicting Routine Object usage), which improvesupon prior state-of-the-art by combining object and user action information, andconditioning object usage predictions on past history. Additionally, we find somehuman behavior to be inherently stochastic and lacking in contextual cues that therobot can use for proactive assistance. To address such cases, we introduce aninteractive query mechanism that can be used to ask queries about the user’s in-tended activities and object use to improve prediction. We evaluate our approachon longitudinal data from three households, spanning 24 activity classes. SLaTe-PRO performance raises the F1 score metric to 0.57 without queries, and 0.60 withuser queries, over a score of 0.43 from prior work. We additionally present a casestudy with a fully autonomous household robot.Keywords: Proactive Assistance, User Routine Understanding, Human ActivityAnticipation, Robot Learning1 IntroductionProactive assistive robots provide support for human user activities by monitoring user actions, iden-tifying opportunities for supporting the user’s objective, and performing supportive actions withoutexplicitly being asked. Incorporating elements of proactive assistance has been proposed as a keyprinciple for effective human-robot interaction [1], and studies have shown that users prefer proac-tive assistance over always having to ask for help in longitudinal interactions with robots [2, 3].Prior work has considered assistance at two different time scales: short-term assistance based onthe user’s current action (e.g., handing the next tool for an assembly task) [4, 5], and longitudinalassistance, in which the robot must anticipate the user’s needs over long time horizons (e.g., settingout breakfast before the user comes into the kitchen) [6].In this work, we consider the problem of longitudinal proactive assistance , in which the robot learnspatterns in user behavior from observations of a wide range of household tasks, and then providesassistance by fetching objects prior to being asked. Longitudinal assistance is a challenging problemdue to the inherent stochasticity of human behavior – at any given time of day, a person may engagein a wide variety of activities or interact with many objects. The leading dataset for modelingproactive assistance, HOMER [6], crowdsourced different patterns of user routines and obtainedmodels in which users were engaged in one of 3 activities on average, and up to 9 activities atcertain times of the day.Computationally, proactive assistance can be modeled by considering object-object relation frequen-cies [7], periodic routines [8], or through spatio-temporal object tracking [6]. 
Our work is particu-larly inspired by Spatio-Temporal Object Tracking (STOT) [6], which outperforms other prior meth-ods using a generative graph neural network to learn a unified spatio-temporal predictive model ofobject dynamics from temporal sequences of object arrangements. The resulting model performedwell on more consistent user routines, such as using a plate for dinner , but was unable to predictless consistent activities, such as socializing . A key limitation of STOT is that it utilized only objectinformation for proactivity cues (e.g., which objects the user picked up and moved) and did notconsider the underlying high level activity label for the user’s actions.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.In this work, we contribute Sequential Latent Temporal model for Predicting Routine Object usage(SLaTe-PRO), which models temporal evolution of user activities by incorporating observations ob-tained in object and activity domains. SLaTe-PRO improves upon STOT by i) combining object anduser action information, and ii) conditioning object usage predictions on the past history of user ob-servations in addition to the current observation, leading to a significant improvement in proactivityperformance. We further characterize human activities by their difficulty with respect to proactiveassistance, and show that temporal consistency of activities plays a key role in enabling effectiveproactive assistance. Importantly, our analysis shows that some activities are so inconsistent andlacking in contextual cues that the robot can do no better than a random guess based on their like-lihood. To address such cases, we introduce a user interaction component that enables the robot tomake a limited number of daily user queries to inform assistance decisions.We evaluate SLaTe-PRO on three households from the HOMER dataset, which incorporates modelsof 24 routine activities, and compare performance against prior state-of-the-art, STOT. Our resultsshow that, under the same operating conditions as assumed in [6], in which no activity recognitiondata is available about the user, SLaTe-PRO outperforms STOT, raising F1 score from 0:43to0:52.With the added option to perform activity recognition to detect the user’s current activity, SLaTe-PRO achieves a further improvement in F1 score to 0:57. Finally, enabling the robot to pair SLaTe-PRO with a limited number of user queries leads to an F1 score of 0:60, for the most significantimprovement over STOT. We present detailed performance results in simulation, and then present acase study and description of a fully autonomous household robot.2 Prior WorkKey elements in addressing the problem of proactive assistance include recognizing activities beingexecuted by the user, modeling temporal patterns in observed activity representations, and finallyinteracting with the user to obtain clarifications.Activity recognition has been explored through the use of camera data [9, 10, 11, 12], smart-homesensors [13, 14, 15, 16, 17], and wearable devices [18, 19]. In many settings, tracking the interactionof the user with various objects has been used to improve activity recognition [20, 21, 22], goalinference [4] and temporal routine tracking [6]. Recent work on smart-home systems for activityrecognition [13] have achieved 80% accuracy on recognizing activity labels at a coarse-grainedlevel, such as breakfast, washing dishes, work in office, etc. Activity labels can provide useful cuesabout object needs, or lack thereof. 
For example, greeting guests might indicate the need to serverefreshments, while leaving the house suggests no need for anything. We seek to utilize such activityrecognition labels in combination with object-centric observations to create a representation of useractivities, which can support predictive modeling of their routines in object space.Long-horizon proactivity requires a temporal model of the user’s behavior in addition to under-standing the activities independently. To represent temporal patterns in user routines we use latentspace sequence models, which have increasingly been used to model world dynamics [23, 24, 25].These works utilize latent representations to capture dynamics of objects resulting from physics inthe environment, whereas we seek to capture the effect of user’s activities and their effect on howobjects move between different locations (e.g. cabinet, sink, etc.) around a large space, such as awhole household. Models of user behavior at longer timescales have been studied in the context ofpredicting occupancy and traffic [26, 27], and more recently towards anticipating object usage forproactive assistance [6], but they remain specific to their respective domain, by modeling a sequenceover data represented in that domain. In contrast, our model can combine information obtained invarious forms into a unified latent space and use that space to make predictions.Finally, user interaction is necessary in proactive assistance, to address inevitable predictions errorsresulting from aleatoric uncertainty in the user’s daily life. Our proposed solution for seeking userfeedback derives from ideas of information gain, which have been used in prior work to plan activeactions towards searching and mapping unexplored regions [28, 29, 30, 31], to plan actions towardsimproving world models for reinforcement learning [32], to actively query labels to improve classifi-cation performance [33], and to find objects in clutter [34]. Verbal clarifications have been exploredtowards refining goal specifications [35, 36, 37] or navigation instructions [38]. While these worksseek to utilize clarifications in natural language to refine user commands in the same space, we seekto clarify our model’s predictions of the user’s activities. Natural language expressions based onrobot’s internal inferences have been explored towards explaining the robot’s actions to promote2explainability and transparency [39, 40]. In contrast to promoting the user’s understanding of therobot’s inferences, we seek to obtain information that improves the robot’s predictions.3 Problem FormulationOur work builds on the formulation of proactive assistance through object relocations introduced in[6]; we extend the problem formulation to incorporate activity recognition data and to incorporateuser interaction. An environment consists of a set of objects O=foig, and locationsL=flig,and a human agent that takes actions which lead to the movement of objects from one location toanother. Note that objects can also serve as locations for other objects (e.g. spoon being on a plate),in which case the object exists in both OandL. The state of the environment Gt, at timet, consistsof a set of object-location pairs f(oi;li)g, and can be fully observed by the robot.At any given time, the robot’s classification of human agent’s activity is represented as at2A ,one of a predefined set of activities, where unknown2A represents all activities not known to, ornot recognized by, the robot. 
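A minimal sketch of this state representation is shown below, with illustrative object, location, and activity names; the helper simply returns the object-location pairs that are new relative to an earlier state, which the next paragraph formalizes over a time window.

```python
ACTIVITY_SET = {"breakfast", "washing_dishes", "work_in_office", "unknown"}  # subset of A

# G_t: mapping from each object o_i to its current location l_i.  Objects can
# themselves serve as locations (e.g. a spoon on a plate), so a name may appear
# both as a key and as a value.
g_prev = {"mug": "cabinet", "spoon": "drawer", "plate": "cabinet"}
g_curr = {"mug": "counter", "spoon": "drawer", "plate": "table"}
activity = "unknown"   # a_t, the robot's current estimate of the user's activity

def moved_pairs(g_old, g_new):
    """Object-location pairs in g_new that were not present in g_old."""
    return {(o, l) for o, l in g_new.items() if g_old.get(o) != l}

print(moved_pairs(g_prev, g_curr))   # {('mug', 'counter'), ('plate', 'table')}
```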
Over any given time period Δt, the human agent performs actions as part of their activity, causing the environment to transition to G_{t+Δt}, and the difference between states G_t and G_{t+Δt} can be represented as the set of objects that move to a new location within that timestep: ΔG_{t:t+Δt} = {(o_i, l_j) | (o_i, l_j) ∉ G_t, (o_i, l_j) ∈ G_{t'}, s.t. t < t' < (t + Δt)}. Additionally, the user can provide input to the robot by specifying their intended activity or object usage through u_t. In our work, we enable the robot to query the user about their upcoming activities (e.g., "will you be having fruit for breakfast?"), but unprompted user input can also be captured within u_t.

We formulate proactive assistance as consisting of two phases, an observation (training) phase and an assistance phase. In the observation phase, the robot obtains sequential observations of the environment state G_t and user actions a_t, which it uses to learn a predictive model of the user's behavior. In the assistance phase, given a history of environment states G_{0:t}, a partially observed history of activity labels a_{0:t}, optional user input u_t, and time t, the robot predicts a set of object relocations R = {(o_i, l_j)} consisting of the objects o_i that change locations in the predictive window of t to t + Δt, along with their first new location l_j, so that it can move the object where needed.

4 Predicting Routine Object Usage

The aim of SLaTe-PRO¹ is to utilize environment observations G_{0:t} and user activity recognition labels a_{0:t}, when available, to model user routines and predict future object movements ΔG_{t:t+Δt}. We model all observations and predictions at discrete time steps, represented as unit length in our notation. We represent the environment as scene graphs with nodes representing objects o_i and locations l_i through one-hot vectors, and edges connecting objects to their respective locations. Activity labels a_t are represented as one-hot vectors, and time-of-day as a vector of sines and cosines of predetermined frequencies Φ(t) based on prior work [41]. Our proposed method encodes all available activity labels and object arrangement observations into a shared latent space, makes predictions in this space, and decodes them into object movements, as outlined in Figure 1. The latent space X_t encodes more detailed activity information beyond what is captured by the activity label. For instance, the label having breakfast captures the high level activity, whereas the latent space might additionally capture which food is being eaten and what utensils are being used.

We learn autoencoder models for object arrangements and activity labels, based on transformers, graph neural networks, and feedforward MLPs. The object arrangement encoder captures environment changes from the scene graph pairs, f_enc: G_{t-1}, G_t, Φ(t) → X̂_t, and is modeled as a transformer [42] with attention across all objects. The encoder takes as input object features, concatenated with the distribution of their previous and current locations, to capture object movements while preserving the context of unchanged object locations. The encoder applies self-attention across input encodings of all objects, followed by cross-attention conditioned on the time-of-day context, and finally max pooling across resulting object features to obtain a latent representation of overall change in the environment state. Conditioning on time-of-day helps the model contextualize object movements, e.g. using a cup for coffee or wine.
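A condensed sketch of the encoder just described is given below, using standard attention layers; the feature dimensions, number of heads, and the specific sine/cosine frequencies are illustrative rather than the released implementation.

```python
import math
import torch
import torch.nn as nn

def time_of_day_features(t_hours, freqs=(1.0, 2.0, 4.0)):
    """Phi(t): sines and cosines of the time of day at a few fixed frequencies."""
    phase = torch.as_tensor(t_hours, dtype=torch.float32) * (2 * math.pi / 24.0)
    feats = [fn(k * phase) for k in freqs for fn in (torch.sin, torch.cos)]
    return torch.stack(feats, dim=-1)                        # (..., 2 * len(freqs))

class ObjectArrangementEncoder(nn.Module):
    """f_enc(G_{t-1}, G_t, Phi(t)) -> X_t: latent summary of what changed."""

    def __init__(self, obj_feat_dim, num_locations, time_dim, latent_dim=16):
        super().__init__()
        in_dim = obj_feat_dim + 2 * num_locations            # object + prev/curr location
        self.proj = nn.Linear(in_dim, latent_dim)
        self.self_attn = nn.MultiheadAttention(latent_dim, num_heads=2, batch_first=True)
        self.time_proj = nn.Linear(time_dim, latent_dim)
        self.cross_attn = nn.MultiheadAttention(latent_dim, num_heads=2, batch_first=True)

    def forward(self, obj_feats, prev_loc, curr_loc, time_feats):
        # obj_feats: (B, N, F); prev_loc, curr_loc: (B, N, L) one-hot; time_feats: (B, T)
        tokens = self.proj(torch.cat([obj_feats, prev_loc, curr_loc], dim=-1))
        tokens, _ = self.self_attn(tokens, tokens, tokens)    # attention across all objects
        time_ctx = self.time_proj(time_feats).unsqueeze(1)    # (B, 1, D) time-of-day context
        tokens, _ = self.cross_attn(tokens, time_ctx, time_ctx)
        return tokens.max(dim=1).values                       # max-pool over objects -> X_t
```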
The object arrangement decoder, f_dec: G_t, X_{t+1} → p(G_{t+1}), modelled using an edge-message-passing-based graph neural network proposed in prior work [6], generates a probabilistic scene graph representing object arrangements at a future time step from the current scene graph and conditioned on the latent vector. Note that instead of encoding and decoding the entire scene graph, our approach focuses on capturing changes in the environment through the latent vector. We encode graph pairs and condition our decoder on the previous graph, not requiring our model to remember locations of irrelevant objects. We represent the activity label encoder as a learnable set of embedding vectors, g_enc: a_t → X̂_t, for each activity label, and the activity label decoder as a fully connected feedforward classifier, g_dec: X_t → p(A_t).

Figure 1: SLaTe-PRO consists of transformer-based encoder f_enc and graph neural network based decoder f_dec for object observations G_t, learned embeddings g_enc and MLP-based decoder g_dec for activity labels a_t, and a transformer-based predictive model h over latent space X_t. These models together learn to predict the object arrangement and activity label at future time-steps. The variables in grey represent observed variables.

¹ The code is available publicly at https://github.com/Maithili/SLaTe-PRO

Finally, we create a latent dynamics model, h: X_{0:t}, Φ(t) → X̂_{t+1}, to learn the user's temporal routine in the shared latent space. This model predicts the next latent state given a history of latent states and the current time-of-day using a self-attention-based transformer encoder. We integrate the absolute time vector Φ(t) with the latent vectors by summation, similar to positional embeddings in the original transformer. Using the absolute time provides the model with the semantic context of time-of-day, in addition to the relative temporal sequence of the latent vectors.

We use several training objectives to train all components of our model simultaneously, using sequential observations of the environment and the user. We use reconstruction losses to train each autoencoder, specifically cross-entropy losses for predicting the correct activity label and the correct location of each object. We train latents obtained from either source to be similar through a contrastive loss, and combine latents obtained from both encoders by averaging. The latent predictive model is trained on this averaged latent through both reconstruction losses on the decoded future graphs and actions, and a contrastive loss between the predicted and encoded latent vectors. We use a latent overshooting loss to aid long-horizon predictions, as proposed in prior work [23], and find that for our model, observation overshooting provides very little benefit.

At inference time, the model must predict object relocations R̂ that will occur in a given Δ-step predictive horizon. For this we first encode observation histories G_{0:t} and a_{0:T}, and average the encodings at every timestep to obtain a sequence of latent vectors X_{0:T}. We then employ the latent predictive model to predict latent vectors for a Δ-prediction horizon, X̂_{t:t+Δ}, and decode them into a sequence of object arrangement probabilities p(G_{t:t+Δ}) and activity labels Â_{t:t+Δ}. To predict relocations R̂ associated with each object's first movement, we use the location distribution at the time-step t_oi when the object o_i, currently at location l_i, is most likely to be at a different location: t_oi = argmin_t p(o_i, l_i).
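The rollout that produces these per-step location distributions can be summarized as below; `f_enc`, `g_enc`, `h`, `f_dec`, and `g_dec` stand for the trained modules, `time_feats` is assumed to extend over the prediction horizon, and the tensor bookkeeping (e.g., conditioning each decode on the previous predicted arrangement) is simplified.

```python
import torch

@torch.no_grad()
def rollout(f_enc, g_enc, h, f_dec, g_dec, graphs, activities, time_feats, horizon):
    """Predict object-location distributions and activity labels for `horizon` steps.

    graphs, activities, and time_feats hold the encoded observations up to the
    current step; all module arguments are the trained SLaTe-PRO components.
    """
    # 1) Encode each observed step from both sources and average into one latent.
    latents = []
    for t in range(1, len(graphs)):
        x_obj = f_enc(graphs[t - 1], graphs[t], time_feats[t])
        x_act = g_enc(activities[t])
        latents.append(0.5 * (x_obj + x_act))
    latents = torch.stack(latents, dim=0)                   # latent history X_{0:T}

    # 2) Autoregressively predict future latents and decode each one.
    graph_probs, act_probs, current_graph = [], [], graphs[-1]
    for step in range(horizon):
        x_next = h(latents, time_feats[len(graphs) + step])  # next latent X_hat
        latents = torch.cat([latents, x_next.unsqueeze(0)], dim=0)
        p_graph = f_dec(current_graph, x_next)               # p(G) at this future step
        graph_probs.append(p_graph)
        act_probs.append(g_dec(x_next))
        current_graph = p_graph                              # simplification: condition on prediction
    return graph_probs, act_probs
```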
By combining the location distributions of all objects, we can infer a probability distribution over object-location pairs, p_reloc(o_i, l_j), and infer relocations R̂ = {(o_i, l̂_j)} as a set of objects o_i that are predicted to move to locations l̂_j. In a similar manner, we can infer a distribution over the next activity prediction as the probability of each activity label at the time-step t_a when an activity different from the current activity a_0 is most likely to start: t_a = argmin_t p_t(a_0).

5 Overcoming Stochastic User Behavior through Interactive Queries

The above framework is effective when future human behavior can be predicted from past observations. However, some human action choices are inherently more stochastic than others. In this section we discuss this occurrence and present our approach for generating interactive queries for proactive assistance.

5.1 Limitations to Computational Proactive Assistance

A model that predicts user needs is fundamentally limited by the stochastic nature of human behavior. Users may engage in some activities less consistently than others, such as choosing to eat out, cook dinner at home, or host a party on various nights. Alternately, users may perform the same activity using different objects, such as choosing between cereal, oatmeal or fruit for breakfast. In our analysis of the HOMER dataset, we found that for many such cases there are no observable behavioral cues that the robot can use to predict the user's actions. In such cases, the predictive model can do no better than chance, even when the entirety of ground truth observations is made available. Unsurprisingly, as we report in the results section, performance of SLaTe-PRO drops by 26% on such less consistent activities compared to the overall dataset. In the section below, we describe an interactive approach for eliciting additional information from the user in response to robot queries relating to activities (e.g., "will you be having dinner soon?"), or object usage (e.g. "will you have cereal for breakfast today?").

5.2 Interactive Queries for Proactive Assistance

We rely on the learned predictive model to decide when an inconsistent behavior is likely to occur, and which query q would elicit information, Q: q → u_t, that best alleviates the uncertainty. Specifically, we use the predicted relocation distribution p_reloc to focus on predictions that the robot is most uncertain about and which might provide useful assistive opportunities. We use information gain as a metric to decide when a query will be informative, and measure it through the expectation over the potential query responses u_t of the reduction in entropy of the relocation distribution H(p_reloc). We use the predicted activity and object relocation distributions as the probability of query responses indicating the respective events, and calculate the expected information gain as Σ_{u_t} p(u_t) [H(p_reloc(o_i, l_j)) − H(p_reloc(o_i, l_j | u_t))].

The robot can elicit query responses in two forms, u_t ∈ {u_t^a, u_t^{o_i}}: by asking regarding an activity a, resulting in the activity that the user will do next, u_t^a ∈ A, or regarding a particular object o_i, resulting in a binary response on whether it will be used in the predictive horizon, u_t^{o_i} ∈ {True, False}. We interpret the response to an activity query as the correct activity label at the time-step t_a when a new activity is likely to start (similar to the next activity inference in Section 4). We encode the activity to obtain a vector in latent space, X̃_{t_a} = g_enc(u_t), which we combine with the predicted latent at that time-step through a weighted average.
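A sketch of the query-selection criterion follows: each candidate query's expected information gain weights the entropy reduction of the relocation distribution by the predicted probability of each possible response, and a query is asked only if the best candidate clears the threshold (0.5 in the experiments). The re-rollout that conditions the prediction on a response is abstracted behind `reloc_given_response`.

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float).ravel()
    return float(-(p * np.log(p + eps)).sum())

def expected_information_gain(p_reloc, response_probs, reloc_given_response):
    """E_{u_t}[ H(p_reloc) - H(p_reloc | u_t) ] for one candidate query.

    response_probs:        dict response -> predicted probability p(u_t)
    reloc_given_response:  dict response -> relocation distribution after
                           conditioning the rollout on that response
    """
    h_prior = entropy(p_reloc)
    return sum(p * (h_prior - entropy(reloc_given_response[u]))
               for u, p in response_probs.items())

def select_query(candidates, threshold=0.5):
    """candidates: dict query -> (p_reloc, response_probs, reloc_given_response)."""
    gains = {q: expected_information_gain(*args) for q, args in candidates.items()}
    best = max(gains, key=gains.get)
    return (best, gains[best]) if gains[best] > threshold else (None, gains[best])
```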
The object relocation distribution conditionedon the query response preloc(oi;ljjut), is obtained by continuing the rollout with the corrected latentvector. For object queries, we interpret the user response as the object leaving from or staying inits current location at the time-step toiwhen it was most likely to move (similar to the relocationinference in Section 4). We obtain the conditioned object relocation distribution by correcting thelatent vector and continuing the remaining rollout, similar to activity-based queries.We compute the expected information gain from all potential query candidates, which includes theactivity-based query, and all objects that do not have highly confident predictions. We excludeobjects which the model is over 90% confident about to avoid unnecessary computational overhead.If the best query has an expected information gain of above a predefined threshold, then the robotwill ask that query, and correct its predictions based on the response.6 EvaluationWe create HOMER+2, by modifying the HOMER dataset [6], which represents routines of indi-vidual households over several weeks. The original HOMER dataset contains 5 households with22 activities, and focuses on capturing variations across households. Each activity is executed in ahousehold using a single crowdsourced script (scripts may differ between households). We mod-ify this dataset to instead focus on capturing realistic variations within each household. We adopta script per activity from the original dataset, and add 1-3 variations for 17 out of 24 activities3tobetter emulate how humans perform the same activity in various ways, eg. sometimes having cereal ,and other times having oatmeal for breakfast. The resulting behavior distribution more accuratelyreflects real world human behavior. Note that this variation challenges our model just as much as thebaseline, if not more, as the robot observes the same activity label regardless of the variation beingexecuted, but we believe this more accurately reflects the stochasticity of user routines. HOMER+includes 3 households, with 24 activities, and 93 entities, including objects and locations.We evaluate our model’s predictions of object relocations ^R=f(oi;^lj)gby comparing them againstthe expected set of relocations in the ground truth sequence R=f(oi;lj)gover metrics of recallj^R\RjjRj, precisionj^R\Rjj^Rj, and F-1 score calculated based on them. We evaluate predictions over a -step predictive window, independently for each time-step t, with a discretization of 10 minutes. Weconsider a predictive window of 30 mins, unless otherwise specified. All our evaluations are based2HOMER+ is included with SLaTe-PRO code at https://github.com/Maithili/SLaTe-PRO3We add going to sleep ,getting out of bed andtaking a nap , split washing dishes into activities associatedwith each meal, and combine cleaning ,kitchen cleaning andvacuum cleaning into a single activity5on 10 days of evaluation data per household, which are unseen during training. All metrics aremicro-averaged over the 10 days for each household and macro-averaged over the three households.We compare our results against STOT, proposed in prior work [6]. STOT utilizes time as context tomake one-step predictions on object arrangements, and iteratively does so for long-horizon predic-tions. The architecture and size of this model is the same as our object movement decoder. We trainall models with 60 days of observations, independently for each household. 
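The relocation metrics defined above reduce to set operations over predicted and ground-truth (object, location) pairs, as in the following sketch.

```python
def relocation_scores(predicted, ground_truth):
    """Precision, recall, and F1 over sets of (object, location) relocation pairs."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    hits = len(predicted & ground_truth)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: two of three predicted moves are correct; one true move is missed.
pred = {("mug", "counter"), ("plate", "table"), ("spoon", "sink")}
true = {("mug", "counter"), ("plate", "table"), ("bowl", "sink")}
print(relocation_scores(pred, true))   # (0.666..., 0.666..., 0.666...)
```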
The size of our latentvector and embeddings of both transformers are set to 16, and the hidden layer of our GNN decoderand STOT are set to 8 as in [6]. We train all models up to 1000 epochs with early stopping based onaccuracy over predicting locations of moved objects, using Adam optimizer and a learning rate of103. We use a threshold of 0.5 on information gain and 0.8 as the weight of latent correction whenincorporating feedback.7 ResultsWe present empirical results on the HOMER+ dataset, show the effect of inconsistency on perfor-mance, and how active queries help improve performance, especially over inconsistent behavior.Finally, we present a case study of SLaTe-PRO running on a physical robot system.7.1 Predictive Performance against STOTFigure 2a presents a comparison between variations of SLaTe-PRO against STOT. We see that inthe absence of activity recognition (i.e., given exactly same inputs Gt), SLaTe-PRO (No Act) out-performs STOT by 20%. This improvement is due to the recurrent latent space prediction, whichleverages the history of past states in addition to the current state. For example, if the user hadbreakfast a few hours ago, they are unlikely to have it again.(a) (b)Figure 2: (a) SLaTe-PRO outperforms STOT with no activity labels, and steadily improves as more activitiesbecome available. (b) With 100% activity labels, SLaTe-PRO outperforms STOT across varying proactivity -sNext, we evaluate the benefit of activity labels on the performance of SLaTe-PRO. Modern activ-ity recognition systems are not perfect, as discussed in Section 2, and achieve 80% accuracy ondatasets, and potentially lower in complex real world settings. Hence, we consider varying activityrecognition performance, from none (No Act) to 100% availability of correct activity labels (Fig-ure 2a). With 75% activity labels, SLaTe-PRO approaches peak performance, raising the F1 scorefrom 0:43to0:57, showing little additional improvement with more activity labels. In Figure 2b, weanalyze SLaTe-PRO with 100% activity labels across different predictive- s, and demonstrate thatit outperforms STOT, particularly for long-horizon predictions.7.2 Effect of Behavioral ConsistencyNext, we study how the model performance differs for consistent and inconsistent activities. Weassess activity consistency using standard deviation in start times, and categorize them into threegroups: more consistent ( <30min), less consistent ( >1hr), and moderately consistent activitiesfalling in between. For activities that occur multiple times a day, such as brushing teeth in themorning and night, we separately calculate the standard deviation per occurrence, by clusteringusing k-means with the average number of occurrences per day as the value for k. We then evaluateover objects that participate in activities in each of these categories separately. If an object is involvedin multiple activities, we evaluate it based on the less consistent category. By splitting the dataset inthis manner, across the three household datasets, 39% object movements fall in the more consistentcategory, 31% in the middle, and 30% in the less consistent category.6Figure 3: A steady drop in performance is observed across allmethods from more consistent object usage to less.Predictably, the performance ofSLaTe-PRO as well as STOT fallswith decreasing consistency, asshown in Figure 3. 
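The consistency grouping used for this split can be sketched as below: start times of an activity are clustered with k set to its average number of daily occurrences, and the spread of start times within clusters decides the bucket using the 30-minute and 1-hour thresholds. Aggregating by the worst cluster is an assumption of this sketch; the text does not specify how per-occurrence deviations are combined.

```python
import numpy as np
from sklearn.cluster import KMeans

def consistency_category(start_times_min, num_days):
    """Classify an activity as 'more', 'moderate', or 'less' consistent.

    start_times_min: start times (minutes after midnight), pooled over all days.
    """
    k = max(1, round(len(start_times_min) / num_days))        # avg occurrences per day
    times = np.asarray(start_times_min, dtype=float).reshape(-1, 1)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(times)
    worst_std = max(times[labels == c].std() for c in range(k))
    if worst_std < 30:
        return "more"                                         # < 30 min spread
    return "less" if worst_std > 60 else "moderate"           # > 1 hr spread
```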
The performancegap between the more consistent andless consistent activities in SLaTe-PRO with 100% activity availabilityis 0.22, and that for STOT is 0.12.We find the usage of toothpastein one of our datasets as the mostextreme case of routine usage, with astandard deviation of 5 mins. On thatobject alone, all methods achieve anF-1 score of 0.97.7.3 Improvement from QueriesTo address the impact of human be-havior variability on predictive per-formance, we examine the effectiveness of active queries, particularly for inconsistent activities.For each prediction, the robot is allowed to ask a single query, and an oracle response is provided ifa query is asked. We set a threshold of 0.5 information gain for every model to decide when a queryshould be asked. Figure 4 shows performance gains from the inclusion of active queries, with mostsignificant improvements over the less consistent object usages, raising the F1-score from 0.43 to0.49. The F1-score on the overall dataset improves from 0.57 to 0.6, while performance of STOTonly improves by 0.02 overall as well as over inconsistent activities. Note that STOT can only askobject-based queries. SLaTe-PRO seeks feedback for about 18% of its predictions, while STOT asks8% with the same threshold over information gain. Even if we encourage it to ask a similar numberof queries as our model by reducing the threshold, we only see an improvement of 0.01. This in-dicates that STOT, despite adopting a conservative approach to avoid false object movements, failsto effectively represent uncertain predictions and seek useful feedback. We also find that our modeldoes not ask queries pertaining to very sporadic activities, whereas STOT sometimes devotes un-necessarily many queries to such activities. For instance, in one of our datasets where the user tendsto watch TV randomly over the day, STOT persistently asks the user about TV remote usage in theafternoon. In contrast, our model disregards this event and only intervenes in returning the remote.This allows our model to prioritize other, more relevant queries, while STOT misses opportunities.Our model focuses on other inconsistent activities which might lead to more informative results. Forinstance, if a user typically has dinner between 6pm and 8pm, our model asks about dinner plans toavoid prematurely preparing objects or delaying preparations and missing an assistive opportunity.(a) (b)Figure 4: (b) Performance gains from queries are most significant for the less consistent object usages.(a) They also improve performance across the entire dataset. ‘+F’ denotes using active queries with the method.Additionally, our model is able to discern whether an object query or an activity query would be moreuseful at a given time. If our model is constrained to asking only object-based or only activity-basedqueries, we obtain an F1-score of 0.46, in either case, as opposed to 0.49 when the model can chooseto ask any type of query. If the model is uncertain about the occurrence of the activity as a whole, ittends to ask activity-based queries, such as “Will you be socializing soon?”, otherwise it seeks morespecific information, such as “Will you be needing the coffee when you socialize?”. For activities7wherein a particular object is consistently used, the model slightly favors object-based queries, suchas asking about requiring oil for cooking dinner. 
Although both types of queries contain the sameinformation, the object-based queries might be easier for the model to condition on, being in thesame form as the desired output. While generally our model correctly picks the more informativequery between activities and objects, it does sometimes go wrong, causing two kinds of failures.First, when the model overestimates its confidence in an activity label causing the negative responseto an object use leading the model to predict another variation of the activity. For instance, if themodel receives feedback of ‘not using cereal’ it may mistakenly predict the use of oatmeal becauseit is certain that the user will have breakfast, even when the user intends to engage in a differentactivity. On the other hand asking only about the activity label when underconfident leaves it to therobot’s discretion to predict which variation the user prefers, still leaving room for errors.7.4 Robot ValidationWe demonstrate our proactive assistance system on a Stretch robot [43] in a household setting,portraying a morning routine, as shown in Figure 5 and in our video4. To infer a semantic scenegraph representation of the environment from visual observations, we first reconstruct the scene withobjects represented as meshes and bounding boxes, using Hydra [44]5, with dense semantic labelsobtained from SparseInst [45]. We extract the object-location relations through heuristics based onbounding boxes, and use them to build and dynamically update a scene graph as objects move acrossthe environment. The user’s routine is demonstrated to the robot as first having breakfast, where theyusually have an apple and coffee but sometimes eat cereal, and usually leave for work afterwards,but sometimes work from home. Scene graphs and activity labels obtained during the observationphase serve as training data for SLaTe-PRO. During the assistance phase, the robot acts on its objectrelocation predictions to assist proactively (Figure 5a). However, when the user chooses to followtheir less common behavior, e.g. eating cereal (Figure 5b), the robot’s assistive actions fail. Therobot is able to correct such mistakes by actively querying the user about their intended object usage(apple or cereal), and activity (leaving or working from home) (Figure 5c).(a) (b) (c)Figure 5: (a) A Stretch robot assisting proactively with a user’s morning routine, by acting on most likelypredictions. (b) These tend to fail when the user chooses a less-frequent variation. (c) By querying the user’sabout their intent, the robot is able to assist with different variations.8 Limitations and Future WorkOur approach has a number of limitations that present opportunities for future work. First, we donot currently model information about the state of an object (e.g., clean vs dirty plate), and semanticcorrelations between objects (e.g. spoons and forks are both silverware), which have been shownto be useful in modeling object placement [46, 47, 48, 49]. Second, extending the formulation to acontinual learning problem would enable more effective long-term adaptation as user behavior maychange over time; the HOMER datasets currently do not model changing user routines. Finally, user-centered factors should be considered and further evaluated to better understand user preferenceswith regard to types and frequency of assistive robot actions and queries. Our proactive approach canbe personalized by changing level of robot assistance through tuning precision v.s. 
recall, changingamount of active queries through the information gain threshold, and including different types ofpersonalized assistive behaviors in response to different activities.4Demo video is available at https://youtu.be/zLlyM20Bi_85This implementation of object mapping does not inventory closed containers and is thus limited to objectsdirectly observable without environment manipulation. However, SLaTe-PRO has no such constraint and canbe combined with a different mapping system to overcome this limitation in the pipeline implementation.8AcknowledgmentsThis work is sponsored in part by NSF IIS 2112633 and Amazon Research. In addition, we wouldlike to thank Dr. Weiyu Liu (Stanford University) for technical help regarding model structure andtraining.References[1] M. A. Goodrich and D. R. Olsen. Seven principles of efficient human robot interaction. InSMC’03 Conference Proceedings. 2003 IEEE International Conference on Systems, Man andCybernetics. Conference Theme-System Security and Assurance (Cat. No. 03CH37483) , vol-ume 4, pages 3942–3948. IEEE, 2003.[2] H.-M. Gross, S. Mueller, C. Schroeter, M. V olkhardt, A. Scheidig, K. Debes, K. Richter, andN. Doering. Robot companion for domestic health assistance: Implementation, test and casestudy under everyday conditions in private apartments. In 2015 IEEE/RSJ International Con-ference on Intelligent Robots and Systems (IROS) , pages 5992–5999, 2015.[3] G. Peleka, A. Kargakos, E. Skartados, I. Kostavelis, D. Giakoumis, I. Sarantopoulos, Z. Doul-geri, M. Foukarakis, M. Antona, S. Hirche, E. Ruffaldi, B. Stanczyk, A. Zompas, J. Hernandez-Farigola, N. Roberto, K. Rejdak, and D. Tzovaras. Ramcip - a service robot for mci patients athome. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) ,pages 1–9, 2018.[4] X. Puig, T. Shu, S. Li, Z. Wang, Y .-H. Liao, J. B. Tenenbaum, S. Fidler, and A. Torralba. Watch-and-help: A challenge for social perception and human- faigcollaboration. In InternationalConference on Learning Representations , 2021.[5] H. Harman and P. Simoens. Action graphs for proactive robot assistance in smart environments.J. Ambient Intell. Smart Environ. , 12:79–99, 2020.[6] M. Patel and S. Chernova. Proactive robot assistance via spatio-temporal object modeling. InProceedings of The 6th Conference on Robot Learning , volume 205 of Proceedings of MachineLearning Research , pages 881–891. PMLR, 14–18 Dec 2023.[7] Z. Zeng, A. R ̈ofer, and O. C. Jenkins. Semantic linking maps for active visual object search. In2020 IEEE International Conference on Robotics and Automation (ICRA) , pages 1984–1990.IEEE, 2020.[8] T. Krajn ́ık, M. Kulich, L. Mudrov ́a, R. Ambrus, and T. Duckett. Where’s waldo at time t ?using spatio-temporal models for mobile robot search. In 2015 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 2140–2146, 2015.[9] M. E. Kalfaoglu, S. Kalkan, and A. A. Alatan. Late temporal modeling in 3d cnn architectureswith bert for action recognition. In Computer Vision–ECCV 2020 Workshops: Glasgow, UK,August 23–28, 2020, Proceedings, Part V 16 , pages 731–747. Springer, 2020.[10] L. Wang, Y . Xiong, Z. Wang, Y . Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segmentnetworks: Towards good practices for deep action recognition. In European conference oncomputer vision , pages 20–36. Springer, 2016.[11] C.-Y . Wu and P. Krahenbuhl. Towards long-form video understanding. 
In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 1884–1894, 2021.[12] C.-Y . Wu, Y . Li, K. Mangalam, H. Fan, B. Xiong, J. Malik, and C. Feichtenhofer. Memvit:Memory-augmented multiscale vision transformer for efficient long-term video recognition. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages13587–13597, 2022.[13] H. Najeh, C. Lohr, and B. Leduc. Convolutional neural network bootstrapped by dynamicsegmentation and stigmergy-based encoding for real-time human activity recognition in smarthomes. Sensors , 23(4), 2023.9[14] J. Park, K. Jang, and S.-B. Yang. Deep neural networks for activity recognition with multi-sensor data in a smart home. In 2018 IEEE 4th World Forum on Internet of Things (WF-IoT) ,pages 155–160, 2018.[15] L. G. Fahad and S. F. Tahir. Activity recognition and anomaly detection in smart homes.Neurocomputing , 423:362–372, 2021. ISSN 0925-2312.[16] Y . Du, Y . Lim, and Y . Tan. A novel human activity recognition and prediction in smart homebased on interaction. Sensors , 19(20):4474, 2019.[17] D. Das, Y . Nishimura, R. P. Vivek, N. Takeda, S. T. Fish, T. Ploetz, and S. Chernova. Explain-able activity recognition for smart home systems. ACM Transactions on Interactive IntelligentSystems , 13(2):1–39, 2023.[18] S. Zhang, Y . Li, S. Zhang, F. Shahabi, S. Xia, Y . Deng, and N. Alshurafa. Deep learning inhuman activity recognition with wearable sensors: A review on advances. Sensors , 22(4):1476,2022.[19] H. Haresamudram, A. Beedu, V . Agrawal, P. L. Grady, I. Essa, J. Hoffman, and T. Pl ̈otz.Masked reconstruction based self-supervision for human activity recognition. In Proceedingsof the 2020 ACM International Symposium on Wearable Computers , pages 45–49, 2020.[20] J. Ji, R. Krishna, L. Fei-Fei, and J. C. Niebles. Action genome: Actions as compositionsof spatio-temporal scene graphs. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition (CVPR) , June 2020.[21] Z. Chen, J. Mao, J. Wu, K. K. Wong, J. B. Tenenbaum, and C. Gan. Grounding physical con-cepts of objects and events through dynamic visual reasoning. In 9th International Conferenceon Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021 , 2021.[22] N. Gkalelis, A. Goulas, D. Galanopoulos, and V . Mezaris. Objectgraphs: Using objects and agraph convolutional network for the bottom-up recognition and explanation of events in video.InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 3375–3383, 2021.[23] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latentdynamics for planning from pixels. In International conference on machine learning , pages2555–2565. PMLR, 2019.[24] P. Wu, A. Escontrela, D. Hafner, P. Abbeel, and K. Goldberg. Daydreamer: World models forphysical robot learning. In Proceedings of The 6th Conference on Robot Learning , volume 205ofProceedings of Machine Learning Research , pages 2226–2240. PMLR, 14–18 Dec 2023.[25] C. Wang, D. Xu, and L. Fei-Fei. Generalizable task planning through representation pretrain-ing. IEEE Robotics and Automation Letters , 7(3):8299–8306, 2022.[26] T. Vintr, Z. Yan, T. Duckett, and T. Krajn ́ık. Spatio-temporal representation for long-term an-ticipation of human presence in service robotics. In 2019 International Conference on Roboticsand Automation (ICRA) , pages 2620–2626. IEEE, 2019.[27] F. Li, Z. Gui, Z. Zhang, D. Peng, S. Tian, K. Yuan, Y . Sun, H. Wu, J. 
Gong, and Y . Lei.A hierarchical temporal attention-based lstm encoder-decoder model for individual mobilityprediction. Neurocomputing , 403:153–166, 2020.[28] N. Roy and C. Earnest. Dynamic action spaces for information gain maximization in searchand exploration. In 2006 American Control Conference , pages 6–pp. IEEE, 2006.[29] A. Visser and B. A. Slamet. Balancing the information gain against the movement costfor multi-robot frontier exploration. In European Robotics Symposium 2008 , pages 43–52.Springer, 2008.[30] S. Isler, R. Sabzevari, J. Delmerico, and D. Scaramuzza. An information gain formulation foractive volumetric 3d reconstruction. In 2016 IEEE International Conference on Robotics andAutomation (ICRA) , pages 3477–3484. IEEE, 2016.10[31] Y . Tao, Y . Wu, B. Li, F. Cladera, A. Zhou, D. Thakur, and V . Kumar. Seer: Safe efficient explo-ration for aerial robots using learning to predict information gain. In 2023 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 1235–1241, 2023.[32] P. Ball, J. Parker-Holder, A. Pacchiano, K. Choromanski, and S. Roberts. Ready policy one:World building through active learning. In International Conference on Machine Learning ,pages 591–601. PMLR, 2020.[33] R. Mehta, C. Shui, B. Nichyporuk, and T. Arbel. Information gain sampling for active learn-ing in medical image classification. In Uncertainty for Safe Utilization of Machine Learningin Medical Imaging: 4th International Workshop, UNSURE 2022, Held in Conjunction withMICCAI 2022, Singapore, September 18, 2022, Proceedings , pages 135–145. Springer, 2022.[34] T. Novkovic, R. Pautrat, F. Furrer, M. Breyer, R. Siegwart, and J. Nieto. Object finding in clut-tered scenes using interactive perception. In 2020 IEEE International Conference on Roboticsand Automation (ICRA) , pages 8338–8344, 2020.[35] J. Thomason, A. Padmakumar, J. Sinapov, N. Walker, Y . Jiang, H. Yedidsion, J. Hart, P. Stone,and R. J. Mooney. Improving grounded natural language understanding through human-robotdialog. In 2019 International Conference on Robotics and Automation (ICRA) , pages 6934–6941, 2019.[36] Ontology-based knowledge management with verbal interaction for command interpretationand execution by home service robots. Robotics and Autonomous Systems , 140:103763, 2021.ISSN 0921-8890.[37] F. I. Do ̆gan, I. Torre, and I. Leite. Asking follow-up clarifications to resolve ambiguities inhuman-robot conversation. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI) , pages 461–469. IEEE, 2022.[38] T.-C. Chi, M. Shen, M. Eric, S. Kim, and D. Hakkani-tur. Just ask: An interactive learningframework for vision and language navigation. In Proceedings of the AAAI Conference onArtificial Intelligence , volume 34, pages 2459–2466, 2020.[39] A. Daruna, D. Das, and S. Chernova. Explainable knowledge graph embedding: Inference rec-onciliation for knowledge inferences supporting robot actions. In 2022 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 1008–1015. IEEE, 2022.[40] M. Diehl and K. Ramirez-Amaro. A causal-based approach to explain, predict and preventfailures in robotic tasks. Robotics and Autonomous Systems , 162:104376, 2023.[41] V . Pe ̃naloza. Time2vec embedding on a seq2seq bi-directional lstm network for pedestriantrajectory prediction. Res. Comput. Sci. , 149(11):249–260, 2020.[42] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polo-sukhin. Attention is all you need. 
Advances in neural information processing systems , 30,2017.[43] C. C. Kemp, A. Edsinger, H. M. Clever, and B. Matulevich. The design of stretch: A com-pact, lightweight mobile manipulator for indoor human environments. In 2022 InternationalConference on Robotics and Automation (ICRA) , pages 3150–3157. IEEE, 2022.[44] N. Hughes, Y . Chang, and L. Carlone. Hydra: A real-time spatial perception system for 3Dscene graph construction and optimization. 2022.[45] T. Cheng, X. Wang, S. Chen, W. Zhang, Q. Zhang, C. Huang, Z. Zhang, and W. Liu. Sparseinstance activation for real-time instance segmentation. In Proc. IEEE Conf. Computer Visionand Pattern Recognition (CVPR) , 2022.[46] W. Liu, D. Bansal, A. Daruna, and S. Chernova. Learning Instance-Level N-Ary SemanticKnowledge At Scale For Robots Operating in Everyday Environments. In Proceedings ofRobotics: Science and Systems , Virtual, July 2021.11[47] V . K. Nagaraja, V . I. Morariu, and L. S. Davis. Modeling context between objects for refer-ring expression understanding. In Computer Vision–ECCV 2016: 14th European Conference,Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14 , pages 792–807.Springer, 2016.[48] K. Ramachandruni, M. Zuo, and S. Chernova. Consor: A context-aware semantic object rear-rangement framework for partially arranged scenes. In 2023 IEEE/RSJ International Confer-ence on Intelligent Robots and Systems (IROS) . IEEE, 2023.[49] Y . Kant, A. Ramachandran, S. Yenamandra, I. Gilitschenski, D. Batra, A. Szot, andH. Agrawal. Housekeep: Tidying virtual households using commonsense reasoning. In Euro-pean Conference on Computer Vision , pages 355–373. Springer, 2022.12AppendixA HOMER+We contribute HOMER+ an enhanced version of the HOMER [6] dataset, which is a first-of-its-kindlongitudinal behavioral dataset capturing object-interaction level information. We create HOMER+to emphasize variations in the activity patterns within the simulated households to more accuratelymodel the stochasticity in the real world.(a) (b)Figure 6: (a) Temporal variations in breakfast activity and (b) corresponding object usage in a sim-ulated household in HOMER+ through breakfast activity over 75 days of data. Notice day-to-dayvariations in both: start times and objects being used.A.1 HOMER: Strengths and LimitationsThe HOMER [6] dataset contains weeks-long data illustrating routine user behavior with object-interaction-level detail. It comprises of 22 activities of daily living, and is based on crowdsourceddata containing 1) action sequences of how people perform each activity, and 2) temporal distribu-tions representing different habits of when the activity is done during the day. A temporal habit and ascript for each activity are used to compose a fictional household, and samples representing a day inthe household are generated from the resulting temporal distribution. The resulting dataset presentsrealistic temporal noise in user activities, since the activities are sampled from a highly stochasticdistribution, while maintaining some patterns which the model is expected to learn.Weaknesses: HOMER has no variation within each activity since it picks a single action sequencescript per activity. This assumption results in behaviors such as always eating cereal and milk forbreakfast . 
It also ignores sleeping as an activity, thereby ignoring variations in when the user starts their day in the morning, and it ignores correlations between the activities of having meals and washing dishes.

A.2 Creating HOMER+

To create HOMER+, we manually insert variations in the crowdsourced action sequences. For instance, if the breakfast script includes having cereal and coffee, its variations might include eating oatmeal instead of cereal, and/or having juice instead of coffee. This results in different objects being used for breakfast on different days, as shown in Figure 6b. We pick one script per activity, and generate up to 3 variations, which can then be randomly used each time the activity is done. We also insert going to sleep, getting out of bed, and taking a nap activities to allow varying start times of the day, and split the activity of washing dishes into activities associated with each meal to enhance consistency, such that the objects used are the ones washed. Ultimately, this results in a set of 47 scripts encompassing 24 activities. We compile the temporal activity distributions in the same way as HOMER, and sample routines by randomly picking script variations within each activity. This results in a dataset representing three households, with similar activity scripts across households but enhanced variations within each household, relative to HOMER. This allows us to test how predictive models perform under such more realistic, noisy conditions.

A.3 Characteristics of HOMER+

The final dataset HOMER+ contains sequences of activity labels and object location information throughout the day for several weeks, for each of the three simulated households. The activity labels come from the set of 24 activities, with no distinction made based on the variation being performed. This makes the dataset more challenging but maintains the realistic assumption that such nuances are difficult to observe through activity recognition. The activities are sampled from a temporal distribution, resulting in a dataset with natural temporal stochasticity; e.g., in one household, the user has dinner anywhere between 4pm and 7pm, washes dishes right after or hours later, and socializes, plays and listens to music, and watches TV in a random order before and after dinner, as shown in Figure 6a. Within each activity the object usage may vary, as shown in Figure 6b for the single activity of having breakfast. The temporal variations in activities cause any given activity to be followed by one of 10 different activities on average. Taking into account the variations we introduce, this amounts to about 20 different activity variations following any activity. This makes the predictive task significantly more challenging, relative to not having such variations. In addition to the multitude of possible semantic activity sequences, predictions at the object level require an understanding of the expansive space of object locations. Each household consists of 93 entities, which all serve as potential locations for the 59 dynamic objects. Thus, there are about 10^115 (93^59) different potential scene graphs representing object-location combinations, making the full modeling space for the probability of a scene graph conditioned on an observed scene graph blow up to 10^230.

B Out-of-distribution Scenarios

A robot deployed in home environments will need to deal with novel situations, which might be out-of-distribution for the learned predictive model.
In such anomalous situations, we do not expect therobot to provide perfect assistance by anticipating sporadic user needs, but we would want the robotto not take disruptive actions. Semantic patterns in our dataset, and therefore deviations therein, canbe interpreted in terms of the activity a, repetition of an activity r, objects used o, locations whereobjects are moved l, and the time when activity occurs t. We express deviations in routines throughaddition +xor removalxof each of the above variables x2fa;r;o;l;tg, and create hand craftedcases representing each deviation. Our model does not include explicit safeguards against such out-of-distribution behavior, but we report the corresponding model responses in Table 1 to qualitativelyunderstand how the model would behave in such circumstances.Overall, our model does overfit to seen activities, sometimes moving the irregularly used objectsback, but it does not make random predictions, such as moving unrelated objects or misplacingobjects. Note that our model learns about each object from scratch by representing them throughone-hot vectors, so we do not expect generalization to semantically similar objects. However, futurework could explore the use of semantically informed representations to overcome this limitation.Anomaly Type Examples Model Response+a Adding an un-known activityparty, repottingplants, new medi-cationIf novel objects are used in an unseen activity,then the robot ignores their movement, whichis the desired behavior. But, if common ob-jects are used in an unexpected way (e.g., us-ing kitchen bowls to repot plants), then therobot tries to return them, which might dis-turb the user. If the objects are likely to beused around that time at a different location(e.g. bowl at the table at breakfast time) thenthe robot moves it to where it is expected tobe used, otherwise it moves the object whereit is usually stored.14a Not performing ausual activityvacation, having anevening out, sickThe robot adheres to routine, continuing tobring things out, and cleaning them up. Thismay cause some undesired activity and addi-tional reasoning methods should be added tohandle such cases.+r Repeating a usualactivityhaving lunchtwice, watchingTV or listeningto music multipletimesIn cases where certain activity is observed tousually happen exactly once, the robot willnot anticipate its second occurrence, but willprovide assistance once the user initiates theactivity, which is the desired behavior. For in-stance, if the user decides to have lunch again,it will assist in bringing out other related ob-jects once the user has started the activity (e.g.bring out a plate once the user brings out a panand oil), and will also help in returning the ob-jects after use.r Not repeating anotherwise repeatedactivityNot brushingtwice, not playingmusic or workingmultiple times adayIf the activity is repeated in a consistent man-ner such as brushing teeth twice at consistenttimes, the robot would expect them to occurand keep objects ready. If the activity is re-peated but not at consistent times, then therobot learns to not try to anticipate their oc-currence but limit assistance to returning theobjects after use. It maintains this behav-ior when the activity occurs fewer (or more)times than usual. Note that such repetitionanomalies for inconsistent activities are in-cluded in the HOMER dataset.+o Using additionalobjects in anactivityHaving donuts orpizza for breakfastThe robot is not able to associate these newobjects with the activity they are a part of. 
Insome cases, the robot returns the novel objectsback to where they are stored, and in othercases leaves them untouched.o Not using a com-monly used objectNot using platesfor lunchIf an object is consistently used in the activity(e.g. bowl for breakfast), then the robot con-tinues to bring out that object. If the object isused only in a less frequent variation of an ac-tivity (e.g. one of oatmeal or cereal for break-fast), then the robot learns to wait for the userto bring out the object, and if the user decidesto not do so, nor does the robot.+l Using objects at anew locationHaving breakfastin bed, working inthe dining roomThe robot is expecting the same set of objectsto be used at a different location and so it as-sumes they have been misplaced. As a result,it tries to return the objects to where they areusually used, which might cause an inconve-nience to the user.l Not using objectsat the usual loca-tionNot working at thedesk, not havingbreakfast at the ta-bleThis is subsumed in the above case since us-ing objects elsewhere includes not using themat their usual location.+t Performing the ac-tivity at a later timehaving a late din-ner/breakfastIf a consistent activity is performed unusuallylate, the robot prepares objects at their usualtime, but restores them if a long delay occurs.15t Performing the ac-tivity at an earliertimehaving an earlydinner/breakfastIf a consistent activity happens slightly earlysuch that the usual time of occurrence is stillwithin the robot’s proactivity window, thenthe robot manages to prepare objects in time.However, if the user starts the activity earlierthan that, then the robot fails to prepare in ad-vance but does clean up after.Table 1: Out-of-distribution scenarios for the HOMER dataset, and corresponding model responses16 |
VtUZ4VGPns | IIFL: Implicit Interactive Fleet Learning fromHeterogeneous Human SupervisorsGaurav Datta∗1, Ryan Hoque∗1, Anrui Gu1, Eugen Solowjow2, Ken Goldberg11AUTOLab at UC Berkeley2Siemens Research LabAbstract: Imitation learning has been applied to a range of robotic tasks, but canstruggle when robots encounter edge cases that are not represented in the trainingdata (i.e., distribution shift). Interactive fleet learning (IFL) mitigates distributionshift by allowing robots to access remote human supervisors during task execu-tion and learn from them over time, but different supervisors may demonstrate thetask in different ways. Recent work proposes Implicit Behavior Cloning (IBC),which is able to represent multimodal demonstrations using energy-based mod-els (EBMs). In this work, we propose Implicit Interactive Fleet Learning (IIFL),an algorithm that builds on IBC for interactive imitation learning from multipleheterogeneous human supervisors. A key insight in IIFL is a novel approachfor uncertainty quantification in EBMs using Jeffreys divergence. While IIFL ismore computationally expensive than explicit methods, results suggest that IIFLachieves a 2.8×higher success rate in simulation experiments and a 4.5×higherreturn on human effort in a physical block pushing task over (Explicit) IFL, IBC,and other baselines.1 IntroductionImitation learning (IL), the paradigm of learning from human demonstrations and feedback, hasbeen applied to diverse tasks such as autonomous driving [1, 2, 3], robot-assisted surgery [4, 5], anddeformable object manipulation [6, 7, 8]. The most common IL algorithm is behavior cloning (BC)[2], where the robot policy is derived via supervised machine learning on an offline set of human taskdemonstrations. Since BC can suffer from distribution shift between the states visited by the humanand those visited by the robot, interactive IL (IIL) algorithms including DAgger [9] and variants[10, 11, 12] iteratively improve the robot policy with corrective human interventions during robottask execution. These algorithms are typically designed for the single-robot, single-human setting;interactive fleet learning (IFL) [13] extends IIL to multiple robots and multiple human supervisors.However, learning from multiple humans can be unreliable as the data is often multimodal.Training data is multimodal when the same state is paired with multiple (correct) action labels:{(s, ai),(s, aj), . . .}, ai̸=aj. Almost all robot tasks such as grasping, navigation, motion plan-ning, and manipulation can be performed in multiple equally correct ways; as a result, almost alldemonstration data has some degree of multimodality. Multimodality is especially severe whenlearning from different human supervisors with varying preferences and proficiency, as they demon-strate the same task in different ways [14]. Multimodality can also occur in the demonstrations ofone individual human who may make mistakes, become more proficient at the task over time, orexecute a different valid action when subsequently encountering the same state [14, 15].Florence et al. [16] propose Implicit Behavior Cloning (IBC), an IL algorithm that trains an energy-based model (EBM) [17] to represent state-action mappings implicitly rather than explicitly. 
Whilethis makes model training and inference more computationally expensive (Section 6), implicit mod-∗Equal contribution.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.(A) Robot Paths (B) IIFL Energy BC IIFLx x x x x xJunction Hallway Wind IBCFigure 1: In the 2D navigation experiments from Section 5.1, the robot must navigate from the blue X markeron the left to the green X marker on the right, where the robot can go either above or below the rectangulargrey obstacle and continue through a section subject to upward wind forces (blue arrows) that shift commandedmotions upward. (A) Robot Trajectories: After training on 100 demonstrations of the two paths aroundthe obstacle, pure behavior cloning cannot make progress past the fork due to multimodal demonstrations,while Implicit Behavior Cloning cannot overcome the distribution shift due to wind in the +ydirection atexecution time (denoted in light blue). IIFL reaches the goal by handling both multimodality and distributionshift. (B) Implicit Interactive Fleet Learning Energy: We display normalized IIFL energy distributions fromrepresentative states in the trajectory. Lower energy (darker) indicates a more optimal action, and the xandyaxes are the 2D action deltas ˆathat the robot can execute (which can be mapped directly onto the corresponding1×1 cell in the maze). At the junction point, both upward and downward actions attain low energy; in a straighthallway, the rightmost actions attain low energy; in the windy area, actions toward the lower right corner(making progress toward the goal while fighting the wind) attain low energy.els can represent multiple actions for each state. This property allows them to handle both single-human multimodality and multi-human heterogeneity, as they are indistinguishable from a data-centric perspective. However, IBC suffers from the same distribution shift as (Explicit) BC.In this paper we combine implicit models with interactive fleet learning to facilitate interactive learn-ing from multiple humans. See Figure 1 for intuition. As existing IFL algorithms rely on estimatesof epistemic uncertainty like the output variance among an ensemble of networks, which are incom-patible with implicit models (Section 4.3), we propose a new technique for estimating the epistemicuncertainty in EBMs using Jeffreys divergence [18].This paper makes the following contributions: (1) Implicit Interactive Fleet Learning (IIFL), the firstIIL algorithm to use implicit policies, (2) a novel metric for estimating uncertainty in energy-basedmodels, (3) simulation experiments with a fleet of 100 robots and 10 heterogeneous algorithmic su-pervisors, (4) physical experiments with a fleet of 4 robots and 2 heterogeneous human supervisors.Open-source Python code is available at https://github.com/BerkeleyAutomation/IIFL .2 Preliminaries and Related Work2.1 Interactive Imitation LearningLearning from an offline set of human task demonstrations with behavior cloning (i.e., supervisedlearning) is an intuitive and effective way to train a robot control policy [19, 2, 6, 1]. However,behavior cloning can suffer from distribution shift [9], as compounding approximation errors andreal-world data distributions (e.g., variable lighting in a warehouse) can lead the robot to visit statesthat were not visited by the human. To mitigate distribution shift, Ross et al. 
[9] propose datasetaggregation (DAgger), an IIL algorithm which collects online action labels on states visited by therobot during task execution and iteratively improves the robot policy. Since DAgger can requestexcessive queries to a human supervisor, several IIL algorithms seek to reduce human burden byintermittently ceding control to the human during robot execution based on some switching criteria2[11, 10, 20]. Human-gated IIL [11, 21, 22] has the human decide when to take and cede control,while robot-gated IIL [23, 10, 12, 20] has the robot autonomously decide. Hoque et al. [13] proposeInteractive Fleet Learning (IFL), which generalizes robot-gated IIL to multiple robots supervised bymultiple humans. In this work, we consider the IFL setting.Sun et al. [24] propose a method for interactive imitation learning from heterogeneous experts, buttheir method is not based on implicit policies and is limited to autonomous driving applications.Gandhi et al. [25] also interactively learn from multiple experts and propose actively soliciting thehuman supervisors to provide demonstrations that are compatible with the current data. However,this prevents the robot from learning alternative modes and requires the human supervisors to complywith suggestions, which may not occur due to human suboptimality, fatigue, or obstinacy [26].2.2 Robot Learning from Multimodal DataLearning from multimodal demonstrations is an active challenge in machine learning and robotics.A mixture density network [27] is a popular approach that fits a (typically Gaussian) mixture modelto the data, but it requires setting a parameter for how many modes to fit, which may not be knowna priori. When actions can be represented as pixels in an image (e.g., pick points), a Fully Convolu-tional Network [28] can be applied to learning pixelwise multimodality [8, 29]. Shafiullah et al. [30]propose Behavior Transformers, a technique that applies the multi-token prediction of Transformerneural networks [31] to imitation learning. Other Transformer-based policies report similar benefitsfor multimodal data [32, 33]; however, these approaches require action discretization to cast behav-ior prediction as next-token prediction. In a very recent paper, Chi et al. [34] introduce diffusionpolicies, an application of diffusion models [35] to imitation learning from multimodal data.Florence et al. [16] propose implicit behavior cloning, a technique that trains a conditional energy-based model [17] and is found to outperform (explicit) BC and mixture density networks in theirexperiments. As opposed to explicit models that take the form π:S → A , implicit models take theform of a scalar-valued function E:S × A → R; the action is an input rather than an output of themodel. To sample an action from the policy, instead of evaluating the explicit model ˆa=π(s), theimplicit model must perform optimization over Econditioned on state s:ˆa= arg mina∈AE(s, a) (1)In this work, we combine IBC with IFL to mitigate the effects of both distribution shift and multi-modality. To our knowledge, we are the first to extend implicit policies to interactive IL.2.3 Jeffreys DivergenceThe Jeffreys divergence [18] is a statistical measure of the distance between two probability distri-butions and is a symmetric version of the Kullback-Leibler (KL) divergence:DJ(P∥Q) =DKL(P∥Q) +DKL(Q∥P).The KL divergence is widely used in machine learning algorithms, most commonly in variationalautoencoders [36] and generative adversarial networks [37]. 
It has also been used for dimensionality reduction [38], information bottlenecks [39], and policy gradient methods for reinforcement learning [40, 41]. The Jensen-Shannon divergence [42] is another symmetric KL divergence that sums the KL divergences of both distributions against the mixture of the two, but neither the Jensen-Shannon nor the asymmetric KL divergences have the structural properties that make Jeffreys divergence amenable to our setting (Section 4.3). Nielsen [43] derives a proposition similar to Identity 1 (Section 4.3) with Jeffreys divergence for exponential families but does not apply it to energy-based models. To our knowledge, IIFL is the first algorithm to use Jeffreys divergence for uncertainty estimation in energy-based models, exploiting its structural properties for fast computation.

3 Problem Statement

We consider the interactive fleet learning (IFL) setting proposed by Hoque et al. [13]. A fleet of N robots operate in parallel independent Markov Decision Processes (MDPs) that are identical apart from their initial state distributions. The robots can query a set of M < N human supervisors with action space A_H = A ∪ {R}, where a ∈ A is teleoperation in the action space of the robots and R is a "hard reset" that physically resets a robot in a failure state (e.g., a delivery robot tipped over on its side). As in [13], we assume that (1) the robots share a policy π_{θ_t} : S → A, (2) the MDP timesteps are synchronous across robots, and (3) each human can only help one robot at a time. However, unlike the original IFL formulation [13], we do not assume that the human supervisors are homogeneous; instead, each human i may have a unique policy π_H^i : S → A_H. Furthermore, each π_H^i may itself be nondeterministic and multimodal, but is assumed to be optimal or nearly optimal.

An IFL supervisor allocation algorithm is a policy ω that determines the assignment α^t of humans to robots at time t, with no more than one human per robot and one robot per human at a time:

\omega : (\mathbf{s}^t, \pi_{\theta_t}, \cdot) \mapsto \alpha^t \in \{0,1\}^{N \times M} \quad \text{s.t.} \quad \sum_{j=1}^{M} \alpha^t_{ij} \le 1 \;\text{and}\; \sum_{i=1}^{N} \alpha^t_{ij} \le 1 \;\; \forall i, j. \tag{2}

The allocation policy ω in IFL must be autonomously determined with robot-gated criteria [10, 12] rather than human-gated criteria [11, 21, 22] in order to scale to large ratios of N to M. The IFL objective is to find an ω that maximizes the return on human effort (ROHE), defined as the average performance of the robot fleet normalized by the amount of human effort required [13]:

\max_{\omega \in \Omega} \; \mathbb{E}_{\tau \sim p_{\omega, \theta_0}(\tau)} \left[ \frac{M}{N} \cdot \frac{\sum_{t=0}^{T} \bar{r}(\mathbf{s}^t, \mathbf{a}^t)}{1 + \sum_{t=0}^{T} \lVert \omega(\mathbf{s}^t, \pi_{\theta_t}, \alpha^{t-1}, \mathbf{x}^t) \rVert_F^2} \right] \tag{3}

where ‖·‖_F is the Frobenius norm, T is the amount of time the fleet operates (rather than an individual episode horizon), and θ_0 are the initial parameters of π_{θ_t}.

4 Approach

4.1 Preliminaries: Implicit Models

We build on Implicit Behavior Cloning [16]. IBC seeks to learn a conditional energy-based model E : S × A → R, where E(s, a) is the scalar "energy" for action a conditioned on state s. Lower energy indicates a higher correspondence between s and a. The energy function defines a multimodal probability distribution π over actions a conditioned on state s:

\pi(a \mid s) = \frac{e^{-E(s,a)}}{Z(s)} \tag{4}

where Z(s) is a normalization factor known as the "partition function." In practice, we estimate E with a learned neural network function approximator E_θ parameterized by θ, and train E_θ on samples {s_i, a_i} collected from the expert policies π_H.
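To make Equation 4 concrete, the sketch below wraps an arbitrary energy network into an implicit policy that picks a low-energy action by scoring a batch of sampled candidates. The MLP architecture and the simple derivative-free sampler are our own minimal stand-ins; IBC and IIFL instead refine candidates with gradient-based Langevin sampling (Appendix 7.2).

```python
import torch
import torch.nn as nn

class EnergyModel(nn.Module):
    """E_theta(s, a): scalar energy for a state-action pair (placeholder MLP)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

@torch.no_grad()
def implicit_policy_act(ebm, state, action_low, action_high, n_candidates=512):
    """Approximate argmin_a E(s, a) by scoring uniformly sampled candidate actions.

    This corresponds to drawing from the Boltzmann distribution of Equation 4 at a
    low temperature; the actual IBC/IIFL samplers iterate on the candidates.
    """
    low = torch.as_tensor(action_low, dtype=torch.float32)
    high = torch.as_tensor(action_high, dtype=torch.float32)
    candidates = low + (high - low) * torch.rand(n_candidates, low.numel())
    states = state.unsqueeze(0).expand(n_candidates, -1)
    energies = ebm(states, candidates)     # shape: (n_candidates,)
    return candidates[energies.argmin()]   # lowest-energy action

# Example with a 2D delta-position action space, as in the block-pushing task.
ebm = EnergyModel(state_dim=8, action_dim=2)
a = implicit_policy_act(ebm, torch.zeros(8), action_low=[-1, -1], action_high=[1, 1])
print(a)
```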
Training E_θ is described in Appendix 7.2.

4.2 Implicit Interactive Dataset Aggregation

Behavior cloning is prone to distribution shift due to compounding approximation errors [9], and any data-driven robot policy may encounter edge cases during execution that are not represented in the training data [13]. We extend IBC to interactive imitation learning using dataset aggregation of online human data, and iteratively update the shared robot policy with the aggregate dataset at a fixed interval 1 ≤ t̂ ≤ T via supervised learning, as in DAgger [9] and variants [11, 13]:

\begin{cases} D^{t+1} \leftarrow D^{t} \cup D^{t}_{H}, & \text{where } D^{t}_{H} := \{ (s^t_i, \pi^j_H(s^t_i)) : \pi^j_H(s^t_i) \neq R \text{ and } \sum_{j=1}^{M} \alpha^t_{ij} = 1 \} \\ \pi_{\theta_t} \leftarrow \arg\min_{\theta} \mathcal{L}(\pi_\theta, D^t), & \text{if } t \equiv 0 \pmod{\hat{t}} \end{cases}

where π_H^j(s_i^t) is the teleoperation action from human j for robot i at time t, and α^t_{ij} is the assignment of human j to robot i at time t, as in Equation 2. Ross et al. [9] show that such a policy incurs approximation error that is linear in the time horizon rather than quadratic, as in behavior cloning.

4.3 Uncertainty Estimation for EBMs

While prior work computes the output variance among a bootstrapped ensemble of neural networks to estimate epistemic uncertainty [44, 12, 10], this approach is not applicable to implicit policies because multimodality results in a false positive: different ensemble members may select equally optimal actions from different modes, resulting in high variance despite high certainty. Furthermore, training and inference in EBMs are much more computationally expensive than in explicit models (Section 6), making ensembles of 5+ models impractical. Finally, inference in implicit models is nondeterministic, creating an additional source of variance that is not due to uncertainty.

The notion of ensemble disagreement can still be applied to EBMs by considering the action distributions at a given state rather than the single predicted actions. At states within the distribution of the human data, a bootstrapped EBM will predict action distributions that are concentrated around the human actions. However, outside of the human data distribution, the models have no reference behavior to imitate, and will likely predict different conditional action distributions due to random initialization, stochastic optimization, and bootstrapping. Accordingly, we propose bootstrapping 2 implicit policies and calculating the Jeffreys divergence D_J [18] between them as a measure of how their conditional action distributions differ at a given state. Jeffreys divergence in this setting has two key properties: (1) it is symmetric, which is useful as neither bootstrapped policy is more correct than the other, and (2) it is computationally tractable for EBMs as it does not require estimating the partition function Z(s) (Equation 4). To show (2), we derive the following novel identity (proof in Appendix 7.1):

Identity 1. Let E_1 and E_2 be two energy-based models that respectively define distributions π_1 and π_2 according to Equation 4. Then,

D_J(\pi_1(\cdot \mid s) \,\|\, \pi_2(\cdot \mid s)) = \mathbb{E}_{a \sim \pi_1(\cdot \mid s)}[E_2(s,a) - E_1(s,a)] + \mathbb{E}_{a \sim \pi_2(\cdot \mid s)}[E_1(s,a) - E_2(s,a)].

Crucially, the intractable partition functions do not appear in the expression due to the symmetry of Jeffreys divergence. We estimate the expectations in Identity 1 using Langevin sampling. Note that this method is not limited to the interactive IL setting and may have broad applications for any algorithms or systems that use energy-based models.
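Identity 1 suggests a simple Monte Carlo estimator: score each model's samples under both energy functions and average the differences. The sketch below is a minimal version of that estimator, assuming the caller already has approximate samples from each conditional action distribution (e.g., from Langevin sampling); the toy example at the end uses hand-written quadratic energies purely for illustration.

```python
import torch

@torch.no_grad()
def jeffreys_divergence_estimate(e1, e2, state, actions_from_pi1, actions_from_pi2):
    """Monte Carlo estimate of D_J(pi_1(.|s) || pi_2(.|s)) via Identity 1.

    e1, e2: callables mapping (states, actions) -> per-sample scalar energies.
    actions_from_pi1 / _pi2: action samples drawn (approximately) from each model's
    conditional distribution at `state`. Note that no partition function is needed:
    only energy differences appear.
    """
    s1 = state.unsqueeze(0).expand(actions_from_pi1.shape[0], -1)
    s2 = state.unsqueeze(0).expand(actions_from_pi2.shape[0], -1)
    term1 = (e2(s1, actions_from_pi1) - e1(s1, actions_from_pi1)).mean()
    term2 = (e1(s2, actions_from_pi2) - e2(s2, actions_from_pi2)).mean()
    return (term1 + term2).item()

# Toy example: two quadratic energies over a 1-D action space, ignoring the state.
e1 = lambda s, a: ((a - 0.5) ** 2).sum(-1)
e2 = lambda s, a: ((a + 0.5) ** 2).sum(-1)
a1 = 0.5 + 0.1 * torch.randn(256, 1)    # rough samples concentrated near pi_1's mode
a2 = -0.5 + 0.1 * torch.randn(256, 1)   # rough samples concentrated near pi_2's mode
print(jeffreys_divergence_estimate(e1, e2, torch.zeros(3), a1, a2))  # clearly > 0
```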
We provide more intuition on the proposedmetric in Figure 4 in the appendix and consider how this method may be generalized to a greaternumber of models in Appendix 7.3.4.4 Energy-Based AllocationTo extend IBC to the IFL setting, we synthesize the Jeffreys uncertainty estimate with Fleet-DAgger[13]. Specifically, we set the Fleet-DAgger priority function ˆp: (s, πθt)→[0,∞)to prioritizerobots with high uncertainty, followed by robots that require a hard reset R. This produces a super-visor allocation policy ωwith Fleet-EnsembleDAgger, the U.C. (Uncertainty-Constraint) allocationscheme in [13]. We refer to the composite approach as IIFL.5 Experiments5.1 Simulation Experiments: 2D NavigationTo evaluate the correctness of our implementation and provide visual intuition, we first run experi-ments in a 2D pointbot navigation environment. See Figure 1 for the maze environment, representa-tive trajectories, and energy distribution plots. We consider discrete 2D states s= (x, y)∈N2(theCartesian pose of the robot) and continuous 2D actions a= (∆ x,∆y)∈[−1,1]2(relative changesin Cartesian pose). The maze has a fixed start and goal location and consists of a forked path around5Ball Balance AntAnymal Cumulative Successes Cumulative Hard Resets Return on Human Effort IFL BC IBC IIFL IIFL-R Figure 2: IFL Benchmark simulation experiment results. Despite unimodal supervision, IIFL is competitivewith or outperforms IFL and other baselines across 3 environments, suggesting benefits of implicit policiesbeyond robustness to multimodality. Shading represents ±1 standard deviation.a large obstacle followed by a long corridor. An algorithmic supervisor provides 100 demonstrationsof the task, randomly choosing to go upward or downward at the fork with equal probability. Sincea model can simply overfit to the demonstrations in this low-dimensional environment, to inducedistribution shift we add “wind” at execution time to a segment of the right corridor with magnitude0.75in the +ydirection.In 100 trials, (explicit) BC achieves a 0%success rate, IBC achieves a 0%success rate, and IIFLachieves a 100% autonomous success rate (i.e., robot-only trajectories without human interventions,after interactive training). In Figure 1 we observe that BC cannot pass the fork due to averaging thetwo modes to zero. Meanwhile, IBC is not robust to the distribution shift: once the wind pushes therobot to the top of the corridor, it does not know how to return to the center. We also observe that theIIFL energy distributions in Figure 1(B) reflect the desired behavior in accordance with intuition.5.2 Simulation Experiments: IFL BenchmarkEnvironments: Evaluating IIFL in simulation is uniquely challenging as it requires all of the fol-lowing, precluding the use of most existing benchmarks in similar papers: (1) efficient simulationof large robot fleets, (2) simulation of multiple algorithmic humans, (3) interactive human control,and (4) heterogeneous human control, which is difficult to specify in joint space. To accommodatethese requirements, following prior work [13] we evaluate with Isaac Gym [45] and the IFL Bench-mark [13]. We separate these experiments into two domains: (1) homogeneous human control in3 environments (Ball Balance, Ant, Anymal) to compare with prior IFL algorithms that assumeunimodal supervision; (2) heterogeneous human control in FrankaCubeStack, the only Isaac Gymenvironment with Cartesian space control. 
More details are available in Appendix 7.4.Metrics: Following prior work [13], we measure the total successful task completions across thefleet and the total number of hard resets. For interactive algorithms, we also measure the return onhuman effort (Equation 3) where reward is a sparse r∈ {0,1}for task completion. Task executionis deemed successful if the robot completes its trajectory without a hard reset and reaches 95% ofexpert human reward.Baselines: We compare IIFL to the following baselines: (explicit) BC, IBC, (explicit) IFL (specif-ically, Fleet-EnsembleDAgger [13]), and IIFL-Random (IIFL-R), which is an ablation of IIFL that6Algorithm Avg. Reward Task Successes ROHEBC 29.27±14.05 0 .3±0.5 N/AIBC 24.96±0.83 0 .0±0.0 N/AIFL 230.39±53.41 7 .0±2.2 2 .30±0.53IIFL-R 166.24±28.63 0 .0±0.0 1 .66±0.29IIFL 784.26±122.41 26 .7±4.5 7 .84±1.22Table 1: Execution results from the FrankaCubeStack Isaac Gym environment with 4 heterogeneous expertpolicies. IIFL significantly outperforms the baselines in average reward, task successes, and return on humaneffort.allocates humans to robots randomly instead of using the Jeffreys uncertainty estimate. Humansupervisors for BC and IBC perform only hard resets (i.e., no teleoperation) during execution.Experimental Setup: We run experiments with a fleet of N= 100 robots and M= 10 algorith-mic supervisors, where the supervisors are reinforcement learning agents trained with Isaac Gym’sreference implementation of PPO [41]. All training runs have hard reset time tR= 5 timesteps,minimum intervention time tT= 5timesteps, and fleet operation time T= 10,000timesteps [13],and are averaged over 3 random seeds. The initial robot policy πθ0for all algorithms is initializedwith behavior cloning on 10 full task demonstrations. While IFL trains at every timestep follow-ing prior work [13], the implicit interactive algorithms train at intervals of 1000 timesteps with anequivalent total amount of gradient steps for increased stability of EBM training.FrankaCubeStack, in which a Franka arm grasps a cube and stacks it on another (see Appendix 7.4.2for images and details), has several differences from the other 3 environments. First, since it allowsCartesian space control, we can script 4 heterogeneous supervisor policies with grasps correspondingto each face of the cube; the M= 10 supervisors are split into 4 groups, each of which has aunique policy. Second, due to the difficulty of scripting interactive experts, the online interventionstake place at execution-time (i.e., the robot policy is frozen). Third, since there is no notion ofcatastrophic failure in the cube stacking environment, we do not report hard resets as there are none.Results: The results are shown in Figure 2 and Table 1. In the homogeneous control experiments,we observe that IIFL rivals or outperforms all baselines across all metrics, with the exception of hardresets in the Anymal environment. We hypothesize that the latter results from learning more “aggres-sive” locomotion that makes greater progress on average but is more prone to failure. These resultssuggest that implicit policies may have desirable properties over explicit policies such as improveddata efficiency and generalization even when multimodality is notpresent in the data, as suggestedby prior work [16]. The severity of distribution shift due to compounding approximation error [9]in the homogeneous experiments roughly corresponds to the performance gap between BC and IFL(or IBC and IIFL). 
Surprisingly, (explicit) IFL underperforms BC in Ball Balance; we hypothesizethat this is due to its frequent policy updates on a shifting low-dimensional data distribution. Inthe FrankaCubeStack environment, IIFL significantly outperforms the baselines across all metrics,indicating the value of implicit policies for heterogeneous supervision. The 74% performance gapbetween IFL and IIFL corresponds to the severity of multimodality in this environment. Only IFLand IIFL attain nontrivial success rates; while IIFL-R makes progress, it is not able to successfullystack the cube, suggesting that IIFL allocates human attention more judiciously.5.3 Physical Experiments: Pushing Block to Target Point amid ObstacleExperimental Setup: To evaluate IIFL in experiments with real-world human multimodality andhigh-dimensional state spaces, we run an image-based block-pushing task with a fleet of N= 4ABB YuMi robot arms operating simultaneously and M= 2 human supervisors, similar to Hoqueet al. [13]. See Figure 3 for the physical setup. Each robot has an identical square workspace witha small blue cube and rectangular pusher as an end effector. Unlike Hoque et al. [13], we add asquare obstacle to the center of each workspace. The task for each robot is to push the cube to agoal region diametrically opposite the cube’s initial position without colliding with the walls or theobstacle. Once this is achieved, the goal region is procedurally reset based on the new cube position.As described in Section 3, the role of human superivsion is to (1) teleoperate when requested and (2)71 2 3 4 Observation Actions YuMi #1 YuMi #2 Figure 3: Physical experiment setup with 2 ABB YuMi robots for a total of 4 independent arms.Algorithm Successes ( ↑) Hard Resets ( ↓) ROHE ( ↑)BC 2.0±0.8 51 .0±0.8 N/AIBC 20.3±4.1 35.3±6.8 N/AIFL 7.0±0.8 47 .3±0.5 0 .13±0.01IIFL 36.3±1.2 37.0±2.2 0.71±0.01Table 2: Physical block pushing experiment results. IIFL outperforms all baselines in number of task successesand ROHE and explicit methods in hard resets. Implicit BC and IIFL incur similar amounts of hard resets.provide a physical hard reset when requested. When both paths to the goal are equidistant, Human1 pushes the cube clockwise around the obstacle while Human 2 pushes the cube counterclockwise ;if one path is closer, the human takes that path. Hard resets Rare defined to be collisions of thecube with the obstacle or the boundaries of the workspace. Furthermore, unlike the discrete actionspace in Hoque et al. [13], we use a continuous 2D action space of a= (∆ x,∆y)that correspondsto the vector along which to push the block, starting from the block’s center. We run 3 trials of eachalgorithm in Table 2 for T= 150 timesteps; see Appendix 7.4.3 for more details.Results: The results are shown in Table 2. We observe that implicit policies are crucial for success,as the explicit methods rarely reach the goal and incur many hard resets. Results suggest that IIFLimproves the success rate by 80% over IBC and improves ROHE by 4.5 ×over IFL. However, IIFLincurs a similar number of hard resets to IBC. We hypothesize that the duration of the physicalexperiment, difficult to extend due to the significant robot and human time required, is insufficientto learn subtle collision avoidance behaviors that noticeably reduce the number of hard resets.6 Limitations and Future WorkSince IIFL extends IBC, it inherits some of its limitations. 
First, model training and inferencerequire 18×and82×more computation time than explicit methods: on one NVIDIA V100 GPU, wemeasure implicit training to take an additional 0.34 seconds per gradient step and implicit inferenceto take an additional 0.49 seconds. Second, Florence et al. [16] find that IBC performance falls onsome tasks when the action space dimensionality is very high ( |A|>16); we do not observe this inour experiments as |A| ≤ 12but IIFL likely incurs this property with higher-dimensional actions.Third, while it is 7×faster than alternate methods for implicit models and has sub-second latencyfor a fleet of 100 robots, IIFL uncertainty estimation is nevertheless 340×slower than its highlyefficient explicit counterpart (Appendix 7.4.4). Finally, the real-world evaluation of IIFL is limitedto block pushing with fixed block properties; more comprehensive evaluation of IIFL in a widerrange of physical domains is required to assess its full applicability.In future work, we will evaluate IIFL in additional physical environments as well as extend recentlyproposed alternative approaches for handling multimodality such as Behavior Transformers [30]and Diffusion Policies [34] to the IFL setting. We will also develop algorithms that effectively learnfrom human demonstrations that are not only multimodal but also suboptimal. We note that as theJeffreys uncertainty quantification method does not rely on any IFL assumptions, it may be broadlyuseful beyond this setting to any applications involving Boltzmann distributions and EBMs.8AcknowledgmentsThis research was performed at the AUTOLab at UC Berkeley in affiliation with the Berkeley AIResearch (BAIR) Lab. The authors were supported in part by donations from Siemens, Google,Bosch, Toyota Research Institute, and Autodesk and by equipment grants from PhotoNeo, NVidia,and Intuitive Surgical. Any opinions, findings, and conclusions or recommendations expressed inthis material are those of the author(s) and do not necessarily reflect the views of the sponsors.We thank our colleagues who provided helpful feedback, code, and suggestions, especially CesarSalcedo, Letian Fu, and Aviv Adler.References[1] Y . Pan, C.-A. Cheng, K. Saigol, K. Lee, X. Yan, E. Theodorou, and B. Boots. Agile au-tonomous driving using end-to-end deep imitation learning. In Robotics: Science and Systems(RSS) , 2018.[2] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In D. Touret-zky, editor, Neural Information Processing Systems (NeurIPS) , volume 1. Morgan-Kaufmann,1988.[3] J. Chen, B. Yuan, and M. Tomizuka. Deep imitation learning for autonomous driving in genericurban scenarios with enhanced safety. 2019 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 2884–2890, 2019.[4] S. Paradis, M. Hwang, B. Thananjeyan, J. Ichnowski, D. Seita, D. Fer, T. Low, J. E. Gonzalez,and K. Goldberg. Intermittent visual servoing: Efficiently learning policies robust to instrumentchanges for high-precision surgical manipulation. In 2021 IEEE International Conference onRobotics and Automation (ICRA) , pages 7166–7173, 2021.[5] J. W. Kim, P. Zhang, P. L. Gehlbach, I. I. Iordachita, and M. Kobilarov. Towards autonomouseye surgery by combining deep imitation learning with optimal control. In Conference onRobot Learning (CoRL) , 2020.[6] D. Seita, A. Ganapathi, R. Hoque, M. Hwang, E. Cen, A. K. Tanwani, A. Balakrishna,B. Thananjeyan, J. Ichnowski, N. Jamali, et al. 
Deep imitation learning of sequential fabricsmoothing from an algorithmic supervisor. In IEEE/RSJ International Conference on Intelli-gent Robots and Systems (IROS) , pages 9651–9658, 2020.[7] Y . Avigal, L. Berscheid, T. Asfour, T. Kroger, and K. Goldberg. Speedfolding: Learningefficient bimanual folding of garments. 2022 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 1–8, 2022.[8] R. Hoque, K. Shivakumar, S. Aeron, G. Deza, A. Ganapathi, A. Wong, J. Lee, A. Zeng, V . Van-houcke, and K. Goldberg. Learning to fold real garments with one arm: A case study incloud-based robotics research. In IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , 2022.[9] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predic-tion to no-regret online learning. In International Conference on Artificial Intelligence andStatistics (AISTATS) , pages 627–635, 2011.[10] R. Hoque, A. Balakrishna, E. Novoseller, A. Wilcox, D. S. Brown, and K. Goldberg.ThriftyDAgger: Budget-aware novelty and risk gating for interactive imitation learning. InConference on Robot Learning (CoRL) , 2021.[11] M. Kelly, C. Sidrane, K. Driggs-Campbell, and M. J. Kochenderfer. Hg-dagger: Interactiveimitation learning with human experts. 2019 International Conference on Robotics and Au-tomation (ICRA) , pages 8077–8083, 2018.9[12] K. Menda, K. Driggs-Campbell, and M. J. Kochenderfer. EnsembleDAgger: A Bayesian Ap-proach to Safe Imitation Learning. In IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , 2019.[13] R. Hoque, L. Y . Chen, S. Sharma, K. Dharmarajan, B. Thananjeyan, P. Abbeel, and K. Gold-berg. Fleet-dagger: Interactive robot fleet learning with scalable human supervision. In Con-ference on Robot Learning (CoRL) , 2022.[14] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart’in-Mart’in. What matters in learning from offline human demonstrationsfor robot manipulation. In Conference on Robot Learning (CoRL) , 2021.[15] C. G. Northcutt, A. Athalye, and J. Mueller. Pervasive label errors in test sets destabilizemachine learning benchmarks. In Neural Information Processing Systems (NeurIPS) , 2021.[16] P. R. Florence, C. Lynch, A. Zeng, O. Ramirez, A. Wahid, L. Downs, A. S. Wong, J. Lee,I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Conference on Robot Learning(CoRL) , 2021.[17] Y . LeCun, S. Chopra, R. Hadsell, A. Ranzato, and F. J. Huang. A tutorial on energy-basedlearning. Predicting Structured Data , 1(0), 2006.[18] H. Jeffreys. The Theory of Probability . Oxford University Press, 1939.[19] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning fromdemonstration. Robotics and autonomous systems , 57(5):469–483, 2009.[20] J. Zhang and K. Cho. Query-efficient imitation learning for end-to-end autonomous driving.InAssociation for the Advancement of Artificial Intelligence (AAAI) , 2017.[21] J. Spencer, S. Choudhury, M. Barnes, M. Schmittle, M. Chiang, P. Ramadge, and S. Srinivasa.Learning from interventions: Human-robot interaction as both explicit and implicit feedback.InRobotics: Science and Systems (RSS) , 2020.[22] H. Liu, S. Nasiriany, L. Zhang, Z. Bao, and Y . Zhu. Robot learning on the job: Human-in-the-loop autonomy and learning during deployment. arXiv , abs/2211.08416, 2022.[23] R. Hoque, A. Balakrishna, C. Putterman, M. Luo, D. S. Brown, D. Seita, B. Thananjeyan,E. Novoseller, and K. Goldberg. 
LazyDAgger: Reducing context switching in interactiveimitation learning. In IEEE Conference on Automation Science and Engineering (CASE) ,pages 502–509, 2021.[24] X. Sun, S. Yang, and R. Mangharam. Mega-dagger: Imitation learning with multiple imperfectexperts. ArXiv , arXiv preprint arXiv:2303.00638, 2023.[25] K. Gandhi, S. Karamcheti, M. Liao, and D. Sadigh. Eliciting compatible demonstrations formulti-human imitation learning. In Conference on Robot Learning (CoRL) , 2022.[26] S. E. F. Chipman. The Oxford Handbook of Cognitive Science . Oxford University Press, 102017. ISBN 9780199842193.[27] C. M. Bishop. Mixture density networks. Neural Computing Research Group Report , 1994.[28] E. Shelhamer, J. Long, and T. Darrell. Fully convolutional networks for semantic segmentation.IEEE Transactions on Pattern Analysis and Machine Intelligence , 39:640–651, 2017.[29] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, and J. Lee. Transporter networks: Rearranging the visual world forrobotic manipulation. Conference on Robot Learning (CoRL) , 2020.10[30] N. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodes with one stone. In Neural Information Processing Systems (NeurIPS) , 2022.[31] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polo-sukhin. Attention is all you need. In Neural Information Processing Systems (NeurIPS) , 2017.[32] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In Conference on Robot Learning (CoRL) , 2022.[33] Y . Jiang, A. Gupta, Z. Zhang, G. Wang, Y . Dou, Y . Chen, L. Fei-Fei, A. Anandkumar, Y . Zhu,and L. Fan. VIMA: General robot manipulation with multimodal prompts. In NeurIPS 2022Foundation Models for Decision Making Workshop , 2022.[34] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy:Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137 , 2023.[35] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. arXiv preprintarXiv:2006.11239 , 2020.[36] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In International Conferenceon Learning Representations (ICLR) , 2014.[37] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,and Y . Bengio. Generative adversarial networks. In Advances in Neural Information Process-ing Systems , 2014.[38] L. van der Maaten and G. Hinton. Visualizing data using t-sne. Journal of Ma-chine Learning Research , 9(86):2579–2605, 2008. URL http://jmlr.org/papers/v9/vandermaaten08a.html .[39] N. Tishby and N. Zaslavsky. Deep learning and the information bottleneck principle. 2015IEEE Information Theory Workshop (ITW) , pages 1–5, 2015.[40] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimiza-tion. In International Conference on Machine Learning , 2015.[41] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[42] J. Lin. Divergence measures based on the shannon entropy. IEEE Transactions on InformationTheory , 37(1):145–151, 1991.[43] F. Nielsen. Fast approximations of the jeffreys divergence between univariate gaussian mix-tures via mixture conversions to exponential-polynomial distributions. Entropy , 23, 2021.[44] K. Chua, R. Calandra, R. T. McAllister, and S. Levine. 
Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Neural Information Processing Systems, 2018.
[45] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, et al. Isaac Gym: High performance GPU-based physics simulation for robot learning. arXiv preprint arXiv:2108.10470, 2021.
[46] A. van den Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[47] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, ICML'11, pages 681–688, Madison, WI, USA, 2011. Omnipress. ISBN 9781450306195.
[48] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.

7 Appendix
7.1 Jeffreys Divergence Identity
We derive the following identity from the main text:

Identity 1. Let $E_1$ and $E_2$ be two energy-based models that respectively define distributions $\pi_1$ and $\pi_2$ according to Equation 4. Then,
$$D_J(\pi_1(\cdot|s) \,\|\, \pi_2(\cdot|s)) = \mathbb{E}_{a\sim\pi_1(\cdot|s)}[E_2(s,a) - E_1(s,a)] + \mathbb{E}_{a\sim\pi_2(\cdot|s)}[E_1(s,a) - E_2(s,a)].$$

Proof. The proof follows from applying the definition of the Jeffreys divergence to EBMs:
$$\begin{aligned}
D_J(\pi_1(\cdot|s) \,\|\, \pi_2(\cdot|s)) &= D_{KL}(\pi_1(\cdot|s) \,\|\, \pi_2(\cdot|s)) + D_{KL}(\pi_2(\cdot|s) \,\|\, \pi_1(\cdot|s)) \\
&= \mathbb{E}_{a\sim\pi_1(\cdot|s)} \log\frac{\pi_1(a|s)}{\pi_2(a|s)} + \mathbb{E}_{a\sim\pi_2(\cdot|s)} \log\frac{\pi_2(a|s)}{\pi_1(a|s)} \\
&= \mathbb{E}_{a\sim\pi_1(\cdot|s)}[E_2(s,a) - E_1(s,a)] - \log Z_1(s) + \log Z_2(s) \\
&\quad + \mathbb{E}_{a\sim\pi_2(\cdot|s)}[E_1(s,a) - E_2(s,a)] - \log Z_2(s) + \log Z_1(s) \\
&= \mathbb{E}_{a\sim\pi_1(\cdot|s)}[E_2(s,a) - E_1(s,a)] + \mathbb{E}_{a\sim\pi_2(\cdot|s)}[E_1(s,a) - E_2(s,a)].
\end{aligned}$$

To provide more intuition on this identity, we plot the Jeffreys divergence for a pair of isotropic Gaussian energy functions in Figure 4.
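Identity 1 is what makes the Jeffreys divergence tractable for implicit policies: the log-partition terms cancel, so the divergence can be estimated purely from energy differences on samples drawn from each policy. The following is a minimal NumPy sketch of that estimator; the energy functions and the action sampler (e.g., a Langevin-based sampler of the kind used for inference in Implicit BC) are assumed to be supplied by the caller, and the function names are illustrative rather than taken from the released code.

```python
import numpy as np

def jeffreys_divergence(E1, E2, s, sample_actions, n_samples=512):
    """Monte Carlo estimate of D_J(pi_1(.|s) || pi_2(.|s)) via Identity 1.

    E1, E2:         callables mapping (state, batch of actions) -> energies, shape (n,)
    s:              a single state
    sample_actions: callable (E, s, n) -> n approximate samples from the
                    Boltzmann distribution induced by E at state s
    """
    a1 = sample_actions(E1, s, n_samples)            # a ~ pi_1(.|s)
    a2 = sample_actions(E2, s, n_samples)            # a ~ pi_2(.|s)
    # The intractable log-partition terms cancel in the Jeffreys divergence,
    # leaving only expectations of energy differences.
    term1 = float(np.mean(E2(s, a1) - E1(s, a1)))    # E_{pi_1}[E2 - E1]
    term2 = float(np.mean(E1(s, a2) - E2(s, a2)))    # E_{pi_2}[E1 - E2]
    return term1 + term2
```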
7.2 Additional Details on Implicit Models
Implicit BC trains an energy-based model Eθ on samples {si, ai} collected from the expert policies πH. After generating a set of counter-examples {ãij} for each si, Implicit BC minimizes the following InfoNCE [46] loss function:
$$\mathcal{L} = \sum_{i=1}^{N} -\log \hat{p}_\theta(a_i \mid s_i, \{\tilde{a}_i^j\}), \qquad \hat{p}_\theta(a_i \mid s_i, \{\tilde{a}_i^j\}) := \frac{e^{-E_\theta(s_i, a_i)}}{e^{-E_\theta(s_i, a_i)} + \sum_j e^{-E_\theta(s_i, \tilde{a}_i^j)}}. \qquad (5)$$
This loss is equivalent to the negative log likelihood of the training data, where the partition function Z(s) is estimated with the counter-examples. Florence et al. [16] propose three techniques for generating these counter-examples and performing inference over the learned model Eθ; we choose gradient-based Langevin sampling [47] with an additional gradient penalty loss for training in this work, as Florence et al. [16] demonstrate that it scales with action dimensionality better than the alternate methods. This is a Markov Chain Monte Carlo (MCMC) method with stochastic gradient Langevin dynamics. More details are available in Appendix B.3 of Florence et al. [16].
We use the following hyperparameters for implicit model training and inference:

Hyperparameter                      Value
learning rate                       0.0005
learning rate decay                 0.99
learning rate decay steps           100
train counter examples              8
langevin iterations                 100
langevin learning rate init.        0.1
langevin learning rate final        1e-5
langevin polynomial decay power     2
inference counter examples          512

Table 3: Implicit model hyperparameters.

Figure 4: Consider a pair of isotropic Gaussian energy functions E1(s, a) and E2(s, a), in green and purple respectively, where each function is a negated Gaussian probability density function and E1 adds a uniform offset of Z = −100 to all values (Left). Using numerical integration to directly compute the expectations in the Jeffreys divergence identity (Identity 1), at each state we calculate the distance between the implicit policies defined by the two energy functions (Right). As intuition suggests, the divergence peaks at the mean of each Gaussian (where one energy function is highest and the other is near zero) and approaches zero where the energy functions are the same (at the center and edges of the state space). Note the symmetric structure of the Jeffreys curve, which produces identical values regardless of the offset Z.

7.3 Uncertainty Estimation with Larger Ensembles
Prior works using ensembles of explicit models to estimate epistemic uncertainty [44, 12, 10] typically employ larger ensembles of n ≥ 5 models, whereas IIFL uses n = 2. We wish to evaluate the impact of this smaller number of models. However, the Jeffreys divergence is only defined for two distributions, and while other divergence measures (e.g., Jensen-Shannon) can be generalized to an arbitrary number of distributions, they typically require knowledge of the intractable partition functions of the distributions. Accordingly, we consider estimating the uncertainty of n = 5 implicit models by computing the average of the Jeffreys divergences between every pairwise combination of models. Figure 5 provides intuition on this measure, and we provide information on computation time in Section 7.4.4.
We evaluate the effect of adding more models by comparing the estimate of the Jeffreys divergence with n = 2 models and the averaged estimate with n = 5 models to the L2 distance between the robot policy's proposed action and the expert policy's action at the same state. While ground-truth epistemic uncertainty is intractable to calculate, the ground-truth action discrepancy between the human and robot can provide a correlate of uncertainty: higher discrepancy corresponds to higher uncertainty. The results are shown in Figure 6. We observe that both ensemble sizes are positively correlated with action discrepancy, and that the ensemble with n = 5 models has a higher correlation (r = 0.804) than the ensemble with n = 2 models (r = 0.688). We also observe that the n = 5 ensemble has lower variance than n = 2: the standard deviation is 0.176 compared to 0.220. These results suggest that larger ensembles can improve the uncertainty estimation at the cost of increased computation time (2.6× in Section 7.4.4).
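As a companion to the estimator sketched in Section 7.1, the averaged pairwise uncertainty used here can be written in a few lines. This is an illustrative sketch that reuses the hypothetical jeffreys_divergence() helper above; it is not the released implementation.

```python
from itertools import combinations

def ensemble_uncertainty(energy_fns, s, sample_actions, n_samples=512):
    """Average Jeffreys divergence over all C(n, 2) pairs of implicit models."""
    pairs = list(combinations(energy_fns, 2))
    vals = [jeffreys_divergence(E_i, E_j, s, sample_actions, n_samples)
            for E_i, E_j in pairs]
    return sum(vals) / len(vals)
```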
7.4 Additional Experimental Details
7.4.1 IFL Benchmark Hyperparameters
Implementations of Implicit Interactive Fleet Learning and baselines are available in the code supplement and are configured to run with the same hyperparameters we used in the experiments. To compute the uncertainty thresholds û for Explicit IFL and IIFL (see Section 8.3.1 in [13] for definition), we run Explicit BC and Implicit BC respectively with N = 100 robots for T = 1000 timesteps and choose the 99th percentile value among all 100 × 1000 uncertainty values. The FrankaCubeStack environment sets these thresholds to zero since there are no constraint violations (i.e., this sorts robot priority by uncertainty alone). See Table 4 for these values, state and action space dimensionality, and other hyperparameters. The batch size is 512 and all algorithms pretrain the policy for N/2 gradient steps, where N is the number of data points in the 10 offline task demonstrations. Finally, as in prior work [13], the Random IIFL baseline is given a human action budget that approximately equals the average amount of human supervision solicited by IIFL. See the code for more details.

Figure 5: (a) Consider 5 isotropic Gaussian energy functions, each a negative Gaussian probability density function with some offset. (b) We use numerical integration to calculate at each state the Jeffreys divergences between each of the $\binom{5}{2} = 10$ unique pairs of models, and report the average value. As intuition suggests, the calculated uncertainty is highest at states −2 and 2, where two of the Gaussians have means that are far apart, meaning that they strongly prefer very different actions. At state 0, the uncertainty is lower as one model strongly prefers the action 0, and the others are closer to uniform. Far from state 0, the uncertainty is lowest as all the energy functions are approximately flat.

Figure 6: We plot the Jeffreys divergence estimates and the ground-truth action discrepancies at the first 1000 states visited by a robot with a unimodal policy. Both variants of the Jeffreys divergence calculation are positively correlated with the L2 distance between the robot policy's and expert policy's actions. In the n = 2 case, the correlation coefficient is r = 0.688; in the n = 5 case, the correlation coefficient is r = 0.804, indicating that additional models can make the ensemble more predictive of when the agent will deviate from the expert (at the cost of increased computation time).

Environment        |S|   |A|   Explicit û   Implicit û
BallBalance        24    3     0.1179       0.1206
Ant                60    8     0.0304       0.9062
Anymal             48    12    0.0703       2.2845
FrankaCubeStack    19    7     0.0          0.0

Table 4: Simulation environment hyperparameters.

7.4.2 FrankaCubeStack Environment
The scripted supervisor for FrankaCubeStack is defined in human_action() of env/isaacgym/franka_cube_stack.py in the code supplement. Using known pose information and Cartesian space control, the supervisor policy does the following, where Cube A is to be stacked on Cube B: (1) move the end effector to a position above Cube A; (2) rotate into a pre-grasp pose; (3) descend to Cube A; (4) lift Cube A; (5) translate to a position above Cube B; (6) place Cube A on Cube B; and (7) release the gripper. Heterogeneity is concentrated in Step 2: while one supervisor rotates to an angle θ ∈ [0, π/2] that corresponds to a pair of antipodal faces of the cube, the others rotate to θ − π, θ − π/2, and θ + π/2. See Figure 7 for intuition. We also include results for only 2 heterogeneous policies (θ and θ − π/2) in Table 5; results (in conjunction with Table 1) suggest that the relative performance of IIFL over baselines remains approximately consistent as the number of modes varies and can improve as multimodality increases.

Figure 7: The scripted heterogeneous supervisors for the FrankaCubeStack Isaac Gym environment pick different faces of the cube for the same cube pose.

Algorithm   Avg. Reward       Task Successes   ROHE
BC          23.45 ± 0.99      0.0 ± 0.0        N/A
IBC         30.32 ± 2.78      0.0 ± 0.0        N/A
IFL         307.87 ± 118.59   9.3 ± 4.7        3.08 ± 1.19
IIFL-R      244.98 ± 32.58    0.0 ± 0.0        2.45 ± 0.33
IIFL        604.17 ± 263.06   17.7 ± 11.1      6.04 ± 2.63

Table 5: Execution results from the FrankaCubeStack Isaac Gym environment with 2 heterogeneous supervisor policies (rather than 4).
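To make the source of multimodality concrete, the sketch below illustrates Step 2 of the scripted supervisor described in Section 7.4.2: four supervisors that agree on every step except the pre-grasp yaw they rotate to. The function name and signature are illustrative; this is not the human_action() implementation from the code supplement.

```python
import numpy as np

def pregrasp_yaw(theta, supervisor_id):
    """Pre-grasp yaw chosen by one of the four heterogeneous supervisors.

    theta: base angle in [0, pi/2] corresponding to a pair of antipodal
           faces of the cube. Each supervisor applies a fixed offset,
           yielding four distinct action modes for the same cube pose.
    """
    offsets = [0.0, -np.pi, -np.pi / 2, np.pi / 2]
    return theta + offsets[supervisor_id % 4]
```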
7.4.3 Physical Experiment Protocol
We largely follow the physical experiment protocol in Hoque et al. [13] but introduce some modifications to human supervision. We execute 3 trials of each of 4 algorithms (Explicit BC, Implicit BC, Explicit IFL, Implicit IFL) on the fleet of 4 robot arms. Each trial lasts 150 timesteps (synchronous across the fleet) for a total of 3 × 4 × 4 × 150 = 7200 individual pushing actions. The authors provide human teleoperation and hard resets, which differ from prior work due to the continuous action space and the square obstacle in the center of the workspace. Teleoperation is done using an OpenCV (https://opencv.org/) GUI by clicking on the desired end point of the end-effector in the overhead camera view. Hard resets are physical adjustments of the cube to a randomly chosen side of the obstacle. IIFL is trained online with updated data at t = 50 and t = 100, while IFL is updated at every timestep (with an equivalent total amount of gradient steps) to follow prior work [13].
The rest of the experiment protocol matches Hoque et al. [13]. The 2 ABB YuMi robots are located about 1 km apart; a driver program uses the Secure Shell Protocol (SSH) to connect to a machine that is connected to the robot via Ethernet, sending actions and receiving camera observations. Pushing actions are executed concurrently by all 4 arms using multiprocessing. We set minimum intervention time tT = 3 and hard reset time tR = 5. All policies are initialized with an offline dataset of 3360 image-action pairs (336 samples collected by the authors with 10× data augmentation). The 10× data augmentation on the initial offline dataset, as well as on the online data collected during execution, applies the following transformations (a sketch of this pipeline follows the list):
• Linear contrast uniformly sampled between 85% and 115%
• Add values uniformly sampled between -10 and 10 to each pixel value per channel
• Gamma contrast uniformly sampled between 90% and 110%
• Gaussian blur with σ uniformly sampled between 0.0 and 0.3
• Saturation uniformly sampled between 95% and 105%
• Additive Gaussian noise with σ uniformly sampled between 0 and 1/80 × 255
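A minimal sketch of such an augmentation pipeline is shown below using the imgaug library. The ranges mirror the list above, but the choice of library, the call ordering, and the additive-noise bound are assumptions, not details taken from the released code.

```python
# Illustrative augmentation pipeline mirroring the transformations listed
# above. imgaug is an assumed library choice; the original implementation
# may differ.
import imgaug.augmenters as iaa

augmenter = iaa.Sequential([
    iaa.LinearContrast((0.85, 1.15)),                 # linear contrast 85-115%
    iaa.Add((-10, 10), per_channel=True),             # per-channel pixel offset
    iaa.GammaContrast((0.90, 1.10)),                  # gamma contrast 90-110%
    iaa.GaussianBlur(sigma=(0.0, 0.3)),               # Gaussian blur
    iaa.MultiplySaturation((0.95, 1.05)),             # saturation 95-105%
    iaa.AdditiveGaussianNoise(scale=(0, 255 / 80)),   # additive noise (assumed bound)
])

# Usage: generate 10 augmented copies of each image in an image-action pair.
# augmented = [augmenter(image=img) for _ in range(10)]
```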
7.4.4 Computation Time
In Table 6 we report the mean and standard deviation of various computation time metrics. All timing experiments were performed with N = 100 robots and averaged across T = 100 timesteps in the Ant environment on a single NVIDIA Tesla V100 GPU with 32 GB RAM. Training time is reported for a single gradient step with a batch size of 512. Note that with default hyperparameters, IFL trains an ensemble of 5 (explicit) models and IIFL trains an ensemble of 2 (implicit) models; hence, we also report the training time per individual model. IFL inference consists of a single forward pass through each of the 5 models, while IIFL inference performs 100 Langevin iterations; both of these are vectorized across all 100 robots at once. IFL uncertainty estimation also consists of a single forward pass through each of the 5 models, while IIFL performs both Langevin iterations and 2 forward passes through each of the 2 models. While IIFL can provide policy performance benefits over IFL, we observe that it comes with a tradeoff of computation time, which may be mitigated with parallelization across additional GPUs. Furthermore, while uncertainty estimation is the bottleneck in IIFL, it is performed with sub-second latency for the entire fleet. This is significantly faster than alternatives such as directly estimating the partition function, which is both less accurate and slower; we measure it to take an average of 7.10 seconds per step using annealed importance sampling [48]. Finally, uncertainty estimation for the variant described in Section 7.3 that uses n = 5 implicit models required 2.599 ± 0.002 s. While the time complexity should grow quadratically in n, in practice we observe that for small values of n the growth is closer to linear, as the latency is dominated by the O(n) sampling process rather than the O(n²) forward passes.

Time                          IFL                  IIFL
Training step (s)             0.0385 ± 0.0205      0.694 ± 0.207
Training step per model (s)   0.0077 ± 0.0041      0.347 ± 0.104
Inference (s)                 0.0060 ± 0.0395      0.494 ± 0.045
Uncertainty estimation (s)    0.0029 ± 0.0008      0.988 ± 0.008

Table 6: Computation times for training, inference, and uncertainty estimation for IFL and IIFL.
VLihM67Wdi6 | STERLING: Self-Supervised Terrain RepresentationLearning from Unconstrained Robot ExperienceHaresh Karnan1haresh.miriyala@utexas.eduElvin Yang1elvin.yang@utexas.eduDaniel Farkash2dmf248@cornell.eduGarett Warnell1,3garrett.a.warnell.civ@army.milJoydeep Biswas1joydeepb@cs.utexas.eduPeter Stone1,4pstone@cs.utexas.eduAbstract: Terrain awareness , i.e., the ability to identify anddistinguish differ-ent types of terrain, is a critical ability that robots must have to succeed at au-tonomous off-road navigation. Current approaches that provide robots with thisawareness either rely on labeled data which is expensive to collect, engineered fea-tures and cost functions that may not generalize, or expert human demonstrationswhich may not be available. Towards endowing robots with terrain awarenesswithout these limitations, we introduce Self-supervised TErrain RepresentationLearnING (STERLING ), a novel approach for learning terrain representations thatrelies solely on easy-to-collect, unconstrained (e.g., non-expert), and unlabeledrobot experience, with no additional constraints on data collection. STERLINGemploys a novel multi-modal self-supervision objective through non-contrastiverepresentation learning to learn relevant terrain representations for terrain-awarenavigation. Through physical robot experiments in off-road environments, weevaluate STERLING features on the task of preference-aligned visual navigationand find that STERLING features perform on par with fully-supervised approachesand outperform other state-of-the-art methods with respect to preference align-ment. Additionally, we perform a large-scale experiment of semi-autonomouslyhiking a 3-mile long trail which STERLING completes successfully with only twomanual interventions, demonstrating robustness to real-world off-road conditions.Robot experiment videos and more details can be found in the appendix and theproject website https://hareshkarnan.github.io/sterling/Keywords: Vision-Based Navigation, Representation Learning.1 IntroductionOff-road navigation is emerging as a crucial capability for autonomous mobile robots envisionedfor use in a growing number of outdoor applications such as agricultural operations [1], packagedelivery [2], and search and rescue [3]. Endowing robots with this capability has, however, provedto be challenging and remains an active area of research.One particularly difficult challenge in off-road autonomous navigation is that of providing the robotwith terrain awareness , i.e., the ability to identify distinct terrain features that are relevant to a widevariety of downstream tasks (e.g., changing preferences over terrain types) [4, 5, 6, 7, 8, 9, 10,11]. While a litany of prior work has attempted to address this challenge [12, 13, 14, 15], existingapproaches typically rely on difficult-to-collect curated datasets [16, 17, 18, 19, 20] or has been1The University of Texas at Austin, Austin, TX, USA2Cornell University, Ithaca, NY , USA3Army Research Laboratory, Austin, TX, USA4Sony AI, North America7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.focused on particular tasks [21, 22, 23, 24, 25, 9] and is not amenable to downstream task changes[26, 25, 7]. 
These limitations prevent existing approaches from appropriately scaling to the vastdistribution of terrains and navigation tasks in the real world.Toward overcoming the scalability challenges in terrain awareness, we introduce Self-supervisedTErrain Representation LearnING (STERLING )1, a novel approach to learning terrain representa-tions for off-road navigation. STERLING learns an encoding function that maps high-dimensional,multi-modal sensor data to low-dimensional, terrain-aware representations that amplify differencesimportant for navigation and attenuate differences due to extraneous factors such as changes in view-point and lighting. Importantly, STERLING works with easy-to-collect unconstrained and unlabeledrobot data, thereby providing a scalable pathway to data collection and system improvement for thewide variety of terrain and downstream tasks that off-road robots must face.To evaluate STERLING , we apply it to the problem of preference-aligned off-road navigation andprovide a detailed comparison to existing approaches to this problem, including RCA [7], GANav[19], SE-R[8], and a fully-supervised oracle. We find that STERLING enables performance on parwith or better than these existing approaches without requiring any expert labels or demonstrations.Additionally, we report the results of a large-scale qualitative experiment in which STERLING en-abled semi-autonomous robot navigation on a 3-mile long hiking trail.The key contributions of this paper are— 1) Self-supervised TErrain Representation LearnING(STERLING ), a novel approach that learns terrain representations from easy-to-collect unconstrainedrobot experiences, 2) Detailed evaluation of STERLING against baseline methods on the task of op-erator preference-aligned off-road navigation, and 3) A large-scale qualitative experiment of semi-autonomously hiking a 3-mile long trail, demonstrating the effectiveness of STERLING -features.2 Related WorkIn this section, we review related work on terrain-aware visual off-road navigation. We specificallyfocus on approaches that learn to navigate off-road conditions using supervised and self-supervisedlearning.2.1 Supervised MethodsSeveral approaches in the past have proposed using supervised learning from large-scale data tonavigate off-road environments. We divide them into two categories as follows.End-to-End Learning: The initial success of applying learning-based solutions to off-road terrain-aware navigation was by LeCun et al. [28] who used a convolutional network to learn to drive inoff-road conditions. More recently, Bojarski et al. [21] trained a deep neural network end-to-endusing several miles of driving data collected on a vehicle in the real world. While both approacheswere promising in urban and off-road environments, end-to-end methods require large amounts ofdata and are well-known to suffer from domain and covariate shifts [29, 30, 31].Image Segmentation: Unlike end-to-end approaches that learn behaviors, segmentation-based ap-proaches seek to characterize terrain using a set of known semantic classes, and the resultingsemantic features are consumed by downstream planning and control techniques for navigation[32, 19, 33]. Guan et al. [19] propose GANav, a transformer-based architecture to pixel-wise seg-ment terrains, trained on RELLIS [17] and RUGD [16] datasets, with manually assigned terraincosts. 
While effective at terrain awareness, segmentation-based methods are fixed to the specificterrain types available in the datasets and require additional labeling effort to generalize to novel ter-rains. In STERLING , we do not require semantically labeled datasets and learn terrain representationsfrom unconstrained experience collected onboard a mobile robot.1A preliminary version of this work was presented at the PT4R workshop at ICRA 2023 [27]22.2 Self-Supervised LearningTo alleviate the need for extensive human labeling, self-supervised learning methods have beenproposed to either learn terrain representations or costs from data gathered onboard a mobile robot.Representation Learning: Brooks et al. [34] utilize contact vibrations and visual sensors to clas-sify terrains via self-supervision. Loquercio et al. [35] use proprioceptive supervision to predictextrinsic representations [36] of terrain geometry from vision, used as inputs to drive a Reinforce-ment Learning-based locomotion policy. In this work, we do not learn a robot-specific locomotionpolicy and instead learn relevant representations for off-road terrain awareness. Z ̈urn et al. [8]introduce SE-Rwhich utilizes acoustic and visual sensors on the robot to segment terrains usinga self-supervised triplet-contrastive learning framework. Using triplet-based contrastive learningmethods requires negative samples which may not be available when learning using unlabeled data.InSTERLING , we use recently proposed non-contrastive unsupervised learning approaches such asVICR eg [37] that do not require any negative samples and instead rely on correlations between datamodalities to learn relevant terrain representations.Cost Learning: Several methods have applied self-supervision to assign traversability costs for thedownstream off-road navigation task [7, 38, 26, 39, 40, 41, 42]. Specifically, these methods rely oninertial spectral features [7], future predictive models [26], inertial-odometry errors [38], or force-torque values from foothold positions [39, 43] as self-supervision signals to learn a traversabilitycost map, used to evaluate candidate actions. More recently, Frey et al. [44] have proposed anonline traversability estimation approach inspired by the above self-supervision schemes. Instead ofinferring costs or rewards using self-supervision for a fixed task, in this work, we focus on learningrelevant visual features from unconstrained robot experiences that could be used in downstreamtasks. This framework allows a designer to reuse features across tasks without retraining entirelyfrom scratch.Hybrid Methods: The approach closest to ours is VRL-PAP[6] which requires human expert tele-operated demonstrations of a particular trajectory pattern to both explicitly learn visual terrain rep-resentations as well as to infer terrain preference costs. However, in this work, we focus on learningterrain features from unconstrained robot experiences without requiring human experts in the fieldfor demonstrations, which is a more general problem than the one considered by VRL-PAP.3 ApproachIn this section, we introduce the self-supervised terrain representation learning approach, STERLING ,proposed in this work. We first describe the offline pre-processing performed on unconstrained robotdata and then summarize the self-supervision objectives. 
Finally, we describe the problem formu-lation for preference-aligned off-road navigation and present how features learned using STERLINGcan be utilized within a planner for terrain-aware and preference-aligned navigation.3.1 Data-Collection and Pre-ProcessingSTERLING learns terrain representations from unconstrained, unlabeled robot experiences collectedusing any navigation policy. This policy may be, for instance, non-expert human teleoperation,curiosity-driven exploration [45], or point-to-point navigation using any underlying planner. Com-pared to requiring a human expert to provide teleoperated demonstrations and labels, collecting thistype of robot experience is cheap and easy, thereby providing a scalable pathway to data collectionand system improvement. We additionally assume that the robot is equipped with multiple sensors ,e.g., an egocentric RGB camera, odometry sensors, onboard IMU, proprioceptive, and tactile sen-sors, that together provide rich multi-modal observations as the robot traverses over different terrainscollecting experience. STERLING leverages this multi-modal data by using the correlation betweendifferent modalities to inform the learned terrain representations.3In order to learn terrain representations using STERLING , we begin by pre-processing the visual andnon-visual observations, which are explained in detail below.Figure 1: An illustration of the pre-processingperformed on unconstrained robot experience.Image patches of traversed terrain at locationskare extracted from bird’s eye view observa-tions at prior locations sk−1, sk−2along thetrajectory. The corresponding IPTobserva-tions at skare transformed from time series toPSD signals. Note the visual artifacts causedby noise in homography transformation fromviewpoints farther away from sk.Visual Patch Extraction: The egocentric cameraobservations are homography-projected into a vir-tual bird’s eye view (BEV) frame, assuming that theground is a flat plane, using the intrinsic and extrin-sic camera matrices. As shown in Fig. 1, we projectthe robot’s trajectory onto the BEV frame and ex-tract 64-by-64 pixels (equivalent to the robot’s foot-print of 0.5-by-0.5 meters) square visual patches ofterrain along with the corresponding inertial, pro-prioceptive, and tactile observations at the same lo-cation, along the trajectory. Since the terrain at skis unobservable when the robot itself is at sk(i.e.,it is underneath the robot), we extract terrain im-age patches corresponding to skfrom BEV obser-vations at previous locations sk−1, sk−2, . . . alongits trajectory. Fig. 1 illustrates the offline patchextraction process from two previous viewpoints,however, we extract patches from up to 20 previ-ous viewpoints within 2 meters. Although just oneviewpoint is sufficient to learn the correlation be-tween visual and other sensor observations, whenplanning to navigate, the robot will need to visu-ally evaluate terrain at future locations, and therefore STERLING also seeks representations that areinvariant to patch differences due to viewpoint, also known as viewpoint invariance .IPT Pre-Processing: For the inertial, proprioceptive, and tactile ( IPT) observations, we retain up to2-second history and convert the time-series signals into power-spectral density ( PSD) representationin the frequency domain. This ensures the IPTtime-series data representations used as input toSTERLING are invariant to differences in length and phase in the recorded signals. 
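As a concrete illustration of this pre-processing step, the sketch below converts a buffered window of IPT time series into a concatenated PSD feature vector using Welch's method from SciPy. The sampling rate and segment length are illustrative assumptions, and the paper defers the exact details to its supplementary material.

```python
import numpy as np
from scipy.signal import welch

def ipt_window_to_psd(signals: np.ndarray, fs: float = 200.0) -> np.ndarray:
    """Convert a (n_channels, n_samples) window of inertial/proprio/tactile
    time series (up to ~2 s of history) into a frequency-domain feature vector.

    The PSD discards phase, so the resulting features do not depend on where
    in the gait cycle the window starts and, with a fixed nperseg, are robust
    to modest differences in window length.
    """
    feats = []
    for channel in signals:
        _, pxx = welch(channel, fs=fs, nperseg=min(256, len(channel)))
        feats.append(pxx)
    return np.concatenate(feats)
```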
Additional details are provided in Supplementary Section 9.5.
3.2 Non-Contrastive Terrain Representation Learning
It is desired for learned representations of terrains to be such that representations of similar terrains are close together in the embedding space and that representations of different terrains are sufficiently far apart. Although we do not possess privileged information such as semantic labels of terrains for training, the visual and kinodynamic observations experienced by the robot reflect similarities and differences between terrain samples. For instance, traversing a smooth terrain that a human may refer to as cement sidewalk may lead to relatively smooth motion by the robot's joints, whereas a rough terrain such as what might be referred to as marble rocks may correspond to jerkier motion. STERLING leverages this multi-modal experience observed by the robot and computes a correlation objective between visual and inertial-proprio-tactile signals to learn desired terrain representations. Additionally, STERLING uses viewpoint invariance as an objective unique to the visual component of the experience to learn viewpoint-invariant terrain representations.
Fig. 2 provides an overview of the self-supervised representation learning framework adopted in STERLING. A parameterized visual encoder (4-layer CNN with 0.25 million parameters) encodes terrain image patch observations v1 and v2 of the same location s into visual representations φv1 and φv2, respectively, collectively referred to as φv1,2 for brevity. Similarly, an inertial-proprio-tactile encoder (4-layer MLP with 0.25 million parameters) encodes frequency-domain IPT observations of the robot at that location to an inertial-proprio-tactile representation φi. We follow the framework of prior self-supervised representation learning algorithms from the computer vision community such as VICReg [37], and utilize a parameterized projector network (2-layer MLP with 0.25 million parameters) that maps encoded visual and non-visual representations independently to a higher-dimensional feature space, ψv1,2 and ψi respectively, over which the self-supervision objectives are computed. The STERLING objective, composed of the multi-modal correlation LMM(ψv1,2, ψi) and viewpoint-invariance LVI(ψv1, ψv2) objectives, is defined as:
$$\begin{aligned}
\mathcal{L}_{\text{STERLING}} &= \mathcal{L}_{VI}(\psi_{v_1}, \psi_{v_2}) + \mathcal{L}_{MM}(\psi_{v_{1,2}}, \psi_i) \\
\mathcal{L}_{VI}(\psi_{v_1}, \psi_{v_2}) &= \mathcal{L}_{\text{VICReg}}(\psi_{v_1}, \psi_{v_2}) \\
\mathcal{L}_{MM}(\psi_{v_{1,2}}, \psi_i) &= \left[\mathcal{L}_{\text{VICReg}}(\psi_{v_1}, \psi_i) + \mathcal{L}_{\text{VICReg}}(\psi_{v_2}, \psi_i)\right] / 2
\end{aligned} \qquad (1)$$
LVICReg is the VICReg loss, composed of variance-invariance-covariance representation learning objectives, as proposed by Bardes et al. [37]. Given two alternate projected representations Z and Z′ of a data sample (in STERLING, Z and Z′ are projected representations of the visual and non-visual sensor modalities), the VICReg loss is defined as
$$\mathcal{L}_{\text{VICReg}}(Z, Z') = \lambda\, s(Z, Z') + \mu\, [v(Z) + v(Z')] + \nu\, [c(Z) + c(Z')].$$
Note that while Bardes et al. use VICReg to learn representations from visual inputs using artificial image augmentations, in this work we extend VICReg to multi-modal inputs and use real-world augmentations via multi-viewpoint image patches as described in Sec. 3.1. λ, μ, and ν are hyper-parameters, and the functions v, s, and c are the variance, invariance, and covariance terms computed on a mini-batch of projected features. We refer the reader to Bardes et al. [37] for additional details on the individual terms and also define them here for completeness. The variance term v is a hinge function defined as $v(Z) = \frac{1}{d}\sum_{j=1}^{d} \max(0, \gamma - S(z_j, \varepsilon))$, where S is the standard deviation and d is the dimensionality of the projected feature space. c is the covariance term, defined as $c(Z) = \frac{1}{d}\sum_{i\neq j} [C(Z)]_{i,j}^2$, where C(Z) is the covariance matrix of Z. s is the invariance term, defined as $s(Z, Z') = \frac{1}{n}\sum_i \|z_i - z'_i\|$. More details on the individual terms in the loss function are provided in Sec. 9.5. We apply an l2 norm on the visual and non-visual features to ensure they are on a hypersphere, which helped improve the quality of learned representations. On a mini-batch of data containing paired terrain image patches and IPT observations, we compute the LSTERLING loss and update the parameters of the two encoder networks and the shared projector network together using the Adam optimizer.
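The objective above can be summarized in a short PyTorch-style sketch of Eq. (1) and the VICReg terms. The coefficient values, the ε inside the standard deviation, and the exact reductions follow common VICReg defaults and are assumptions; they may differ from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z, z_prime, lam=25.0, mu=25.0, nu=1.0, gamma=1.0, eps=1e-4):
    # Invariance term s(Z, Z'): distance between paired projected embeddings.
    inv = F.mse_loss(z, z_prime)
    # Variance term v(Z): hinge on the per-dimension standard deviation.
    def var_term(x):
        std = torch.sqrt(x.var(dim=0) + eps)
        return torch.mean(F.relu(gamma - std))
    # Covariance term c(Z): penalize off-diagonal entries of the covariance matrix.
    def cov_term(x):
        x = x - x.mean(dim=0)
        n, d = x.shape
        cov = (x.T @ x) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum() / d
    return (lam * inv
            + mu * (var_term(z) + var_term(z_prime))
            + nu * (cov_term(z) + cov_term(z_prime)))

def sterling_loss(psi_v1, psi_v2, psi_i):
    # L_VI: viewpoint invariance between the two visual projections.
    l_vi = vicreg_loss(psi_v1, psi_v2)
    # L_MM: multi-modal correlation between each visual projection and the
    # projected inertial-proprio-tactile features.
    l_mm = 0.5 * (vicreg_loss(psi_v1, psi_i) + vicreg_loss(psi_v2, psi_i))
    return l_vi + l_mm
```

Consistent with the hypersphere note above, the projected features would be l2-normalized before these terms are computed.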
3.3 Preference-Aligned Off-Road Navigation
In this subsection, we describe the downstream navigation task of preference-aligned visual navigation that we focus on when evaluating STERLING.
Preliminaries: We formulate the task of preference-aligned terrain-aware navigation as a local path-planning problem, where the robot operates within a state space S, action space A, and a deterministic transition function T : S × A → S in the environment. The state space consists of s = [x, y, θ, φv], where [x, y, θ] denote the robot's position in SE(2) space, and φv denotes the visual features of the terrain at this location. Given a goal location G, the preference-aligned navigation task is to reach this goal while adhering to operator preferences over terrains. We assume access to a sampling-based planner, the details of which are provided in Supplementary Sec. 8.
Learning the preference utility: Following Zucker et al. [46], we learn the utility function u : Φv → R+ using human queries. From the predicted terrain features on data samples in our training set, we cluster the terrain representations using k-means with the silhouette-score elbow criterion and sample candidate terrain patches from each cluster, which are presented to the human operator using a GUI. The human operator then provides a full-order ranking of terrain preferences over clusters, which is utilized to learn the utility function u(.), represented by a 2-layer MLP. While recovering absolute cost values from ranked preference orders is an under-constrained problem, we find that this approximation provided by Zucker et al. [46] works well in practice.

Figure 2: Overview of the training architecture in STERLING. Terrain patches v1 and v2 from different viewpoints of the same location are encoded as φv1 and φv2 respectively, and mapped into embeddings ψv1 and ψv2. Similarly, inertial, proprio, and tactile signals are encoded as φi and mapped as ψi. Self-supervision objectives LVI for viewpoint invariance and LMM for multi-modal correlation are computed on the minibatch to perform gradient descent.

4 Experiments
In this section, we describe the experiments performed to evaluate STERLING. Specifically, the experiments presented in this section are tailored to address the following questions:
(Q1) How effective are STERLING features in comparison to baseline approaches at enabling terrain awareness in off-road navigation?
(Q2) How effective are the proposed STERLING objectives in learning discriminative terrain features in comparison to other representation learning objectives?
We investigate Q1 through physical robot experiments on the task of preference-aligned off-road navigation.
We perform quantitative eval-uations in six different outdoor environments,and then further perform a large-scale qualita-tive evaluation by semi-autonomously hiking a3-mile long off-road trail using preference costslearned using STERLING features. To comparevarious methods, we use the success rate ofpreference alignment as a metric. If a trajec-tory followed by any algorithm fails to reach the goal, or at any time traverses over any terrain thatis less preferred than any traversed by the operator-demonstrated trajectory, we classify the trial as afailure. We additionally investigate Q2by comparing STERLING against other unsupervised terrainrepresentation learning methods and perform an ablation study on the two STERLING objectives.Additional experiments are provided in Supplementary Sec. 9.2.Baselines: To perform quantitative evaluations for Q1, we compare STERLING with SE-R[8], RCA[7], GANav [19], geometric-only planning [47], and a fully-supervised baseline. SE-Rand RCAperform self-supervised learning from unconstrained robot experience to learn terrain representa-tions and traversability costs respectively, making them relevant baselines for this problem. Sincethere is no open-source implementation of RCA, we replicate it to the best of our abilities. Thegeometric-only approach ignores terrain costs ( Lterrain ) and plans with geometric cost ( Lgeom )only, making it a relevant ablation on the cost formulation for preference-aware planning. GANav2[19] is a segmentation-based approach trained on the RUGD [16] dataset. We additionally train thefully-supervised baseline in which the terrain cost function is learned end-to-end using supervisedlearning from linear extrapolation of operator preferences. GANav and the fully-supervised baselinerequire supervision via terrain labels to learn and hence serve as references for comparison. Wenormalize the terrain cost predicted by all methods to be between 0 and 1 for a fair comparison.4.1 Evaluating Terrain-Awareness via Robot ExperimentsIn this subsection, we report on experiments to investigate the effectiveness of STERLING features inenabling terrain awareness during off-road navigation. We quantitatively compare the performanceofSTERLING with baselines RCA [7], GANav [19], SE-R[8] and the fully-supervised baseline, on thetask of preference-aligned navigation. We identify six environments within the university campus,with eight different terrain types, as shown in Fig. 3. For this study, we use the same data col-lected on the robot to train RCA,SE-R, fully-supervised baseline, and STERLING , and the operator2https://github.com/rayguan97/GANav-offroad6Figure 3: Trajectories traced by different approaches in 5 environments containing 8 differentterrains. The operator preferences are shown above. We see that STERLING navigates in anoperator-preference aligned manner, by preferring cement sidewalk, red bricks, pebblesidewalk, and yellow bricks over mulch, grass, marble rocks, and bush , outper-forming other baselines and performing on-par with the Fully-Supervised approach.provides the same rankings for all methods during training. Note that we use the same encoder andutility function across all environments and do not retrain/finetune to each environment to preventenvironment-specific overfitting.Fig. 3 shows the operator’s (first author) terrain preferences for all Envs. 1 to 5, and the performanceof baseline approaches, including an operator-demonstrated trajectory for reference. 
In all environ-ments, we see that STERLING navigates in a terrain-aware manner while adhering to the operator’spreferences. Note that although Fully-Supervised also completes the task successfully, it requiresprivileged information such as terrain labels during training, whereas STERLING does not requiresuch supervision, and can potentially be used on large datasets containing unlabeled, unconstrainedrobot experiences. GANav, trained on the RUGD dataset fails to generalize to unseen real-worldconditions. RCA uses inertial spectral features to learn terrain traversability costs and hence doesnot adhere to operator preference. SE-Rdoes not address viewpoint invariance which is a signifi-cant problem in vision-based off-road navigation and hence performs poorly in Envs. 1 and 2. Weperform additional experiments in an outdoor environment (Env. 6) to study adherence to operatorpreferences, detailed in Supplementary Sec. 9.1.Table 1 shows the success rate of preference alignment for all approaches in all environments, overfive different trials. STERLING outperforms other self-supervised baselines and performs on par withthe fully-supervised approach. In summary, the physical experiments conducted in six environmentsquantitatively demonstrate the effectiveness of STERLING features in enabling terrain awarenessduring off-road navigation.4.2 Evaluating Self-Supervision ObjectivesIn this subsection, we investigate the effectiveness of STERLING at learning discriminative terrainfeatures and compare with baseline unsupervised terrain representation learning methods such asRegularized Auto-Encoder ( RAE) and SE-R[8] and large pretrained networks such as a ResNet-50 pretrained on ImageNet. STERLING uses multi-modal correlation ( LMM) and viewpoint in-variance ( LV I) objectives for self-supervised representation learning, whereas, SE-Rand RAE usesoft-triplet-contrastive loss and pixel-wise reconstruction loss, respectively. Additionally, we alsoperform an ablation study on the two objectives in STERLING to understand their contributions tolearning discriminative terrain features. To evaluate different visual representations, we perform7unsupervised classification using k-means clustering and compare their relative classification accu-racies with manually labeled terrain labels. For this experiment, we train STERLING ,SE-R, and RAEon our training set and evaluate on a held-out validation set. Fig. 4 shows the results of this study.We see that STERLING -features using both the self-supervision objectives perform the best amongall methods. Additionally, we see that using a non-contrastive representation learning approach suchasVICR eg [37] within STERLING performs better than contrastive learning methods such as SE-R,and reconstruction-based methods such as RAE. This study shows that the proposed self-supervisionobjectives in STERLING indeed help learn discriminative terrain features.ApproachEnvironment1 2 3 4 5 6 (a) 6 (b)Geometric-only 0/5 0/5 0/5 0/5 0/5 0/5 5/5RCA[7] 2/5 4/5 2/5 0/5 1/5 5/5 0/5GANav[19] 5/5 0/5 0/5 5/5 0/5 4/5 5/5SE-R[8] 1/5 0/5 5/5 1/5 3/5 5/5 4/5Fully-Supervised 5/5 5/5 5/5 5/5 5/5 5/5 5/5STERLING (Ours) 5/5 5/5 5/5 5/5 5/5 5/5 5/5Table 1: Success rates of different algorithms on the task of preference-aligned off-road navigation5 Limitations and Future WorkFigure 4: Ablation study depicting classificationaccuracy (value closer to 1.0is better) from terrainrepresentations learned using different approachesand objectives. 
The combined objective ( VI+MM) proposed in STERLING achieves the highestaccuracy, indicating that the learned representa-tions are sufficiently discriminative of terrains.STERLING requires traversing over terrains inorder to learn representations, which may beunsafe in certain situations. Uncertainty-awaresafe exploration and exploration focusing on in-formative and diverse terrains for data collec-tion is a promising direction for future work.Extending STERLING features to work on un-structured non-flat environments such as stairs[48] and boulders [49] is another promising di-rection for future work. Extending STERLINGby pretraining with large-scale off-road datasetsusing modern architectures such as transform-ers that are known to scale well with large-scaledata is an exciting direction for future work.6 ConclusionIn this paper, we introduce Self-supervised TEr-rain Representation LearnING (STERLING ), anovel framework for learning terrain repre-sentations from easy-to-collect, unconstrained(e.g., non-expert), and unlabeled robot experience. STERLING utilizes non-contrastive representa-tion learning through viewpoint invariance and multi-modal correlation self-supervision objectivesto learn relevant terrain representations for visual navigation. We show how features learned throughSTERLING can be utilized to learn operator preferences over terrains and integrated within a plan-ner for preference-aligned navigation. We evaluate STERLING against state-of-the-art alternativeson the task of preference-aligned visual navigation on a Spot robot and find that STERLING out-performs other methods and performs on par with a fully-supervised baseline. We additionally per-form a qualitative large-scale experiment by successfully hiking a 3-mile-long trail using STERLING ,demonstrating its robustness to off-road conditions in the real world.8AcknowledgmentsThis work has taken place in the Learning Agents Research Group (LARG) and Autonomous Mo-bile Robotics Laboratory (AMRL) at UT Austin. LARG research is supported in part by NSF(CPS-1739964, IIS-1724157, NRI-1925082), ONR (N00014-18-2243), FLI (RFP2-000), ARO(W911NF19-2-0333), DARPA, Lockheed Martin, GM, and Bosch. AMRL research is supportedin part by NSF (CAREER2046955, IIS-1954778, SHF-2006404), ARO (W911NF-19-2- 0333,W911NF-21-20217), DARPA (HR001120C0031), Amazon, JP Morgan, and Northrop GrummanMission Systems. Peter Stone serves as the Executive Director of Sony AI America and receivesfinancial compensation for this work. The terms of this arrangement have been reviewed and ap-proved by the University of Texas at Austin in accordance with its policy on objectivity in research.References[1] T. Wang, B. Chen, Z. Zhang, H. Li, and M. Zhang. Applications of machine vision in agri-cultural robot navigation: A review. Computers and Electronics in Agriculture , 198:107085,2022.[2] Agility robotics launches next generation of digit: World’s first human-centric, multi-purposerobot made for logistics work, 2023. URL https://agilityrobotics.com/news/2022/future-robotics-l3mjh .[3] X. Xiao, J. Dufek, and R. R. Murphy. Autonomous visual assistance for robot operations usinga tethered uav, 2019.[4] H. Karnan, E. Yang, G. Warnell, J. Biswas, and P. Stone. Wait, that feels familiar: Learning toextrapolate human preferences for preference aligned path planning, 2023.[5] E. Yang, H. Karnan, G. Warnell, P. Stone, and J. Biswas. Wait, that feels familiar: Learning toextrapolate human preferences for preference-aligned path planning. 
In ICRA2023 Workshopon Pretraining for Robotics (PT4R) , 2023.[6] K. S. Sikand, S. Rabiee, A. Uccello, X. Xiao, G. Warnell, and J. Biswas. Visual representationlearning for preference-aware path planning. In 2022 International Conference on Roboticsand Automation (ICRA) , pages 11303–11309. IEEE, 2022.[7] X. Yao, J. Zhang, and J. Oh. Rca: Ride comfort-aware visual navigation via self-supervisedlearning. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 7847–7852. IEEE, 2022.[8] J. Z ̈urn, W. Burgard, and A. Valada. Self-supervised visual terrain classification from unsuper-vised acoustic feature learning. IEEE Transactions on Robotics , 37(2):466–481, 2020.[9] H. Karnan, K. S. Sikand, P. Atreya, S. Rabiee, X. Xiao, G. Warnell, P. Stone, and J. Biswas.Vi-ikd: High-speed accurate off-road navigation using learned visual-inertial inverse kinody-namics. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) ,pages 3294–3301. IEEE, 2022.[10] P. Atreya, H. Karnan, K. S. Sikand, X. Xiao, S. Rabiee, and J. Biswas. High-speed accuraterobot control using learned forward kinodynamics and non-linear least squares optimization,2022.[11] H. Karnan, A. Nair, X. Xiao, G. Warnell, S. Pirk, A. Toshev, J. Hart, J. Biswas, and P. Stone.Socially compliant navigation dataset (scand): A large-scale dataset of demonstrations forsocial navigation. IEEE Robotics and Automation Letters , 7(4):11807–11814, 2022.[12] S. Desai, H. Karnan, J. P. Hanna, G. Warnell, and P. Stone. Stochastic grounded action trans-formation for robot learning in simulation. In 2020 IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , pages 6106–6111. IEEE, 2020.9[13] J. P. Hanna, S. Desai, H. Karnan, G. Warnell, and P. Stone. Grounded action transformationfor sim-to-real reinforcement learning. Machine Learning , 110(9):2469–2499, 2021.[14] H. Karnan, S. Desai, J. P. Hanna, G. Warnell, and P. Stone. Reinforced grounded action trans-formation for sim-to-real transfer. In 2020 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 4397–4402. IEEE, 2020.[15] S. Desai, I. Durugkar, H. Karnan, G. Warnell, J. Hanna, and P. Stone. An imitation fromobservation approach to transfer learning with dynamics mismatch, 2020.[16] M. Wigness, S. Eum, J. G. Rogers, D. Han, and H. Kwon. A rugd dataset for autonomousnavigation and visual perception in unstructured outdoor environments. In 2019 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 5000–5007. IEEE,2019.[17] P. Jiang, P. Osteen, M. Wigness, and S. Saripalli. Rellis-3d dataset: Data, benchmarks andanalysis, 2020.[18] S. Sharma, L. Dabbiru, T. Hannis, G. Mason, D. W. Carruth, M. Doude, C. Goodin, C. Hudson,S. Ozier, J. E. Ball, et al. Cat: Cavs traversability dataset for off-road autonomous driving.IEEE Access , 10:24759–24768, 2022.[19] T. Guan, D. Kothandaraman, R. Chandra, A. J. Sathyamoorthy, K. Weerakoon, andD. Manocha. Ga-nav: Efficient terrain segmentation for robot navigation in unstructured out-door environments. IEEE Robotics and Automation Letters , 7(3):8138–8145, 2022.[20] S. Triest, M. Sivaprakasam, S. J. Wang, W. Wang, A. M. Johnson, and S. Scherer. Tartandrive:A large-scale dataset for learning off-road dynamics models, 2022.[21] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel,M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, and K. Zieba. End to end learningfor self-driving cars, 2016.[22] Y . 
LeCun, U. Muller, J. Ben, E. Cosatto, and B. Flepp. Off-road obstacle avoidance throughend-to-end learning. In Proceedings of the 18th International Conference on Neural Informa-tion Processing Systems , NIPS’05, page 739–746, Cambridge, MA, USA, 2005. MIT Press.[23] M. Wulfmeier, P. Ondruska, and I. Posner. Maximum entropy deep inverse reinforcementlearning. arXiv preprint arXiv:1507.04888 , 2015.[24] Y . Pan, C.-A. Cheng, K. Saigol, K. Lee, X. Yan, E. A. Theodorou, and B. Boots. Imitationlearning for agile autonomous driving. The International Journal of Robotics Research , 39(2-3):286–302, 2020.[25] G. Kahn, P. Abbeel, and S. Levine. Land: Learning to navigate from disengagements. IEEERobotics and Automation Letters , 6(2):1872–1879, 2021.[26] G. Kahn, P. Abbeel, and S. Levine. Badgr: An autonomous self-supervised learning-basednavigation system. IEEE Robotics and Automation Letters , 6(2):1312–1319, 2021.[27] H. Karnan, E. Yang, D. Farkash, G. Warnell, J. Biswas, and P. Stone. Self-supervised ter-rain representation learning from unconstrained robot experience. In ICRA2023 Workshop onPretraining for Robotics (PT4R) , 2023.[28] Y . LeCun, U. Muller, J. Ben, E. Cosatto, and B. Flepp. Off-road obstacle avoidance throughend-to-end learning. In Proceedings of the 18th International Conference on Neural Informa-tion Processing Systems , NIPS’05, page 739–746, Cambridge, MA, USA, 2005. MIT Press.[29] X. Xiao, B. Liu, G. Warnell, and P. Stone. Motion planning and control for mobile robotnavigation using machine learning: a survey, 2020.10[30] H. Karnan, G. Warnell, X. Xiao, and P. Stone. V oila: Visual-observation-only imitation learn-ing for autonomous navigation, 2021.[31] X. Xiao, Z. Xu, Z. Wang, Y . Song, G. Warnell, P. Stone, T. Zhang, S. Ravi, G. Wang, H. Kar-nan, J. Biswas, N. Mohammad, L. Bramblett, R. Peddi, N. Bezzo, Z. Xie, and P. Dames.Autonomous ground navigation in highly constrained spaces: Lessons learned from the barnchallenge at icra 2022, 2022.[32] F. Schilling, X. Chen, J. Folkesson, and P. Jensfelt. Geometric and visual terrain classificationfor autonomous mobile navigation. In 2017 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 2678–2684. IEEE, 2017.[33] M. Wigness, J. G. Rogers, and L. E. Navarro-Serment. Robot navigation from human demon-stration: Learning control behaviors. In 2018 IEEE international conference on robotics andautomation (ICRA) , pages 1150–1157. IEEE, 2018.[34] C. A. Brooks and K. D. Iagnemma. Self-supervised classification for planetary rover terrainsensing. In 2007 IEEE aerospace conference , pages 1–9. IEEE, 2007.[35] A. Loquercio, A. Kumar, and J. Malik. Learning visual locomotion with cross-modal supervi-sion, 2022.[36] H. Karnan, F. Torabi, G. Warnell, and P. Stone. Adversarial imitation learning from videousing a state observer. In 2022 International Conference on Robotics and Automation (ICRA) ,pages 2452–2458. IEEE, 2022.[37] A. Bardes, J. Ponce, and Y . LeCun. Vicreg: Variance-invariance-covariance regularization forself-supervised learning. arXiv preprint arXiv:2105.04906 , 2021.[38] A. J. Sathyamoorthy, K. Weerakoon, T. Guan, J. Liang, and D. Manocha. Terrapn: Unstruc-tured terrain navigation using online self-supervised learning, 2022.[39] L. Wellhausen, A. Dosovitskiy, R. Ranftl, K. Walas, C. Cadena, and M. Hutter. Where shouldi walk? predicting terrain properties from images via self-supervised learning. IEEE Roboticsand Automation Letters , 4(2):1509–1516, 2019.[40] M. G. Castro, S. Triest, W. Wang, J. 
M. Gregory, F. Sanchez, J. G. R. I. au2, and S. Scherer.How does it feel? self-supervised costmap learning for off-road vehicle traversability, 2023.[41] S. Triest, M. G. Castro, P. Maheshwari, M. Sivaprakasam, W. Wang, and S. Scherer. Learningrisk-aware costmaps via inverse reinforcement learning for off-road navigation, 2023.[42] E. Chen, C. Ho, M. Maulimov, C. Wang, and S. Scherer. Learning-on-the-drive: Self-supervised adaptation of visual offroad traversability models, 2023.[43] L. Wellhausen, R. Ranftl, and M. Hutter. Safe robot navigation via multi-modal anomalydetection. IEEE Robotics and Automation Letters , 5(2):1326–1333, apr 2020. doi:10.1109/lra.2020.2967706. URL https://doi.org/10.1109%2Flra.2020.2967706 .[44] J. Frey, M. Mattamala, N. Chebrolu, C. Cadena, M. Fallon, and M. Hutter. Fast traversabilityestimation for wild visual navigation, 2023.[45] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction, 2017.[46] M. Zucker, N. Ratliff, M. Stolle, J. Chestnutt, J. A. Bagnell, C. G. Atkeson, and J. Kuffner.Optimization and learning for rough terrain legged locomotion. The International Journal ofRobotics Research , 30(2):175–191, 2011.11[47] J. Biswas. Amrl autonomy stack. https://github.com/ut-amrl/graph_navigation ,2013.[48] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning robust per-ceptive locomotion for quadrupedal robots in the wild. Science Robotics , 7(62), jan 2022.doi:10.1126/scirobotics.abk2822.[49] A. Datar, C. Pan, M. Nazeri, and X. Xiao. Toward wheeled mobility on vertically challengingterrain: Platforms, datasets, and algorithms, 2023.127 Data CollectionIn all experiments, we use a legged Boston Dynamics Spot robot and collect robot experi-ences on eight different types of terrain around the university campus that we labeled as mulch ,pebble sidewalk ,cement sidewalk ,grass ,bushes ,marbled rock ,yellow bricks , andred bricks . The data is collected through human teleoperation (by the first and second authors)such that each trajectory contains a unique terrain throughout, with random trajectory shapes. Notethat STERLING does not require a human expert to teleoperate the robot to collect robot experiencenor does it require the experience to be gathered on a unique terrain per trajectory. We follow thisdata collection approach since it is easier to label the terrain for evaluation purposes. STERLINGcan also work with random trajectory lengths, with multiple terrains encountered along the sametrajectory, without any semantic labels such as terrain names, and any navigation policy can be usedfor data collection. We record 8 trajectories per terrain, each five minutes long, and use 4 trajectoriesfor training and the remaining for validation.8 Sampling-based PlanningFigure 5: An overview of the cost infer-ence process for local planning at deployment.The constant-curvature arcs (yellow) are over-layed on the BEV image, and the terrain costJterrain (Γ)is computed on patches extractedalong all arcs. White is high cost and black islow cost.Sampling-based planning: We assume accessto a receding horizon sampling-based motionplanner with a fixed set of constant-curvaturearcs{Γ0,Γ1, . . . , Γns},Γ∈ SNwhich solvesfor the optimal arc Γ∗= arg minΓ[J(Γ, G)],minimizing the objective function J(Γ, G),J:(Γ, G)−→R+. 
For the task of preference-aligned off-road navigation, we assume the objective function is composed of two components, Jgeom(Γ, G) and Jterrain(Γ), and can be defined as J(Γ, G) = αJgeom(Γ, G) + (1 − α)Jterrain(Γ). Jgeom(Γ, G) is the geometric cost that deals with progress towards the goal G and avoiding geometric obstacles, whereas Jterrain(Γ) is the terrain cost associated with preference-alignment. We utilize the geometric cost as defined in AMRL's graph navigation stack³. The multiplier α ∈ [0, 1] trades off the relative contributions of the geometric and terrain preference components of the path planning objective. A 1D time-optimal controller translates the sequence of states in the optimal trajectory Γ* to a sequence of receding horizon actions (a_0, a_1, ..., a_N). For a given arc Γ = {s_0, s_1, ..., s_N}, such that state s_0 is closest to the robot, the terrain-preference cost can be computed as follows:

J_{\text{terrain}}(\Gamma) = \frac{1}{N+1} \sum_{v_i \sim \Gamma,\; i=0}^{N} \gamma^{i}\, C\big(u(f_v(v_i))\big)   (2)

The function f_v(·) maps the RGB space of a visual patch of terrain v_i at a specific state s_i to its visual representation φ_v ∈ Φ_v. For instance, f_v can be the visual encoder learned using STERLING, as described in Section 3.2. The utility function u(·) maps the visual representation φ_v of a patch of terrain to a real-valued utility of preferences. We follow the utility function formulation of Zucker et al. [46] and assume the terrain preference cost follows a multiplicative formulation such that, given a utility value x ∈ R+, the traversability cost is C(x) = e^{−x}. The discount factor γ weighs the terrain cost proportional to its proximity to the robot. We set γ to 0.8, which we find to work well in practice.

³ https://github.com/ut-amrl/graph_navigation

Figure 6: Trajectories traced by different approaches for the task of preference-aligned off-road navigation. Shown here are two different preferences expressed by the operator in the same environment: in 6 (a), sidewalk is more preferred than grass, which is more preferred than bush, and in 6 (b), grass and sidewalk are equally preferred and bush is least preferred. We see that without retraining the terrain features, in both cases (a) and (b), STERLING optimally navigates to the goal while adhering to operator preferences.

Planning at Deployment: Fig. 5 provides an overview of the cost inference process for local planning at deployment. To evaluate the terrain cost Jterrain(Γ) for the constant-curvature arcs, we overlay the arcs on the bird's eye view image, extract terrain patches at states along the arc, and compute the cost according to Eq. 2 (a minimal code sketch of this computation is provided below). We compute the visual representation, utility value, and terrain cost of all images at once as a single batch inference. Since the visual encoder and the utility function are relatively lightweight neural networks with about 0.5 million parameters, we are able to achieve real-time planning rates of 40 Hz using a laptop-grade Nvidia GPU.

9 Additional Experiments

In this section, we detail additional experiments performed to evaluate STERLING-features against baseline approaches.

9.1 Preference Alignment Evaluation

In addition to the evaluations of STERLING-features with baseline approaches in five environments as shown in Sec. 4, we utilize Env. 6 to further study adherence to operator preferences. We hypothesize that the discriminative features learned using STERLING are sufficient to learn the preference cost for local planning. To test this hypothesis, in Env. 6 containing three terrains as shown in Fig. 6, the operator provides two different preferences, 6(a) and 6(b).
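As a concrete illustration of the arc-scoring step described in Section 8 above, the following sketch evaluates Eq. 2 for a set of candidate arcs and blends it with the geometric cost. The names `visual_encoder`, `utility_fn`, and the pre-extracted patches are placeholders for the learned f_v and u and the patch-extraction step; this is an illustrative sketch, not the released STERLING code.

```python
# Minimal sketch (not the authors' released code) of scoring constant-curvature
# arcs with J = alpha * J_geom + (1 - alpha) * J_terrain, where J_terrain follows Eq. 2.
import numpy as np

def terrain_cost(patches, visual_encoder, utility_fn, gamma=0.8):
    """Eq. 2: discounted average of C(u(f_v(v_i))) with C(x) = exp(-x).

    patches: array of shape (N+1, H, W, 3), terrain patches along one arc,
             ordered from the state closest to the robot outward.
    visual_encoder, utility_fn: placeholder callables standing in for the
             learned f_v and u; both process the whole arc in one batch.
    """
    features = visual_encoder(patches)           # (N+1, d) visual representations
    utilities = utility_fn(features)             # (N+1,) preference utilities
    costs = np.exp(-utilities)                   # multiplicative cost C(x) = e^{-x}
    discounts = gamma ** np.arange(len(costs))   # gamma^i, weighting by proximity
    return float(np.sum(discounts * costs) / len(costs))

def best_arc(arcs, patches_per_arc, geom_costs, visual_encoder, utility_fn,
             alpha=0.5, gamma=0.8):
    """Pick the arc minimizing J = alpha * J_geom + (1 - alpha) * J_terrain."""
    scores = [
        alpha * jg + (1.0 - alpha) * terrain_cost(p, visual_encoder, utility_fn, gamma)
        for p, jg in zip(patches_per_arc, geom_costs)
    ]
    return arcs[int(np.argmin(scores))]
```

Batching all patches of all arcs through the encoder in a single forward pass, as described in the Planning at Deployment paragraph, is what keeps this scoring step compatible with the reported 40 Hz planning rate.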
While bush is the least preferred inboth cases, in 6(a), sidewalk is more preferred than grass and in 6(b), both grass andsidewalkare equally preferred. We see in Fig. 6 that using STERLING features, the planner is able to suffi-ciently distinguish the terrains and reach the goal while adhering to operator preferences. AlthoughSE-R[8] adheres to operator preference in 6(b), it incorrectly maps grass tobush , assigning ahigher cost and taking a longer route to reach the goal. On the other hand, RCA [7] fails to adhere tooperator preferences since it directly assigns traversability costs using inertial features.9.2 Large-Scale Qualitative EvaluationIn this subsection, we perform a qualitative evaluation of STERLING by reporting a large-scale studyof semi-autonomously hiking a 3-mile-long off-road trail using the Spot robot.We train STERLING using unconstrained robot experience collected within the university campusand train the preference utility function using operator-provided preferences: marble rocks <grass < dirt = cement . The task is to navigate the trail without a global map while adheringto operator preferences at all times. Since we do not use a global map, visual terrain awareness is14Figure 7: A large-scale qualitative evaluation of STERLING on a 3-mile outdoor trail. STERLINGfeatures successfully complete the trail with only two manual interventions (shown in red).necessary to navigate within the trail and avoid catastrophic events such as falling into the river nextto the trail. We set a moving goal of six meters in front of the robot, updated every second. While therobot navigates autonomously, the operator walks behind the robot and takes manual control onlyto correct the robot’s path during forks, or to yield to incoming pedestrians and pets. The attachedsupplementary video shows the robot navigating the trail successfully while avoiding less preferredterrains. The robot needed two manual interventions while traversing along the trail. Fig. 7 showsthe 3-mile trajectory traced by the robot and the two failure cases that required manual intervention.This large-scale qualitative experiment demonstrates the reliability of STERLING during real-worldoff-road deployments.9.3 Experiments on a Wheeled Mobile RobotSTERLING is intended as a general algorithm to learn relevant terrain representations for off-roadnavigation. Towards demonstrating the versatility of STERLING to being applied to robots of differ-ent morphology, we conduct two additional experiments on the Clearpath Jackal, a wheeled mobilerobot.Learning Representations on Wheeled Robots: We utilize unconstrained data collected on theJackal consisting of multi-modal visual and inertial sensor data and learn terrain representationsusing STERLING followed by a utility function of operator preferences. Fig. 8 ( STERLING -Jackal)shows the path traversed by the Jackal in Env. 6, following the human preference sidewalk >grass >bush . This experiment demonstrates the applicability of STERLING on wheeled robotswith inertial sensors, as against legged robots that have access to additional sensors such as jointencoders and tactile information.Zero-Shot Cross-Morphology Transfer: In a noteworthy experiment to evaluate the transfer-able property of terrain representations across robot morphologies, we utilized the visual encodertrained on data from the legged Spot robot and applied it on the wheeled Jackal robot withoutadditional fine-tuning. Fig. 
8 ( STERLING -Spot) showcases the Jackal’s trajectory, leveraging STER -LING representations learned from Spot’s data, and adhering to the operator’s terrain preference:sidewalk >grass >bush . Fig. 10 shows costmaps generated using STERLING features, usedby the sampling-based planner to navigate in an operator-aligned manner. This demonstrates STER -15Figure 8: Experimental study of STERLING on a Clearpath Jackal—a wheeled mobile robot in En-vironment 6. STERLING -Spot shows the trajectory traced using STERLING trained on data collectedon the Spot, deployed zero-shot on the Jackal robot, whereas STERLING -Jackal shows the trajectorytraced by STERLING trained on data collected on the Jackal, deployed also on the Jackal robot. Inboth experiments, we see the robot reach the goal successfully while adhering to human operator’spreferences over terrains ( sidewalk >grass >bush ).Figure 9: Clearpath Jackal, a wheeled robot navigating using STERLING features trained on uncon-strained data collected on the Jackal robot ( STERLING -Jackal). We see here in Env. 6 that the robotreaches the goal while adhering to operator preferences Sidewalk >Grass >Bush . This experi-ment demonstrates the versatility of STERLING in being applied to robots of different morphology.LING ’s capability to generalize across diverse robotic platforms, emphasizing its adaptability andbroad applicability.Fig. 9 shows a third-person view of the deployment of STERLING -Jackal in Env. 6. In both ex-periments above, we see the Jackal robot reaches the goal successfully while adhering to humanoperator preferences, in a terrain-aware manner, highlighting STERLING ’s adaptability regardless ofrobot morphology.9.4 On the Efficacy of Multi-Modal Data Over Vision AloneWhile it might seem that visual cues are sufficient for distinguishing terrains, as evidenced in Fig.3,the reality is more complex. Variations in lighting, shadow, color, texture, and other artifacts maylead to inconsistent representations for the same terrain type and can render visually distinctiveterrains deceptively similar. For instance, while six different visual patches of the terrain “sidewalk”as shown in Fig. 11 might each exhibit unique visual characteristics because of these variations,they all denote the same terrain category and evoke similar inertial-prioprio-tactile ( IPT) responseon the robot (feel similar to the robot). Solely relying on vision may lead to overlooking underlyingcommonalities between terrains, resulting in inconsistent terrain representations.16Figure 10: Visualizing the costmaps from the Jackal robot when traversing Env 6., trained usingdata from the Jackal ( STERLING -Jackal).Figure 11: Six distinct instances of sidewalk terrain, showcasing the variability in visual appear-ance due to factors such as lighting, texture, shadows, and other artifacts. Despite the visual differ-ences, each patch represents the same terrain type.Another concrete example is the scenario of fallen leaves. A sidewalk, a grass patch, and a foresttrail could all be covered with fallen leaves, making them visually similar. However, underneaththose leaves, the actual terrain properties – and the robot’s interaction with them – vary significantly.While the leaves might visually mask the terrain differences, the robot would feel different terrainresponses when moving over them due to differences in underlying ground properties.Furthermore, visual similarity is not a conclusive indicator of identical terrains. Consider four im-ages as a case in point, as shown in Fig. 
12. Though the first two and the last two images mightseem visually similar, they represent distinct terrains: the first image depicts “bush”, the secondand third denote “grass”, and the fourth is “sidewalk”. These three terrains induce different inertial,proprioceptive, and tactile responses in a robot. Thus, the mere semblance of appearance does notcapture the relevant features of a terrain.STERLING ’s approach of integrating additional modalities allows for more precise terrain identifi-cation by accounting for these subtleties. By considering variations and similarities among terrainsacross different modalities, we ensure relevant terrain representations for off-road navigation. Inall examples shown in Figs. 11 and 12, STERLING correctly associates the samples with the rightcluster for each terrain.9.5 Experimental Setup and Methodological DetailsIn this subsection, we outline the specifics of our experimental setup, detailing hyperparameters,architectural decisions, data, and sensory inputs. These insights ensure clarity and reproducibilityof our experiments.In all experiments in STERLING , including the baselines RCA and SE-R, we use a shallow 4-layerCNN with a kernel size of 3 and stride of 1. Our choice of a shallow 4-layer CNN was drivenby the specific need for a lightweight and efficient model that could operate in real time at 40Hz17Figure 12: A collection of four terrain patches, illustrating the challenge of terrain representationlearning based solely on visual cues. From left to right: bush, grass, grass, sidewalk. Despite visualsimilarities (and differences), the terrains can elicit different non-visual IPTresponses on a robot.Table 2: Hyperparameter Choices for STERLING ExperimentsHyperparameter Value/RangeLearning Rate 3×10−4Batch Size 128Number of Epochs 50Optimizer AdamWeight Decay 5×10−5Activation Function ReLUKernel Size 3Stride 1on a laptop GPU, a requirement that was effectively met with this simple architecture of 0.25 Mparameters, which we found was sufficient for the problem. While modern architectures like visiontransformers / Mobile-ViT could be applied with larger scale data, the primary concern was real-timeperformance and compatibility with our robot’s hardware. Our experiments and results demonstratethat the selected architecture was sufficient for the purpose, and we do not find evidence that ourapproach’s effectiveness is constrained by this architectural choice.To train STERLING ,SE-R, and RCA, we used a total of 117,604 data samples for all terrains com-bined. Example raw time-series sensor data is shown in Fig. 14. Each data sample contains aminimum of 2 visual patches and a maximum of 20 visual patches of the same location from mul-tiple different viewpoints from which we randomly sample 2 patches per location during training.We convert the time-series IPTsignals into their corresponding Power Spectral Density PSDvalues.Power spectral density describes the power of a signal across different frequency components. 
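As a minimal illustration of this preprocessing (the exact procedure is described in the following paragraph), the PSD of a uniformly sampled IPT channel can be estimated with a fast Fourier transform. The mean removal and single-window estimate below are simplifying assumptions, not the authors' exact settings.

```python
# Minimal sketch (assumed details, not the authors' exact pipeline) of converting a
# 1-D inertial/proprioceptive/tactile signal into PSD features.
import numpy as np

def psd_features(signal, sample_rate_hz):
    """Estimate PSD(w) = |X(w)|^2 for a uniformly sampled time series.

    Returns the one-sided power spectrum and the corresponding frequencies.
    """
    x = np.asarray(signal, dtype=np.float64)
    x = x - x.mean()                            # remove DC offset (simplification)
    spectrum = np.fft.rfft(x)                   # one-sided FFT
    psd = (np.abs(spectrum) ** 2) / len(x)      # power per frequency bin
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    return psd, freqs

# Example: one second of a hypothetical 200 Hz IMU channel.
imu_channel = np.random.randn(200)
psd, freqs = psd_features(imu_channel, sample_rate_hz=200)
```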
Tocompute this, we perform a fast-fourier transform over the time-series signal (inertial/proprioceptive/tactile) and compute the PSDdefined as PSD(ω) =E[|X(ω)|2]across each frequency component ω.On the Spot robot, we use a VectorNav IMU to record the inertial signals (angular velocities in thex and y-axis and linear acceleration in z-axis) at 200Hz, the joint angles and velocities of the leggedrobot, referred as proprioceptive feedback in this work are recorded at 25 Hz, and the feet contactmeasurements (contact booleans and estimated feet depth from ground) collectively referred to astactile feedback in this work are recorded at 25Hz. An Azure kinect camera is mounted on the Spot,used for visual sensing of the terrain. On the wheeled Clearpath Jackal robot, we use a Zed2 camerafor visual sensing, and utilize the internal IMU sensor for inertial feedback. Fig. 13 depicts the tworobots, sensor mounts, and other sensors used in this work.Note that in all experiments, to prevent overfitting to a specific environment, we pretrain the visualencoder and utility function once and deploy them in all environments. The encoders and utilityfunctions are not being retrained/finetuned per environment, including the large-scale outdoor trail.More details on the loss function: STERLING extends VICR eg algorithm initially proposed byBardes et al. [37] for self-supervised learning from vision-only data. While the foundational workby Bardes et al. uses image augmentations to learn visual representations in a self-supervised way,we utilize images from multiple viewpoints and multi-modal inputs such as vision, inertial, proprio-18Figure 13: Figure depicting the legged Spot and the wheeled Jackal robot, along with other sensorsused in this work.Figure 14: Visual depiction of 1-second of time-series features of inertial-proprioception-tactile dataand visual patches of three representative terrains ( Bush ,SideWalk andGrass ).ceptive, tactile using a novel formulation to learn relevant terrain representations in a self-supervisedway. The STERLING loss based on VICR eg is defined in Section 3.2. The VICR eg loss is defined asLVICReg(Z, Z′) =λs(Z, Z′) +μ[v(Z) +v(Z′)] +ν[c(Z) +c(Z′)], where λ,μandνare hyper-parameters. We use the values 25.0, 25.0, 1.0 for these hyperparameters respectively, as suggestedby Bardes et al. [37]. s(Z, Z′)denotes the invariance between the two inputs. In STERLING , thisis computed across the two image patches from different viewpoints, and also between the visualand non-visual ( IPT) projections. v(Z)denotes the variance across the batch dimension, which wecompute for the projections of individual patches and the IPTsignals. c(Z)denotes the covarianceacross the feature dimension which encourages distinct, non-correlative features which we againcompute for the projections of individual patches and the IPTsignals. We refer the reader to Bardeset al. [37] for additional details regarding individual terms in the loss function. We compute the lossprovided in Eq. 1 across a mini-batch of samples and use the Adam optimizer for gradient-basedoptimization of the visual encoder, IPTencoder and the common projector network.9.6 Visualizing the terrain representations learned using STERLINGFig. 15 depicts a t-SNE visualization of terrain representations learned using STERLING . Individualpatches are color-coded by their ground truth semantic terrain label. We see that STERLING learnsrelevant features for terrains, given their unique clustering in this latent space.9.7 Visualizing the costmapsFig. 
16 shows cost visualizations of baseline approaches - RCA [7], SE-R[8], GANav [19] and Fully-Supervised in comparison with STERLING . Fig. 16 shows that RCA and SE-Rexhibit issues with19Figure 15: t-SNE visualization of terrain representations learned using STERLING. Each data pointrepresents a terrain example, color-coded by its ground truth label. The clustering of colors show-cases the efficacy of STERLING in capturing meaningful and distinctive terrain features.visual artifacts due to homography transformations. GANav, trained on the RUGD [16] dataset, failsto generalize to novel real-world situations. In contrast, costmaps from both the fully-supervisedmodel and STERLING efficiently guide planning, as demonstrated by quantitative results in Section4 and results in behaviors that align with operator preferences, prioritizing sidewalks over terrainslike rocks or bushes.9.8 Generalization to Unseen TerrainsDuring autonomous off-road navigation, generalization to novel terrains is paramount. Althoughdifficult to comment on generalizability, we document an instance during the large-scale deploymentwhere STERLING navigates around an unseen terrain “ water puddle ”, as shown in Fig. 17.20Figure 16: Comparative visualization of the costmaps generated by STERLING (this work) and otherbaseline algorithms ( RCA [7], SE-R[8], GANav [19], Fully-Supervised) for a given scene. Pairedwith each costmap is a bird’s-eye view image of the corresponding terrain. In the costmaps, whiteregions indicate high traversal cost, black signifies low cost, and areas in red are ignored or non-observable regions. We see that compared to other approaches, using STERLING features results incostmaps that align with operator preference of Sidewalk >Rocks >Bush .Figure 17: STERLING navigating around an unfamiliar terrain, specifically a “ water puddle ”,during the qualitative 3-mile off-road deployment.21 |
dxOaNO8bge | A Data-Efficient Visual-Audio Representationwith Intuitive Fine-tuning for Voice-Controlled RobotsPeixin Chang, Shuijing Liu, Tianchen Ji, Neeloy Chakraborty,Kaiwen Hong, Katherine Driggs-CampbellUniversity of Illinois at Urbana-Champaign{pchang17, sliu105, tj12, neeloyc2, kaiwen2, krdc }@illinois.eduAbstract: A command-following robot that serves people in everyday life mustcontinually improve itself in deployment domains with minimal help from its endusers, instead of engineers. Previous methods are either difficult to continuouslyimprove after the deployment or require a large number of new labels during fine-tuning. Motivated by (self-)supervised contrastive learning, we propose a novelrepresentation that generates an intrinsic reward function for command-followingrobot tasks by associating images with sound commands. After the robot is de-ployed in a new domain, the representation can be updated intuitively and data-efficiently by non-experts without any hand-crafted reward functions. We demon-strate our approach on various sound types and robotic tasks, including navigationand manipulation with raw sensor inputs. In simulated and real-world experi-ments, we show that our system can continually self-improve in previously unseenscenarios given fewer new labeled data, while still achieving better performanceover previous methods.Keywords: Command Following, Multimodal Representation, ReinforcementLearning, Human-in-the-Loop1 IntroductionAudio command following robots is an important application that paves the way for non-expertsto intuitively communicate and collaborate with robots in their daily lives. Ideally, a command-following robot should ground both speech and non-speech commands to visual observations andmotor skills. For example, a household robot must open the door when it hears a doorbell or some-one saying “open the door.” The robot should also be customizable and continually improve itsinterpretation of language and skills from non-experts [1, 2].The need for command-following robots has spurred a wealth of research. Learning-based languagegrounding agents were proposed to perform tasks according to visual observations and text/speechinstructions [3, 4, 5, 6, 7]. However, these approaches often fail to completely solve a common prob-lem in learning-based methods: performance degradation in a novel target domain, such as the realworld [8, 9, 10]. Fine-tuning the models in the real world is often expensive due to the requirementof expertise, extra equipment and large amounts of labels, none of which can be easily provided bynon-expert users in everyday environments. Without enough domain expertise or abundant labeleddata, how can we allow users to customize such robots to their domains with minimal supervision?Some prior works have attempted to reduce data usage when fine-tuning the robot in new domains.However, the efficiency of the methods usually relies on task-specific assumptions [11], extra sensorinstrumentation [12], and limited task variations [13, 14].In this paper, we propose a novel framework that builds on (self-)supervised contrastive learningto realize more effective training and more efficient fine-tuning for rewards and skills learning. Asshown in Fig. 1, we first learn a joint Visual and Audio Representation, which is Data-efficient andcan be Intuitively Fine-tuned(Dif-V AR). In the second stage, we use the representation to compute7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Our pipeline. 
Contrastive learning is used to group images and audio commands of the same intent.The resulting Dif-V AR supports the downstream RL training by encoding the auditory and visual signals andproviding reward signals and states to the agent. Users improve the Dif-V AR intuitively by providing the visual-audio pairs in their own domain, and the updated Dif-V AR supervises the RL fine-tuning.intrinsic reward functions to learn various robot skills with reinforcement learning (RL) withoutany reward engineering. When the robot is deployed in a new domain, the fine-tuning stage is dataefficient in terms of label usage and is natural to non-experts, since users only need to provide arelatively small number of images and their corresponding sounds. For example, a user can teachDif-V AR by saying that “this is an apple” when the robot sees an apple. Then, RL policies areself-improved with the updated Dif-V AR.We apply this learning approach to diverse navigation and manipulation environments. Given asound command, the robot must identify the commander’s goal (intent), draw the correspondencebetween the raw visual and audio inputs, and develop a policy to finish the task. To develop a gener-ally applicable pipeline, we make minimal assumptions. We do not assume the availability of humandemonstrations, prior knowledge about the environment, or image and sound recognition modulesthat have perfect accuracy after the deployment. The robot is equipped with only a monocular uncal-ibrated RGB camera and a microphone. Our method works with various sound signals (e.g. voice,environmental sound) and various types of robots (e.g. mobile robots, manipulators).Our main contributions are: (1) We propose a voice-controlled robot pipeline for everyday house-hold tasks. This framework is intuitive for non-experts to continually improve and is agnostic to awide range of voice-controlled tasks. (2) Inspired by (self-)supervised contrastive loss, we proposea novel representation of visual-audio observations named Dif-V AR, which generates intrinsic RLrewards for robot skill learning. Only image-audio pairs are required to fine-tune Dif-V AR and theRL policy. No reward engineering, state estimation, or other supervision is needed. (3) We pro-pose a variety of voice-controlled navigation and manipulation benchmarks. In new simulated andreal-world domains, the adaptation of our method outperforms state-of-the-art baselines in terms ofperformance and data efficiency, while requiring less expertise and environmental instrumentation.2 Related WorksEnd-to-end language understanding. End-to-end spoken language understanding (SLU) sys-tems extract the speaker’s intent directly from raw speech signals without translating the speechto text [15, 16, 17]. Such an end-to-end system is able to fully exploit subtle information, suchas speaker emotions that are lost during speech-to-text transcription, and outperform pipelines thatpreprocess the speech into text [16, 17]. However, they have not been widely applied in robotics.Command following agents. Conventional command following agents consist of independentmodules for language understanding, language grounding, and planning [18, 19, 20]. But thesemodular pipelines suffer from intermediate errors and do not generalize beyond their programmeddomains [21, 22, 23]. To address these problems, end-to-end command following agents areused to perform tasks according to text-based natural language instructions and visual observa-tions [3, 4, 23, 24, 25]. 
In addition, large language models are applied to program the robots giventhe language prompts [26, 27]. However, these methods neglect the non-speech commands andabstract away the practical challenges of auditory signal grounding. To make full use of audio com-mands, Chang et al. [7] introduces an RL framework for skill learning directly from raw sounds.2However, all these works overlook the continual fine-tuning and customization by users, an essen-tial step to ensure performance after the deployment. Fine-tuning such models is computationallychallenging and requires hand-tuned reward functions and prohibitive labeling efforts. Chang et al.[1] partially addresses the problem by learning a visual-audio representation (V AR) with triplet lossto generate an intrinsic reward function for RL. However, this method requires negative pairs in thetriplet loss, which is not intuitive to non-experts and inefficient to deploy in new target domains.Visual and language representation for robotics. Representation learning has shown great poten-tial in learning useful embeddings for downstream robotic tasks [28, 29]. Deep autoencoders havebeen used to learn a latent space which generates states or rewards for RL [30, 31, 32]. Contrastivelearning has also been used to learn representations for downstream skill learning [33, 34]. However,in task execution, a goal image has to be provided, which is less natural in terms of human-robotcommunication compared to language.To this end, visual-language representations have been widely used to associate human instructionswith visual observations in navigation [27, 35] and manipulation [12, 36, 37]. However, similar totext-based command following agents, these methods also lose information when the input is soundinstead of text. Although audio-visual representations such as AudioCLIP have been developed [38],how to apply them in robot learning remains an open challenge. Our work and [1] address this chal-lenge by proposing a visual-audio representation that generates RL rewards for robot skill learning,while our method achieves better data efficiency and can be more easily fine-tuned than [1].3 MethodologyIn this section, we describe the two-stage training pipeline and fine-tuning procedure. In training,we assume the availability of sufficiently large labeled datasets, simulators, and labels. However, infine-tuning, speech transcriptions, one-hot labels, and reward functions, are not available.3.1 Visual-audio representation learningIn the first stage, we collect visual-audio pairs from the environment. Then, we learn a joint repre-sentation of images and audios that associates an image with its corresponding sound command.Data collection. Suppose there are Mpossible intents or tasks within an environment. We collectvisual-audio pairs defined as (I,S, y)from the environment, where I∈Rn×nis an RGB imagefrom the robot’s camera, S∈Rl×mis the Mel Frequency Cepstral Coefficients (MFCC) [39] of thesound command, and y∈ {0,1, ..., M }is the intent ID. We call IandStwoviews of an intent y.A visual-audio pair contains a goal image and a sound command of the same intent. For example,when an iTHOR agent sees a lit lamp, the agent hears the sound “Switch on the lamp” from theenvironment. In contrast, when the agent does not see any object in interest or is far away from allobjects so that it sees multiple objects at once, it receives only an image and hears no sound. Theimage is paired with S=0l×mandy=M. We define this situation as an empty intent .Training Dif-V AR. 
Our goal is to encode both visual and auditory signals into a joint latent space, where the embeddings from the same intents are pulled closer together than embeddings from different intents. For example, the embedding of an image with a TV turned on needs to be close to the embedding of a sound command "Turn on the TV" but far away from other irrelevant commands such as "Turn off the light." We adopt the idea from (self-)supervised contrastive learning for visual representations and formulate the problem as metric learning. As shown in Fig. 2a, the Dif-VAR is a double-branch network with two main components. The first component contains the encoders f_I : R^{n×n} → R^{d_I} and f_S : R^{l×m} → R^{d_S}, which map an input image I and a sound signal S to representation vectors h_I and h_S, respectively. In practice, any deep models for image and sound processing can be used for f_I and f_S. The second component is the projection heads g_I : R^{d_I} → R^d, g_S : R^{d_S} → R^d, and b_I : R^{d_I} → R that map the representations h_I and h_S to the space where losses are applied. We denote the vector embeddings g_I(h_I) and g_S(h_S) as z_I and z_S, respectively. We enforce the norm of z_I and z_S to be 1 by applying an L2-normalization, such that the embeddings live on a unit hypersphere, as shown in Fig. 2b.

Figure 2: Proposed framework. (a) The Dif-VAR is a double-branch network optimized with (self-)supervised contrastive loss. (b) The latent space of the Dif-VAR is a unit hypersphere such that the images and audios of the same intent are closer than those of different intents in the space. (c) The Dif-VAR decides which skill to activate according to S_g.

We use the supervised contrastive (SupCon) loss as the objective, which encourages z_I and z_S of the same intent to be closer together than those of a different intent [40]. Suppose there are N visual-audio pairs in a batch. Let k ∈ K := {1, ..., 2N} be the index of an image or a sound signal within that batch and P(k) := {p ∈ K \ {k} : y_p = y_k} be the set of indices of all images and sounds of the same intent except for index k. Then, the SupCon loss is

\mathcal{L}_{\mathrm{SupCon}} = -\sum_{k \in K} \frac{1}{|P(k)|} \sum_{p \in P(k)} \log \frac{\exp(z_k \cdot z_p / \tau)}{\sum_{j \in K \setminus \{k\}} \exp(z_k \cdot z_j / \tau)},   (1)

where |·| is the cardinality, z_* can be either z_I or z_S, and τ ∈ R+ is a scalar temperature parameter. The use of the SupCon loss allows attraction and repulsion among all images and sounds within a batch, which improves the training efficiency of the representation. We introduce a binary classification loss for the image to distinguish between empty and non-empty intent. Let L_BCE denote the binary cross-entropy loss and e denote the label of intent, which is 0 for an empty intent and 1 for a non-empty intent. The batch loss for training the Dif-VAR is

\mathcal{L}_{\mathrm{Dif\text{-}VAR}} = \alpha_1 \mathcal{L}_{\mathrm{SupCon}} + \alpha_2 \frac{1}{N} \sum_{j=1}^{N} \mathcal{L}_{\mathrm{BCE}}\big(b_I(h_{I_j}), e_j\big),   (2)

where α_1 and α_2 are the weights of the losses. Depending on whether the intent is predicted to be empty, the outputs v_I and v_S of the Dif-VAR are determined for image and sound by

v_I = \mathbb{1}\{\hat{e}_I \geq 0.5\}\, z_I, \quad \hat{e}_I := b_I(h_I), \qquad v_S = \mathbb{1}\{S_i \neq \mathbf{0}_{l \times m}\}\, z_S,   (3)

where \mathbb{1} is an indicator function. The purpose of the binary classification is to set the image and sound embeddings of the empty intent to the center of the joint latent space.
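For concreteness, a minimal PyTorch sketch of the SupCon term in Eq. 1, specialized to a batch of paired image/sound embeddings, is given below. It is an illustrative re-implementation rather than the released training code, and the temperature value is an assumption.

```python
# Minimal PyTorch sketch of the SupCon-style objective in Eq. 1 for paired
# visual/audio embeddings (illustrative, not the authors' released code).
import torch

def supcon_va_loss(z_img, z_snd, labels, tau=0.07):
    """z_img, z_snd: (N, d) L2-normalized embeddings; labels: (N,) intent IDs."""
    z = torch.cat([z_img, z_snd], dim=0)          # (2N, d): every view is an anchor
    y = torch.cat([labels, labels], dim=0)        # (2N,)
    n2 = z.shape[0]

    logits = (z @ z.t()) / tau                    # pairwise similarities
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()  # stability

    self_mask = torch.eye(n2, dtype=torch.bool, device=z.device)
    denom_mask = (~self_mask).float()             # denominator runs over j != k
    pos_mask = ((y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask).float()

    log_prob = logits - torch.log((torch.exp(logits) * denom_mask).sum(dim=1, keepdim=True))
    pos_counts = pos_mask.sum(dim=1).clamp(min=1.0)   # |P(k)|
    return (-(log_prob * pos_mask).sum(dim=1) / pos_counts).sum()
```

The full objective in Eq. 2 would additionally include the binary cross-entropy term on b_I(h_I), weighted by α_1 and α_2.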
Centralizing the empty-intent embeddings in this way removes the biases caused by the location of the empty intent in the joint latent space, leading to better intrinsic rewards than previous methods. While the SupCon loss and other self-supervised visual representation learning frameworks were originally applied only to the image modality [40, 41], we extend the framework to a multi-modality setting and create a new representation for command-following robots.

3.2 RL with visual-audio representation

The second stage of our pipeline is to train an RL agent using an intrinsic reward function generated by a trained Dif-VAR. We model a robot command-following task as a Markov Decision Process (MDP), defined by the tuple ⟨X, A, P, R, γ⟩. At each time step t, the agent receives an image I_t from its RGB camera and robot states M_t, such as the end-effector location or the previous action. At t = 0, an additional one-time sound command S_g containing an intent is given to the robot. We freeze the trainable weights of the Dif-VAR in this stage and define the MDP state x_t ∈ X as x_t = [I_t, v_{I_t}, v_{S_g}, M_t], where v_{I_t} and v_{S_g} are the outputs of the Dif-VAR for I_t and S_g, respectively. Then, based on its policy π(a_t | x_t), the agent takes an action a_t ∈ A. In return, the agent receives a reward r_t ∈ R and transitions to the next state x_{t+1} according to an unknown state transition P(·| x_t, a_t). The process continues until t exceeds the maximum episode length T, and the next episode starts.

Intrinsic rewards. Since v_I and v_S of the same intent are pulled together within the Dif-VAR by the contrastive loss, intrinsic rewards can be derived as the similarity between v_I and v_S. Eqs. 4 and 5 present two possible task-agnostic and robot-agnostic reward functions:

r^{i}_t = v_{I_t} \cdot v_{S_g},   (4)        r^{ic}_t = v_{I_t} \cdot v_{S_g} + v_{S_t} \cdot v_{S_g},   (5)

where v_{S_t} is the embedding of the current sound signal S_t, which can be triggered in the same way as S, as described in Section 3.1. Intuitively, the agent using r^{i}_t receives a high reward when the scene it sees matches the command it hears. The agent trained using the reward r^{ic}_t additionally needs to match the current sound it hears with the sound command to receive high rewards. Compared to r^{ic}_t, the reward function r^{i}_t does not depend on any real-time supervision signal such as the current sound v_{S_t} from the environment, allowing the agent to perform self-supervised RL training with the Dif-VAR. Although RL agents trained with Eq. 4 can already achieve decent performance, providing the current sound S_t can further improve the performance [1]. Since S_t can be difficult to obtain, especially in real environments, S_t is not part of the state x_t and thus the robot policy does not require S_t at test time.

Policy network. We propose two architectures for RL policies. The first architecture is flat and uses a single policy network to fulfill all the intents in an environment, which is suitable when the skills to finish tasks are similar. The second architecture is hierarchical and contains multiple policy networks individually designed to fulfill a subset of intents in an environment, as shown in Fig. 2c. Given an S_g and a set of policies Π = {π_1, π_2, ...}, the Dif-VAR selects a policy π_j by

j = L(\hat{y}), \qquad \hat{y} = \arg\max_i \; v_{S_g} \cdot \frac{C_i}{\lVert C_i \rVert_2},   (6)

where C_i is the centroid of an intent in the joint latent space calculated from the training data and L is a lookup table that maps an intent ID to a policy.
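The reward and policy-selection rules above reduce to a few dot products at run time. The sketch below is illustrative only, with `centroids` and `intent_to_policy` standing in for the centroids C_i and the lookup table L.

```python
# Illustrative sketch (not the authors' code) of the intrinsic reward (Eqs. 4-5)
# and the hierarchical policy selection (Eq. 6).
import numpy as np

def intrinsic_reward(v_img_t, v_snd_goal, v_snd_t=None):
    """Eq. 4, plus the Eq. 5 term when a current-sound embedding is available."""
    r = float(np.dot(v_img_t, v_snd_goal))
    if v_snd_t is not None:
        r += float(np.dot(v_snd_t, v_snd_goal))
    return r

def select_policy(v_snd_goal, centroids, intent_to_policy):
    """Eq. 6: pick the policy whose intent centroid best matches the command."""
    sims = [np.dot(v_snd_goal, c / np.linalg.norm(c)) for c in centroids]
    intent_id = int(np.argmax(sims))
    return intent_to_policy[intent_id]
```

Note that because Eq. 3 zeroes the image embedding whenever the empty intent is predicted, a scene that matches no command yields exactly zero reward under Eq. 4.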
For benchmarking purposes, we use ProximalPolicy Optimization (PPO) for policy and value function learning [42].3.3 Intuitive and data-efficient fine-tuningAfter the robot is deployed in a new domain such as the real world, its performance often degradesdue to domain shift from both perception and dynamics [43]. Our fine-tuning procedure allowsnon-experts to continually improve the Dif-V AR to reduce perception gaps and improve robot skillsto reduce dynamics gaps. We only need to collect visual-audio pairs of the form (I,S)from non-experts. Since we no longer have the underlying labels yfor images and sounds, we replace theSupCon loss in Eq. (2) with the following self-supervised contrastive loss (SSC) [41]:LSSC=−Xk∈Klogexp (zk·zp(k)/τ)Pj∈K\{k}exp (zk·zj/τ), (7)where p(k)is the index of the data paired with the data of index kwith the same intent. We mixthe new data from the non-experts with a subset of the original training data to update the Dif-V AR, producing a more accurate reward function by Eq. 4. The robot can then self-improve itspolicy network with the reward function by randomly sampling a sound command as the goal. Thecollection of the visual-audio pairs does not require special equipment other than an RGB cameraand a microphone. The users provide images and sound based on their common knowledge usingtheir own voices. The users do not need to type in speech transcriptions, draw bounding boxes and5masks, modify the network architectures, or design a reward function. To fine-tune V AR in [1],non-experts have to provide a sound command with different intent S−for each image Ito usetriplet loss. In contrast, Dif-V AR eliminates this requirement by utilizing the SSC, leading to a moreintuitive data collection experience for non-experts and better performance with fewer labeled data.See Appendix A for the fine-tuning algorithm.4 ExperimentsIn this section, we first describe the various environments and sound datasets. Then, we comparethe performance and data efficiency of our pipeline with several baselines and ablation models.4.1 Environments and sound datasetFigure 3: Simulation environments.Robotic environments: We evaluate the perfor-mance of all the methods on three different roboticenvironments: iTHOR, Desk, and Row. In all envi-ronments, the perception of the robot comes from amonocular uncalibrated RGB camera, and the robotmust fulfill the sound command. See Appendix Bfor details.Sound data: We use several types of sounds from state-of-the-art datasets in training and testing.Specifically, we use speech signals from Fluent Speech Commands (FSC) [16] and short speechcommands from Google Speech Commands (GSC) [44]. We also collect a synthetic speech datasetusing Google Text-to-Speech. We use single-tone signals from NSynth [45] and environmentalsounds from UrbanSound8K (US8K) [46] and ESC-50 [47]. The Wordset dataset was created fromthe “0,” “1,” “2,” “3” in GSC. We also used a Mix dataset to show that the Dif-V AR can map multipletypes of sounds to a single object or idea, by mixing speech data with environmental sound. SeeAppendix C for more sound examples and intent we choose for the environment.4.2 Evaluation of the RL policyEvaluation metrics. We evaluate the model with two metrics: (1) success rate (SR) and (2) thenumber of labels used for training (LU). We define SR as the percentage of successful test episodes.We test the learned policy for 50episodes for each intent across multiple random seeds, and anagent succeeds if it fulfills the command. 
We compare the label usage of the models because a command-following robot deployed in the real world should require as few annotations as possible from non-experts for fine-tuning.

Baselines and ablations. We compare the RL performance of our method against the following baselines and ablations. (1) "E2E" is a representative end-to-end deep RL policy for voice-controlled robots [7]. E2E uses hand-tuned task-specific reward functions and requires ground-truth class labels for image and sound classification. (2) "VAR" trains an RL agent based on the output of the VAR [1]. VAR utilizes triplet loss for training and fine-tuning. Both our method and VAR use Eq. 5 for the downstream RL tasks. (3) We compare the performance between flat (F) and hierarchical (H) architectures that have the same total trainable parameters. (4) "ASR+NLU+RL (ANR)" is a common modular pipeline. Note that, unlike this baseline, our method does not rely on any transcriptions or expertise to be fine-tuned. This baseline does not work with non-speech datasets such as NSynth, which will be indicated by "-". (5) "CLIP" uses ASR for speech recognition and uses the dot product of the embeddings from the CLIP model as the reward [48]. We use "CLIP" as a representative of pre-trained visual-language models that claim zero-shot transferability to downstream tasks [48]. (6) "Oracle" is an RL agent which assumes perfect ASR and NLU modules and is trained with hand-tuned reward functions and ground-truth class labels. See Appendix E for more details.

Definition of labels. In this paper, labels include all forms of annotation and measurement that are used to train a model. For example, one-hot labels for image and sound classification and the distance measurement between the robot and the goal are both labels. One visual-audio pair, (I, S, y) for training or (I, S) for fine-tuning, used in Dif-VAR requires 1 label to indicate y, i.e., the shared intent. A visual-audio triplet used in VAR, (I, S+, S−), requires 2 labels to indicate the positive and the negative. Every E2E training step requires about 3 labels, including the target object state checking (e.g. check if the light is switched on), distance measuring to calculate the extrinsic reward, and a one-hot label for auxiliary losses.

Table 1: Test success rate (SR ↑, %) with various types of sound commands in the original visual domain.

Env    Steps (×10^6)  Dataset   CLIP  ANR   E2E   VAR   Ours(F)  Ours(H)  Oracle
Row    3.0            Wordset   1.5   85.5  95.5  97.0  98.0     96.0     98.0
Row    3.0            NSynth    -     -     92.5  98.0  98.0     97.0     98.0
Row    3.0            Mix       -     -     94.0  95.5  97.0     95.0     98.0
Desk   9.0            Mix       -     -     77.0  58.5  84.5     89.5     90.0
iTHOR  9.0            FSC       10.8  66.0  68.0  65.6  72.4     76.8     79.2

Control policies with unheard sounds. In this experiment, we test the performance of different models with sound commands never heard by the agent during training (e.g. new speakers). All the models are trained with the same number of RL steps and sufficient labels. They are tested in the original Floor Plans 201-220 or desk. No fine-tuning is performed yet.

Table 1 suggests that the intrinsic rewards produced by our representation adequately support the RL training across various robots, robotic tasks, and types of sound signals. Remarkably, our method demonstrates satisfactory performance even without the inclusion of extrinsic rewards. CLIP suffers from a severe domain shift problem in our task.
As a general-purpose pretrained model, CLIP isnot tailored to generate an RL reward or designed for any specific downstream robotic applications.ANR is limited to speech signals and has lower SR than the other methods due to intermediateerrors, which coincides with the findings in [1, 22]. Compared to our method, the V AR does notperform well in every environment, which suggests that the Dif-V AR produces better representationand more reliable rewards. The flat architecture outperforms the hierarchical architecture in onlythe Row environment, indicating that the hierarchical architecture is more suitable when the skillsrequired to complete tasks vary. See Appendix D for examples of task execution of the agent.Table 2: Average success rates over unseen domainsbefore and after fine-tuning.iTHOR(sim) Desk(sim) Row(real)LU 0 253 0 150 0 300ANR 18.8 19.8 - - 13.8 15.0E2E 18.4 19.8 44.5 45.0 15.0 15.0V AR 19.6 57.9 35.5 58.0 18.8 56.3Ours(F) 20.8 85.8 69.0 84.5 18.8 78.8Ours(H) 23.2 88.4 70.0 90.0 16.3 75.0Fine-tuning in novel domains. This ex-periment aims to show the potential ofeach method to be improved in a new do-main. We consider the scenario where atrained household robot is purchased toserve in a new place. Each method isgiven the same number of new labels, anda data-efficient method should achieve thehighest success rate. We first test the per-formance of trained models with unheardsound commands in unseen domains with-out any fine-tuning. For the iTHOR environment, the agent is tested in Floor Plans 226-230, whichhave sets of furniture and arrangements that are unfamiliar to the robot. For the Desk environment,the robot is placed in front of a new desk with unseen object appearances and locations. The resultsare marked by “LU=0” in Table 2 as 0new label is required for the test. We see that the perfor-mance of all methods drops compared to the test results in the original domains shown in Table 1.This phenomenon is due to the common problem of domain shift faced by learning systems [8].We then manually collect an average of 253 new labels for each unseen floor plan to fine-tune eachmethod for that floor plan in the iTHOR environment, and 150 new labels for the unseen desk inthe Desk environment. We followed Sec. 3.3 to fine-tune the V AR and Dif-V AR and used Eq. 4to self-improve RL policies without current sounds for 1M timesteps. For E2E, we collect one-hot7labels and use simulator queries during the fine-tuning. The fine-tuning is terminated after it reachesthe label limit. See Appendix D.4 and D.5 for task execution before and after the fine-tuning.From Table 2, we find that the ANR and E2E can only be improved by less than 1.5%, suggestingthe inefficiency of fine-tuning these methods after deployment. The label quotas are depleted rapidlydue to the inefficient use of labels for policy network fine-tuning, which leads to less RL experience.Our methods have higher data efficiency as labels were purely used to update the Dif-V AR, andthere was no label consumption during the self-supervised RL exploration. This leads to an overallricher RL experience. Compared to V AR, our method achieves better performance using the samenumber of labels because Dif-V AR does not need negative pairs for fine-tuning. Using around 250image-sound pairs, our method successfully improves itself to fulfill 4∼5tasks in a new domain.We believe the effort is manageable by a non-expert. 
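To make the self-improvement step concrete, the sketch below mirrors the procedure of Sec. 3.3 (and Algorithm 1 in Appendix A): after the Dif-VAR is updated with the user-provided pairs, the policy is refined using only the intrinsic reward of Eq. 4. The environment, policy, and `dif_var` interfaces are placeholders for illustration, not the actual implementation.

```python
# Minimal sketch (placeholder interfaces) of self-supervised RL fine-tuning with
# the intrinsic reward of Eq. 4: no hand-crafted reward and no extra labels.
import random
import numpy as np

def self_supervised_finetune(env, policy, dif_var, user_pairs, episodes=1000, T=200):
    """env, policy, dif_var: placeholder objects exposing the methods used below."""
    for _ in range(episodes):
        _, sound_goal = random.choice(user_pairs)     # sample a user command as the goal
        v_goal = dif_var.encode_sound(sound_goal)
        obs = env.reset()
        rollout = []
        for _ in range(T):
            v_img = dif_var.encode_image(obs["image"])
            reward = float(np.dot(v_img, v_goal))     # Eq. 4: intrinsic reward only
            # (the raw image I_t is also part of the paper's state; omitted for brevity)
            state = np.concatenate([v_img, v_goal, obs["robot_state"]])
            action = policy.act(state)
            rollout.append((state, action, reward))
            obs, done = env.step(action)
            if done:
                break
        policy.update(rollout)                        # e.g., a PPO update
    return policy
```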
See Appendix E.6 for intermediate resultsversus the label usage during fine-tuning.Figure 4: Sim2real experiment setup.Fine-tuning in the real world. Thisexperiment shows that our methodis practical and helps minimize thesim2real gap. In this experiment, theagent performs a grasping task with anoisy background using a single un-calibrated RGB camera. This settingis challenging for user-involved sim2real because the performance of the models is sensitive to theinevitable inconsistency in the camera pose between simulation and the real world [1], and it is un-likely that non-experts know how to calibrate a camera. To solve the problem, the Dif-V AR learns toassociate valid pre-grasp poses with images and speech commands. Dif-V AR outputs a high rewardwhen the robot reaches the desired object with a correct grasping pose. We first train the agents withdomain randomization [8] in the simulator. Then, we deploy the model to a real Kinova-Gen3 armand perform 20 tests for each intent (80 in total). See Fig. 4 and Appendix B.2 for more details.We spent an hour collecting the visual-audio pairs and fine-tuning the policy. The last two columnsof Table 2 shows that our methods minimized the domain shift with a reasonable number of newlyprovided pairs. Qualitative results are shown in Fig. 5 and the supplementary video.Figure 5: Before and after the fine-tuning in the real world.5 Conclusion, Limitations and Future workIn conclusion, we propose a novel visual-audio representation named Dif-V AR for command fol-lowing robots based on the recent advancement in (self-)supervised contrastive learning. Dif-V ARrequires much fewer labels from non-experts during fine-tuning but produces higher-quality rewardsfor downstream RL agents. Our results suggest that visual-language association and skill develop-ment are highly correlated and thus need to be designed together. Furthermore, we are the first todemonstrate that (self-)supervised contrastive loss has the potential to enhance human-robot inter-action. However, our work has the following limitations, which open up directions for future work.First, empty intents may result in sparse intrinsic reward functions, which pose challenges in long-horizon tasks. Our reward function can be combined with other intrinsic rewards [49]. To increaserobustness, we may take 3D information and history into reward calculation. Second, user studiesare needed to confirm that collecting visual-audio pairs is intuitive and convenient for end-users.Third, our method can be combined with imitation learning to further improve data efficiency.8AcknowledgmentsThis work is supported by AIFARMS through the Agriculture and Food Research Initiative (AFRI)grant no. 2020-67021-32799/project accession no.1024178 from the USDA National Institute ofFood and Agriculture. We thank Yunzhu Li and Karen Livescu for insightful discussions and allreviewers for their feedback.References[1] P. Chang, S. Liu, and K. Driggs-Campbell. Learning visual-audio representations for voice-controlled robots. In IEEE International Conference on Robotics and Automation (ICRA) ,2023.[2] C. Matuszek. Grounded language learning: Where robotics and nlp meet. In InternationalJoint Conference on Artificial Intelligence (IJCAI) , 2018.[3] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. S ̈underhauf, I. Reid, S. Gould, andA. van den Hengel. Vision-and-language navigation: Interpreting visually-grounded naviga-tion instructions in real environments. 
In IEEE/CVF Conference on Computer Vision andPattern Recognition (CVPR) , pages 3674–3683, 2018.[4] M. Shridhar, J. Thomason, D. Gordon, Y . Bisk, W. Han, R. Mottaghi, L. Zettlemoyer, andD. Fox. Alfred: A benchmark for interpreting grounded instructions for everyday tasks. InIEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) , 2020.[5] V . Blukis, C. Paxton, D. Fox, A. Garg, and Y . Artzi. A persistent spatial semantic representa-tion for high-level natural language instruction execution. In Conference on Robot Learning(CoRL) , pages 706–717, 2022.[6] S. Y . Min, D. S. Chaplot, P. Ravikumar, Y . Bisk, and R. Salakhutdinov. Film: Followinginstructions in language with modular methods. arXiv preprint arXiv:2110.07342 , 2021.[7] P. Chang, S. Liu, H. Chen, and K. Driggs-Campbell. Robot sound interpretation: Combiningsight and sound in learning-based control. In IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 5580–5587, 2020.[8] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization fortransferring deep neural networks from simulation to the real world. In IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 23–30, 2017.[9] S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine,R. Hadsell, and K. Bousmalis. Sim-to-real via sim-to-sim: Data-efficient robotic grasping viarandomized-to-canonical adaptation networks. In IEEE/CVF Conference on Computer Visionand Pattern Recognition (CVPR) , pages 12627–12637, 2019.[10] I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino,M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder, L. Weng,Q. Yuan, W. Zaremba, and L. Zhang. Solving rubik’s cube with a robot hand. arXiv preprintarXiv:1910.07113 , 2019.[11] B. Wen, W. Lian, K. Bekris, and S. Schaal. You only demonstrate once: Category-level ma-nipulation from single visual demonstration. In Robotics: Science and Systems (RSS) , 2022.[12] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In Conference on Robot Learning (CoRL) , 2022.[13] A. Yu and R. J. Mooney. Using both demonstrations and language instructions to efficientlylearn robotic tasks. In International Conference on Learning Representations (ICLR) , 2022.9[14] M. Du, O. Y . Lee, S. Nair, and C. Finn. Play it by ear: Learning skills amidst occlusion throughaudio-visual imitation learning. In Robotics: Science and Systems , 2022.[15] D. Serdyuk, Y . Wang, C. Fuegen, A. Kumar, B. Liu, and Y . Bengio. Towards end-to-endspoken language understanding. In International Conference on Acoustics, Speech, and SignalProcessing (ICASSP) , pages 5754–5758, 2018.[16] L. Lugosch, M. Ravanelli, P. Ignoto, V . S. Tomar, and Y . Bengio. Speech model pre-training forend-to-end spoken language understanding. In Annual Conference of the International SpeechCommunication Association (INTERSPEECH) , 2019.[17] M. Kim, G. Kim, S.-W. Lee, and J.-W. Ha. St-bert: Cross-modal language model pre-trainingfor end-to-end spoken language understanding. In International Conference on Acoustics,Speech, and Signal Processing (ICASSP) , pages 7478–7482, 2021.[18] F. Stramandinoli, V . Tikhanoff, U. Pattacini, and F. Nori. Grounding speech utterances inrobotics affordances: An embodied statistical language model. 
In Joint IEEE InternationalConference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) , pages79–86, 2016.[19] R. Paul, J. Arkin, D. Aksaray, N. Roy, and T. M. Howard. Efficient grounding of abstract spatialconcepts for natural language interaction with robot platforms. The International Journal ofRobotics Research , 37(10):1269–1299, 2018.[20] A. Magassouba, K. Sugiura, A. T. Quoc, and H. Kawai. Understanding natural languageinstructions for fetching daily objects using gan-based multimodal target–source classification.IEEE Robotics and Automation Letters , 4(4):3884–3891, 2019.[21] A. Vanzo, D. Croce, E. Bastianelli, R. Basili, and D. Nardi. Robust spoken language under-standing for house service robots. Polibits , (54):11–16, 2016.[22] Y . Tada, Y . Hagiwara, H. Tanaka, and T. Taniguchi. Robust understanding of robot-directedspeech commands using sequence to sequence with noise injection. Frontiers in Robotics andAI, 6:144, 2020.[23] K. M. Hermann, F. Hill, S. Green, F. Wang, R. Faulkner, H. Soyer, D. Szepesvari, W. M.Czarnecki, M. Jaderberg, D. Teplyashin, et al. Grounded language learning in a simulated 3dworld. arXiv preprint arXiv:1706.06551 , 2017.[24] H. Yu, H. Zhang, and W. Xu. Interactive grounded language acquisition and generalization ina 2d world. In International Conference on Learning Representations (ICLR) , 2018.[25] D. S. Chaplot, K. M. Sathyendra, R. K. Pasumarthi, D. Rajagopal, and R. Salakhutdinov.Gated-attention architectures for task-oriented language grounding. In Conference on ArtificialIntelligence (AAAI) , pages 2819–2826, 2018.[26] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakr-ishnan, K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J.Ruano, K. Jeffrey, S. Jesmonth, N. Joshi, R. Julian, D. Kalashnikov, Y . Kuang, K.-H. Lee,S. Levine, Y . Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes,P. Sermanet, N. Sievers, C. Tan, A. Toshev, V . Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu,M. Yan, and A. Zeng. Do as i can and not as i say: Grounding language in robotic affordances.InarXiv preprint arXiv:2204.01691 , 2022.[27] C. Huang, O. Mees, A. Zeng, and W. Burgard. Visual language maps for robot navigation. InIEEE International Conference on Robotics and Automation (ICRA) , 2023.[28] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning(CoRL) , pages 991–1002, 2022.10[29] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. In Conference on Robot Learning (CoRL) , 2022.[30] A. Nair, V . Pong, M. Dalal, S. Bahl, S. Lin, and S. Levine. Visual reinforcement learning withimagined goals. In Advances in Neural Information Processing Systems (NeurIPS) , volume 31,2018.[31] Y . Wang, G. N. Narasimhan, X. Lin, B. Okorn, and D. Held. Roll: Visual self-supervisedreinforcement learning with object reasoning. In Conference on Robot Learning (CoRL) , 2020.[32] H. Zhu, J. Yu, A. Gupta, D. Shah, K. Hartikainen, A. Singh, V . Kumar, and S. Levine. The in-gredients of real world robotic reinforcement learning. In International Conference on Learn-ing Representations (ICLR) , 2020.[33] P. Sermanet, C. Lynch, Y . Chebotar, J. Hsu, E. Jang, S. Schaal, S. Levine, and G. Brain. Time-contrastive networks: Self-supervised learning from video. 
In IEEE International Conferenceon Robotics and Automation (ICRA) , pages 1134–1141, 2018.[34] E. Jang, C. Devin, V . Vanhoucke, and S. Levine. Grasp2vec: Learning object representationsfrom self-supervised grasping. In Conference on Robot Learning (CoRL) , 2018.[35] D. Shah, B. Osi ́nski, brian ichter, and S. Levine. LM-nav: Robotic navigation with large pre-trained models of language, vision, and action. In Conference on Robot Learning (CoRL) ,2022.[36] A. Yu and R. Mooney. Using both demonstrations and language instructions to efficiently learnrobotic tasks. In Conference on Robot Learning (CoRL) , 2023.[37] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding language with visual affordances overunstructured data. In IEEE International Conference on Robotics and Automation (ICRA) ,2023.[38] A. Guzhov, F. Raue, J. Hees, and A. Dengel. Audioclip: Extending clip to image, text and au-dio. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) ,pages 976–980, 2022.[39] S. Davis and P. Mermelstein. Comparison of parametric representations for monosyllabicword recognition in continuously spoken sentences. IEEE transactions on acoustics, speech,and signal processing , 28(4):357–366, 1980.[40] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y . Tian, P. Isola, A. Maschinot, C. Liu, and D. Kr-ishnan. Supervised contrastive learning. Advances in Neural Information Processing Systems ,33:18661–18673, 2020.[41] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learningof visual representations. In International conference on machine learning (ICML) , pages1597–1607, 2020.[42] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[43] Y . Du, O. Watkins, T. Darrell, P. Abbeel, and D. Pathak. Auto-tuned sim-to-real transfer. InIEEE International Conference on Robotics and Automation (ICRA) , pages 1290–1296, 2021.[44] P. Warden. Speech commands: A dataset for limited-vocabulary speech recognition. arXivpreprint arXiv:1804.03209 , 2018.[45] J. Engel, C. Resnick, A. Roberts, S. Dieleman, D. Eck, K. Simonyan, and M. Norouzi. Neuralaudio synthesis of musical notes with wavenet autoencoders. In International Conference onMachine Learning (ICML) , pages 1068–1077, 2017.11[46] J. Salamon, C. Jacoby, and J. P. Bello. A dataset and taxonomy for urban sound research. InInternational Conference on Multimedia (ACM-MM) , pages 1041–1044, 2014.[47] K. J. Piczak. ESC: Dataset for Environmental Sound Classification. In Annual ACM Confer-ence on Multimedia , pages 1015–1018. ACM Press, 2015.[48] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In International conference on machine learning , pages 8748–8763. PMLR, 2021.[49] Y . Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation.InInternational Conference on Learning Representations (ICLR) , 2019.[50] E. Coumans and Y . Bai. Pybullet, a python module for physics simulation for games, roboticsand machine learning. http://pybullet.org , 2016–2019.[51] E. Kolve, R. Mottaghi, W. Han, E. VanderBilt, L. Weihs, A. Herrasti, D. Gordon, Y . Zhu,A. Gupta, and A. Farhadi. AI2-THOR: An Interactive 3D Environment for Visual AI. arXiv ,2017.[52] O. Mees, L. Hermann, E. Rosete-Beas, and W. Burgard. 
Calvin: A benchmark for language-conditioned policy learning for long-horizon robot manipulation tasks. IEEE Robotics andAutomation Letters (RA-L) , 7(3):7327–7334, 2022.[53] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh,S. Sengupta, A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXivpreprint arXiv:1412.5567 , 2014.[54] T. Bunk, D. Varshneya, V . Vlasov, and A. Nichol. Diet: Lightweight language understandingfor dialogue systems. arXiv preprint arXiv:2004.09936 , 2020.12A Algorithm for fine-tuning an agentAlgorithm 1 Fine-tuning the Dif-V AR (F) and an RL agent1:Inputs: A trained Dif-V AR V, a trained policy πθ, a subset of the original training data Dold2:Collect a small set of visual-audio pairs D={(Ii,Si)}Ui=13:Dnew=DoldSD4:fora sampled minibatch {(Ii,Si)}Ni=1fromDnewdo ▷Fine-tune Dif-V AR5: Calculate empty intent label eiby checking if Si=0l×m6: Calculate image and sound embeddings: hI,zI,hS,zS←V(Ii,Si)7: Calculate LSSCby Eq. 78: Calculate loss by Lfinetune =α1LSSC+α21NPNj=1LBCE(bI(hIj), ej)9: Update Vto minimize Lfinetune10:fork= 0,1,2, ...do ▷Self-supervised RL fine-tuning11: Sample a sound command SgfromDas goal12: fort= 0,1, ..., T do13: Receive RGB image Itand robot state Mt14: Calculate image and sound embeddings: vIt,vSg←V(It,Sg)by Eq. 315: Calculate reward rt=vIt·vSg16: ifStthen17: Calculate embeddings: vSt←V(St)18: rt=rt+vSt·vSg19: Store{rt,It,Mt,vIt,vSg}in a memory buffer DRL20: Update πθwith data from DRLusing PPO21: ClearDRL22:return V, πθB Robotic environment descriptionsThe Row and Desk environments are developed in PyBullet [50] and focus mainly on manipulationtasks. In contrast, the iTHOR environment is developed in AI2-THOR [51] and is challenging inperception and designed for mobile robots.B.1 RowFour objects are placed in a line at a random location unknown to the robot on the table. A robotarm needs to move its gripper and stay above the object corresponding to a given command basedon RGB images. The camera is placed at a fixed location on the side of the table such that it cancapture the gripper and the objects from a distorted perspective. The relative positions of the grippertip and the objects are initialized randomly at the beginning of an episode. A sound commandonly mentions the orindal information about the target object, and the robot needs to develop spatialreasoning skills to approach the target object using the relative positional information observed fromthe camera.Figure 6: Visualization of the Row environment using Kuka-iiwa robot arm with paired images and voices fromthe Wordset. In this case, “zero” means the leftmost block, “one” means the second block from the left, and soon. The red and green rays are just for illustration purposes. The possible locations of the blocks are limitedto the green rectangle and the end-effector location is indicated by the vertical ray. The rightmost figure showsthe camera view.13B.2 Row - realThis environment is modified from the original Row environment. The Four objects are a mug, asoup can, a pudding box, and an orange. Different objects may require distinct grasping poses. SeeFig. 7 for examples. At the end of an episode, the gripper performs a grasp by lowering its heightfrom its current position, closing the fingers, and lifting the object up. 
For domain randomization,we randomize the background, camera viewpoint, and relative offset among the objects.Figure 7: Visualization of the Row environment with paired images and voices from the Synthetic dataset underdomain randomization setting. The grasping poses for the mug and the pudding box are different. The red andgreen rays are just for illustration purposes. The rightmost figure shows the camera view.B.3 DeskThe Desk environment is modified from the CALVIN dataset [52]. A Franka Panda robot arm isplaced in front of a desk with a sliding door and a drawer that can be opened and closed. On thedesk, there is a button connected to an alarm clock, a switch to control a light bulb, and a pill case.The tasks of the robot include turning on or off the light bulb by manipulating the switch, pressingthe button to mute the alarm clock and turn the LED of the clock into red, and picking up the pillcase that could be on the top of the desk or inside a closed drawer. When the pill case is locatedinside a closed drawer, the robot needs to open the drawer before picking up the pill case. The soundcommands come from FSC, ESC-50, and the Synthetic dataset.Figure 8: Visualization of the Desk environment with paired images and voices. The rightmost figure showsthe camera view.B.4 iTHOROur iTHOR environment uses real full-sentence speech commands to simulate a real-world applica-tion of household robots. The environment has 30 different floor plans of living rooms, each with itsown set of decorations, furniture, and arrangements. The robot is given goal tasks such as switch-ing the floor lamp or television on or off. The robot must navigate through the environment andinteract with the intended object given RGB images and a noisy local discrete occupancy grid as therobot states. The complexity of the environment requires the agent to associate complicated speechcommands with high-fidelity visual observations, without a floor plan map. 
The floor plans can be visualized and interacted with at https://ai2thor.allenai.org/demo/.

Figure 9: Visualization of the iTHOR environment with paired images and voices from the FSC dataset.

B.5 List of Tasks

Table 3: List of tasks in each environment.

Envs       | Tasks in the original domain | Changes in the new domain
iTHOR      | activate the floor lamp; deactivate the floor lamp; activate the TV; deactivate the TV; find the pillow | unseen furniture, room decoration, room arrangement, and voices from new speakers
Desk       | activate the light bulb; deactivate the light bulb; mute the alarm clock; pick up the pill case | unseen desk, object locations, object appearance, and sound from new speakers or the alarms
Row        | first block; second block; third block; fourth block | unseen sound or voices
Row - real | first object; second object; third object; fourth object | unseen camera intrinsics, camera extrinsics, background, relative locations among the objects, and voices from new speakers

C Sound Data

Table 4: Sound signals used in the experiments.

Dataset   | Sound               | Examples
FSC       | activate light      | "Turn on the lights," "Lamp on"
FSC       | deactivate light    | "Switch off the lamp," "Lights off"
FSC       | activate music      | "Put on the music," "Play"
FSC       | deactivate music    | "Pause music," "Stop"
FSC       | bring shoes         | "Get me my shoes," "Bring shoes"
GSC       | "0," "1," "2," "3"  | "zero," "one," "two," "three"
GSC       | names of 4 objects  | "house," "tree," "bird," "dog"
NSynth    | C4, D4, E4, F4      | Various instruments, tempo, and volume
US8K      | bark, jackhammer    | Sound recorded in the wild
ESC-50    | Clock alarm         | Alarm sound emitted from various alarm clocks
Synthetic | bring pill case     | "Pass over the pill box for me," "Give me the pill case"
Synthetic | first object        | "I would like the first object," "Give me the leftmost object"
Synthetic | second object       | "Would you mind giving me the second object from the left," "Bring the third object from the right to me"
Synthetic | third object        | "Take the third object," "Bring me the second object from the right," "Take the third object from the left"
Synthetic | fourth object       | "Give the rightmost object to me," "Hand over the fourth object"

D Visualization of task execution

D.1 Row
Figure 10: Visualization of the task execution in the Row environment after training without fine-tuning. The sounds come from the Wordset dataset. Kuka moves its gripper to the target block successfully in all episodes.

D.2 Desk
Figure 11: Visualization of the task execution in the Desk environment after training without fine-tuning.

D.3 iTHOR
Figure 12: Visualization of the task execution in the iTHOR environment after training without fine-tuning. The sounds come from the FSC dataset. The iTHOR agent finishes household tasks successfully in all episodes.

D.4 iTHOR fine-tuning
Figure 13: Visualization of the task execution in the iTHOR environment before and after the fine-tuning in unseen floor plans and with sound commands given by new speakers.

D.5 Desk fine-tuning
Figure 14: Visualization of the task execution in the Desk environment before and after the fine-tuning with an unseen desk and sound commands given by new speakers. The appearance of the desk and the pill case are different from the original desk. The locations of the light bulb, the button, the LED, and the drawer are different from the original desk.

E Facts and details

E.1 Time efficiency
We evaluate the time efficiency of all the methods. All the models run on a single Nvidia GTX 1080 Ti GPU and an Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz. We report the average time in seconds (s) for the model to take one action in the iTHOR environment with the FSC dataset.
The average is calculated from 12,500 samples.
• ANR: 0.041 s
• E2E: 0.018 s
• VAR: 0.024 s
• Dif-VAR: 0.022 s

E.2 ANR
• Implementation: We first use an off-the-shelf automatic speech recognition (ASR) system, Mozilla DeepSpeech [53], to transcribe the speech to text. We then train a learning-based natural language understanding (NLU) module to handle the noisy output from the ASR [54]. For example, "Play the music" is sometimes transcribed as "by the music." Finally, a vision-based RL agent operates with the predicted intent from the NLU.
• Accuracy of intent prediction of ASR+NLU: FSC dataset: 86.0%; Wordset: 87.0%.
• Fine-tuning details: We fine-tune the RL agent of this pipeline because it is the major source of performance degradation. The policy network is initialized with the weights obtained during training. During the fine-tuning, the input of the model includes images and ground-truth intent IDs, as if the ASR+NLU were perfect. The model also receives a one-hot label of the images and reward signals from the simulator as supervision signals. The policy network is updated based on the PPO loss and the auxiliary loss for the images.

E.3 E2E
Fine-tuning details: We fine-tune the whole policy network end-to-end. The policy network is initialized with the weights obtained during training. During the fine-tuning, the input of the model includes images and raw sound signals. The model also receives a one-hot label of the images, ground-truth intent IDs, and reward signals from the simulator as supervision signals. The policy network is updated based on the PPO loss and the auxiliary losses for the images and audio.

E.4 VAR
Fine-tuning details: We fine-tune the VAR and let the VAR provide rewards and observations to fine-tune the policy network. Both the VAR and the policy network are initialized with the weights obtained during training. We collect visual-audio triplets consisting of an image, a positive audio, and a negative audio from the environment and fine-tune the VAR using the triplet loss. During the fine-tuning of the policy network, the input includes images and vector embeddings from the VAR. The policy network is updated based on the PPO loss.

E.5 Dif-VAR
We show the statistics of training and fine-tuning data for Dif-VAR.

Table 5: Number of training and fine-tuning pairs
Envs       | # of training pairs | # of fine-tuning pairs | Ratio (fine-tuning : training)
Row - real | 5.0 x 10^4          | 3.0 x 10^2             | 0.6%
Desk       | 2.5 x 10^4          | 1.5 x 10^2             | 0.6%
iTHOR      | 6.0 x 10^4          | 2.5 x 10^2             | 0.4%

We report the change in success rate w.r.t. (1) label usage (LU) for fine-tuning Dif-VAR, and (2) the number of RL steps given the fine-tuned representations with the specific number of labels.

Table 6: Success rate with varying label usage and RL fine-tuning steps for Ours(F) in the Desk and iTHOR environments.
Envs  | Episode Length | LU  | RL Steps: 0.1M | 0.5M | 1.0M
Desk  | 100            | 50  | 77.5           | 79.5 | 79.5
Desk  | 100            | 100 | 78.0           | 80.0 | 84.0
Desk  | 100            | 150 | 78.0           | 82.0 | 84.5
iTHOR | 50             | 80  | 39.4           | 45.3 | 56.3
iTHOR | 50             | 160 | 45.7           | 56.7 | 68.4
iTHOR | 50             | 250 | 58.6           | 77.5 | 85.8

Table 7: Success rate with RL fine-tuning steps for Ours(F) in the Row-real environment.
Envs     | Episode Length | LU  | RL Steps: 1100 | 2200
Row-real | 20             | 300 | 40.0           | 77.5

E.6 Intermediate fine-tuning performance
Figure 15: Success rate with varying number of labels in the iTHOR environment.
Figure 16: Success rate with varying number of labels in the Desk environment.

F Qualitative comparison of the representation
We visualize the VARs by projecting images and sounds to the joint space, as shown in Fig. 17. We see that the embeddings of the same concept form a cluster and all clusters are separated from each other.
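One way to quantify the cluster structure visible in Fig. 17 is the silhouette score, which increases with intra-cluster cohesion and inter-cluster separation. This is not an analysis reported in the paper; the sketch below is illustrative and assumes the joint-space embeddings and integer intent labels are available as NumPy arrays.

import numpy as np
from sklearn.metrics import silhouette_score

def embedding_separation(image_emb: np.ndarray, sound_emb: np.ndarray,
                         image_labels: np.ndarray, sound_labels: np.ndarray) -> float:
    # Stack image and sound embeddings from the joint space and score how well
    # they cluster by ground-truth intent ID (higher = tighter, better-separated clusters).
    emb = np.concatenate([image_emb, sound_emb], axis=0)
    labels = np.concatenate([image_labels, sound_labels], axis=0)
    return silhouette_score(emb, labels, metric="cosine")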
Compared to VAR, the clusters in Dif-VAR have better intra-cluster cohesion and inter-cluster separation, suggesting that distinct concepts are better distinguished and the same concepts are better related. During fine-tuning, although Dif-VAR does not have $S^-$ as an explicit indication of negatives in the input, as VAR does, Dif-VAR can still maintain relatively clear inter-cluster separation and provide reliable rewards for the self-improvement of RL agents.

Figure 17: Visualizations of the VARs in the iTHOR environments with FSC. The colors indicate the ground-truth intent ID of embeddings of sound (marked by triangles) and image (marked by circles). (a) VAR after the training. (b) VAR after the fine-tuning. (c) Dif-VAR after the training. (d) Dif-VAR after the fine-tuning.

G Illustration of data collection
Figure 18: Comparison among different data collection methods. Left: Our method only asks for pairing an image with an audio clip, without the need for a class label. Middle: VAR requires an additional pairing process compared to our method. Right: ANR and E2E require two assignments of the underlying class label, which is not intuitive and needs more effort.
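A Figure-17-style view of the joint embedding space can be produced with a standard 2-D projection. The sketch below is our own illustration (not the authors' plotting code); it uses t-SNE from scikit-learn and assumes hypothetical arrays of embeddings and intent IDs.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_joint_space(image_emb: np.ndarray, sound_emb: np.ndarray,
                     image_intents: np.ndarray, sound_intents: np.ndarray):
    # Project image and sound embeddings from the shared space into 2-D and
    # color them by intent ID (circles = images, triangles = sounds), as in Fig. 17.
    emb = np.concatenate([image_emb, sound_emb], axis=0)
    xy = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(emb)
    n_img = len(image_emb)
    plt.scatter(xy[:n_img, 0], xy[:n_img, 1], c=image_intents, marker="o", s=15)
    plt.scatter(xy[n_img:, 0], xy[n_img:, 1], c=sound_intents, marker="^", s=15)
    plt.axis("off")
    plt.show()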
nKWQnYkkwX | Language-Guided Traffic Simulation viaScene-Level DiffusionZiyuan Zhong1, Davis Rempe2,3, Yuxiao Chen2, Boris Ivanovic2,Yulong Cao2,Danfei Xu2,4,Marco Pavone2,3,Baishakhi Ray11Columbia University,2NVIDIA Research,3Stanford University,4Georgia TechAbstract: Realistic and controllable traffic simulation is a core capability thatis necessary to accelerate autonomous vehicle (A V) development. However, cur-rent approaches for controlling learning-based traffic models require significantdomain expertise and are difficult for practitioners to use. To remedy this, wepresent CTG++, a scene-level conditional diffusion model that can be guided bylanguage instructions. Developing this requires tackling two challenges: the needfor a realistic and controllable traffic model backbone, and an effective methodto interface with a traffic model using language. To address these challenges, wefirst propose a scene-level diffusion model equipped with a spatio-temporal trans-former backbone, which generates realistic and controllable traffic. We then har-ness a large language model (LLM) to convert a user’s query into a loss function,guiding the diffusion model towards query-compliant generation. Through com-prehensive evaluation, we demonstrate the effectiveness of our proposed methodin generating realistic, query-compliant traffic simulations.Keywords: Traffic Simulation, Multi-Agent Diffusion, Large Language Model1 IntroductionGiven the high costs and risks of large-scale real-world autonomous vehicle (A V) testing [1, 2],A V developers increasingly rely on simulations for developing robust systems [3]. For maximumefficacy, simulators must offer realistic andcontrollable traffic behaviors, complemented by a user-friendly interface . The realism of traffic patterns ensures that development and testing conductedin simulation environments can be transferred to real-world scenarios. Controllability permits thegeneration of relevant traffic scenarios to scrutinize specific A V behaviors. For example, controllinga vehicle to collide with the A V to check how it reacts in dangerous situations. A user-friendlyinterface simplifies how desired behaviors can be specified. However, generating realistic [4, 5] andcontrollable [6] traffic poses considerable challenges, and the exploration of user-friendly interfacesin traffic generation has been limited. This work strives to develop an expressive scene-centric trafficmodel that can be controlled through a user-friendly text-based interface. Such an interface has thepotential to connect simulation to previously unusable text-based data, such as governmental andinsurance collision reports. It also facilitates new simulation capabilities, such as reconstructingreal-world collision scenarios [7].Building a traffic simulation model with a language interface presents two challenges. First, thetraffic model must generate realistic trajectories at both agent and scene levels, and provide con-trollability over its generated trajectories. Current simulators [8, 9, 10], whether replaying logsor using heuristic controllers for agent behavior, lack realism and expressiveness. Data-driven ap-proaches [4, 5] merely reflect training data distribution, lacking control over generated traffic. Re-cently, CTG [6] applies a diffusion model, which has demonstrated promising results across variousconditional generation tasks [11, 12, 13, 14, 15], to traffic generation. 
CTG shows that diffusion iswell-suited for controllable traffic simulation through guidance , which allows test-time adaptabilityto user controls. However, CTG models agents independently, leading to unrealistic interactions.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.For example, two vehicles modeled separately might collide if the leading vehicle slows without thefollowing vehicle responding. The second challenge is grounding language in a powerful traffic sim-ulation backbone, since language conveys more abstract patterns (e.g., “traffic jam” or “following”)while traffic models operate on low-level trajectories. To address similar issues, recent research onLarge Language Models (LLMs) for robotic behaviors [16, 17] designs a suite of high-level func-tions (e.g., “pick up” and “use item”) that an LLM can employ to control the robot in order to achievea user-specified task (e.g., “make an omelette”). Essentially, these high-level functions bridge tex-tual instructions and robotic behaviors. Unfortunately, this approach cannot be directly used forrealistic traffic simulation. It is infeasible for an LLM to only use a few high-level functions (e.g.,“go to location”) to generate the entire low-level human-like trajectories.GPT4GuideContextGenerateQuery Generatea differentiable loss functionin code formatPre-defined APIsand ExamplesNoise Trajectroyred vehiclecollides withblue vehicledef guidance_loss(trajectories,...): ... return collision_lossInputcollisionCTG++Scene-Level ConditionalDiffusion ModelFigure 1: Overview of CTG++. A user query, predefined APIs,and examples are passed to GPT4, which generates a differentiableloss to guide CTG++ for query-compliant trajectories.In this work, we propose a modelcalled CTG++ (see Figure 1) toovercome the aforementioned chal-lenges. To achieve a realistic andadaptable traffic model, our approachharnesses the strengths of diffusionand significantly enhances its capa-bility to cater to multi-agent sce-narios. This is achieved througha newly proposed scene-level condi-tional diffusion model, underpinnedby a spatial-temporal transformer ar-chitecture. This architecture appliesalternating temporal and spatial at-tention modules, effectively capturing the dynamics of multi-agent interactions over time. To createa natural language interface with the traffic model, we leverage the proven capacity of LLMs togenerate code from natural language [18]. Instead of a direct translation from text to traffic, weintroduce an intermediate representation: a differentiable loss function, which encodes a user’s in-tention from the language command. The loss function guides the diffusion model to generatecommand-compliant trajectories. With this two-step translation, the LLM and diffusion model effi-ciently bridge the gap between user intent and traffic simulation.We evaluate CTG++ on the nuScenes dataset [19], showing its ability to follow user-specified lan-guage commands and generate realistic trajectories. In summary, our contributions are: (1)a scene-level conditional diffusion model, leveraging a spatial-temporal transformer backbone, designed forthe generation of realistic and controllable traffic, (2)a language interface adept at generating trajec-tories that align with user-defined rules in language, and (3)extensive evaluation comparing CTG++to state-of-the-art baselines, highlighting its superiority in generating high-quality scenarios.2 Related WorkTraffic simulation. 
Traffic simulation methods can be divided into rule-based and learning-based. Rule-based strategies often feature an interface allowing users to specify the vehicles' routes, where the motion is governed by analytical models like the intelligent driver model [20]. Although they deliver user-friendly controllability, their behavioral expressiveness is limited, resulting in trajectories far from human driving patterns. To improve realism, learning-based approaches utilize deep generative models trained on trajectory datasets, aiming to emulate authentic driving behaviors [21, 22, 23, 4, 5]. However, they trade off user-controllability for increased realism, as users cannot customize the properties of generated trajectories. In contrast, our scene-level diffusion model and LLM-based language interface allow the generation of realistic and language-command-compliant traffic.

Diffusion models for conditional generation in robotics. As diffusion models have shown strong performance and test-time adaptability, they have been recently used for various conditional generation tasks in robotics and traffic. Existing works use trained classifiers [13, 14] or expert-designed loss functions [6, 24, 25] to guide the denoising process to achieve user-desired properties. For example, CTG [6], the closest work to ours, uses a manually designed loss function based on Signal Temporal Logic (STL) to guide denoising. However, both training a classifier and manually crafting a loss function for each new property require domain expertise. In contrast, our approach allows a user to easily specify desired properties in natural language, which is then converted into a relevant loss function by an LLM. Moreover, most works, including CTG, model each agent independently [26, 13, 14, 6], resulting in unrealistic agent interactions. In contrast, our scene-level diffusion model considers all agents in a scene jointly, resulting in realistic modeling of interactions.

Large language models for robotics. Recent breakthroughs in LLMs have motivated a series of works applying LLMs to robotic tasks. One approach is to train a multi-modal LLM that takes in embodied data in addition to text data [27, 28, 29]. Unfortunately, no such text-traffic data is available. Other works directly prompt a pre-trained LLM with a high-level function library along with a user query. This lets the LLM plan a robot's behaviors via the provided functions to achieve the goals in the query [16, 17]. This approach does not directly apply to traffic simulation, as existing data-driven approaches cannot be controlled via high-level functions. To tackle this challenge, we leverage a pre-trained LLM to translate a user query into a differentiable loss function in code format and use it to guide a scene-level conditional diffusion model for traffic generation.

3 Methodology
After formulating the problem of controllable traffic generation (Section 3.1), we provide the details of our approach, CTG++. The training stage involves training a scene-level conditional diffusion model to capture diverse behaviors from real-world driving data (Section 3.2), utilizing a scene-level spatial-temporal transformer architecture (Section 3.3). During the inference stage, CTG++ generates query-compliant behaviors via the guidance of a loss derived from a user query (Section 3.4).

3.1 Problem Formulation
Similar to CTG [6], we formulate traffic simulation as an imitation learning problem. For the $M$ vehicles in a scene we want to simulate, let their state at a timestep $t$ be $s_t = [s_t^1 \dots s_t^M]$, where $s_t^i = (x_t^i, y_t^i, v_t^i, \theta_t^i)$ (2D location, speed, and yaw), and let the action (i.e., control) be $a_t = [a_t^1 \dots a_t^M]$, where $a_t^i = (\dot{v}_t^i, \dot{\theta}_t^i)$ (acceleration and yaw rate). We denote $c = (I, s_{t-T_{hist}:t})$ to be the decision-relevant context, which consists of local semantic maps for all the agents $I = \{I^1, \dots, I^M\}$ and their current and $T_{hist}$ previous states $s_{t-T_{hist}:t} = \{s_{t-T_{hist}}, \dots, s_t\}$. To obtain state $s_{t+1}$ at time $t+1$, we assume a transition function (e.g., a unicycle dynamics model) $f$ that computes $s_{t+1} = f(s_t, a_t)$ given the previous state $s_t$ and control $a_t$. Our goal is to generate realistic and query-satisfying traffic behavior for the agents given (1) the decision context $c$ and (2) a function $r : \mathbb{R}^{4T} \times \mathbb{R}^{2T} \rightarrow \mathbb{R}$ derived from a user query to measure rule satisfaction of the state and action trajectories. A model should generate future trajectories for the agents $s_{t+1:t+T}$ over the next $T$ timesteps. Ideally, these trajectories maximize satisfaction $r(a_{t:t+T-1}, s_{t+1:t+T})$ to avoid violating the given rule.

3.2 Scene-Level Conditional Diffusion for Traffic Modeling
Diffusion models [30, 31, 13, 14] generate new samples through an iterative denoising process by learning to reverse a diffusion process. As a traffic scene involves multiple traffic participants, a single-agent diffusion model [13, 14, 6] may generate sub-optimal samples when a scene involves significant interactions among multiple agents. To tackle this issue, we propose a scene-level diffusion model that jointly models all traffic participants in a scene. Unlike CTG, which models each agent's future trajectory independently, our model operates on the past and future trajectories of all the agents in a scene jointly (see Figure 2) and thus captures the interactions among agents both spatially and temporally. Starting from Gaussian noise, the diffusion model is applied iteratively to predict a clean, denoised trajectory of states and actions for all agents in a scene.

Figure 2: A denoising step using our scene-level spatial-temporal transformer.

Trajectory Representation. We denote the future trajectory that the model operates on as
$$\tau := [\tau_s \; \tau_a], \quad \tau_a := \begin{bmatrix} \tau_a^1 \\ \vdots \\ \tau_a^M \end{bmatrix}, \quad \tau_s := \begin{bmatrix} \tau_s^1 \\ \vdots \\ \tau_s^M \end{bmatrix}, \quad \tau_a^i := [a_0^i \dots a_{T-1}^i], \quad \tau_s^i := [s_1^i \dots s_T^i].$$
Additionally, we represent the historical trajectory in the context $c$ as $\tau_{hist}$. In accordance with [6], our model predicts solely the action trajectory $\tau_a$, employing the known dynamics $f$ to deduce the states $\tau_s$ via a rollout from the initial state $s_0$ (which forms part of the context $c$). To maintain the physical feasibility of the state trajectory throughout the denoising process, we consistently denote $\tau_s$ as the state trajectory resulting from the actions: $\tau_s = f(s_0, \tau_a)$.

Formulation. Consider $\tau_a^k$ as the action trajectory at the $k$-th diffusion step, with $k = 0$ marking the original clean trajectory. The forward diffusion process that starts from $\tau_a^0$ is defined as
$$q(\tau_a^{1:K} \mid \tau_a^0) := \prod_{k=1}^{K} q(\tau_a^k \mid \tau_a^{k-1}) := \prod_{k=1}^{K} \mathcal{N}\big(\tau_a^k; \sqrt{1-\beta_k}\,\tau_a^{k-1}, \beta_k I\big). \quad (1)$$
Here, $\beta_k$ for $k = 1, \dots, K$ is a pre-determined variance schedule controlling the magnitude of noise added at each diffusion step. As the noise incrementally accumulates, the signal transforms into an approximately isotropic Gaussian distribution $\mathcal{N}(0, I)$. For trajectory generation, our goal is to reverse this diffusion process through a learned conditional denoising model (Figure 2) that is iteratively applied starting from sampled noise.
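To make the forward corruption in Eq. (1) concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' code) that corrupts clean actions to an arbitrary step k using the standard closed form $\tau_a^k = \sqrt{\bar{\alpha}_k}\,\tau_a^0 + \sqrt{1-\bar{\alpha}_k}\,\epsilon$ also given in Appendix A.1; the cosine schedule follows Appendix A.1, while the (batch, M, T, 2) action layout is an assumption.

import torch

def cosine_beta_schedule(K: int, s: float = 0.008) -> torch.Tensor:
    # Cosine variance schedule (as in [41]); returns beta_1, ..., beta_K.
    steps = torch.arange(K + 1, dtype=torch.float64)
    alphas_bar = torch.cos(((steps / K) + s) / (1 + s) * torch.pi / 2) ** 2
    alphas_bar = alphas_bar / alphas_bar[0]
    betas = 1 - (alphas_bar[1:] / alphas_bar[:-1])
    return betas.clamp(1e-5, 0.999).float()

def corrupt_actions(tau_a0: torch.Tensor, k: torch.Tensor, betas: torch.Tensor):
    # tau_a0: clean action trajectories, shape (batch, M, T, 2); k: (batch,) steps in [1, K].
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)        # \bar{alpha}_k for k = 1..K
    a_bar = alphas_bar[k - 1].view(-1, 1, 1, 1)           # broadcast over agent/time/action dims
    eps = torch.randn_like(tau_a0)                        # epsilon ~ N(0, I)
    tau_ak = a_bar.sqrt() * tau_a0 + (1 - a_bar).sqrt() * eps
    return tau_ak, eps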
The reverse diffusion process is as follows:
$$p_\theta(\tau_a^{0:K} \mid c) := p(\tau_a^K) \prod_{k=1}^{K} p_\theta(\tau_a^{k-1} \mid \tau^k, c) := p(\tau_a^K) \prod_{k=1}^{K} \mathcal{N}\big(\tau_a^{k-1}; \mu_\theta(\tau^k, k, c), \Sigma_\theta(\tau^k, k, c)\big), \quad (2)$$
where $p(\tau_a^K) = \mathcal{N}(0, I)$ and $\theta$ represents the parameters of the diffusion model. At each step, the model takes in actions $\tau_a^k$ and the resulting states $\tau_s^k = f(s_0, \tau_a^k)$ as input. As per [31, 26], the variance term of the transition is a fixed schedule such that $\Sigma_\theta(\tau^k, k, c) = \Sigma_k = \sigma_k^2 I$. The training in our approach mirrors that in [6], but with a key difference: trajectories are sampled at the scene level instead of the agent level, as our model simultaneously predicts outcomes for all agents in a scene. Detailed information is available in Appendix A.1.

3.3 Model Architecture: Scene-Level Spatial-Temporal Transformer
Unlike previous works, which use a U-Net [13, 6] or a single-agent transformer [26] to model the denoising process for a single agent, we design an architecture that models multiple agents jointly. Inspired by recent works on transformer-based motion prediction [32, 33, 34, 35, 36], we propose a spatial-temporal transformer architecture to model multiple agents jointly. Unlike most previous work [32, 33], which employs scene-centric coordinates to capture interactions, we adopt agent-centric coordinates. As a traffic scene configuration is combinatorial, it is easy for a scene to end up in an out-of-distribution configuration if it is modeled in scene-centric coordinates, since errors compound over time. In contrast, agent-centric coordinates are invariant to translation and rotation of the scene, and therefore more robust during closed-loop simulation. However, agent-centric coordinates discard relative information among agents, which is important for interactions. To avoid this, we introduce a spatial attention module that enables the exchange of relative information between agents. Inspired by previous work [36], to avoid the combinatoric explosion of attention pairs, we alternate between temporal attention, spatial attention, and map attention modules to fully capture the interactions among agents and the map. We next introduce the details of our proposed architecture by showing the data flow of a denoising step (Figure 2; see Figure A1 for more details).

Input and Temporal Attention. We first concatenate the ground-truth agent history trajectories with the predicted future trajectories along the temporal dimension and apply a row-wise feed-forward network (rFFN) to project each element (each agent per timestep) from the attribute dimension $d_s + d_a$ to the hidden dimension $d_h$. The denoising step $k$ is injected into the encoded trajectory using a sinusoidal positional encoding function [37]. We next capture the temporal information in the encoded trajectory by feeding it into the temporal attention block, a standard transformer encoder [37] that captures the temporal-wise relation of each agent.
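As an illustration of this input stage, the following is a minimal sketch (our own, not the released implementation) that projects per-agent, per-timestep features with an rFFN, adds a sinusoidal embedding of the denoising step k, and runs a standard transformer encoder over the time axis for each agent; the hidden size, head count, and layer count are placeholders.

import math
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    def __init__(self, d_in: int, d_h: int = 128, n_layers: int = 2, n_heads: int = 4):
        super().__init__()
        self.rffn = nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU(), nn.Linear(d_h, d_h))
        layer = nn.TransformerEncoderLayer(d_model=d_h, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.d_h = d_h

    def step_embedding(self, k: torch.Tensor) -> torch.Tensor:
        # Sinusoidal embedding of the denoising step k, shape (batch, d_h).
        half = self.d_h // 2
        freqs = torch.exp(-math.log(10000.0) * torch.arange(half, device=k.device) / half)
        ang = k.float()[:, None] * freqs[None, :]
        return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)

    def forward(self, x: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
        # x: (batch, M, T, d_s + d_a) noisy state-action features; k: (batch,) denoising steps.
        B, M, T, _ = x.shape
        h = self.rffn(x) + self.step_embedding(k)[:, None, None, :]   # inject step k
        h = h.reshape(B * M, T, self.d_h)                             # attend over time, per agent
        h = self.encoder(h)
        return h.reshape(B, M, T, self.d_h)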
Spatial Attention. The encoded trajectory is then fed into a spatial attention block, which is a customized transformer decoder block with keys and values designed to capture the relative geometric relationships among agents. Similar to [35], we extend a regular attention layer to be aware of the relative information $e_t^{ij}$ between two agents $i$ and $j$ at timestep $t$:
$$e_t^{ij} = \phi_r\big(\big[R_0^{i\top}(\Delta x_{0,t}^{ij}, \Delta y_{0,t}^{ij}),\; \cos(\Delta\theta_{0,t}^{ij}),\; \sin(\Delta\theta_{0,t}^{ij}),\; v_t^j\cos(\Delta\theta_{0,t}^{ij}) - v_0^i,\; v_t^j\sin(\Delta\theta_{0,t}^{ij}),\; d_t^{ij}\big]\big) \quad (3)$$
where $\phi_r$ is a feed-forward network; $\Delta x_{0,t}^{ij} := x_t^j - x_0^i$, $\Delta y_{0,t}^{ij} := y_t^j - y_0^i$, and $\Delta\theta_{0,t}^{ij} := \theta_t^j - \theta_0^i$ represent the position and orientation differences from $j$ at timestep $t$ to $i$ at timestep $0$ (the current timestep); $d_t^{ij}$ is the relative distance between $i$ and $j$ at timestep $t$; and $R_0^i$ is the rotation matrix associated with agent $i$ at timestep $0$. For future timesteps at the training stage and for history timesteps, we use the ground-truth relative information. For future timesteps at the inference stage, since we do not have the ground-truth information, we use a constant velocity model (which assumes the agents keep their current velocity for the future timesteps, as in [38]) to estimate the states of all the agents in the future and thus their relative information. The pair-wise relative information is then incorporated into the transformation of the encoded trajectory via the keys and values of the decoder layer:
$$q_t^i = W_{global}^Q h_t^i, \quad k_t^{ij} = W_{global}^K [h_t^j, e_t^{ij}], \quad v_t^{ij} = W_{global}^V [h_t^j, e_t^{ij}] \quad (4)$$
where $h_t^i$ and $h_t^j$ are the slices of the encoded trajectories corresponding to agents $i$ and $j$ at timestep $t$, and $W_{global}^Q$, $W_{global}^K$, $W_{global}^V$ are learnable matrices.

Map Attention and Output. The map attention block is a multi-head attention layer with keys and values from the encoded agent-centric vectorized map (as in [34], we encode the map via an attention layer which transforms the waypoints associated with each lane into a lane vector) and captures the interaction between agents and the map. The map attention is applied to each agent independently, as the map is agent-centric. The output encoded trajectory is projected back to the input dimension $d_s + d_a$ and results in the predicted clean action trajectory $\hat{\tau}_a^0$. At test time, we additionally apply iterative guidance with a differentiable loss function $J$ (see Section 3.4 and Appendix A.2) on the predicted action trajectory. Finally, we apply the dynamics to get the predicted state trajectory.

3.4 Guided Generation with Language
A language interface for the powerful diffusion model would enable the user to easily control trajectories with minimum domain knowledge. However, the absence of paired text-to-traffic data renders direct training of such a model infeasible. Hence, we explore using an intermediary representation to bridge the two. Recent advancements in LLMs facilitate high-quality conversion from natural language commands into code. Meanwhile, the diffusion model exhibits control over its generation through guidance from a loss function. Thus, we suggest utilizing a loss function implemented in code to bridge the two. Since the guidance loss function must operate on trajectories, we provide helper functions for coordinate transformations and a handful of paired text-loss-function examples alongside the user's query to the LLM. We then utilize the returned loss function to guide the diffusion model, as discussed in Section 3.2, to generate query-compliant traffic simulation.

Guidance Formulation. Building upon prior work [13, 6], we apply guidance to sampled trajectories from our diffusion model at each denoising step to satisfy a predefined objective. Guidance uses the gradient of the loss $J$ to perturb the model's predicted mean such that each denoising step (in Equation (2)) becomes $p_\theta(\tau_a^{k-1} \mid \tau^k, c) \approx \mathcal{N}\big(\tau_a^{k-1}; \mu + \Sigma_k \nabla_\mu J(\mu), \Sigma_k\big)$ (see Appendix A.2).
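As an illustration of this guided denoising step, here is a minimal sketch of the inner perturbation loop, patterned after Algorithm 2 in Appendix A.2 but not the exact released code; the step size, inner-step count, and clipping bound are placeholders, Adam is used as stated in Appendix A.2, and J is assumed to return a scalar to be ascended (flip the sign if your J is a penalty to minimize).

import torch

def guide_mean(mu: torch.Tensor, J, alpha: float = 0.05, n_steps: int = 5, clip: float = 0.1):
    # mu: predicted mean action trajectory, shape (batch, M, T, 2).
    # J: differentiable scalar objective (e.g., an LLM-generated rule-satisfaction function).
    mu0 = mu.detach()
    mu_j = mu0.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu_j], lr=alpha)
    for _ in range(n_steps):
        opt.zero_grad()
        (-J(mu_j)).backward()          # Adam minimizes, so negate to ascend on J
        opt.step()
        with torch.no_grad():          # project the total perturbation into a trust region
            delta = (mu_j - mu0).clamp_(-clip, clip)
            mu_j.copy_(mu0 + delta)
    return mu_j.detach()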
Figure 3: Example of prompting and querying the LLM for a loss function promoting two vehicles to collide. The prompt combines predefined helper functions (e.g., a transform_coord_to_world stub that maps positions from agent coordinates to world coordinates), paired examples (e.g., the query "generate a loss class such that vehicle 1 should always drive with acceleration below acc_limit" together with its AccLimitLoss response), and the user query "generate a loss class such that vehicle 1 should collide with vehicle 2." GPT4 responds with a differentiable loss function in Python:

class Collision_Loss:
    def forward(self, x):
        pos_pred = x[..., :2]
        pos_pred_world = transform_coord_to_world(pos_pred, data_batch)
        pos_pred_i_world = select_agent_ind(pos_pred_world, 1)
        pos_pred_j_world = select_agent_ind(pos_pred_world, 2)
        # Compute the distance between the two vehicles
        dist = torch.norm(pos_pred_i_world - pos_pred_j_world, dim=-1)
        # Compute the collision loss by penalizing the distance greater than the collision radius
        loss = torch.clip(dist - self.collision_radius, min=0)
        return loss

Language Guidance. Rather than training a classifier or reward function [13, 12] or defining an analytical reward function [6] for $J$, we use GPT4 [39] to translate the intention in a user language query into the corresponding guidance function. In particular, we pass a few helper functions used to manipulate trajectory coordinates, a couple of (query → loss function code) paired examples, and the user query into GPT4, and extract an implemented loss function from the returned message. Figure 3 shows an example of querying for a loss function that causes a vehicle to collide with another one. The input paired example shows GPT4 how to apply a sequence of differentiable operations to generate a loss with respect to the predicted trajectory ("x"). We also provide a list of helper functions for trajectory manipulation (such as coordinate transformations) for GPT4 to leverage, as we have found these can help to avoid minor mistakes on common operations (see Appendix F.2 for additional helper functions and paired examples). GPT4 returns a loss function that penalizes trajectories for not having a collision between the specified vehicles, which will guide the diffusion model to generate trajectories having such a collision. The guided sampling is performed jointly for all the agents in a scene.

Traffic Simulation. We perform closed-loop traffic simulation of each scene for 10 seconds. In particular, we apply our model for all the agents in a standard control loop: at each step, the model generates a guided trajectory and takes the first few actions before re-planning at 2 Hz.

4 Experiments
Following the setup (Section 4.1), we conduct experiments to affirm that CTG++ can effectively produce realistic and query-compliant traffic behaviors (Section 4.2) and, compared with strong baselines, yields superior trade-offs among stability, rule satisfaction, and realism (Section 4.3).

4.1 Experimental Setup
Dataset.
nuScenes [19] is an extensive real-world driving dataset encompassing 5.5 hours of precisetrajectories from two cities, featuring diverse scenarios and heavy traffic. We train all models usingthe training split and evaluate them on 100 scenes randomly selected from the validation split. Ourfocus is exclusively on simulating moving vehicles, as they are the most control-relevant entities.Metrics. Following [6, 5], we evaluate stability ( i.e., avoiding collisions and off-road driving),controllability, and realism of generated trajectories. We evaluate stability by reporting the failurerate ( fail), measured as the percentage of agents encountering a collision or road departure in a scene.We evaluate controllability using rule-specific violation metrics ( rule) (see Appendix E.2). To assessrealism , we compare data statistics between generated trajectories and ground truth trajectories fromthe dataset by calculating the Wasserstein distance between their normalized histograms of drivingprofiles. We measure realism using realism deviation ( real), which is the average of the distributiondistance for the longitudinal acceleration magnitude, lateral acceleration magnitude, and jerk. Weintroduce a new scene-level realism metric ( rel real ) which is the average of the distribution distancefor relative (averaged over every pair of vehicles in a scene) longitudinal acceleration magnitude,relative lateral acceleration magnitude, and relative jerk.6(a) CTG++ keep distance (b) CTG keep distance (c) CTG++ collision (d) CTG collisionFigure 4: Generated trajectories for query “vehicle A should always keep within 10-30m from vehicle B” and“vehicle A should collide with vehicle B”, respectively. The collision and offroad locations are marked in ☆and△. For both CTG++ and CTG, our language interface generate query compliant trajectories. However,CTG++ does not sacrifice other aspects like keeping on-road and smoothness while CTG does.Traffic Model Baselines. The closest related work on rule-compliant traffic generation is CTG [6],a traffic model based on conditional diffusion. We also consider BITS [5], a bi-level imitation learn-ing model, and adapt its sampling ranking function to use our loss function. Its variant, BITS+opt ,applies optimization to the output action trajectory for controllability. To ensure a fair comparison,this optimization employs the same loss function as the one used for guidance in CTG++.Large Language Model. We use OpenAI’s GPT-4 [39] (accessed through the OpenAI API) forevaluating the language interface. We do not train the LLM and use only few-shot prompting.4.2 Case Study of Language InterfaceWe conduct two case studies on queries for common traffic behaviors demonstrating CTG++ caneffectively generate trajectories satisfying user’s queries. Both queries involve the interaction of twovehicles. The first query “vehicle A should always keep within 10-30m from vehicle B” ( GPT keepdistance ) is a common scene in the real world and the generated scene can be used to test vehiclefollowing. Figure 4a shows the simulation generated by CTG++: vehicle A (in red) follows vehicleB (in blue) to go straight and slightly turn right with a safe distance as specified. Both vehiclesalong with the background vehicles have smooth motion without critical failures during the rollout.The second query “vehicle A should collide with vehicle B” ( GPT collision ) makes it possible topotentially generate scenarios like those from a crash report [7] and test a vehicle’s reaction indangerous situations. 
Figure 4c shows the simulation generated by CTG++: vehicle A collides with vehicle B while turning right, as desired. The motion of all vehicles is smooth and does not have collisions other than the one requested. The generated query-compliant trajectories for both cases show that our proposed language interface in CTG++ enables effective text-to-traffic generation.

4.3 Evaluation of Traffic Model

Table 1: Quantitative results of CTG++ and the baselines under GPT-generated rules and STL rules. Each cell reports fail, rule, real, rel real; the two composite rules additionally report rule1 and rule2.

          | GPT keep distance          | GPT collision              | no collision               | speed limit
BITS      | 0.183, 2.615, 0.116, 0.362 | 0.176, 0.660, 0.107, 0.359 | 0.092, 0.065, 0.099, 0.352 | 0.111, 0.559, 0.104, 0.352
BITS+opt  | 0.240, 0.000, 0.097, 0.360 | 0.277, 0.130, 0.068, 0.362 | 0.109, 0.041, 0.070, 0.353 | 0.162, 0.120, 0.058, 0.353
CTG       | 0.343, 0.000, 0.077, 0.342 | 0.356, 0.000, 0.074, 0.349 | 0.142, 0.052, 0.044, 0.346 | 0.128, 0.029, 0.075, 0.350
CTG++     | 0.173, 0.000, 0.077, 0.331 | 0.264, 0.000, 0.085, 0.331 | 0.084, 0.036, 0.040, 0.332 | 0.083, 0.028, 0.043, 0.344

          | target speed               | no offroad                 | goal waypoint + target speed (fail, rule1, rule2, real, rel real) | stopregion + offroad (fail, rule1, rule2, real, rel real)
BITS      | 0.111, 1.526, 0.114, 0.355 | 0.097, 0.018, 0.099, 0.355 | 0.111, 2.261, 1.010, 0.115, 0.358 | 0.121, 0.005, 1.690, 0.068, 0.353
BITS+opt  | 0.257, 0.742, 0.072, 0.356 | 0.105, 0.005, 0.100, 0.358 | 0.254, 3.681, 0.746, 0.079, 0.342 | 0.095, 0.020, 2.053, 0.097, 0.354
CTG       | 0.091, 0.281, 0.105, 0.379 | 0.172, 0.002, 0.042, 0.346 | 0.118, 2.388, 0.387, 0.052, 0.345 | 0.128, 0.002, 0.808, 0.040, 0.336
CTG++     | 0.060, 0.274, 0.082, 0.370 | 0.097, 0.004, 0.038, 0.328 | 0.101, 2.352, 0.396, 0.038, 0.338 | 0.081, 0.003, 0.411, 0.076, 0.324

We assess the traffic model component of CTG++ under the two GPT-generated rules (as described in Section 4.2) and six STL rules from [6] (see Appendix E.1 for details of each rule). The quantitative results in Table 1 underscore CTG++'s superiority over baselines with a good balance between stability, rule satisfaction, and realism. Specifically, CTG++ secures the lowest failure rate and scene-level realism deviation in 7 out of 8 settings, reflecting its effective scene-level modeling and enhanced interaction dynamics. Furthermore, CTG++ also achieves the lowest rule violation and agent-level realism deviation for the majority of settings, demonstrating that enhanced interaction modeling does not compromise agent-level realism or rule adherence.

Table 2: Ablation study of CTG++ features.
              | fail  | rule  | real  | rel real
CTG++         | 0.173 | 0.000 | 0.077 | 0.331
CTG++ no edge | 0.227 | 0.000 | 0.077 | 0.341
CTG++ scene   | 0.886 | 1.043 | 0.127 | 0.392

Qualitatively, CTG++ generates rule-compliant trajectories featuring more realistic motion, with fewer instances of collisions or off-road incidents than the baselines. We provide examples of CTG-generated trajectories (when using the same language-generated loss functions) for the same scenes as those previously shown for CTG++ (Figure 4b and Figure 4d). Specifically, in Figure 4b, CTG's trajectories display off-road instances (indicated by △) involving two vehicles and a background vehicle. Similarly, in Figure 4d, for CTG's trajectories, the collision between vehicles A and B (in ☆) occurs off-road, and the background vehicle has a curvy, unrealistic path. See Appendix C for qualitative comparisons under STL rules.

Figure 5: Darker red means higher attention by the blue vehicle. Without edge information, CTG++ no edge results in a collision (in ☆). (a) CTG++ (b) CTG++ no edge.

Ablation Study.
We evaluate the efficacyof our spatial module and the utilization ofagent coordinates. As demonstrated in Ta-ble 2, CTG++ surpasses CTG++ no edge (i.e.,CTG++ but having eijtin Equation (3) replacedby zeros) and CTG++ scene (i.e., CTG++ butusing scene-centric coordinates) under the GPTKeep Distance rule. The absence of edge in-formation leads to increased failure rates due todecreased awareness of interactions. Moreover,the use of scene-centric coordinates notably in-flates failure rates and realism deviations, as thetraffic rapidly deviates from the training distri-bution during the rollout. To visualize the ef-fectiveness of spatial attention, we display the attention maps for a vehicle of interest (highlighted inblue) for both CTG++ and CTG++ no edge when guidance is not applied. As depicted in Figure 5,CTG++ guides the vehicle to pay attention to relevant neighboring agents, while CTG++ no edgeresults in arbitrary attention and a consequent collision (marked as ☆) with the vehicle ahead.5 ConclusionSummary. In this paper, we present, CTG++, a language-guided scene-level conditional diffusionmodel for realistic, query-compliant traffic simulation. In particular, we leverage an LLM for trans-lating a user query into a differentiable loss function and propose a scene-level conditional diffusionmodel (with a spatial-temporal transformer architecture) to translate the loss function into realistic,query compliant trajectories. Extensive evaluation demonstrates the effectiveness of our approach.Limitations and Future Work. CTG++ currently does not support complex commands involvingmany interactions with map (see Appendix F.4). Our framework can be extended by using a multi-modal LLM that takes in vision data (e.g., bird’s-eye view map) for a finer control of the traffic andthus supports more complex commands. Second, our framework does not support automatic errordetection and fixing for the GPT4 generated loss function. As the loss function (in code format) canhave wrong semantics, it is necessary to instruct GPT4 to detect and repair it. Our framework canbe potentially extended to provide the simulation running results to GPT4 to iteratively instruct it tofix the generated loss function. Third, current trajectory generation is relatively slow and take about1 minute to generate each simulated scenario. Recent work which uses distillation [40] to greatlyspeed up the generation process can be leveraged to reduce the time cost. Our work opens up manypossibilities including adapting our architecture to general multi-agent robotic tasks and using ourproposed two-step approach for other tasks with no paired text-behavior data.8AcknowledgmentThe authors want to thank Sushant Veer and Shuhan Tan for valuable discussions. This work startedwhen Ziyuan Zhong interned at NVIDIA Research. He is also supported by NSF CCF 1845893 andIIS 2221943.References[1] CARSURANCE. 24 self-driving car statistics & facts, 2022. URL https://carsurance.net/insights/self-driving-car-statistics/ .[2] N. Kalra and S. M. Paddock. Driving to Safety: How Many Miles of Driving Would It Taketo Demonstrate Autonomous Vehicle Reliability? RAND Corporation, 2016. URL http://www.jstor.org/stable/10.7249/j.ctt1btc0xw .[3] Waymo. Waymo safety report. https://storage.googleapis.com/waymo-uploads/files/documents/safety/2021-03-waymo-safety-report.pdf, February 2021.[4] S. Suo, S. Regalado, S. Casas, and R. Urtasun. Trafficsim: Learning to simulate realistic multi-agent behaviors. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition (CVPR) , pages 10400–10409, June 2021.[5] D. Xu, Y . Chen, B. Ivanovic, and M. Pavone. Bits: Bi-level imitation for traffic simulation,2022.[6] Z. Zhong, D. Rempe, D. Xu, Y . Chen, S. Veer, T. Che, B. Ray, and M. Pavone. Guidedconditional diffusion for controllable traffic simulation, 2022.[7] NHTSA. Nhtsa crash viewer. https://crashviewer.nhtsa.dot.gov/ , 2023. Accessed:2023-05-03.[8] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y .-P. Fl ̈otter ̈od, R. Hilbrich, L. L ̈ucken,J. Rummel, P. Wagner, and E. Wiessner. Microscopic traffic simulation using sumo. In 201821st International Conference on Intelligent Transportation Systems (ITSC) , pages 2575–2582,2018. doi:10.1109/ITSC.2018.8569938.[9] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V . Koltun. CARLA: An open urban drivingsimulator. volume 78 of Proceedings of Machine Learning Research , pages 1–16, 13–15 Nov2017. URL http://proceedings.mlr.press/v78/dosovitskiy17a.html .[10] LG Electronics. SVL Simulator: An Autonomous Vehicle Simulator, A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles. https://github.com/lgsvl/simulator, 2021.[11] P. Dhariwal and A. Nichol. Diffusion models beat gans on image synthesis. Advances inNeural Information Processing Systems , 34:8780–8794, 2021.[12] X. L. Li, J. Thickstun, I. Gulrajani, P. Liang, and T. B. Hashimoto. Diffusion-lm improvescontrollable text generation, 2022.[13] M. Janner, Y . Du, J. B. Tenenbaum, and S. Levine. Planning with diffusion for flexible behaviorsynthesis. In International Conference on Machine Learning , 2022.[14] A. Ajay, Y . Du, A. Gupta, J. Tenenbaum, T. Jaakkola, and P. Agrawal. Is conditional generativemodeling all you need for decision-making?, 2022.[15] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image syn-thesis with latent diffusion models, 2022.[16] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural languageinstructions to feasible plans, 2023.9[17] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor. Chatgpt for robotics: De-sign principles and model abilities. Technical Report MSR-TR-2023-8, Microsoft,February 2023. URL https://www.microsoft.com/en-us/research/publication/chatgpt-for-robotics-design-principles-and-model-abilities/ .[18] S. Bubeck, V . Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y . T. Lee,Y . Li, S. Lundberg, H. Nori, H. Palangi, M. T. Ribeiro, and Y . Zhang. Sparks of artificialgeneral intelligence: Early experiments with gpt-4. March 2023.[19] H. Caesar, V . Bankiti, A. H. Lang, S. V ora, V . E. Liong, Q. Xu, A. Krishnan, Y . Pan, G. Baldan,and O. Beijbom. nuscenes: A multimodal dataset for autonomous driving. In CVPR , 2020.[20] E. Brockfeld, R. D. K ̈uhne, A. Skabardonis, and P. Wagner. Toward benchmarking of micro-scopic traffic flow models. Transportation Research Record , 1852(1):124–129, 2003.[21] Y . Chai, B. Sapp, M. Bansal, and D. Anguelov. Multipath: Multiple probabilistic anchortrajectory hypotheses for behavior prediction. In Conference on Robot Learning (CoRL) , pages86–99. PMLR, 2020.[22] Y . Chen, B. Ivanovic, and M. Pavone. Scept: Scene-consistent, policy-based trajectory pre-dictions for planning. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition (CVPR) , pages 17103–17112, June 2022.[23] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone. 
Trajectron++: Dynamically-feasibletrajectory forecasting with heterogeneous data. In Computer Vision–ECCV 2020: 16th Eu-ropean Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII 16 , pages683–700. Springer, 2020.[24] D. Rempe, Z. Luo, X. B. Peng, Y . Yuan, K. Kitani, K. Kreis, S. Fidler, and O. Litany. Traceand pace: Controllable pedestrian animation via guided trajectory diffusion, 2023.[25] C. “. Jiang, A. Cornman, C. Park, B. Sapp, Y . Zhou, and D. Anguelov. Motiondiffuser: Con-trollable multi-agent motion prediction using diffusion. In Proceedings of the IEEE/CVF Con-ference on Computer Vision and Pattern Recognition (CVPR) , pages 9644–9653, June 2023.[26] T. Gu, G. Chen, J. Li, C. Lin, Y . Rao, J. Zhou, and J. Lu. Stochastic trajectory prediction viamotion indeterminacy diffusion. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition (CVPR) , pages 17113–17122, June 2022.[27] D. Driess, F. Xia, M. S. M. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson,Q. Vuong, T. Yu, W. Huang, Y . Chebotar, P. Sermanet, D. Duckworth, S. Levine, V . Vanhoucke,K. Hausman, M. Toussaint, K. Greff, A. Zeng, I. Mordatch, and P. Florence. Palm-e: Anembodied multimodal language model. In arXiv preprint arXiv:2303.03378 , 2023.[28] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. Joshi, R. Ju-lian, D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manjunath,I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao,M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran,V . Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich. Rt-1: Robotics transformer for real-world control at scale. In arXiv preprint arXiv:2212.06817 ,2022.[29] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakr-ishnan, K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J.Ruano, K. Jeffrey, S. Jesmonth, N. J. Joshi, R. Julian, D. Kalashnikov, Y . Kuang, K.-H. Lee,S. Levine, Y . Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes,P. Sermanet, N. Sievers, C. Tan, A. Toshev, V . Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu,10M. Yan, and A. Zeng. Do as i can, not as i say: Grounding language in robotic affordances,2022.[30] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learningusing nonequilibrium thermodynamics. In F. Bach and D. Blei, editors, Proceedings of the32nd International Conference on Machine Learning , volume 37 of Proceedings of MachineLearning Research , pages 2256–2265, Lille, France, 07–09 Jul 2015. PMLR.[31] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. In H. Larochelle,M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural InformationProcessing Systems , volume 33, pages 6840–6851. Curran Associates, Inc., 2020.[32] Y . Yuan, X. Weng, Y . Ou, and K. Kitani. Agentformer: Agent-aware transformers for socio-temporal multi-agent forecasting. In Proceedings of the IEEE/CVF International Conferenceon Computer Vision (ICCV) , 2021.[33] J. Ngiam, V . Vasudevan, B. Caine, Z. Zhang, H.-T. L. Chiang, J. Ling, R. Roelofs, A. Bewley,C. Liu, A. Venugopal, D. J. Weiss, B. Sapp, Z. Chen, and J. Shlens. 
Scene transformer:A unified architecture for predicting future trajectories of multiple agents. In InternationalConference on Learning Representations , 2022. URL https://openreview.net/forum?id=Wm3EA5OlHsG .[34] R. Girgis, F. Golemo, F. Codevilla, M. Weiss, J. A. D’Souza, S. E. Kahou, F. Heide, andC. Pal. Latent variable sequential set transformers for joint multi-agent motion prediction. InInternational Conference on Learning Representations , 2022. URL https://openreview.net/forum?id=Dup_dDqkZC5 .[35] Z. Zhou, L. Ye, J. Wang, K. Wu, and K. Lu. Hivt: Hierarchical vector transformer for multi-agent motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Visionand Pattern Recognition (CVPR) , pages 8823–8833, June 2022.[36] N. Nayakanti, R. Al-Rfou, A. Zhou, K. Goel, K. S. Refaat, and B. Sapp. Wayformer: Motionforecasting via simple & efficient attention networks, 2022.[37] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u.Kaiser, and I. Polosukhin. Attention is all you need. In I. Guyon, U. V . Luxburg,S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Ad-vances in Neural Information Processing Systems , volume 30. Curran Associates, Inc.,2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf .[38] Y . Chen, B. Ivanovic, and M. Pavone. Scept: Scene-consistent, policy-based trajectory pre-dictions for planning. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recog-nition (CVPR) , pages 17082–17091, Los Alamitos, CA, USA, jun 2022. IEEE Computer So-ciety. doi:10.1109/CVPR52688.2022.01659. URL https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01659 .[39] OpenAI. Gpt-4 technical report, 2023.[40] T. Salimans and J. Ho. Progressive distillation for fast sampling of diffusion models, 2022.[41] A. Q. Nichol and P. Dhariwal. Improved denoising diffusion probabilistic models. In Interna-tional Conference on Machine Learning , pages 8162–8171. PMLR, 2021.[42] OpenAI. Openai - guides - gpt - chat completions api, 2023. URL https://platform.openai.com/docs/guides/gpt/chat-completions-api .11A Algorithm of Training and Sampling in DetailsWe mostly follow the training and sampling procedures from [6] and show the detailed algorithmsfor training and sampling in the following.A.1 TrainingAlgorithm 1 Training1:Require a real-world driving dataset D, conditional diffusion model to train μ0θ, transition function f, denoising steps K.2:while not converge do3: c,τ0∼D4: k∼{1,...,K}5: ε∼N(0,I)6: Corrupt action trajectory τka=√ ̄αkτ0a+√1− ̄αkεwith ̄αk=∏kl=01−βl7: Get the corresponding state trajectory τks=f(s0,τka)8: Use model to predict the uncorrupted trajectory ˆτ0a=μ0θ(τk,k,c)9: Get the predicted state trajectory ˆτ0=[ˆτ0a;f(s0,ˆτ0a)]10: Take gradient step on ∇θ∣∣τ0−ˆτ0∣∣211: end whileContrary to [6], which samples trajectories at the agent level, we opt for scene-level trajectory sam-pling, allowing the model to make joint predictions on all scene agents. The process is detailedin Algorithm 1. During each training iteration, the context cand the ground-truth trajectory τ0are sampled from a real-world driving dataset, and the denoising step kis uniformly selected from{1, . . . , K}. We derive the noisy input τkfrom τ0by initially corrupting the action trajectory viaτka=√ ̄αkτ0a+√1− ̄αkε, with ε∼N(0,I)and ̄αk=∏kl=01−βl. Subsequently, the correspondingstate is computed as τks=f(s0,τka). The diffusion model indirectly parameterizes μθin eq. 
(2) by predicting the uncorrupted trajectory τ̂^0 = [τ̂^0_a; f(s_0, τ̂^0_a)], where τ̂^0_a = μ^0_θ(τ^k, k, c) is the network's direct output (see [12, 13, 41]). We then use a simplified loss function to train the model as follows:

L(θ) = E_{ε, k, τ^0, c}[ ||τ^0 − τ̂^0||^2 ].   (5)

A cosine variance schedule [13, 41] is utilized in the diffusion process, employing K = 100 diffusion steps.

A.2 Sampling

We show the guided sampling algorithm in Algorithm 2, which is directly from [6] as the notations and procedure remain the same. The key difference is that our diffusion model formulation and backbone models operate at the scene level rather than the agent level as in [6]. The scene-level formulation helps to improve scene-level realism and decrease failure rates, as the agents' interactions can be captured by the model inherently.

Algorithm 2 Guided Sampling
1: Require conditional diffusion model μ_θ, transition function f, guide J, scale α, covariances Σ^k, diffusion steps K, inner gradient descent steps W, number of actions to take before re-planning l.
2: while not done do
3:   Observe state s_0 and context c
4:   Initialize trajectory τ^K_a ∼ N(0, I); τ^K_s = f(s_0, τ^K_a); τ^K = [τ^K_a; τ^K_s]
5:   for k = K, ..., 1 do
6:     μ := τ̂^{k−1}_a = μ_θ(τ^k, k, c)
7:     μ^(0) = μ
8:     for j = 1, ..., W do
9:       μ^(j) = μ^(j−1) + α ∇J(μ^(j−1))
10:      Δμ = μ^(j) − μ^(0)
11:      Δμ ← clip(Δμ, −β_k, β_k)
12:      μ^(j) ← μ^(0) + Δμ
13:    end for
14:    τ^{k−1}_a ∼ N(μ^(W), Σ^k); τ^{k−1}_s = f(s_0, τ^{k−1}_a); τ^{k−1} = [τ^{k−1}_a; τ^{k−1}_s]
15:  end for
16:  Execute first l actions of trajectory τ^0_a
17: end while

Following [6, 12], the predicted mean is a weighted sum between the predicted clean action trajectory and the input action trajectory from the last denoising step:

τ̂^{k−1}_a = μ_θ(τ^k, k, c) = (√ᾱ_{k−1} β_k / (1 − ᾱ_k)) τ̂^0_a + (√α_k (1 − ᾱ_{k−1}) / (1 − ᾱ_k)) τ^k_a.   (6)

The process of perturbing the predicted means from the diffusion model using gradients of a specified objective is summarized in Algorithm 2. Following [6], we use iterative projected gradient descent with the Adam optimizer and filtration, i.e., we guide several samples from the diffusion model and choose the one with the best rule satisfaction based on J.

B More Details on Architecture

B.1 Detailed Architecture

We show the detailed data flow of our proposed architecture in Figure A1. Its main difference with the simplified architecture shown in Figure 2 is that we show position encoding, rFFN, and the details of the guidance module explicitly.

[Figure A1 (architecture diagram): test-time denoising step with temporal attention, agent-agent (spatial) attention, map attention, position encoding, row-wise feed-forward blocks, and a test-time-only guidance step using J.]
Figure A1: Test time denoising step using scene-level spatial-temporal transformer. d_s, d_a, and d_h represent the dimensions of action, state, and latent for each vehicle per timestep.

B.2 Gated Attention

Following [35], we use a variant of the original scaled dot-product attention block. In particular, we use a gating function to fuse the environmental features m^t_i with the central agent's features h^t_i, enabling the block to have more control over the feature update.
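As a concrete illustration (the precise per-agent formulation is given in Eq. (7) below), a minimal PyTorch sketch of this gated fusion could look as follows; the module layout and tensor names are illustrative assumptions rather than the exact CTG++ implementation:

import torch
import torch.nn as nn

class GatedSocialAttention(nn.Module):
    # Minimal sketch of the gated fusion between a central agent's features h_i and its
    # aggregated neighborhood features m_i (see Eq. (7) below). Module layout and tensor
    # names are illustrative, not the exact CTG++ implementation.
    def __init__(self, d_h):
        super().__init__()
        self.to_q = nn.Linear(d_h, d_h)
        self.to_k = nn.Linear(d_h, d_h)
        self.to_v = nn.Linear(d_h, d_h)
        self.w_gate = nn.Linear(2 * d_h, d_h)          # W_gate applied to [h_i, m_i]
        self.w_self = nn.Linear(d_h, d_h, bias=False)  # W_self
        self.scale = d_h ** -0.5

    def forward(self, h_i, h_neighbors):
        # h_i:         (B, d_h)     central agent features at one timestep
        # h_neighbors: (B, Ni, d_h) neighbor features within the social radius
        q = self.to_q(h_i).unsqueeze(1)                                   # (B, 1, d_h)
        k, v = self.to_k(h_neighbors), self.to_v(h_neighbors)             # (B, Ni, d_h)
        alpha = torch.softmax((q * k).sum(-1) * self.scale, dim=-1)       # attention over neighbors
        m_i = (alpha.unsqueeze(-1) * v).sum(1)                            # aggregated environment features
        g_i = torch.sigmoid(self.w_gate(torch.cat([h_i, m_i], dim=-1)))   # gate
        return g_i * self.w_self(h_i) + (1.0 - g_i) * m_i                 # gated update of h_i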
The resulting query, key, and value vectors of the social attention layer are taken as inputs to the block:

α^t_i = softmax( (q^{t⊤}_i / √d_k) · [{k^t_{ij}}_{j∈N_i}] ),
m^t_i = Σ_{j∈N_i} α^t_{ij} v^t_{ij},
g^t_i = sigmoid( W_gate [h^t_i, m^t_i] ),
ĥ^t_i = g^t_i ⊙ W_self h^t_i + (1 − g^t_i) ⊙ m^t_i,   (7)

where N_i is the set of agent i's neighbors (all the agents except the agent itself within a certain social radius), W_gate and W_self are learnable matrices, and ⊙ denotes the element-wise product.

C Qualitative Comparison under STL rules

In this section, we show a few qualitative examples (Figure A2 - Figure A7) comparing CTG++ and the strongest baseline (in terms of rule satisfaction) under the STL rules. Overall, CTG++ generates realistic, rule-satisfying trajectories. The baseline method can usually also satisfy the rule. However, their trajectories usually sacrifice one or more of the following aspects: (1) the trajectories are curvy and unrealistic, (2) the trajectories involve off-road accidents, and (3) the agent interaction is sub-optimal, leading to collision(s).

(a) CTG++ speed limit (0.037) (b) CTG speed limit (0.041)
Figure A2: Qualitative comparison between CTG++ and CTG under the speed limit STL rule (the numbers in parentheses represent rule violations). CTG++ achieves lower rule violation than CTG. Besides, CTG involves a collision between the blue vehicle and the green vehicle.

(a) CTG++ target speed (0.213) (b) CTG target speed (0.163)
Figure A3: Qualitative comparison between CTG++ and CTG under the target speed STL rule (the numbers in parentheses represent rule violations). Although CTG achieves slightly better target speed rule satisfaction, it involves a vehicle colliding with crossing vehicles and then going off-road.

(a) CTG++ no collision (0) (b) BITS+opt no collision (0)
Figure A4: Qualitative comparison between CTG++ and BITS+opt under the no collision STL rule (the numbers in parentheses represent rule violations). Both methods satisfy the rule perfectly as no collision happens. However, BITS+opt has highly curvy, unrealistic trajectories as the cost of satisfying the rule.

(a) CTG++ no off-road (0) (b) CTG no off-road (0)
Figure A5: Qualitative comparison between CTG++ and CTG under the no off-road STL rule (the numbers in parentheses represent rule violations). Both methods satisfy the rule perfectly as no off-road event happens. However, CTG leads to multiple collisions between the pink vehicle and vehicles that are stationary.

D Hyperparameters

D.1 Training Hyperparameters

CTG++ is trained on a machine with an Intel i9 12900 and an NVIDIA GeForce RTX 3090. It takes approximately 10 hours to train CTG++ for 50K iterations. We use the Adam optimizer with a learning rate of 1e-4.

D.2 Pair Selection Criteria for GPT query based rules

We choose two vehicles A and B in each scene such that they satisfy the following criteria:

(a) CTG++ stop sign + no off-road (0, 0) (b) CTG stop sign + no off-road (0.732, 0)
Figure A6: Qualitative comparison between CTG++ and CTG under the stop sign and no off-road STL rule (the numbers in parentheses represent rule violations). Vehicles are supposed to stop within the marked bounding boxes without going off-road. CTG++ satisfies both rules while CTG only satisfies the no off-road rule. Besides, CTG involves a collision between the grey vehicle and the blue vehicle.

(a) CTG++ goal waypoint + target speed (0.991, 0.296) (b) CTG goal waypoint + target speed (1.24, 0)
Figure A7: Qualitative comparison between CTG++ and CTG under the goal waypoint + target speed STL rule (the numbers in parentheses represent rule violations).
Vehicles are supposed to reach the marked waypointswith target speed (same speed as in the dataset). CTG++ satisfies both rules better than CTG. Besides, CTGinvolves a collision between the two orange vehicles in the end.• Both have current speed larger than 2m/s.• At 0s and 2s, the distance between A and B is within the range 10m to 30m.• At 0s and 2s, the orientation difference between a and b is smaller than 108 degrees (forGPT collision) and 36 degrees (for GPT keep distance).The criteria is a coarse-grained filtration for those pairs that are more likely to have keep distance /collision interactions in the original training dataset. If more than one pair in the scene satisfy thefollowing criteria, we select the pair with smallest distance. If none of pairs in a scene satisfy the16following criteria, we skip the scene. After the filtrations, out of the 100 validation scenes, we have50 scenes remained for GPT collision and 40 scenes for GPT keep distance .E Experiment DetailsE.1 Quantitative RulesWe assess the traffic model component of CTG++ under both GPT-generated rules and STL rulesfrom [6]: (1) GPT Keep Distance and (2) GPT Collision , as described in Section 4.2 (refer toAppendix D.2 for pair selection); (3) No Collision dictates that vehicles must avoid collisions witheach other; (4) Speed Limit ensures vehicles do not exceed a set speed limit threshold (75% quantileof all moving vehicles in a given scene); (5) Target Speed requires vehicles to maintain a specifiedspeed (50% of their speed in the ground truth scene) at each time step; (6) No Offroad prohibitsvehicles from leaving the drivable area; (7) Goal Waypoint and Target Speed instructs vehicles toreach their designated goal while adhering to the specified target speeds; (8) Stop Sign and NoOff-road requires vehicles to halt upon entering a stop sign region and avoid straying off the road.E.2 Metrics of Rule ViolationWe provide the details for the metrics we use for measuring rule violation in this section. For allthe metrics of rule violation, we average the metrics over all validation scenes. Besides, they aredesigned such that the smaller the better (i.e., rules are better satisfied).GPT Keep Distance. the following vehicle’s (in the chosen pair) average L2 distance deviationfrom the specified range.GPT Collision. if a collision happens between the two vehicles in the chosen pair.No Collision. collision rate of all vehicles in a scene.Speed Limit. average deviation from the speed limit of all vehicles in a scene.Target Speed. average deviation from the target speed of all vehicles in a scene.No Offroad. off-road rate of all vehicles in a scene. We consider a vehicle going off-road if itscenter goes off-road.Goal Waypoint. average vehicle’s smallest l2 distance deviation from the specified correspondinggoal waypoints of all vehicles in a scene.Stop Sign. average smallest speed within the stop sign region of all vehicles in a scene.F Details of Language InterfaceIn this section, we provide more details and limitation of our proposed language interface for trafficsimulation.F.1 Details of Vehicle IndexingIn practice, instead of using color for vehicles which is used in Figure 1 for better illustration pur-pose, we use indices according to the context from the driving dataset. The user can tell GPT4 thevehicles to control via their indices, e.g., ”vehicle 1 should collide with vehicle 2”.F.2 Details of PromptingIn Figure 4, we provide an example of a pre-defined API function and a query-loss function pair. 
Inour experiments, we additionally provide the following API functions:17transform coord world toagent i.this function transform the predicted position and yaw fromworld coordinate to the agent i coordinate.select agent ind. this function returns the slice of x with index i.getcurrent lane projection. this function returns the projection of each vehicle predicted trajec-tory on its current lane in agent-centric coordinate.getleftlane projection. this function is similar to get current lane except it returns the left lanewaypoints. If there is no left lane, the original trajectory will be returned.getright lane projection. this function is similar to get current lane except it returns the right lanewaypoints. If there is no right lane, the original trajectory will be returned.In addition to the acceleration loss paired example shown in Figure 4, we provide another query-loss function pair example. ”Generate a loss class such that, vehicle 1 should always stay on theleft side of vehicle 2.” The corresponding function penalize the cases when vehicle 1 not on the leftsize of vehicle 2. This function provides GPT4 a sense of the relationship between direction and thetrajectories.We additionally specify the dimension of the input trajectory and the input and output of the expectedloss function wrapped in a loss class such that GPT4 know which dimension of the trajectory tooperate on when needed: ”The generated loss class should have a function: forward(x, data batch,agtmask). x is a tensor representing the current trajectory with shape (B, N, T, 6) where B is thenumber of vehicles (consisting of vehicle with index 0 to B-1), N is the number of samples for eachvechile, T is the number of timesteps (each represents 0.1s), and 6 represents the (x, y, vel, yaw,acc, yawvel) in corresponding agent coordinate of each vehicle. data batch is a dictionary that canbe used as parameter for relevant APIs. The function should return a loss for every sample of everyvehicle with the shape (B, N) or return a loss for every sample with the shape (N).”F.3 Success Examples with Complete Query and ResponseIn this section, we provide five generated success example programs and their queries. For all theexamples, in addition to the user query, GPT4 is provided the APIs for the desired loss functionand the helper functions as well as two paired examples: acceleration limit and stay on left. Theexamples help to elucidate two key aspects:• the standard format of a loss class to generate.• how to manipulate different trajectory dimensions using various helper functions (e.g.,transform coord agents toworld, transform coord world toagent i, etc.) and construct adifferentiable loss on the trajectory.We first show the complete query (including system messages and user query sent to GPT4 API)and response for GPT Collision in Appendix F.3.1. Next, in Appendix F.3.2-Appendix F.3.5, forfour other success exmples, we only show the user query and response as they use the same systemmessages as GPT Collision in Appendix F.3.1).F.3.1 Success Example: GPT CollisionThe OpenAI GPT4 API allows one to specify multiple system messages (which includes the API ofthe loss function to generation, the APIs of the helper functions, and two paired prompted examples),and one user message (which is essentially the user query) (see [42] for more details). Thus, we listeach system message and the user message in the following. 
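For concreteness, these messages can be assembled and sent roughly as in the following sketch, using the 2023-era openai-python chat completions interface referenced in [42]. The model name, variable names, and placeholder strings are illustrative assumptions, and whether each paired example is packed into one or two system messages is an implementation detail:

import openai  # 2023-era openai-python interface; see the chat completions guide cited as [42]

openai.api_key = "YOUR_API_KEY"  # placeholder

# Placeholder variables; their actual contents are the system/user messages listed below.
loss_and_helper_api_doc = "..."                        # System Message 1: loss API + helper APIs
example_query_1, example_response_1 = "...", "..."     # System Messages 2: acceleration-limit example
example_query_2, example_response_2 = "...", "..."     # System Messages 3: stay-on-left example
user_query = "Generate a loss class such that vehicle 1 should collide with vehicle 2."

messages = [
    {"role": "system", "content": loss_and_helper_api_doc},
    {"role": "system", "content": example_query_1},
    {"role": "system", "content": example_response_1},
    {"role": "system", "content": example_query_2},
    {"role": "system", "content": example_response_2},
    {"role": "user", "content": user_query},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
generated_loss_code = response["choices"][0]["message"]["content"]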
The qualitative example of using thisreturned loss function has been shown in Figure 4c.System Message 1 - Loss Function API and Helper Functions API:1 "The generated loss class should have a function \2 forward (x, data_batch , agt_mask ). x is a tensor representing thecurrent trajectory with shape (18B, N, T, 6) where B is thenumber of vehicles ( consistingof vehicle 0 to B-1), N is thenumber of samples for eachvechile , T is the number oftimesteps ( each represents 0.1s), and 6 represents the (x, y,vel , yaw , acc , yawvel ) incorresponding agent coordinateof each vehicle . data_batch isa dictionary that can be usedas parameter for relevant APIs .The function should return aloss for every sample of everyvehicle with the shape (B, N)or return a loss for everysample with the shape (N). \3 You can use PyTorch and the following APIs if needed amd youshould not use other unseenfunctions : \4 1. transform_coord_agents_to_world ( pos_pred , yaw_pred , data_batch ). pos_pred is the predictedposition trajectory in agentcoordinate with shape (B, N, T,2) and 2 correspond to (x, y).yaw_pred is the predicted yawtrajectory in agent coordinatewith shape (B, N, T, 1). Thefunction transform thepredicted position and yaw fromtheir agent coordinates intothe world coordinate . Thefunction returns position andyaw in the world coordinatewith the shape (B, N, T, 2) and(B, N, T, 1). \5 2. transform_coord_world_to_agent_i ( pos_pred_world , yaw_pred_world, data_batch , ind_k ). pos_predis the predicted positiontrajectory in world coordinatewith shape (B, N, T, 2) and 2represents (x, y). yaw_pred isthe predicted yaw trajectory inworld coordinate with shape (B, N, T, 1). data_batch is thedictionary mentioned before .ind_k is the index whose agentcoordinate will be converted to. The function transform thepredicted position and yaw fromworld coordinate to the agenti coordinate . The functionreturns position and yaw in theagent i coordinate with theshape (B, N, T, 2) and (B, N, T, 1). \6 3. select_agent_ind (x, i). x has shape (B, N, T, k) where k can beany positive integer and i isa non - negative integerrepresenting the selected index. This function returns theslice of x with index i withshape (N, T, k). \7 4. get_current_lane_projection ( pos_pred , yaw_pred , data_batch ).pos_pred and yaw_pred have19shape (B, N, T, 2) and (B, N, T, 1). They are all in agentcoordinate . data_batch is adictionary mentioned earlier .This function returns theprojection of each vehiclepredicted trajectory on itscurrent lane in agentcoordinate with shape (B, N, T,3) where 3 represents (x, y,yaw). \8 5. get_left_lane_projection ( pos_pred , yaw_pred , data_batch ). It issimilar to get_current_laneexcept it returns the left lanewaypoints . If there is no leftlane , the original trajectorywill be returned . \9 6. get_right_lane_projection ( pos_pred , yaw_pred , data_batch ). Itis similar to get_current_laneexcept it returns the rightlane waypoints . If there is noleft lane , the originaltrajectory will be returned ."System Message 2 - Paired Example 1 Query:1" Generate a loss class such that vehicle 1 should always drive withacceleration below acc_limit ."System Message 2 - Paired Example 1 Response:1class AccLimitLoss ( GuidanceLoss ):2 ’’’3 Keep accelerations below a certain limit .4 ’’’5 def __init__ (self , acc_limit ):6 ’’’7 - acc_limit : acceleration limit .8 ’’’9 super (). __init__ ()10 self . 
acc_limit = acc_limit1112 def forward (self , x, data_batch , agt_mask = None ):13 ’’’14 - x : the current trajectory (B, N, T, 6) where N is thenumber of samples and 6 is(x, y, vel , yaw , acc ,yawvel )1516 - loss : (B, N)17 ’’’18 if agt_mask is not None :19 x = x[ agt_mask ]20 acc = x[... , [4]]21 acc_dev = torch .abs (acc ) - self . acc_limit22 acc_loss = torch . clip ( acc_dev , min=0)23 loss = torch . mean ( acc_loss , dim=[-2, -1])2425 return lossSystem Message 3 - Paired Example 2 Query:1" Generate a loss class such that vehicle 20 should always stay on theleft side of vehicle 13."20System Message 3 - Paired Example 2 Response:1class StayOnLeftLoss ( GuidanceLoss ):2 ’’’3 Vehicle with index target_ind should always keep on the left sideof vehicle with index ref_ind .4 ’’’5 def __init__ (self , target_ind =20 , ref_ind =13 , decay_rate =0.9):6 super (). __init__ ()7 self . target_ind = target_ind8 self . ref_ind = ref_ind9 self . decay_rate = decay_rate1011 def forward (self , x, data_batch , agt_mask = None ):12 B, N, T, _ = x. shape13 if agt_mask is not None :14 x = x[ agt_mask ]15 # Select positions16 # (B,N,T,6) -> (B,N,T,2)17 pos_pred = x[... , :2]18 # Select yaws19 # (B,N,T,6) -> (B,N,T,1)20 yaw_pred = x[... , 3:4]21 # convert prediction from the respective agent coordinates tothe world coordinate22 # (B,N,T,2), (B,N,T,1), dict -> (B,N,T,2), (B,N,T,1)23 pos_pred_world , yaw_pred_world =24 transform_coord_agents_to_world ( pos_pred , yaw_pred , data_batch)25 # convert prediction from the world coordinate to the agentself . ref_ind coordinate26 # (B,N,T,2), (B,N,T,1), dict , int -> (B,N,T,2), (B,N,T,1)27 pos_pred_in_ref_ind , _ = transform_coord_world_to_agent_i (pos_pred_world ,yaw_pred_world , data_batch ,self . ref_ind )2829 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate30 # (B, N, T, 2), int -> (N, T, 2)31 pos_pred_i_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . target_ind )32 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate33 # (B, N, T, 2), int -> (N, T, 2)34 pos_pred_j_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . ref_ind )3536 # Since we only care about the y-axis , we only need to computethe y- axis ( rather thanboth x and y axis )deviation .37 # (N, T, 2), (N, T, 2) -> (N, T)38 pos_dev = pos_pred_j_in_ref_ind [... ,1] - pos_pred_i_in_ref_ind[... ,1]3940 # Clip the position deviation to 0 so that we only penalizethe positive deviation41 # (N, T) -> (N, T)42 pos_loss = torch . clip ( pos_dev , min=0)4344 # penalize early steps more than later steps2145 exp_weights = torch . tensor ([ self . decay_rate ** t for t inrange (T)], device =x. device )46 exp_weights /= exp_weights .sum ()47 pos_loss = pos_loss * exp_weights [None , :]48 # Take the mean over time49 # (N, T) -> (N)50 pos_loss = pos_loss . mean (-1)5152 return pos_lossUser Message - User Query:1" Generate a loss class such that vehicle 1 should collide with vehicle2."Received Message (from GPT4):1class CollisionLoss ( GuidanceLoss ):2 ’’’3 Vehicle with index target_ind should collide with vehicle withindex ref_ind .4 ’’’5 def __init__ (self , target_ind =1, ref_ind =2, collision_radius =1.0):6 super (). __init__ ()7 self . target_ind = target_ind8 self . ref_ind = ref_ind9 self . collision_radius = collision_radius1011 def forward (self , x, data_batch , agt_mask = None ):12 B, N, T, _ = x. 
shape13 if agt_mask is not None :14 x = x[ agt_mask ]15 # Select positions16 # (B,N,T,6) -> (B,N,T,2)17 pos_pred = x[... , :2]18 # Select yaws19 # (B,N,T,6) -> (B,N,T,1)20 yaw_pred = x[... , 3:4]21 # convert prediction from the respective agent coordinates tothe world coordinate22 # (B,N,T,2), (B,N,T,1), dict -> (B,N,T,2), (B,N,T,1)23 pos_pred_world , yaw_pred_world =24 transform_coord_agents_to_world ( pos_pred , yaw_pred , data_batch)2526 # select the relevant agents with index self . target_ind in theworld coordinate27 # (B, N, T, 2), int -> (N, T, 2)28 pos_pred_i_world = select_agent_ind ( pos_pred_world , self .target_ind )29 # select the relevant agents with index self . ref_ind in theworld coordinate30 # (B, N, T, 2), int -> (N, T, 2)31 pos_pred_j_world = select_agent_ind ( pos_pred_world , self .ref_ind )3233 # Compute the distance between the two vehicles34 # (N, T, 2), (N, T, 2) -> (N, T)35 dist = torch . norm ( pos_pred_i_world - pos_pred_j_world , dim=-1)3637 # Compute the collision loss by penalizing the distancegreater than the collisionradius38 # (N, T) -> (N, T)2239 collision_loss = torch . clip ( self . collision_radius - dist , min=0)4041 # Take the mean over time42 # (N, T) -> (N)43 loss = collision_loss . mean (-1)4445 return lossThe resulted loss function from GPT4 showcases its capabilities:• It adeptly follows the template provided in the examples, producing a syntactically correctprogram.• It comprehends the ”collision” concept from the user query and knows how to integratethis concept with the input trajectory, as the examples we used for prompting (accelerationlimit and stay on left) are not related to collisions at all. This understanding results in aloss that encourages trajectory collision between two vehicles, achieved through the helperfunctions.F.3.2 Success Example: GPT KeepDistanceIn this success example, the returned program indicates that GPT4 understands the meaning of ”keepwithin” and generates a function that promotes vehicle 1 to stay within certain distance of vehicle 2.User Message - User Query:1" Generate a loss class such that vehicle 1 should always keep within10 -30m from vehicle 2."Received Message (from GPT4):1class KeepDistanceLoss ( GuidanceLoss ):2 ’’’3 Vehicle with index target_ind should always keep withinmin_distance and max_distancefrom vehicle with index ref_ind.4 ’’’5 def __init__ (self , target_ind =1, ref_ind =2, min_distance =10 ,max_distance =30):6 super (). __init__ ()7 self . target_ind = target_ind8 self . ref_ind = ref_ind9 self . min_distance = min_distance10 self . max_distance = max_distance1112 def forward (self , x, data_batch , agt_mask = None ):13 B, N, T, _ = x. shape14 if agt_mask is not None :15 x = x[ agt_mask ]16 # Select positions17 # (B,N,T,6) -> (B,N,T,2)18 pos_pred = x[... , :2]19 # Select yaws20 # (B,N,T,6) -> (B,N,T,1)21 yaw_pred = x[... , 3:4]22 # convert prediction from the respective agent coordinates tothe world coordinate23 # (B,N,T,2), (B,N,T,1), dict -> (B,N,T,2), (B,N,T,1)24 pos_pred_world , yaw_pred_world =25 transform_coord_agents_to_world ( pos_pred , yaw_pred , data_batch)232627 # select the relevant agents with index self . target_ind in theworld coordinate28 # (B, N, T, 2), int -> (N, T, 2)29 pos_pred_i_world = select_agent_ind ( pos_pred_world , self .target_ind )30 # select the relevant agents with index self . 
ref_ind in theworld coordinate31 # (B, N, T, 2), int -> (N, T, 2)32 pos_pred_j_world = select_agent_ind ( pos_pred_world , self .ref_ind )3334 # Compute the distance between the two vehicles35 # (N, T, 2), (N, T, 2) -> (N, T)36 distance = torch . norm ( pos_pred_i_world - pos_pred_j_world , dim=-1)3738 # Compute the deviation from the desired distance range39 # (N, T) -> (N, T)40 distance_dev_min = self . min_distance - distance41 distance_dev_max = distance - self . max_distance4243 # Clip the negative values to 044 # (N, T) -> (N, T)45 distance_loss_min = torch . clip ( distance_dev_min , min=0)46 distance_loss_max = torch . clip ( distance_dev_max , min=0)4748 # Combine the losses49 # (N, T) -> (N, T)50 distance_loss = distance_loss_min + distance_loss_max5152 # Take the mean over time53 # (N, T) -> (N)54 loss = torch . mean ( distance_loss , dim=-1)5556 return lossF.3.3 Success Example: GPT SameDirectionIn this success example, the returned program indicates that GPT4 understands the meaning of”same direction” and generates a function that promotes vehicle 1 and vehicle 2 to move with thesame orientation.User Message - User Query:1" Generate a loss class such that vehicle 1 should move along the samedirection as vehicle 2."Received Message (from GPT4):1class SameDirectionLoss ( GuidanceLoss ):2 ’’’3 Vehicle with index target_ind should move along the same directionas vehicle with index ref_ind .4 ’’’5 def __init__ (self , target_ind =1, ref_ind =2):6 super (). __init__ ()7 self . target_ind = target_ind8 self . ref_ind = ref_ind910 def forward (self , x, data_batch , agt_mask = None ):11 B, N, T, _ = x. shape12 if agt_mask is not None :13 x = x[ agt_mask ]2414 # Select yaws15 # (B,N,T,6) -> (B,N,T,1)16 yaw_pred = x[... , 3:4]17 # convert prediction from the respective agent coordinates tothe world coordinate18 # (B,N,T,2), (B,N,T,1), dict -> (B,N,T,2), (B,N,T,1)19 _, yaw_pred_world = transform_coord_agents_to_world (x[... , :2], yaw_pred , data_batch )20 # select the relevant agents with index self . target_ind21 # (B, N, T, 1) -> (N, T, 1)22 yaw_pred_i = select_agent_ind ( yaw_pred_world , self . target_ind )23 # select the relevant agents with index self . ref_ind24 # (B, N, T, 1) -> (N, T, 1)25 yaw_pred_j = select_agent_ind ( yaw_pred_world , self . ref_ind )2627 # Compute the yaw deviation28 # (N, T, 1), (N, T, 1) -> (N, T)29 yaw_dev = torch .abs ( yaw_pred_i - yaw_pred_j ). squeeze (-1)3031 # Normalize the yaw deviation to [0, pi]32 # (N, T) -> (N, T)33 yaw_dev = torch . fmod ( yaw_dev , 2 * math .pi)34 yaw_dev = torch .min ( yaw_dev , 2 * math .pi - yaw_dev )3536 # Penalize deviations larger than a small threshold (e.g., 0.1radian )37 threshold = 0.138 yaw_loss = torch . clip ( yaw_dev - threshold , min=0)3940 # Take the mean over time41 # (N, T) -> (N)42 loss = torch . mean ( yaw_loss , dim=-1)4344 return lossF.3.4 Success Example: GPT Collision From BehindIn this success example, the returned program indicates that GPT4 understands the meaning of”collide” and ”from behind” and can create the respective losses (see collision loss in line 49 andnotbehind loss in line 53), respectively. 
It finally combines the two losses by taking their productas the final loss (line 57).User Message - User Query:1" Generate a loss class such that vehicle 1 should collide with vehicle2 from behind ."Received Message (from GPT4):1class CollideFromBehindLoss ( GuidanceLoss ):2 ’’’3 Vehicle with index target_ind should collide with vehicle withindex ref_ind from behind .4 ’’’5 def __init__ (self , target_ind =1, ref_ind =2, collision_threshold =1.0):6 super (). __init__ ()7 self . target_ind = target_ind8 self . ref_ind = ref_ind9 self . collision_threshold = collision_threshold1011 def forward (self , x, data_batch , agt_mask = None ):12 B, N, T, _ = x. shape2513 if agt_mask is not None :14 x = x[ agt_mask ]15 # Select positions16 # (B,N,T,6) -> (B,N,T,2)17 pos_pred = x[... , :2]18 # Select yaws19 # (B,N,T,6) -> (B,N,T,1)20 yaw_pred = x[... , 3:4]21 # convert prediction from the respective agent coordinates tothe world coordinate22 # (B,N,T,2), (B,N,T,1), dict -> (B,N,T,2), (B,N,T,1)23 pos_pred_world , yaw_pred_world =24 transform_coord_agents_to_world ( pos_pred , yaw_pred , data_batch)25 # convert prediction from the world coordinate to the agentself . ref_ind coordinate26 # (B,N,T,2), (B,N,T,1), dict , int -> (B,N,T,2), (B,N,T,1)27 pos_pred_in_ref_ind , _ = transform_coord_world_to_agent_i (pos_pred_world ,yaw_pred_world , data_batch ,self . ref_ind )2829 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate30 # (B, N, T, 2), int -> (N, T, 2)31 pos_pred_i_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . target_ind )32 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate33 # (B, N, T, 2), int -> (N, T, 2)34 pos_pred_j_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . ref_ind )3536 # Compute the distance between the two vehicles along the x-axis ( longitudinal )37 # (N, T, 2), (N, T, 2) -> (N, T)38 dist_x = pos_pred_j_in_ref_ind [... , 0] - pos_pred_i_in_ref_ind[... , 0]39 # Compute the distance between the two vehicles along the y-axis ( lateral )40 # (N, T, 2), (N, T, 2) -> (N, T)41 dist_y = pos_pred_j_in_ref_ind [... , 1] - pos_pred_i_in_ref_ind[... , 1]4243 # Compute the squared distance between the two vehicles44 # (N, T), (N, T) -> (N, T)45 dist_squared = dist_x ** 2 + dist_y ** 24647 # Compute the collision loss by checking if the squareddistance is less than thecollision threshold48 # (N, T) -> (N, T)49 collision_loss = ( dist_squared < self . collision_threshold ).float ()5051 # Penalize the cases where the target vehicle is not behindthe reference vehicle52 # (N, T) -> (N, T)53 not_behind_loss = torch . clip (-dist_x , min=0)5455 # Combine the collision loss and not_behind_loss56 # (N, T), (N, T) -> (N, T)57 total_loss = collision_loss * not_behind_loss265859 # Take the mean over time60 # (N, T) -> (N)61 loss = torch . mean ( total_loss , dim=-1)6263 return lossF.3.5 Success Example: Lane FollowingIn this success example, the returned program indicates that GPT4 can extropolate the number ofvehicles of a scenario (although in both provided examples only two vehicles are involved, GPT4 cangeneralize the loss function involving three vehicles). Besides, GPT4 knows how to use functionsdescribed in helper function APIs but not even used in the provided paired examples. 
In particular,it knows to use get current lane projection to get the lane information in order to promote vehicleslane following behavior when it was asked to let the vehicles to ”follow their current lanes”.User Message - User Query:1" Generate a loss class such that vehicle 1, vehicle 2, and vehicle 3all follow their current lanes ."Received Message (from GPT4):1class LaneFollowingLoss ( GuidanceLoss ):2 ’’’3 Vehicles with indices target_inds should follow their currentlanes .4 ’’’5 def __init__ (self , target_inds =[1, 2, 3]):6 super (). __init__ ()7 self . target_inds = target_inds89 def forward (self , x, data_batch , agt_mask = None ):10 B, N, T, _ = x. shape11 if agt_mask is not None :12 x = x[ agt_mask ]13 # Select positions14 # (B,N,T,6) -> (B,N,T,2)15 pos_pred = x[... , :2]16 # Select yaws17 # (B,N,T,6) -> (B,N,T,1)18 yaw_pred = x[... , 3:4]1920 # Get the current lane projection in agent coordinate21 # (B,N,T,2), (B,N,T,1), dict -> (B,N,T,3)22 lane_proj = get_current_lane_projection ( pos_pred , yaw_pred ,data_batch )2324 # Compute the deviation from the lane projection25 # (B,N,T,2), (B,N,T,3) -> (B,N,T,2)26 pos_dev = pos_pred - lane_proj [... , :2]27 yaw_dev = yaw_pred - lane_proj [... , 2:3]2829 # Compute the squared loss for position and yaw deviation30 # (B,N,T,2) -> (B,N,T)31 pos_loss = torch .sum( pos_dev ** 2, dim=-1)32 # (B,N,T,1) -> (B,N,T)33 yaw_loss = torch . squeeze ( yaw_dev ** 2, dim=-1)3435 # Combine position and yaw loss36 # (B,N,T), (B,N,T) -> (B,N,T)37 total_loss = pos_loss + yaw_loss382739 # Select the loss for the target vehicles40 # (B,N,T), list -> (len( target_inds ), N, T)41 target_losses = [ select_agent_ind ( total_loss , ind) for ind inself . target_inds ]4243 # Stack the losses for target vehicles44 # list -> (len( target_inds ), N, T)45 target_losses = torch . stack ( target_losses , dim=0)4647 # Take the mean over time48 # (len( target_inds ), N, T) -> (len( target_inds ), N)49 target_losses = torch . mean ( target_losses , dim=-1)5051 # Take the mean over target vehicles52 # (len( target_inds ), N) -> (N)53 loss = torch . mean ( target_losses , dim=0)5455 return lossF.4 Failure CasesThe main limitation of the current language interface is on the complex interactions between thevehicles and the map. As we don’t explicitly pass the map information into the language interface, itcannot handle commands involving heavy interaction with the map. For example, ”vehicle A and Bmove to the rightmost lane one by one and then both turn right at the next intersection”. However, webelieve if one provides more helper functions (especially those interacting with the map) and morerelevant examples to the language interface, LLM can handle such more complicated commands.Additionally, we observe GPT4 can also fail to understand some concepts properly (e.g., ”cut in”).As the current work aims to provide early promising results on the feasibility of text-to-traffic, weleave a deeper exploration for future work.F.4.1 Failure Example : Move to Rightmost and Right TurnWe provide a failure example below. The system messages remain the same as in Appendix F.3.1.The generated loss function fails because it fails to consider ”turn right at the next intersection”.This can be potentially fixed by providing a couple of more helper functions (e.g., a function to getthe next intersection and corresponding lane’s information) and an additional example that interactswith intersection. 
In other words, our framework can potentially support such queries via extensionson helper functions and additional examples. As in the current work, we mainly want to show aframework of using LLM on language-guided realistic traffic generation, we leave a comprehensivestudy covering all kinds of complex queries for traffic generation as future work.User Message - User Query:1" vehicle 1 and 2 move to the rightmost lane one by one and then bothturn right at the next intersection."Received Message (from GPT4):1class RightTurnLoss ( GuidanceLoss ):2 ’’’3 Vehicle 1 and 2 should move to the rightmost lane one by one andthen both turn right at thenext intersection .4 ’’’5 def __init__ (self , target_inds =[1, 2], decay_rate =0.9):6 super (). __init__ ()7 self . target_inds = target_inds8 self . decay_rate = decay_rate28910 def forward (self , x, data_batch , agt_mask = None ):11 B, N, T, _ = x. shape12 if agt_mask is not None :13 x = x[ agt_mask ]14 # Select positions15 # (B,N,T,6) -> (B,N,T,2)16 pos_pred = x[... , :2]17 # Select yaws18 # (B,N,T,6) -> (B,N,T,1)19 yaw_pred = x[... , 3:4]2021 # Get the right lane projection22 # (B, N, T, 2), (B, N, T, 1), dict -> (B, N, T, 3)23 right_lane_proj = get_right_lane_projection ( pos_pred , yaw_pred, data_batch )2425 # Compute the deviation between the predicted position and theright lane projection26 # (B, N, T, 2), (B, N, T, 3) -> (B, N, T, 2)27 pos_dev = pos_pred - right_lane_proj [... , :2]2829 # Compute the squared distance30 # (B, N, T, 2) -> (B, N, T)31 pos_dev_sq = torch .sum( pos_dev ** 2, dim=-1)3233 # Penalize early steps more than later steps34 exp_weights = torch . tensor ([ self . decay_rate ** t for t inrange (T)], device =x. device )35 exp_weights /= exp_weights .sum ()36 pos_dev_sq = pos_dev_sq * exp_weights [None , None , :]3738 # Take the mean over time39 # (B, N, T) -> (B, N)40 pos_loss = pos_dev_sq . mean (-1)4142 # Select the loss for target vehicles43 # (B, N), list -> (len( target_inds ), N)44 target_loss = torch . stack ([ select_agent_ind ( pos_loss , i) for iin self . target_inds ])4546 # Take the mean over target vehicles47 # (len( target_inds ), N) -> (N)48 loss = target_loss . mean (0)4950 return lossF.4.2 Failure Example: Cut In if Behind and On The LeftIn this failure example, the returned program indicates that GPT4 fails to understand ”cut in” as itonly promotes vehicle 1 to be ahead of vehicle 2 when it is behind and on the left. However, cut-inrequires vehicle 1 to also move horizontally (i.e., along the y axis).User Message - User Query:1" Generate a loss class such that vehicle 1 should cut in ahead ofvehicle 2 if it is behind vehicle 2and on its left lane ."Received Message (from GPT4):1class CutInLoss ( GuidanceLoss ):2 ’’’293 Vehicle with index target_ind should cut in ahead of vehicle withindex ref_ind if it is behindvehicle ref_ind and on its leftlane .4 ’’’5 def __init__ (self , target_ind =1, ref_ind =2, decay_rate =0.9):6 super (). __init__ ()7 self . target_ind = target_ind8 self . ref_ind = ref_ind9 self . decay_rate = decay_rate1011 def forward (self , x, data_batch , agt_mask = None ):12 B, N, T, _ = x. shape13 if agt_mask is not None :14 x = x[ agt_mask ]15 # Select positions16 # (B,N,T,6) -> (B,N,T,2)17 pos_pred = x[... , :2]18 # Select yaws19 # (B,N,T,6) -> (B,N,T,1)20 yaw_pred = x[... 
, 3:4]21 # convert prediction from the respective agent coordinates tothe world coordinate22 # (B,N,T,2), (B,N,T,1), dict -> (B,N,T,2), (B,N,T,1)23 pos_pred_world , yaw_pred_world =24 transform_coord_agents_to_world ( pos_pred , yaw_pred , data_batch)25 # convert prediction from the world coordinate to the agentself . ref_ind coordinate26 # (B,N,T,2), (B,N,T,1), dict , int -> (B,N,T,2), (B,N,T,1)27 pos_pred_in_ref_ind , _ = transform_coord_world_to_agent_i (pos_pred_world ,yaw_pred_world , data_batch ,self . ref_ind )2829 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate30 # (B, N, T, 2), int -> (N, T, 2)31 pos_pred_i_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . target_ind )32 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate33 # (B, N, T, 2), int -> (N, T, 2)34 pos_pred_j_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . ref_ind )3536 # Compute the x- axis and y- axis deviation37 # (N, T, 2), (N, T, 2) -> (N, T)38 pos_dev_x = pos_pred_j_in_ref_ind [... , 0] -pos_pred_i_in_ref_ind [... ,0]39 pos_dev_y = pos_pred_j_in_ref_ind [... , 1] -pos_pred_i_in_ref_ind [... ,1]4041 # Check if the target vehicle is behind and on the left laneof the reference vehicle42 behind_and_left = ( pos_dev_x > 0) & ( pos_dev_y > 0)4344 # Penalize the target vehicle for not cutting in ahead of thereference vehicle45 cut_in_loss = torch . where ( behind_and_left , pos_dev_x , torch .zeros_like ( pos_dev_x ))304647 # Penalize early steps more than later steps48 exp_weights = torch . tensor ([ self . decay_rate ** t for t inrange (T)], device =x. device )49 exp_weights /= exp_weights .sum ()50 cut_in_loss = cut_in_loss * exp_weights [None , :]51 # Take the mean over time52 # (N, T) -> (N)53 cut_in_loss = cut_in_loss . mean (-1)5455 return cut_in_lossF.5 Qualitative comparison between GPT4 and GPT3.5We next show two examples where we send the same message but use GPT3.5 rather than GPT4as the LLM. We keep the queries to be exactly the same as in Appendix F.3.1 and Appendix F.3.2,respectively. The received messages are shown in the following. Both loss functions are correctbut are more similar to the provided examples by containing code on applying a decay on loss overtime dimension (e.g., see line 45-48 in Received Message (from GPT3.5)). These lines of code arealmost the exactly ones (except the loss name) in a provided example (stay on left) but these codeare not always necessary to be included in the loss functions.F.5.1 GPT Collision (GPT3.5)User Message - User Query:1" Generate a loss class such that vehicle 1 should collide with vehicle2."Received Message (from GPT3.5):1class CollisionLoss ( GuidanceLoss ):2 ’’’3 Vehicle with index target_ind should collide with vehicle withindex ref_ind .4 ’’’5 def __init__ (self , target_ind =1, ref_ind =2, decay_rate =0.9,collision_distance =2.0):6 super (). __init__ ()7 self . target_ind = target_ind8 self . ref_ind = ref_ind9 self . decay_rate = decay_rate10 self . collision_distance = collision_distance1112 def forward (self , x, data_batch , agt_mask = None ):13 B, N, T, _ = x. shape14 if agt_mask is not None :15 x = x[ agt_mask ]16 # Select positions17 # (B,N,T,6) -> (B,N,T,2)18 pos_pred = x[... , :2]19 # Select yaws20 # (B,N,T,6) -> (B,N,T,1)21 yaw_pred = x[... 
, 3:4]22 # convert prediction from the respective agent coordinates tothe world coordinate23 # (B,N,T,2), (B,N,T,1), dict -> (B,N,T,2), (B,N,T,1)24 pos_pred_world , yaw_pred_world =25 transform_coord_agents_to_world ( pos_pred , yaw_pred , data_batch)26 # convert prediction from the world coordinate to the agentself . ref_ind coordinate3127 # (B,N,T,2), (B,N,T,1), dict , int -> (B,N,T,2), (B,N,T,1)28 pos_pred_in_ref_ind , _ = transform_coord_world_to_agent_i (pos_pred_world ,yaw_pred_world , data_batch ,self . ref_ind )2930 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate31 # (B, N, T, 2), int -> (N, T, 2)32 pos_pred_i_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . target_ind )33 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate34 # (B, N, T, 2), int -> (N, T, 2)35 pos_pred_j_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . ref_ind )3637 # Compute the distance between the two agents38 # (N, T, 2), (N, T, 2) -> (N, T)39 dist = torch . norm ( pos_pred_i_in_ref_ind -pos_pred_j_in_ref_ind , dim =-1)4041 # Clip the distance to the collision distance42 # (N, T) -> (N, T)43 dist_loss = torch . clip ( self . collision_distance - dist , min=0)4445 # penalize early steps more than later steps46 exp_weights = torch . tensor ([ self . decay_rate ** t for t inrange (T)], device =x. device )47 exp_weights /= exp_weights .sum ()48 dist_loss = dist_loss * exp_weights [None , :]49 # Take the mean over time50 # (N, T) -> (N)51 dist_loss = dist_loss . mean (-1)5253 return dist_lossF.5.2 GPT Keep Distance (GPT3.5)User Message - User Query:1" Generate a loss class such that vehicle 1 should always keep within10 -30m from vehicle 2."Received Message (from GPT3.5):1class KeepDistanceLoss ( GuidanceLoss ):2 ’’’3 Vehicle with index target_ind should always keep within min_distand max_dist from vehicle withindex ref_ind .4 ’’’5 def __init__ (self , target_ind =1, ref_ind =2, min_dist =10 , max_dist =30 , decay_rate =0.9):6 super (). __init__ ()7 self . target_ind = target_ind8 self . ref_ind = ref_ind9 self . min_dist = min_dist10 self . max_dist = max_dist11 self . decay_rate = decay_rate123213 def forward (self , x, data_batch , agt_mask = None ):14 B, N, T, _ = x. shape15 if agt_mask is not None :16 x = x[ agt_mask ]17 # Select positions18 # (B,N,T,6) -> (B,N,T,2)19 pos_pred = x[... , :2]20 # Select yaws21 # (B,N,T,6) -> (B,N,T,1)22 yaw_pred = x[... , 3:4]23 # convert prediction from the respective agent coordinates tothe world coordinate24 # (B,N,T,2), (B,N,T,1) -> (B,N,T,2), (B,N,T,1)25 pos_pred_world , yaw_pred_world =26 transform_coord_agents_to_world ( pos_pred , yaw_pred , data_batch)27 # convert prediction from the world coordinate to the agentself . ref_ind coordinate28 # (B,N,T,2), (B,N,T,1), dict , int -> (B,N,T,2), (B,N,T,1)29 pos_pred_in_ref_ind , _ = transform_coord_world_to_agent_i (pos_pred_world ,yaw_pred_world , data_batch ,self . ref_ind )3031 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate32 # (B, N, T, 2), int -> (N, T, 2)33 pos_pred_i_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . target_ind )34 # select the relevant agents with index self . target_ind in theagent self . ref_indcoordinate35 # (B, N, T, 2), int -> (N, T, 2)36 pos_pred_j_in_ref_ind = select_agent_ind ( pos_pred_in_ref_ind ,self . ref_ind )3738 # Compute the distance between the two agents39 # (N, T, 2), (N, T, 2) -> (N, T)40 dist = torch . 
norm ( pos_pred_j_in_ref_ind -pos_pred_i_in_ref_ind , dim =-1)4142 # Compute the deviation from the desired distance range43 # (N, T) -> (N, T)44 dist_dev = torch . where ( dist < self . min_dist , self . min_dist -dist , torch . where ( dist >self . max_dist , dist - self .max_dist , torch . zeros_like (dist )))4546 # penalize early steps more than later steps47 exp_weights = torch . tensor ([ self . decay_rate ** t for t inrange (T)], device =x. device )48 exp_weights /= exp_weights .sum ()49 dist_dev = dist_dev * exp_weights [None , :]50 # Take the mean over time51 # (N, T) -> (N)52 dist_loss = dist_dev . mean (-1)5354 return dist_loss33G Quantitative Evaluation Results with Additional RunsTo substantiate the superior performance of our method, we conduct additional experiments, theresults of which are presented in Table 3. In these experiments, we compare CTG++ against thestrongest baseline, CTG, across three distinct runs with varying random seeds. The settings are thesame as those for Table 1 and we take the average and standard deviation of the three runs. Wehighlight the better value only when it is significantly better than the other (i.e., if the values of thetwo methods differ by at least the sum of their standard deviations). In all eight settings, CTG++significantly performs better than CTG in terms of failure rate and scene-level realism. CTG++ alsotends to perform better than CTG in terms of rule satisfaction (winning 4 and tied on 6). In termsof realism, CTG++ is comparable to CTG (winning 3, losing 2, and tied on 3). Thus, the resultssuggest that CTG++ significantly performs better than the strongest baseline CTG++.Table 3: Quantitative results (mean with standard deviation of three runs) of CTG++ and the strongest baselinesCTG under GPT-generated rules and STL rules. We highlight the winning method that is significantly betterthan the other (i.e., if the values of the two methods differ by at least the sum of their standard deviations).GPT keep distance GPT collisionfail rule real rel real fail rule real rel realCTG 0.327±0.02 0±0 0 .07±0.006 0 .343±0.003 0.346±0.018 0±00.071±0.004 0.349±0.002CTG++ 0.171±0.002 0±0 0 .071±0.006 0.334±0.003 0.264±0.013 0±0 0 .084±0.004 0.336±0.006no collision speed limitfail rule real rel real fail rule real rel realCTG 0.137±0.01 0 .048±0.003 0 .048±0.005 0 .346±0.002 0.129±0.002 0 .029±0 0 .077±0.002 0 .353±0.003CTG++ 0.085±0.002 0.045±0.001 0 .047±0.007 0.326±0.006 0.087±0.004 0 .028±0 0 .042±0.003 0 .34±0.004target speed no offroadfail rule real rel real fail rule real rel realCTG 0.083±0.007 0 .281±0.001 0 .108±0.003 0 .379±0.002 0.167±0.008 0 .003±0 0 .041±0.002 0 .343±0.003CTG++ 0.062±0.001 0 .272±0.002 0 .083±0.004 0 .371±0.004 0.104±0.008 0.003±0 0 .044±0.006 0.323±0.005goal waypoint+target speed stopregion+offroadfail rule1 rule2 real rel real fail rule1 rule2 real rel realCTG 0.135±0.015 2 .407±0.016 0 .39±0.003 0 .052±0.002 0 .343±0.002 0.128±0.01 0 .003±0.001 0 .795±0.017 0.046±0.006 0.336±0.003CTG++ 0.103±0.002 2 .361±0.021 0.394±0.003 0.039±0.001 0 .336±0.004 0.08±0.012 0.003±0 0.44±0.051 0.076±0.001 0.323±0.00634 |
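For clarity, the "significantly better" highlighting criterion used in Table 3 amounts to the simple check sketched below; the function name is illustrative:

def significantly_better(mean_a, std_a, mean_b, std_b, lower_is_better=True):
    # Highlighting rule for Table 3: method A is marked as the winner only if its mean is
    # better than B's by at least the sum of their standard deviations.
    diff = (mean_b - mean_a) if lower_is_better else (mean_a - mean_b)
    return diff >= (std_a + std_b)

# Example: failure rate under "GPT keep distance" (lower is better), CTG++ vs. CTG.
assert significantly_better(0.171, 0.002, 0.327, 0.02)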
yobahDU4HPP | Learning Realistic Traffic Agents in Closed-loopChris Zhang James Tu Lunjun Zhang Kelvin Wong Simon SuoRaquel UrtasunWaabi University of Torontofczhang,jtu,lzhang,kwong,urtasun g@waabi.aiAbstract: Realistic traffic simulation is crucial for developing self-driving soft-ware in a safe and scalable manner prior to real-world deployment. Typically,imitation learning (IL) is used to learn human-like traffic agents directly fromreal-world observations collected offline, but without explicit specification of traf-fic rules, agents trained from IL alone frequently display unrealistic infractionslike collisions and driving off the road. This problem is exacerbated in out-of-distribution and long-tail scenarios. On the other hand, reinforcement learning(RL) can train traffic agents to avoid infractions, but using RL alone results inunhuman-like driving behaviors. We propose Reinforcing Traffic Rules (RTR),a holistic closed-loop learning objective to match expert demonstrations under atraffic compliance constraint, which naturally gives rise to a joint IL + RL ap-proach, obtaining the best of both worlds. Our method learns in closed-loop sim-ulations of both nominal scenarios from real-world datasets as well as procedu-rally generated long-tail scenarios. Our experiments show that RTR learns morerealistic and generalizable traffic simulation policies, achieving significantly bet-ter tradeoffs between human-like driving and traffic compliance in both nominaland long-tail scenarios. Moreover, when used as a data generation tool for train-ing prediction models, our learned traffic policy leads to considerably improveddownstream prediction metrics compared to baseline traffic agents.Keywords: Traffic simulation, Imitation learning, Reinforcement learning1 IntroductionSimulation is a critical component to safely developing autonomous vehicles. Designing realistictraffic agents is fundamental in building high-fidelity simulation systems that have a low domaingap to the real world. However, this can be challenging as we need to both capture the idiosyncraticnature of human-like driving and avoid unrealistic traffic infractions like collisions or driving off-road. Existing approaches used in the self-driving industry lack realism: they either replay loggedtrajectories in a non-reactive manner [1, 2] or use heuristic policies which yield rigid, unhuman-likebehaviors. Using data-driven approaches to learn more realistic policies is a promising alternative.The dominant data-driven approach has been imitation learning (IL), where nominal human drivingdata is used as expert supervision to train the agents. However, while expert demonstrations providesupervision for human-like driving, pure IL methods lack explicit knowledge of traffic rules and in-fractions which can result in unrealistic policies. Furthermore, the reliance on expert demonstrationscan be a disadvantage, as long-tail scenarios with rich interactions are very rare, and thus learning isoverwhelmingly dominated by more common scenarios with a much weaker learning signal.Reinforcement learning (RL) approaches encode explicit knowledge of traffic rules through hand-designed rewards that penalize infractions [3, 4, 5, 6, 7]. These approaches do not rely on expertdemonstrations and instead learn to maximize traffic-compliance rewards through trial and error. Inthe context of autonomy, this allows training on synthetic scenarios that do not have expert demon-strations in order to improve the robustness of learned policies [5]. 
However, traffic rules alonecannot describe all the nuances of human-like driving, and it is still an open question if one canmanually design a reward that can completely capture those intricacies.Work done at Waabi.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.// policy definition if is_close (ego): do cut_in( speed ) ... Eπ [ – R (τ) ]Eπ [ – log P E (τ) ]Figure 1: Our multi-agent policy is trained in closed-loop to match expert demonstrations under atraffic compliance constraint using both nominal offline data and simulated long-tail scenarios asa rich learning environment. This gives rise to an IL objective which supervises the policy usingreal-world expert demonstrations and an RL objective which explicitly penalizes infractions.Towards learning human-like and traffic-compliant agents, we propose Reinforcing Traffic Rules(RTR), a holistic closed-loop learning method to match expert demonstrations under a traffic-compliance constraint using both nominal offline data and additional simulated long-tail scenarios(Figure 1). We show our formulation naturally gives rise to a unified closed-loop IL + RL objective,which we efficiently optimize by exploiting differentiable dynamics and a per-agent factorization.In contrast to prior works that combine IL and RL [1], our closed-loop approach allows the modelto understand the effects of its actions and suffers significantly less from compounding error. Fur-thermore, exploiting simulated long-tail scenarios improves learning by exposing the policy to moreinteresting interactions that would be difficult and possibly dangerous to collect from the real worldat scale. Our experiments show that unlike a wide range of baselines, RTR learns realistic policiesthat better generalize to both nominal and long-tail scenarios unseen during training . The benefitscarry forward to downstream tasks such as simulating scenarios to train autonomy models; predic-tion models trained on data simulated with RTR have the strongest prediction metrics on real data,serving as further evidence that RTR has learned more realistic traffic simulation. We believe thisserves as a crucial step towards more effective applications of traffic simulation for self-driving.2 Related WorkTraditional traffic simulation: To generate general traffic flow, simulators [8, 9, 10, 11] typicallyuse heuristic models [12, 13, 14] as models of human driving. While these heuristic models are use-ful in capturing high-level traffic characteristics like flow and density, they are lacking in capturingthe lower-level nuances of human driving, thus limiting their applicability in self-driving. For morerealistic traffic models, we explore using machine learning as a more promising approach.Imitation learning: IL methods learn a control policy from expert demonstrations. In the contextof autonomous vehicles, [15] pioneered the use of behavior cloning (BC) to learn a driving policyin open-loop. Since then, open-loop methods have been explored for both autonomy [16, 17, 18]and traffic simulation [19, 20, 21]. Open-loop methods primarily suffer from distribution shift dueto compounding error [22], and so various techniques like data augmentation [23, 16], uncertainty-based regularization [24, 25], and augmentation with a rules-based planner [21] have been proposedto alleviate the problem. 
Closed-loop imitation learning approaches [26, 27, 28, 29], which addressdistribution shift by exposing the policy to a self-induced state distribution during training, havealso been explored in traffic simulation [30, 31, 32, 33]. While IL exploits expert demonstrations,there is a lack of explicit knowledge on safety-critical aspects like avoiding infractions. Methodslike differentiable common-sense penalties [30, 23], additional finetuning [34], and test-time guidedsampling [35] have been proposed to complement the standard IL approach. In this work, we usereinforcement learning to explicitly encode general non-differentiable traffic rules.Reinforcement learning: RL methods [36, 37, 38] do not require expert demonstrations and insteadlearn through interacting with the environment and a reward function. In self-driving, knowledgeof infractions can be encoded in the reward [3, 4, 5, 6, 7]. Because RL does not require expertdemonstrations, it is possible to train on procedurally generated scenarios for improved infractionavoidance [5]. However, it is difficult to learn realistic driving behavior using reward alone. RL2methods can be sample inefficient [3, 4], and specifying human-like driving with a scalar rewardis difficult. In this work, we supplement RL with IL to learn more human-like driving while stillenjoying the explicit learning signal provided from the reward.Combined IL + RL: Pretrained IL policies can be used as initialization to guide exploration [39, 40]or regularize learning [41, 42, 43, 44], and offline data can be used to bootstrap learning and helpwith sparse rewards [45, 46]. Offline RL methods also use IL for out-of-distribution generalizationand overestimation [47, 48]. In self-driving, IL has been used as a pre-training phase improvesample efficiency [49]. Recent work augments open-loop IL with RL [1, 50, 51] to learn more robustmodels. While promising, the open-loop nature of BC leaves the policy susceptible to distributionshift. In this work, we explore a holistic closed-loop IL + RL method for traffic simulation.Long-tail Scenarios: Real data can be curated [1, 52, 53] for more interesting scenarios, but col-lecting these at scale can be unsafe and expensive. Alternatively, scenarios can be generated bymaximizing an adversarial objective w.r.t. to the ego [54, 55, 56], but incorporating factors likediversity for training scenarios is still an open problem. In this work, we use knowledge-basedapproaches [57, 58] to guide generation towards a large variety of difficult but realistic scenarios.3 Learning Infraction-free Human-like Traffic AgentsTo learn realistic infraction-free agents, we propose a unified learning objective to match expertdemonstrations under an infraction-based constraint. We show how our formulation naturally givesrise to a joint closed-loop IL + RL approach which allows learning from both offline collected humandriving data when possible, and additional simulated long-tail scenarios containing rich interactionsthat would otherwise be difficult or impossible to collect in the real world.3.1 PreliminariesWe model multi-agent traffic simulation as a Markov Decision Process M= (S;A;R;P; )withstate space, action space, reward function, transition dynamics, and discount factor respectively. 
Asour focus is traffic simulation where we have access to all ground truth states, we opt for a fullyobservable and centralized multi-agent formulation where a single model jointly controls all agents.This enables efficient inference by sharing computation2, and easier interaction modeling.State, action and policy: We define the state s=fs(1);:::;s(N);mg2S to be the joint states ofNagents where Nmay vary across different scenarios, as well as an HD map mwhich captures theroad and lane topology. We parameterize the state of the i-th agents(i)with its position, heading,and velocity over the past Hhistory timesteps. The state also captures 2D bounding boxes for eachagent. Likewise, a=fa(1);:::;a(N)g2A is the joint action which contains the actions taken byall the agents. The i-th agent’s action a(i)is parameterized by its acceleration and steering angle.Agents are controlled by a single centralized policy (ajs)which maps the joint state to joint action.Trajectories and dynamics: We define a trajectory 0:T= (s0;a0;:::; sT1;aT1;sT)as a se-quence of state action transitions of length Tfor all agents. We use the kinematic bicycle model [59]as a simple but realistic model of transition dynamics P(st+1jst;at)for each agent. Trajectoriescan be sampled by first sampling from some initial state distribution 0before unrolling a policy through the transition dynamics, i.e. P() =0(s0)QT1t=0(atjst)P(st+1jst;at):Reward: LetR(i)(s;a(i))be a per-agent reward which is specific for the i-th agent, but dependenton the state of all agents, to model interactions such as collision. The joint reward is then R(s;a) =PNiR(i)(s;a(i)), withR() =PT1t=0tR(st;at)as the-discounted return of a trajectory.Policy learning: Both imitation learning (IL) and reinforcement learning (RL) can be describedin this framework. IL can be described as an f-divergence minimization problem: =arg minDfP()kPE()wherePEis the expert-induced distribution. RL on the other handaims to find the policy which maximizes the expected reward = arg maxEP[R()].2In our experiments, our model easily scales to 50 agents and a map ROI of 1000m400mper simulation.33.2 LearningTo learn a multiagent traffic policy that is as human-like as possible while avoiding infractions, weconsider the reverse KL divergence to the expert distribution with an infraction-based constraintarg minDKLP()kPE()s.t.EP[R()]0(1)R(i)(s;a(i)) =1if infraction0 otherwise,(2)whereR(i)is a per-agent reward function that penalizes any infractions (collision and off-roadevents). For a rich learning environment, we consider both a dataset Dof nominal expert trajec-toriesEPEcollected by driving in the real world, and additional simulated long-tail scenarios .Unlike real world logs, these scenarios contain what we denote as hero agents, which induce in-teresting interactions like sudden cut-ins, etc. (details in Section 3.4). More precisely, let beour learner policy. Let sS0S0be the initial state sampled from the long-tail distribution and Ss0represent the policy of the hero agent. 
The overall multiagent policy is given as

    π̄(a^(i)_t | s_t) = π_{s_0^S}(a^(i)_t | s_t)   if agent i is a hero agent,
    π̄(a^(i)_t | s_t) = π(a^(i)_t | s_t)            otherwise.                         (3)

The overall initial state distribution is then given as ρ_0 = (1 − α) ρ_0^D + α ρ_0^S, where ρ_0^D corresponds to the offline nominal distribution, and α ∈ [0, 1] is a hyperparameter that balances the mixture. Taking the Lagrangian of Equation 1 decomposes the objective into an IL and an RL component,

    L = E_{P_π}[ −log P_E(τ) − λ R(τ) ] − H(π) = L_IL + λ L_RL − H(π),                 (4)

where λ is a hyperparameter balancing the two terms, and H(π) is an additional entropy regularization term^3 [26]. Notably, we optimize IL and RL jointly in a closed-loop manner, as the expectation is taken with respect to the on-policy distribution P_π(τ). Compared to open-loop behavior cloning, the closed-loop IL component allows the model to experience states induced by its own policy rather than only the expert distribution, increasing its robustness to distribution shift. Furthermore, while the additional reward constraint may not change the optimal solution of the unconstrained problem (the expert distribution may be infraction-free), it can provide additional learning signal through RL.
The RL component E_{P_π}[−R(τ)] can be optimized using standard RL techniques and exploits both offline-collected nominal scenarios and simulated long-tail scenarios containing rich interactions. However, the imitation component L_IL is only well-defined when expert demonstrations are available and thus only applied to nominal data. We start from an initial state s_0^E ∼ ρ_0^D and have the policy π control all agents in closed-loop simulation. The loss is the distance between the ground-truth and policy-induced trajectory^4:

    L_IL = E_{τ_E ∼ D}[ E_{P_π(τ | s_0^E)} D(τ_E, τ) ].                                (5)

It is difficult to obtain accurate action labels for human driving data in practice, so we only consider states in our loss, i.e. D(τ_E, τ) = Σ_{t=1}^T d(s_t^E, s_t), where d is a distance function (e.g. Huber).
Optimization: To optimize Equation 4, we first note that the L_IL component is differentiable by using the reparameterization trick [60] when sampling from the policy^5 and differentiating through the transition dynamics (kinematic bicycle model). We refer the reader to the appendix for more details. To optimize the L_RL component, we design a centralized and fully observable variant of PPO [36]. While it is possible to directly optimize the policy with the overall scene reward R(s, a) = Σ_{i=1}^N R^(i)(s, a^(i)), we instead optimize each agent individually with their respective individual reward R^(i)(s, a^(i)). While this factorized approach may ignore second-order interaction effects, it considerably simplifies the credit assignment problem, leading to more efficient learning. More precisely, we compute factorized value targets V^(i) = Σ_{t=0}^T γ^t R^(i)(s_t, a^(i)_t), and the factorized PPO policy loss is given as L_policy = Σ_{i=1}^N min(r^(i) A^(i), clip(r^(i), 1 − ε, 1 + ε) A^(i)), where the probability ratio is factorized, i.e. r^(i) = π(a^(i) | s, m) / π_old(a^(i) | s, m), and A^(i) is a factorized GAE [61] estimate.
Footnotes: 3. The causal entropy term is included as an entropy regularizer in some learning algorithms such as PPO [36]. In our setting, we empirically found that it was not necessary to include. 4. As we do not have access to P_E directly to query log-likelihood, using a distance is essentially making the assumption that P_E(τ) ∝ exp(−D(τ_E, τ)). 5. We found that directly using the mean action provides good results without the need for sampling.
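To make the factorized policy loss above concrete, a minimal sketch is given below; the tensor shapes, the PyTorch framing, the function name `factorized_ppo_loss`, and the sign convention (returning the negated clipped surrogate as a loss to minimize) are illustrative assumptions rather than the paper's exact implementation.

```python
# Minimal sketch of a factorized, clipped PPO policy loss, assuming per-agent
# action log-probabilities and per-agent GAE advantages are already available.
import torch


def factorized_ppo_loss(log_probs: torch.Tensor,
                        old_log_probs: torch.Tensor,
                        advantages: torch.Tensor,
                        clip_eps: float = 0.2) -> torch.Tensor:
    """log_probs, old_log_probs, advantages: [batch, num_agents] tensors.

    Column i holds agent i's action log-probability and its factorized
    advantage estimate A^(i); the loss is summed over agents so that credit
    assignment stays per-agent rather than scene-level.
    """
    ratio = torch.exp(log_probs - old_log_probs.detach())            # r^(i)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the clipped surrogate; return its negation as a loss,
    # summed over agents and averaged over the batch.
    return -(torch.min(unclipped, clipped).sum(dim=-1)).mean()


if __name__ == "__main__":
    B, N = 4, 3  # batch of 4 transitions, 3 agents
    lp = torch.randn(B, N, requires_grad=True)
    old_lp = lp.detach() + 0.01 * torch.randn(B, N)
    adv = torch.randn(B, N)
    loss = factorized_ppo_loss(lp, old_lp, adv)
    loss.backward()
    print(float(loss))
```

Summing over the agent dimension, rather than pooling a single scene-level reward, is what keeps credit assignment per-agent, mirroring the factorization described above.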
More details can be found in the appendix.3.3 Model ArchitectureAgent States History Encoder Lane Graph Encoder Interaction Encoder Lane Graph Agent actions Figure 2: Our multiagent policy architecture.The value network architecture is the same butregresses value targets instead.Our traffic model architecture uses commonideas from SOTA traffic agent motion forecastingliterature in order to extract context and map fea-tures and predict agent actions (Figure 2). Recallthat a state s=fs(1);:::;s(N);mgconsists ofeach individual agent’s states s(i)that contain theagent’s kinematic state over a history horizon H,and an HD map m. From each agent’s state his-tory, a shared 1D CNN and GRU are used to ex-tract agent history context features h(i)a=f(s(i)).At the same time, a GNN is used to extract mapfeatures from a lane graph representation of themap inputhm=g(m). A HeteroGNN [62] then jointly fuses all agent context features and mapfeatures before a shared MLP decodes actions for each agent independently.fh(1);:::h(N)g=HeteroGNN (fh(1)a;:::;h(N)ag;hm) (6)((i);(i)) =MLP(h(i)): (7)We use independent normal distributions to represent the joint agent policy, i.e. (a(i)js) =N((i);(i)), and thus(ajs) =QNi=1(a(i)js). Note that agents are only independent condi-tional on their shared past context, and thus important interactive reasoning is still captured. Ourvalue model uses the same architecture but does not share parameters with the policy; we computefhv0;:::;hvNgin a similar fashion, and decode per-agent value estimates ^V(i)=MLPv(h(i)).3.4 Simulated Long-tail ScenariosNominal driving logs can be monotonous and provide weak learning signal when repeatedly used fortraining. In reality, most traffic infractions can be attributed to rare and long-tail scenarios belongingto a handful of scenario families [63] which can be difficult and dangerous to collect from the realworld at scale. In this work, we procedurally generate long-tail scenarios to supplement nominal logsfor training and testing. Following the self-driving industry standard, we use logical scenarios [64,57] which vary in the behavioral patterns of particular hero agents with respect to an ego agent(e.g. cut-in, hard-braking, merging, etc.). Designed by expert safety engineers, each logical scenariois parameterized by 2which controls lower-level aspects of the scenario such as behavioralcharacteristics of the hero agent (e.g. time-to-collision or distance triggers, aggressiveness, etc.),exact initial placement and kinematic states, and geolocation. A concrete scenario can then beprocedurally generated in an automated fashion by sampling a logical scenario and correspondingparameters. While these scenarios cannot be used for imitation as they are simulated and do nothave associated human demonstrations, they provide a rich reinforcement learning signal due to theinteresting and rare interactions induced by the hero agents.4 ExperimentsScenario sets: Our experiments use two datasets that represent nominal and long-tail scenariosrespectively. The N OMINAL dataset consists of a set of highway logs which capture varying trafficdensities and road topologies while containing expert demonstrations. The dataset consists of 46555 10 20 40FDE (m) 0.010.10Collision (%) Nominal0.2 0.4 0.6Accel JSD (nats) 0.010.10Nominal5 10 20 40FDE (m) 0.040.060.10Long-tail0.2 0.4 0.6Accel JSD (nats) 0.040.060.10Long-tailBC IL RL RL-Shaped BC+RL RTR (ours)Figure 3: Metrics (lower is better) on held-out nominal and long-tail scenarios. 
Pareto frontier of baselines is shaded; RTR achieves the best tradeoff between infraction and other realism metrics.
Figure 4: Qualitative examples comparing the baseline IL model (left) and RTR (right). Scenarios with hero agents (blue) are from the long-tail set. All other agents are controlled; pink is used for visual emphasis. RTR avoids infractions while maintaining diverse, human-like driving behavior.
snippets for training and 115 for testing, where each snippet lasts for 20 seconds. We use LONGTAIL to denote the scenario set generated using the process outlined in Section 3.4, which contains rare actor maneuvers like sudden cut-ins. We use 25 logical scenarios to generate a total of 333 concrete scenarios, where 167 concrete scenarios are used for training and 166 are held out for evaluation. This evaluation set is held out on the parameter level and measures in-distribution generalization. We also evaluate on an additional set of held-out logical scenarios to measure out-of-distribution generalization, with more details in Section 4.1.
Metrics: We evaluate our traffic models' ability to 1) match human-like driving and 2) avoid infractions. For the former, we measure similarity to the demonstration data by computing the final displacement error (FDE) [33, 30], which measures the L2 distance between the agent's simulated and ground truth (GT) position after 5 seconds. Furthermore, we use the Jensen-Shannon Divergence (JSD) [33, 31] between histograms of scenario features (agent acceleration) in order to measure distributional realism. Finally, to measure infraction rates, we consider collision and driving off-road. We use bootstrap resampling over evaluation snippets to compute uncertainty estimates. Results with more extensive metrics (and their definitions) can be found in the appendix.
4.1 Benchmarking Traffic Models
Comparison to state-of-the-art: We evaluate RTR and several baselines on both nominal and long-tail scenarios. For comparability, we use the same input representation and model architecture as described in Section 3.3 for all methods. Our first two baselines are representative of state-of-the-art imitation learning approaches for traffic simulation. BC is our single-step behavior cloning baseline following [19]. The IL baseline is trained using closed-loop policy unrolling [30, 31]. Next, RL is trained using our proposed factorized version of PPO [36] with the reward in Equation 20. The RL-Shaped baseline includes an additional reward for driving at the speed limit to encourage more human-like driving. Finally, BC+RL is an RL-augmented BC baseline following [1].
Figure 6: IL (top), RTR (bottom) on an out-of-distribution scenario where a hero agent (blue) comes to a complete stop on the highway.
Table 1: Results on out-of-distribution long-tail set.
  Method | Col. (%)  | Off. (%)
  IL     | 11.8±2.1  | 1.0±0.1
  RTR    | 5.0±1.4   | 0.3±0.1
[Figure 5 panels: histograms of acceleration (m/s²) for the RL policy and for RTR (ours), each overlaid on the demonstration distribution.]
Figure 5: RL policy naively decelerates to avoid infractions. RTR learns to avoid collision more naturally without slowing down.
Figures 3 and 4 show the results; a full table can be found in the appendix. Firstly, the BC model achieves poor realism because it suffers from distribution shift during closed-loop evaluation as it encounters states unseen during training due to compounding error. Next, we see RL achieves low infraction rates but results in unhuman-like driving (Figure 5). This is because it is difficult for reward alone to capture realistic driving.
Efforts in rewardshaping result in improvements but are ultimatelystill insufficient. We see BC+RL improves upon BCinfractions but still lacks realism. This is because BC is an open-loop objective and only providessignal in expert states, while only the RL signal is present in non-expert states. Thus, the policystill suffers from compounding error with respect to imitation. On the other hand, closed-loop ILperforms better as it is more robust to compounding error, but still struggles on the long-tail scenarioset without explicit supervision. Finally, the holistic closed-loop IL and RL approach of RTR im-proves infraction rates while maintaining reconstruction and JSD metrics. We see RTR outperformseven pure RL in terms of infraction rate on long-tail scenarios, suggesting that including long-tailscenarios during training can help the model generalize to held-out evaluation long-tail scenarios.Out-of-distribution generalization: Recall from Section 3.4 that logical scenarios define a fam-ily of scenarios and concrete scenarios define variations within a family. While we have evaluatedin-distribution generalization by using held-out concrete scenarios, we further evaluate on held-outlogical scenarios . We use 11 held-out logical scenarios with new map topologies and behavioralpatterns to procedurally generate an additional out-of-distribution set consisting of 84 concrete sce-narios. Our results show that RTR generalizes to this set better than baselines (Figure 6, Table 1).4.2 Downstream EvaluationMethod FDE (m) CTE (m)BC 2.440.05 0.900.04IL 1.750.06 0.280.01RL 15.421.21 0.320.02RL-Shp 6.660.26 0.330.01BC+RL 9.060.50 0.420.03RTR 1.580.05 0.270.03Table 2: Prediction model trainedon synthetic, evaluated on real.One downstream application of traffic simulation is generatingsynthetic data for training autonomy models. We evaluate ifthe improved realism of RTR transfers in this context. Eachmodel is used to generate a synthetic dataset of 589 scenarioswhich we use to train a SOTA prediction model [62] beforeevaluating its performance on held-out real data. Besides FDE,the cross-track error (CTE) of predicted trajectories projectedonto the GT are used as prediction metrics. More experimentdetails can be found in the appendix. Table 2 shows that usingRTR to generate training data results in the best predictionmodel. This provides evidence that RTR has learned morerealistic behavior and has a lower domain gap compared to baselines, showing that our approach canimprove the application of traffic simulation in developing autonomous vehicles.4.3 Additional AnalysisLong-tail scenarios: We evaluate our approach of using procedurally generated scenarios againstthe alternative of mining hard scenarios from data [1, 53] by curating a set of logs from N OMINALthat the IL model commits an infraction on. Figure 7 shows that using only curated scenarios doesnot transfer well to the long-tail set, and in fact introduces a regression in the nominal scenarios, sug-70.004 0.005Nominal Col. (%) 0.040.060.08Long-tail Col. (%) 5.0 5.5 6.0FDE (m) 0.15 0.16Accel. JSD (nats) Nom Cur LT Nom+Cur Nom+LT (ours)Figure 7: Using both nominal and long-tail yieldsthe best tradeoff compared to baselines.0 1000 2000 3000Step0.02.04.06.0Collision (%) Factorized (ours)Unfactorized0 1000 2000 3000Step0.00.51.01.52.02.5Offroad (%) Factorized (ours)UnfactorizedFigure 8: Our factorized PPO vs. standard PPOwhich uses a single scene-level reward.gesting the model is overfitting to the curated scenarios. 
Up-sampling curated scenarios (Nom+Cur) also fails – relying purely on offline data may require prohibitively larger-scale data collection.
Factorized multiagent RL: To ablate our factorized per-agent approach to multiagent PPO, we compare to a standard PPO implementation where the scene-level reward R(s, a) is used as supervision for the joint policy rather than each individual agent reward R^(i)(s, a^(i)). Figure 8 shows that the factorized loss outperforms the alternative, likely due to the fact that multiagent credit assignment is extremely difficult when using the scene-level reward, leading to poor sample efficiency.
Table 3: Balancing realism and infraction avoidance. (Rows sweep the loss weight λ and scenario mixture α from Section 3.2.)
  λ    | α   | Nominal Col. (%) | Nominal FDE (m) | Long-tail Col. (%)
  0.0  | 0.0 | 0.89±0.39        | 4.50±0.24       | 12.13±2.44
  1.0  | 0.5 | 0.38±0.20        | 5.50±0.24       | 3.61±1.35
  5.0  | 0.5 | 0.38±0.20        | 5.16±0.28       | 3.61±1.35
  10   | 0.5 | 0.52±0.17        | 5.10±0.20       | 3.82±1.12
  5.0  | 0.3 | 0.35±0.18        | 5.20±0.21       | 4.12±0.90
  5.0  | 0.7 | 0.56±0.21        | 6.10±0.23       | 3.51±1.32
Table 4: Comparing different alternatives to our proposed IL loss.
  Method | Nominal Col. (%) | Nominal FDE (m) | Long-tail Col. (%)
  KL-L   | 0.42±0.20        | 25.68±1.14      | 5.08±1.21
  KL-R   | 0.38±0.22        | 15.19±0.99      | 4.97±1.29
  RTR    | 0.38±0.20        | 5.16±0.28       | 3.61±1.35
Balancing the trade-off: Recall from Section 3.2 that RTR balances human-like driving and avoiding infractions by weighting the IL vs. RL loss with λ and nominal vs. long-tail training with α (e.g., λ = α = 0 is the IL baseline). We found increasing the relative weight of RL and long-tail scenarios generally improves infraction avoidance, while increasing the relative weight of IL and nominal training generally improves other realism metrics, as expected (Table 3). However, RTR is not particularly sensitive; many configurations are within noise and all configurations dominate the baseline Pareto frontier.
Imitation learning signal: We consider the alternative of using a frozen pretrained IL policy as regularization [44] instead of our approach of using offline data. A frozen policy potentially provides more accurate closed-loop supervision, as a Euclidean-distance-based loss with demonstration data may be inaccurate if the rollout has diverged. We evaluate two baselines: KL Reward and KL Loss, where the KL between the current and frozen policy is added to the reward or loss respectively. Our results in Table 4 show that using demonstration data is still the most performant, suggesting that the inaccuracy from an imperfect IL policy is larger than that of using a distance-based loss.
5 Conclusion and Limitations
We have presented RTR, a method for learning realistic traffic agents with closed-loop IL+RL using both real-world logs and procedurally generated long-tail scenarios. While we have shown substantial improvements over baselines in simulation realism and downstream tasks, we recognize some existing limitations. Firstly, while using logical scenarios as a framework for procedural generation exploits human prior knowledge and is currently an industry standard, manually designing scenarios can be a difficult process, and ensuring adequate coverage of all possible scenarios is an open problem. Exploring automated alternatives like adversarial approaches to scenario generation would be an interesting future direction. Secondly, while we have explored the downstream task of generating an offline dataset to train prediction models, other applications like training and testing the entire autonomy stack end-to-end in closed-loop is a promising future direction.
Acknowledgments
The authors would like to thank Wenyuan Zeng for their insightful discussions throughout the project.
The authors would also like to thank the anonymous reviewers for their helpful commentsand suggestions to improve the paper.References[1] Y . Lu, J. Fu, G. Tucker, X. Pan, E. Bronstein, B. Roelofs, B. Sapp, B. White, A. Faust, S. White-son, et al. Imitation is not enough: Robustifying imitation with reinforcement learning forchallenging driving scenarios. arXiv preprint arXiv:2212.11419 , 2022.[2] E. Vinitsky, N. Lichtl ́e, X. Yang, B. Amos, and J. Foerster. Nocturne: a scalable drivingbenchmark for bringing multi-agent learning one step closer to the real world. arXiv preprintarXiv:2206.09889 , 2022.[3] D. Chen, V . Koltun, and P. Kr ̈ahenb ̈uhl. Learning to drive from a world on rails. In Proceedingsof the IEEE/CVF International Conference on Computer Vision , pages 15590–15599, 2021.[4] M. Toromanoff, E. Wirbel, and F. Moutarde. End-to-end model-free reinforcement learningfor urban driving using implicit affordances. In Proceedings of the IEEE/CVF conference oncomputer vision and pattern recognition , pages 7153–7162, 2020.[5] C. Zhang, R. Guo, W. Zeng, Y . Xiong, B. Dai, R. Hu, M. Ren, and R. Urtasun. Rethinkingclosed-loop training for autonomous driving. In Computer Vision–ECCV 2022: 17th EuropeanConference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIX , pages 264–282.Springer, 2022.[6] X. Pan, Y . You, Z. Wang, and C. Lu. Virtual to real reinforcement learning for autonomousdriving. arXiv preprint arXiv:1704.03952 , 2017.[7] S. Shalev-Shwartz, S. Shammah, and A. Shashua. Safe, multi-agent, reinforcement learningfor autonomous driving. arXiv preprint arXiv:1610.03295 , 2016.[8] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y .-P. Fl ̈otter ̈od, R. Hilbrich, L. L ̈ucken,J. Rummel, P. Wagner, and E. Wießner. Microscopic traffic simulation using sumo. In 201821st international conference on intelligent transportation systems (ITSC) , pages 2575–2582.IEEE, 2018.[9] M. Balmer, M. Rieser, K. Meister, D. Charypar, N. Lefebvre, and K. Nagel. Matsim-t: Archi-tecture and simulation times. In Multi-agent systems for traffic and transportation engineering ,pages 57–78. IGI Global, 2009.[10] J. Casas, J. L. Ferrer, D. Garcia, J. Perarnau, and A. Torday. Traffic simulation with aimsun.Fundamentals of traffic simulation , pages 173–232, 2010.[11] M. Ben-Akiva, H. N. Koutsopoulos, T. Toledo, Q. Yang, C. F. Choudhury, C. Antoniou, andR. Balakrishna. Traffic simulation with mitsimlab. Fundamentals of traffic simulation , pages233–268, 2010.[12] M. Treiber, A. Hennecke, and D. Helbing. Congested traffic states in empirical observationsand microscopic simulations. Physical Review E , 62(2):1805–1824, aug 2000. doi:10.1103/physreve.62.1805. URL https://doi.org/10.1103%2Fphysreve.62.1805 .[13] K. Kreutz and J. Eggert. Analysis of the generalized intelligent driver model (gidm) for un-controlled intersections. In 2021 IEEE International Intelligent Transportation Systems Con-ference (ITSC) , pages 3223–3230, 2021. doi:10.1109/ITSC48978.2021.9564423.[14] A. Kesting. Mobil : General lane-changing model for car-following models. 2007.9[15] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances inneural information processing systems , 1, 1988.[16] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel,M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXivpreprint arXiv:1604.07316 , 2016.[17] F. Codevilla, M. M ̈uller, A. L ́opez, V . Koltun, and A. Dosovitskiy. 
End-to-end driving via con-ditional imitation learning. In 2018 IEEE international conference on robotics and automation(ICRA) , pages 4693–4700. IEEE, 2018.[18] A. Prakash, K. Chitta, and A. Geiger. Multi-modal fusion transformer for end-to-end au-tonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat-tern Recognition , pages 7077–7087, 2021.[19] L. Bergamini, Y . Ye, O. Scheel, L. Chen, C. Hu, L. Del Pero, B. Osi ́nski, H. Grimmett, andP. Ondruska. Simnet: Learning reactive self-driving simulations from real-world observations.In2021 IEEE International Conference on Robotics and Automation (ICRA) , pages 5119–5125. IEEE, 2021.[20] L. Feng, Q. Li, Z. Peng, S. Tan, and B. Zhou. Trafficgen: Learning to generate diverse andrealistic traffic scenarios. In 2023 IEEE International Conference on Robotics and Automation(ICRA) , pages 3567–3575. IEEE, 2023.[21] D. Xu, Y . Chen, B. Ivanovic, and M. Pavone. Bits: Bi-level imitation for traffic simulation. In2023 IEEE International Conference on Robotics and Automation (ICRA) , pages 2929–2936.IEEE, 2023.[22] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predic-tion to no-regret online learning. In Proceedings of the fourteenth international conferenceon artificial intelligence and statistics , pages 627–635. JMLR Workshop and Conference Pro-ceedings, 2011.[23] M. Bansal, A. Krizhevsky, and A. Ogale. Chauffeurnet: Learning to drive by imitating the bestand synthesizing the worst. arXiv preprint arXiv:1812.03079 , 2018.[24] K. Brantley, W. Sun, and M. Henaff. Disagreement-regularized imitation learning. In Interna-tional Conference on Learning Representations , 2020.[25] M. Henaff, A. Canziani, and Y . LeCun. Model-predictive policy learning with uncertaintyregularization for driving in dense traffic. arXiv preprint arXiv:1901.02705 , 2019.[26] J. Ho and S. Ermon. Generative adversarial imitation learning. Advances in neural informationprocessing systems , 29, 2016.[27] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adversarial inverse reinforcementlearning. arXiv preprint arXiv:1710.11248 , 2017.[28] S. K. S. Ghasemipour, R. Zemel, and S. Gu. A divergence minimization perspective on imita-tion learning methods. In Conference on Robot Learning , pages 1259–1277. PMLR, 2020.[29] L. Ke, S. Choudhury, M. Barnes, W. Sun, G. Lee, and S. Srinivasa. Imitation learning asf-divergence minimization. In Algorithmic Foundations of Robotics XIV: Proceedings of theFourteenth Workshop on the Algorithmic Foundations of Robotics 14 , pages 313–329. Springer,2021.[30] S. Suo, S. Regalado, S. Casas, and R. Urtasun. Trafficsim: Learning to simulate realistic multi-agent behaviors. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition (CVPR) , pages 10400–10409, June 2021.10[31] M. Igl, D. Kim, A. Kuefler, P. Mougin, P. Shah, K. Shiarlis, D. Anguelov, M. Palatucci,B. White, and S. Whiteson. Symphony: Learning realistic and diverse agents for autonomousdriving simulation, 2022. URL https://arxiv.org/abs/2205.03195 .[32] A. ́Scibior, V . Lioutas, D. Reda, P. Bateni, and F. Wood. Imagining the road ahead: Multi-agent trajectory prediction via differentiable simulation. In 2021 IEEE International IntelligentTransportation Systems Conference (ITSC) , pages 720–725. IEEE, 2021.[33] S. Suo, K. Wong, J. Xu, J. Tu, A. Cui, S. Casas, and R. Urtasun. Mixsim: A hierarchicalframework for mixed reality traffic simulation. 
In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition (CVPR) , pages 9622–9631, June 2023.[34] V . Lioutas, A. Scibior, and F. Wood. Titrated: Learned human driving behavior without infrac-tions via amortized inference. Transactions on Machine Learning Research , 2022.[35] Z. Zhong, D. Rempe, D. Xu, Y . Chen, S. Veer, T. Che, B. Ray, and M. Pavone. Guided condi-tional diffusion for controllable traffic simulation. In 2023 IEEE International Conference onRobotics and Automation (ICRA) , pages 3560–3566. IEEE, 2023.[36] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[37] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V . Kumar, H. Zhu, A. Gupta,P. Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905 ,2018.[38] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y . Tassa, D. Silver, and D. Wierstra.Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 , 2015.[39] I. Uchendu, T. Xiao, Y . Lu, B. Zhu, M. Yan, J. Simon, M. Bennice, C. Fu, C. Ma, J. Jiao, et al.Jump-start reinforcement learning. arXiv preprint arXiv:2204.02372 , 2022.[40] A. Nair, A. Gupta, M. Dalal, and S. Levine. Awac: Accelerating online reinforcement learningwith offline datasets. arXiv preprint arXiv:2006.09359 , 2020.[41] N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. V oss, A. Radford, D. Amodei, andP. F. Christiano. Learning to summarize with human feedback. Advances in Neural InformationProcessing Systems , 33:3008–3021, 2020.[42] Y . Lu, K. Hausman, Y . Chebotar, M. Yan, E. Jang, A. Herzog, T. Xiao, A. Irpan, M. Khansari,D. Kalashnikov, et al. Aw-opt: Learning robotic skills with imitation and reinforcement atscale. In Conference on Robot Learning , pages 1078–1088. PMLR, 2022.[43] Y . Zhu, Z. Wang, J. Merel, A. Rusu, T. Erez, S. Cabi, S. Tunyasuvunakool, J. Kram ́ar, R. Had-sell, N. de Freitas, et al. Reinforcement and imitation learning for diverse visuomotor skills.arXiv preprint arXiv:1802.09564 , 2018.[44] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback.arXiv preprint arXiv:2203.02155 , 2022.[45] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan,A. Sendonaris, I. Osband, et al. Deep q-learning from demonstrations. In Proceedings ofthe AAAI Conference on Artificial Intelligence , volume 32, 2018.[46] M. Vecerik, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Roth ̈orl, T. Lampe,and M. Riedmiller. Leveraging demonstrations for deep reinforcement learning on roboticsproblems with sparse rewards. arXiv preprint arXiv:1707.08817 , 2017.11[47] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative q-learning for offline reinforce-ment learning. Advances in Neural Information Processing Systems , 33:1179–1191, 2020.[48] S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning. Advancesin neural information processing systems , 34:20132–20145, 2021.[49] X. Liang, T. Wang, L. Yang, and E. Xing. Cirl: Controllable imitative reinforcement learningfor vision-based self-driving. In Proceedings of the European conference on computer vision(ECCV) , pages 584–599, 2018.[50] A. Kamenev, L. Wang, O. B. Bohan, I. Kulkarni, B. Kartal, A. Molchanov, S. Birchfield,D. Nist ́er, and N. 
Smolyanskiy. Predictionnet: Real-time joint probabilistic traffic predictionfor planning, control, and simulation. In 2022 International Conference on Robotics and Au-tomation (ICRA) , pages 8936–8942. IEEE, 2022.[51] Q. Zhang, Y . Gao, Y . Zhang, Y . Guo, D. Ding, Y . Wang, P. Sun, and D. Zhao. Trajgen: Generat-ing realistic and diverse trajectories with reactive and feasible agent behaviors for autonomousdriving. IEEE Transactions on Intelligent Transportation Systems , 23(12):24474–24487, 2022.[52] N. Webb, D. Smith, C. Ludwick, T. Victor, Q. Hommes, F. Favaro, G. Ivanov, andT. Daniel. Waymo’s safety methodologies and safety readiness determinations. arXiv preprintarXiv:2011.00054 , 2020.[53] E. Bronstein, S. Srinivasan, S. Paul, A. Sinha, M. O’Kelly, P. Nikdel, and S. Whiteson. Em-bedding synthetic off-policy experience for autonomous driving via zero-shot curricula. arXivpreprint arXiv:2212.01375 , 2022.[54] D. Rempe, J. Philion, L. J. Guibas, S. Fidler, and O. Litany. Generating useful accident-pronedriving scenarios via a learned traffic prior. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 17305–17315, 2022.[55] N. Hanselmann, K. Renz, K. Chitta, A. Bhattacharyya, and A. Geiger. King: Generatingsafety-critical driving scenarios for robust imitation via kinematics gradients. In EuropeanConference on Computer Vision , pages 335–352. Springer, 2022.[56] J. Wang, A. Pun, J. Tu, S. Manivasagam, A. Sadat, S. Casas, M. Ren, and R. Urtasun. Advsim:Generating safety-critical scenarios for self-driving vehicles. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 9909–9918, 2021.[57] H. Weber, J. Bock, J. Klimke, C. Roesener, J. Hiller, R. Krajewski, A. Zlocki, and L. Eckstein.A framework for definition of logical scenarios for safety assurance of automated driving.Traffic injury prevention , 20(sup1):S65–S70, 2019.[58] T. Menzel, G. Bagschik, and M. Maurer. Scenarios for development, test and validation ofautomated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV) , pages 1821–1827.IEEE, 2018.[59] S. M. LaValle. Planning algorithms . Cambridge university press, 2006.[60] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprintarXiv:1312.6114 , 2013.[61] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuouscontrol using generalized advantage estimation. arXiv preprint arXiv:1506.02438 , 2015.[62] A. Cui, S. Casas, K. Wong, S. Suo, and R. Urtasun. Gorela: Go relative for viewpoint-invariantmotion forecasting. arXiv preprint arXiv:2211.02545 , 2022.12[63] W. G. Najm, J. D. Smith, M. Yanagisawa, et al. Pre-crash scenario typology for crash avoidanceresearch. Technical report, United States. National Highway Traffic Safety Administration,2007.[64] T. Menzel, G. Bagschik, and M. Maurer. Scenarios for development, test and validation ofautomated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV) , pages 1821–1827.IEEE, 2018.[65] S. Casas, C. Gulino, S. Suo, K. Luo, R. Liao, and R. Urtasun. Implicit latent variable modelfor scene-consistent motion forecasting. In Computer Vision–ECCV 2020: 16th EuropeanConference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII 16 , pages 624–641.Springer, 2020.[66] M. Liang, B. Yang, R. Hu, Y . Chen, R. Liao, S. Feng, and R. Urtasun. Learning lane graphrepresentations for motion forecasting. In A. Vedaldi, H. Bischof, T. Brox, and J. 
Frahm,editors, Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August23-28, 2020, Proceedings, Part II , volume 12347 of Lecture Notes in Computer Science , pages541–556. Springer, 2020. doi:10.1007/978-3-030-58536-5 32. URL https://doi.org/10.1007/978-3-030-58536-5_32 .[67] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprintarXiv:1711.05101 , 2017.13A Additional ResultsMetrics: In order to measure the realism of our traffic models, we use a set of metrics whichevaluate both the traffic models’ ability to match human demonstration data in the nominal scenariosand avoid infractions in both nominal and simulated long-tail scenarios.•Reconstruction: In nominal scenarios where expert demonstrations exist, we consider a set ofmetrics which evaluate how close a traffic model’s simulation is to the real world conditionedon the same initial condition. We measure the final displacement error ( FDE ) [65], defined asthe L2 distance between an agent’s position in a simulated scenario vs the ground truth scenarioafter 5s. We also measure the along-track error ( ATE ) and cross-track error ( CTE ) of an agent’ssimulated position projected onto the ground truth trajectory. This decomposition disentanglesspeed variability and lateral deviations respectively.•Distributional : While reconstruction metrics compare pairs of real and simulated logs, we cancompute distributional similarity metrics as an additional method to gauge realism. We computethe Jensen-Shannon Divergence ( JSD) [31] between histograms of scenario features to computetheir distributional similarity. Features include agent kinematics like acceleration and speed, pair-wise agent interactions like distance to lead vehicle, and map interactions like lateral deviationfrom lane centerline.•Infraction Rate : Finally, we measure the rate of traffic infractions made by agents controlled by atraffic model. Similar to prior work [30], we measure percentage of agents that end up in collisionor drive off-road . As this metric does not require ground truth scenarios for pairing or computingstatistics, it can be used in simulated long-tail scenarios that do not have ground truth.Comparison to state-of-the-art: In our main paper, we presented select results from our com-parison to state-of-the-art traffic models on both nominal and long-tail scenarios. Here, we includeadditional tradeoff plots for all metrics in Figure 9. We also include a table of detailed metrics for allmethods in Table 5. Building on our observations in the main paper, we see that RTR outperformsand expands the existing Pareto frontier on all metrics and scenario sets. IL methods achieve strongreconstruction/distributional realism metrics but suffer from high infraction rates, while RL meth-ods attain the opposite. RTR achieves the best of both worlds—a testament to its ability to learnhuman-like driving while avoiding unrealistic traffic infractions.Long-Tail Scenarios: In our main paper, we evaluated our approach of using procedurally gener-ated long-tail scenarios against the alternative of mining hard scenarios from data. Here, we includeadditional tradeoff plots for all metrics in Figure 10, with the detailed metrics in Table 6. We seethat training on both nominal and long-tail scenarios outperforms the alternatives in most cases.In addition, we present a slightly different view of Figure 3 of the main paper where the y-axis is inthe same scale in Figure 11. 
This view highlights the difference in difficulty between the different scenario sets.
Downstream Experiment: We provide additional details for our downstream experiment in Section 4.2. The evaluation data used is an additional 118 snippets held out from the nominal dataset. The hyperparameters for training the prediction model (model size, number of epochs, learning rate schedule) were tuned on the training split of the nominal dataset and kept fixed and constant when training on the datasets generated by the methods in Table 2 in order to be fair. The synthetic dataset for each method is also generated using the same 589 initial conditions to be fair. We report the minimum over modes for our multimodal prediction model. We use 4 separate checkpoints to compute the uncertainty estimates.
Distributional Realism: In Figure 12, we include additional plots showing the histograms used to compute JSD distributional realism metrics on the nominal scenario set. We can see that RL methods (RL, RL-Shaped, and BC + RL) struggle to capture human-like driving, particularly in speed and acceleration JSD, where the RL methods tend to brake more often than humans. BC exhibits slightly better results overall, but it has worse map interaction reasoning due to distribution shift from compounding errors. In contrast, RTR captures human-like driving significantly better, closely matching IL in distributional realism while also improving on its infraction rate as seen in other results.
[Figure 9 panels: collision (%), off-road (%), FDE (m), ATE (m), CTE (m), and acceleration, speed, lateral-deviation, and lead-distance JSD (nats), each plotted against long-tail collision rate for BC, IL, RL, RL-Shaped, BC+RL, and RTR (ours).]
Figure 9: Additional plots comparing infraction / realism tradeoff of RTR compared to baseline models. We see that RTR outperforms and expands the existing Pareto frontier for all metrics.
Table 5: Detailed breakdown of metrics. Metrics on the left (resp. right) are computed on nominal scenarios (resp. long-tail scenarios). IL methods achieve strong reconstruction/distributional realism metrics but suffer from high infraction rates, while RL methods attain the opposite. RTR achieves the best of both worlds, with high reconstruction/distributional realism and low infraction rates.
  Method  | Col. (%)   | Off Rd. (%) | FDE (m)    | ATE (m)    | CTE (m)   | Acc. JSD | Spd. JSD | Lat. JSD | Ld. JSD | LT Col. (%)
  BC      | 22.13±1.32 | 58.68±2.18  | 4.50±0.24  | 3.60±0.20  | 1.84±0.17 | 0.34     | 0.54     | 0.14     | 0.20    | 17.00±2.91
  IL      | 0.89±0.39  | 2.48±0.36   | 4.98±0.23  | 4.75±0.23  | 0.66±0.05 | 0.15     | 0.23     | 0.14     | 0.07    | 12.13±2.44
  RL      | 0.23±0.17  | 0.20±0.13   | 56.92±0.87 | 56.91±1.91 | 0.75±0.08 | 0.60     | 0.46     | 0.54     | 0.12    | 4.26±1.30
  RL-Shp. | 1.50±0.36  | 1.01±0.29   | 21.29±1.13 | 21.17±1.12 | 0.97±0.05 | 0.43     | 0.43     | 0.48     | 0.13    | 6.95±1.81
  BC+RL   | 3.08±0.49  | 1.88±0.32   | 47.30±0.49 | 47.26±0.50 | 1.05±0.09 | 0.62     | 0.53     | 0.49     | 0.15    | 4.46±1.31
  RTR     | 0.38±0.20  | 0.20±0.10   | 5.16±0.28  | 4.97±0.28  | 0.61±0.04 | 0.16     | 0.33     | 0.14     | 0.07    | 3.61±1.35
Qualitative Results: We include qualitative results comparing RTR against the baselines in Figures 13, 14, 15, and 16. Across fork, merge, and long-tail scenarios, we see that RTR exhibits the greatest realism of the competing methods.
B Learning
B.1 Loss Derivation
In this section, we will provide more details on the loss derivation using the Lagrangian. Recall that we begin with the following optimization problem

    arg min_π D_KL(P_π(τ) ‖ P_E(τ))   s.t.   E_{P_π}[R(τ)] ≥ 0.                          (8)

[Figure 10 panels: long-tail collision rate plotted against nominal collision (%), off-road (%), FDE (m), ATE (m), CTE (m), and speed, acceleration, lateral-deviation, and lead-distance JSD (nats), for models trained on Nom, Cur, LT, Nom+Cur, and Nom+LT (ours).]
Figure 10: Additional plots showing the tradeoff between infraction rate on the long-tail set and other realism metrics on the nominal set, for models trained on different scenario sets. We see that for most metrics, training on both nominal and long-tail scenarios obtains the best tradeoff.
Table 6: Detailed breakdown of realism and infraction metrics for training on different scenario sets.
  Train      | Col. (%)  | Off Rd. (%) | FDE (m)   | ATE (m)   | CTE (m)   | Acc. JSD | Spd. JSD | Lat. JSD | Ld. JSD | LT Col. (%)
  Nominal    | 0.38±0.20 | 0.49±0.17   | 5.05±0.25 | 4.86±0.24 | 0.65±0.04 | 0.14     | 0.14     | 0.33     | 0.07    | 9.60±2.17
  Curated    | 0.45±0.21 | 0.14±0.08   | 5.34±0.23 | 5.11±0.23 | 0.76±0.05 | 0.15     | 0.14     | 0.29     | 0.08    | 9.42±2.17
  Long-tail  | 0.58±0.23 | 0.48±0.16   | 6.26±0.38 | 6.10±0.38 | 0.56±0.04 | 0.16     | 0.14     | 0.25     | 0.07    | 4.00±1.37
  Nom. + Cur | 0.38±0.20 | 0.30±0.11   | 5.27±0.24 | 5.06±0.30 | 0.67±0.05 | 0.15     | 0.14     | 0.34     | 0.08    | 9.04±2.14
  Nom. + LT  | 0.38±0.20 | 0.20±0.10   | 5.16±0.28 | 4.97±0.28 | 0.61±0.04 | 0.16     | 0.14     | 0.33     | 0.07    | 3.61±1.35
We form the Lagrangian of the optimization problem

    L(π, λ) = D_KL(P_π(τ) ‖ P_E(τ)) + λ E_{P_π}[−R(τ)]                                    (9)
            = E_{P_π}[ log(P_π(τ) / P_E(τ)) − λ R(τ) ]                                     (10)
            = E_{P_π}[ −log P_E(τ) − λ R(τ) ] − H(π),                                      (11)

where λ is a Lagrange multiplier and

    H(π) = E_{P_π}[ −log P_π(τ) ]                                                          (12)
         = E_{P_π}[ −log ρ_0(s_0) − Σ_{t=0}^{T−1} log π(a_t | s_t) ],                      (13)

which under deterministic dynamics is the causal entropy [26]. Using the Lagrangian, the optimization problem is converted to an unconstrained problem

    π* = arg min_π max_{λ ≥ 0} L(π, λ).                                                    (14)

Equation 14 can be optimized in a number of ways, such as iteratively solving the inner maximization over λ and the outer minimization over π. We take a simplified approximate approach where we simply set λ = λ_fixed ≥ 0 as a hyperparameter, leading to what is ultimately a relaxed constraint or penalty method:

    arg min_π E_{P_π}[ −log P_E(τ) − λ_fixed R(τ) ] − H(π).                                (15)

The causal entropy term is included as an entropy regularization term in some learning algorithms such as PPO [36]. In practice, we found that it was not necessary to include.
[Figure 11 panels: collision rate vs. FDE (m) and vs. acceleration JSD (nats) on the nominal and long-tail sets, for BC, IL, RL, RL-Shaped, BC+RL, and RTR (ours).]
Figure 11: Alternative view of Figure 3, where now the y-axis is on the same scale across the different scenario sets. We see that the long-tail scenario set is significantly harder than the nominal set.
B.2 Imitation Learning Loss
Recall that the imitation learning component of the loss is given as

    L_IL = E_{τ_E ∼ D}[ E_{P_π(τ | s_0^E)} D(τ_E, τ) ]                                     (16)
         = E_{(s_0^E, ..., s_T^E) ∼ D}[ Σ_{t=1}^T d(s_t^E, s̃_t) ],                         (17)

where

    ã_t ∼ π(a | s̃_t),                                                                      (18)
    s̃_{t+1} = s̃_t + f(s̃_t, ã_t) dt.                                                       (19)

Because the dynamics function f as described in Section B.7 is differentiable, Equation 17 is completely differentiable using the reparameterization trick [60] when sampling from the policy. To compute the inner expectation in Equation 16, we simply sample a single rollout. In practice, we found that directly using the mean without sampling is also sufficient.
B.3 Reward Function
Sparse reward: Recall that we use the following reward function

    R^(i)(s, a^(i)) = −1 if an infraction occurs, and 0 otherwise.                          (20)

In our experiments, we consider collision events and driving off-road as infractions. Collisions are computed by checking for overlap between the bounding boxes of agents.
Off-road is computed bychecking if an agent’s bounding box still intersects with the road polygon.Early Termination: Note that when optimizing the reward, we apply early termination of thescenario in the event of an infraction. We treat infractions as terminal states in the MDP for a fewreasons. Regarding collision, it is unclear what the optimal behavior (or recovery) looks like after acollision. Similarly, for driving off-road, the actor is likely in a state that it is physically impossibleto recover from the real world, as an off-road event would imply the actor has driven off the shoulderinto a divider. Finally, in early experiments, we found that continuing simulation for off-road events(and not modeling any shoulders or dividers, physics of off-road driving, etc.) would slow downtraining since in early phases the policy would drive off-road very early and very severely with nohope of recovering. Resetting in this case prevents wasted simulation in very out-of-distributionstates where the policy is completely off the map, etc.17Shaped reward: For the RL-Shaped baseline, use the same reward in Equation 20 with an addi-tional term which encourages driving at the speed limit.R(i)shaped(s;a(i)) =R(i)(s;a(i)) + 0:5(C)=C (21)where=abs(velocityspeed limit )andC= 30 . For the shaped reward, we additionally terminatethe episode if C.B.4 Reinforcement Learning LossWe describe our factorized approach to multiagent PPO [36] in more detail. Starting off we computea per-agent probability ratio.r(i)=(a(i)js)old(a(i)js): (22)Our centralized value-function uses the same architecture as our policy, and computes per-agentvalue estimates ^V(i)(s). Details of the architecture are found in Section B.6. The value modelis trained using per-agent value targets, which are computed with per-agent rewards R(i)t=R(i)(st;a(i)t)Lvalue=NXi(^V(i)V(i))2(23)V(i)=TXt=0tR(i)t (24)We can obtain a per-agent GAE using the value model as well,A(i)=GAE(R(i)0;:::;R(i)T1;^V(i)(sT)) (25)The PPO policy loss is simply the sum of per-agent PPO loss,Lpolicy=NXi=1min(r(i)A(i);clip(r(i);1;1 +)A(i)) (26)Finally, the overall loss is the sum of the policy and value learning loss.LRL=Lpolicy+Lvalue(27)B.5 Input ParameterizationAgent history: Following [62], we adopt an viewpoint invariant representation of an agent’s pasttrajectory. We encode the past trajectory as a sequence of pair-wise relative positional encodingsbetween the past waypoints and the current pose. Each relative positional encoding consists of thesine and cosine of distance and heading difference of a pair of poses. See [62] for details.Lane graph: To construct our lane graph representation G= (V;E), We first obtain the lanegraph nodes by discretizing centerlines in the high-definition (HD) map into lane segments every10m. We use length, width, curvature, speed limit, and lane boundary type (e.g., solid, dashed) asnode features. Following [66], we then connect nodes with 4 different relationships: successors,predecessors, left and right neighbors.B.6 Model ArchitectureBriefly, the RTR model architecture is composed of three main building blocks: (1) context encodersfor embedding lane graph and agent history inputs; (2) interaction module for capturing scene-levelinteraction; and (3a) action decoder for parameterizing the per-agent policy and (3b) value decoderfor the value model. Note that the policy model and the value model use the same architecture, butare trained completely separately and do not share any parameters. 
Early experiments found that notsharing parameters resulted in more stable training – we hypothesize that this is likely because thisapproach prevents updates to the policy from interfering with the value function, and vice versa.18History encoder: Thehistory encoder consists of a 1D residual neural network (ResNet) followedby a gated recurrent unit (GRU) that extracts agent features h(i)a=f(s(i))from a sliding windowof past agent states s. Intuitively, the 1D CNN captures local temporal patterns, and the GRUaggregates them into a global feature.Lane graph encoder: Thelane graph encoder is a graph convolutional network (GCN) [66] thatextracts map features hm=g(m)from a given lane-graph Gof map m. We use hidden channeldimensions of [128, 128, 128, 128], layer normalization (LN), and max pooling aggregation.Interaction module: To model scene-level interaction (i.e., agent-to-agent, agent-to-map, andmap-to-map), we build a heterogeneous spatial graph G0by adding agent nodes to the original lanegraphG. Besides the original lane graph edges, we connect agent nodes to their closest lane graphnodes. All agent nodes are also fully connected to each other. We use a scene encoder parameter-ized by a heterogeneous graph neural network (HeteroGNN) [62] to process map features and agentfeatures into fused features,fh(1);:::h(N)g=HeteroGNN (fh(1)a;:::;h(N)ag;hm): (28)These fused features are then provided as input to the decoder.Action decoder: Finally, we pass the fused features into a 4-layer MLP with hidden dimensions[128, 128, 128] to predict agent’s acceleration and steering angle distributions (parameterized asNormals).((i);(i)) = MLP(h(i)) (29)(a(i)js) =N((i);(i)) (30)Value decoder: For the value model, a 4-layer MLP instead regresses a single scalar value repre-senting the value^V(i)= MLP valueh(i)value: (31)B.7 Kinematic Bicycle ModelWe use a kinematic bicycle model [59] for our environment dynamics. The bicycle model state isgiven ass= (x;y;;v ) (32)wherex;yis the position of the center of the rear axel, is the yaw, and vis the velocity. The bicyclemodel actions area= (u;) (33)whereuis the acceleration, and is the steering angle. The dynamics function _s=f(s;a)is thendefined as_x=vcos() (34)_y=vsin() (35)_=vLtan() (36)_v=u (37)whereLis wheelbase length, i.e. the distance between the rear and front axel. We can use a simplefinite difference approach to computing the next statest+1=st+f(st;at)dt (38)where dtis chosen to be 0.5 seconds in practice. We can apply the bicycle model to each agentindividually to obtain the joint state dynamics function.19Algorithm 1 RTR Closed-loop Learning1:forn= 1;;Ndo2: SetLIL 0.3: SetLRL 0.4: fork= 1;;Kdo5: Sample initial state s0(1)D0+S0.6: Generate trajectory using policy (ajs)from Equation 3 and simulator.7: ComputeLRL LRLR().8: ifinitial state of s0is from Nominal Dataset then9:LIL LIL+D(E;).10: end if11: end for12: ComputegRL rKLRLusing our factorized PPO.13: ComputegIL r1KLILusing BPTT.14: UpdatewithgRL+gILusing AdamW.15:end forB.8 Training DetailsWe use AdamW [67] as our optimizer, and decay the learning rate by a factor of 0:2every 3 epochs,and train for a total of 10 epochs. We provide additional training hyperparameters in Table 7. 
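Before summarizing the overall learning loop, a minimal sketch of the kinematic bicycle step from Section B.7 (Equations 32–38) is given below for concreteness; the (x, y, θ, v) state layout and dt = 0.5 s follow the text, while the NumPy-based vectorization over agents and the variable names are illustrative assumptions.

```python
# Minimal sketch of the kinematic bicycle step from Section B.7, applied
# independently to every agent. The (x, y, theta, v) state layout and
# dt = 0.5 s follow the text; vectorizing over agents with NumPy is an
# assumption made for illustration.
import numpy as np


def bicycle_step(state: np.ndarray, action: np.ndarray,
                 wheelbase: np.ndarray, dt: float = 0.5) -> np.ndarray:
    """state: [N, 4] array of (x, y, theta, v) per agent.
    action: [N, 2] array of (acceleration u, steering angle delta).
    wheelbase: [N] array of rear-to-front axle distances L.
    Returns the next state via a single finite-difference (Euler) update."""
    x, y, theta, v = state.T
    u, delta = action.T
    ds = np.stack([
        v * np.cos(theta),              # x_dot
        v * np.sin(theta),              # y_dot
        v / wheelbase * np.tan(delta),  # theta_dot
        u,                              # v_dot
    ], axis=-1)
    return state + ds * dt


if __name__ == "__main__":
    s = np.array([[0.0, 0.0, 0.0, 10.0]])   # one agent driving 10 m/s along x
    a = np.array([[1.0, 0.05]])              # gentle acceleration and steering
    print(bicycle_step(s, a, wheelbase=np.array([2.8])))
```

Because the update is a simple differentiable function of the state and action, the same step can be written in an autodiff framework to support the backpropagation through the dynamics used for L_IL.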
Our overall learning process is summarized in Algorithm 1.
Table 7: Training hyperparameters
  IL minibatch size: 32
  PPO batch size: 192
  PPO minibatch size: 32
  PPO num epochs: 1
  PPO clip: 0.2
  Discount factor: 0.79
  Learning rate: 0.00001
  Weight decay: 0.0001
  GAE: 1.0
  Grad clip norm: 1.0
[Figure 12 panels: per-method histograms of speed (m/s), acceleration (m/s²), lead distance (m), and lateral deviation (m) for BC, IL, RL, RL-Shaped, BC+RL, and RTR (ours), each overlaid on the demonstration distribution.]
Figure 12: Histograms of scenario features for all methods used to compute JSD distributional realism metrics. We see that BC and RL methods often struggle with capturing the data distribution compared to IL and RTR. Notably, RTR closely matches IL performance in distributional realism, while greatly improving infraction rate as seen in other results.
Figure 13 (panels (a) BC, (b) IL, (c) RL, (d) RL-Shaped, (e) BC+RL, (f) RTR (ours)): Qualitative results on a fork scenario. BC drives off the road, IL results in a collision while RL and BC+RL slow down. RL-Shaped drives straight and loses the interesting lane change behavior.
Figure 14 (panels (a)–(f) as above): Qualitative results on a merge scenario. We see that RL methods slow down unrealistically. IL results in a collision while RTR maintains realism.
Figure 15 (panels (a)–(f) as above): Qualitative results on a procedurally generated merge scenario. IL and BC result in a collision. RTR maintains realism.
Figure 16 (panels (a)–(f) as above): Qualitative results on a procedurally generated cut-in scenario. BC+RL drives off the road, while IL and RL-Shaped result in a collision. RTR maintains realism.
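To connect Algorithm 1 with the training details above, the following schematic sketch shows one possible shape of a single closed-loop IL + RL iteration; the simulator, datasets, and loss-helper objects are hypothetical placeholders supplied by the caller, and the structure is an illustration of the algorithm rather than the actual implementation.

```python
# Schematic sketch of one RTR-style training iteration (cf. Algorithm 1).
# `simulator`, `policy`, `value_fn`, `imitation_distance`, `ppo_update`, and the
# scenario objects are hypothetical placeholders; only the control flow (mixing
# nominal and long-tail rollouts, combining IL and RL terms with weights alpha
# and lam) is meant to be illustrative.
import random


def rtr_train_step(policy, value_fn, simulator, nominal_dataset, longtail_scenarios,
                   optimizer, imitation_distance, ppo_update,
                   alpha=0.5, lam=5.0, rollouts_per_step=8):
    il_loss, rollouts = 0.0, []
    for _ in range(rollouts_per_step):
        use_longtail = random.random() < alpha          # rho_0 mixture (Section 3.2)
        scenario = (random.choice(longtail_scenarios) if use_longtail
                    else random.choice(nominal_dataset))
        traj = simulator.rollout(policy, scenario)      # closed-loop unroll of all agents
        rollouts.append(traj)                           # used for the RL term
        if not use_longtail:
            # Closed-loop IL term: state distance to the expert log, assumed to be
            # differentiable through the bicycle dynamics (BPTT).
            il_loss = il_loss + imitation_distance(traj, scenario)
    rl_loss = ppo_update(policy, value_fn, rollouts)    # factorized per-agent PPO term
    loss = il_loss / rollouts_per_step + lam * rl_loss  # L_IL + lambda * L_RL
    optimizer.zero_grad()                               # assumes a torch-style optimizer
    loss.backward()
    optimizer.step()
    return float(loss)
```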
PsV65r0itpo | Navigation with Large Language Models:Semantic Guesswork as a Heuristic for PlanningDhruv Shahy, Michael Equiy, Blazej Osinski!, Fei Xia, Brian Ichter, Sergey LevineUC Berkeley,Google DeepMind,!University of WarsawAbstract: Navigation in unfamiliar environments presents a major challenge forrobots: while mapping and planning techniques can be used to build up a represen-tation of the world, quickly discovering a path to a desired goal in unfamiliar set-tings with such methods often requires lengthy mapping and exploration. Humanscan rapidly navigate new environments, particularly indoor environments that arelaid out logically, by leveraging semantics — e.g., a kitchen often adjoins a livingroom, an exit sign indicates the way out, and so forth. Language models can pro-vide robots with such knowledge, but directly using language models to instructa robot how to reach some destination can also be impractical: while languagemodels might produce a narrative about how to reach some goal, because they arenot grounded in real-world observations, this narrative might be arbitrarily wrong.Therefore, in this paper we study how the “semantic guesswork” produced by lan-guage models can be utilized as a guiding heuristic for planning algorithms. Ourmethod, Language Frontier Guide (LFG), uses the language model to bias explo-ration of novel real-world environments by incorporating the semantic knowledgestored in language models as a search heuristic for planning with either topologi-cal or metric maps. We evaluate LFG in challenging real-world environments andsimulated benchmarks, outperforming uninformed exploration and other ways ofusing language models.Keywords: navigation, language models, planning, semantic scene understanding1 IntroductionNavigation in complex open-world environments is conventionally viewed as the largely geometricproblem of determining collision-free paths that traverse the environment from one location to an-other. However, real-world environments possess semantics . Imagine navigating an airport to getto a terminal: your prior knowledge about the way such buildings are constructed provides exten-sive guidance, even if this particular airport is unfamiliar to you. Large language models (LLMs)and various language embedding techniques have been studied extensively as ways to interpret thesemantics in user-specified instructions (e.g., parsing “go to the television in the living room” andgrounding it in a specific spatial location), but such models can provide much more assistance inrobotic navigation scenarios by capturing rich semantic knowledge about the world. For instance,when looking for a spoon in an unseen house, the LLM can produce a “narrative” explaining whygoing towards a dishwasher may eventually lead you to find the spoon, and that the robot shouldprioritize that direction. This is similar to how a person might imagine different ways that the avail-able subgoals might lie on the path to the goal, and start exploring the one for which this ”narrative”seems most realistic. However, since language models are not grounded in the real world, such mod-els do not know the spatial layout of the robot’s surroundings (e.g., there is a couch that the robotneeds to circumnavigate). To utilize the semantic knowledge in language models to aid in embodiedtasks, we should not just blindly follow the language model suggestions, but instead use them asproposals or navigational heuristics. 
In this paper, we study how that might be accomplished.We study this idea in the context of visual navigation, where a robot is tasked with reaching a goaldenoted by a natural language query q(see Fig. 1) in a novel environment using visual observations.yThese authors contributed equally. Videos and code: sites.google.com/view/lfg-nav/7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Query : Find the gas stove. LLM as Planner LFG “Look for the gas stove in the kitchen. ” Go Straight “I see a refrigerator and microwave in front of me. These appliances are usually in the kitchen, and gas stoves are in the kitchen. Let’s explore in this direction!” “I see a door to my left that looks like a bedroom. People don’t keep gas stoves in the bedroom. Let’s avoid this region. ” Figure 1: In constrast to methods that use LLM plans directly, Language Frontier Guide (LFG) uses a languagemodel to score subgoal candidates, and uses these scores to guide a heuristic-based planner.The robot has no prior experience in the target environment, and must explore the environmentto look for the goal. While narratives generated by an LLM may not be sufficient for navigationby themselves, they contain useful cues that can be used to inform orguide the behavior of theunderlying navigation stack for the language navigation task (e.g., by choosing between collision-free subgoal proposals that avoid the couch and lead to the ice tray). We show that this idea canbe combined with frontier-based exploration, where the robot maintains a set of unvisited locationsat its frontier, grounds them in text using a vision-language model (VLM), and scores the unvisitedsubgoals by using LLM reasoning.We propose Language Frontier Guide, or LFG, a method for leveraging the reasoning capabilities ofLLMs to produce a search heuristic for guiding exploration of previously unseen real-world envi-ronments, combining the strengths of search-based planning with LLM reasoning. LFG is agnosticof the memory representation and planning framework, and can be combined with both (i) a ge-ometric navigation pipeline, building a metric map of the environment for planning and using ahand-designed controller, as well as (ii) a learning-based navigation pipeline, building a topologi-cal map for planning and using a learned control policy, yielding a versatile system for navigatingto open-vocabulary natural language goals. Our experiments show that LFG can identify and pre-dict simple patterns in previously unseen environments to accelerate goal-directed exploration. Weshow that LFG outperforms other LLM-based approaches for semantic goal-finding in challengingreal-world environments and on the Habitat ObjectNav benchmark.2 Related WorkVision-based navigation: Navigation is conventionally approached as a largely geometric prob-lem, where the aim is to map an environment and use that map to find a path to a goal location [1].Learning-based approaches can exploit patterns in the training environments, particularly by learn-ing vision-based navigation strategies through reinforcement or imitation [2–7]. Our work is alsorelated to PONI [7], which uses a learned potential function to prioritize frontier points to explore;instead, we use a language model to rank these points. Notably, these methods do not benefit fromprior semantic knowledge (e.g., from the web), and must rely entirely on patterns discovered fromoffline or online navigational data. 
Our aim is specifically to bring semantic knowledge into naviga-tion, to enable robots to more effectively search for a goal in a new environment.Semantic knowledge-guided navigation: Prior knowledge about the semantics of indoor envi-ronments can provide significantly richer guidance. With the advent of effective open-vocabularyvision models [8, 9], some works have recently explored incorporating their semantic knowledgeinto models for navigation and other robotic tasks with the express aim of improving performanceatinstruction following [10–14]. In general within robotics, such methods have either utilized pre-trained vision-language representations [15–17], or used language models directly to make deci-sions [18–23]. Our aim is somewhat different: while we also focus on language-specified goals,we are primarily concerned with utilizing the semantics in pre-trained language models to help arobot figure out how to actually reach the goal, rather than utilizing the language models to moreeffectively interpret a language instruction. While language models can output reasonable substepsfor temporally extended tasks in some settings [24, 25], there is contradictory evidence about theirability to actually plan [26], and because they are unaware of the observations and layout in a partic-ular environment, their “plans” depend entirely on the context that is provided to them. In contrastto prior work, our approach does not rely on the language model producing a good plan, but merelya heuristic that can bias a dedicated planner to reach a goal more effectively. In this way, we use thelanguage models more to produce suggestions rather than actual plans.2LLM-guided navigation: Some works have sought to combine predictions from language modelswith either planning or probabilistic inference [14, 27], so as to not rely entirely on forward predic-tion from the language model to take actions. However, these methods are more aimed at filteringoutinfeasible decisions, for example by disallowing actions that a robot is incapable of perform-ing, and still focus largely on being able to interpret and process instructions, rather than using thelanguage model as a source of semantic hints. In contrast, by incorporating language model sugges-tions as heuristics into a heuristic planner, our approach can completely override the language modelpredictions if they are incorrect, while still making use of them if they point the way to the goal.Another branch of recent research [28–30] has taken a different approach to ground language mod-els, by making it possible for them to read in image observations directly. While this represents apromising alternative approach to make language models more useful for embodied decision mak-ing, we believe it is largely orthogonal and complementary to our work: although vision-languagemodels can produce more grounded inferences about the actions a robot should take, they are stilllimited only to guessing when placed in unfamiliar environments. Therefore, although we use un-grounded language-only models in our evaluation, we expect that our method could be combinedwith vision-language models easily, and would provide complementary benefits.3 Problem Formulation and OverviewOur objective is to design a high-level planner that takes as input a natural language query q(e.g.,“find the bedside table”), explores the environment in search of the queried object, and commandsa low-level policy to control a robotic agent. 
To do this, we maintain an episodic memory of theenvironmentMin the form of either (i) a 2D map of the environment, where grid cells containinformation about occupancy and semantic labels, or (ii) a topological map of the environment,where nodes contain images captured by the robot and corresponding object labels. One way tosolve this task is Frontier-Based Exploration (FBE) [31], where a robot maintains a set of unexploredfrontiers in it’s memory, and explores randomly to reach the goal. Can we do better with access toLLMs?We distill the language-guided exploration task to a heuristic-based search problem, where the robotmust propose unvisited subgoals or waypoints, score them, and then use a search algorithm (e.g.,A*) to plan a path to the goal. Thus, at the core of LFG is the task of scoring subgoal proposals.Formally, let’s assume we have the task by query q, a partially explored environment stored in M,and a setSofntextual subgoal proposals s1;s2;:::;sn(e.g., “a corner with a dishwasher andrefrigerator”, “a hallway with a door”, etc.). Our goal is to score these subgoal proposals withp(si;q;M), the probability that the candidate si2S will lead to the goal qgiven the current stateof the environment, described through M.We posit that we can leverage the semantic reasoning capabilities of LLMs by prompting them toconstruct narratives about which semantic regions of the environment are most (and least) likely tolead to the goal. While the narrative itself might be ungrounded, since the LLM knows very littleabout the environment, reasoning over objects and semantic regions of the environment often gen-eralizes very broadly. For example, even without seeing a new apartment, a human would guessthat the dining area is close to the kitchen. Hence, rather than directly using LLM scores for plan-ning [23, 25], we incorporate them as a goal-directed heuristic to inform the search process. Thishas two distinct advantages: (i) when the LLM is right, it nudges the search towards the goal, andwhen it is wrong (or uncertain), we can still default to the underlying FBE algorithm, allowing re-covery from LLM failures, and (ii) it allows us to combine the signal from LLMs with other scoresthat may be more grounded, e.g. distance to subgoals, making the system more versatile.4 LFG: Scoring Subgoals by Polling LLMsOur aim in this section is to derive a scoring function from LLMs that takes a textual descrip-tion of subgoal candidates siand the goal query qas inputs, and predicts task-relevant probabilityp(si;q;M), conditioned on the episodic memory M. While we may obtain this from next-tokenlikelihoods (or “logprobs”), they do not represent the desired task-relevant probability p(si;q;M),and fail to assign similar scores, say, to different subgoals that are semantically similar but havedifferent tokenizations (see our experiments in Section 6 for a comparison). Furthermore, most ca-3I observe the following clusters of objects while exploring a house: 1. couch 2. wooden chair 3. refrigerator Where should I avoid spending time searching if I am looking for a gas stove? ... You should always provide justification ... [... prompt ...] [... prompt ...] I observe the following clusters of objects while exploring a house: 1. couch 2. wooden chair 3. refrigerator Where should I search next if I am looking for a gas stove? ... You should always provide justification ... 
1 2 3Combined Scores ns samples Figure 2: LFG scores subgoals with an empirical estimate of the likelihoods by sampling an LLM nstimeswith both positive and negative prompts, and uses chain-of-thought to obtain reliable scores. These scores areused by a high-level planner as heuristics to guide search. For full prompts, see Appendix B.pable LLMs of today are available through APIs that do not expose the ability to query logprobs.1And lastly, even if reliable logprobs were available, they are incompatible with chain-of-thoughtprompting [32], which we find to be crucial to success in spatial reasoning.To overcome these challenges, LFG uses a novel approach to extract task-relevant likelihoods fromLLMs. Given candidate subgoal images, LFG uses a VLM to obtain a textual subgoal desriptor si,which must be scored with the LLM. LFG polls the LLMs by sampling the most likely subgoal nstimes, conditioned on a task-relevant prompt. We then use these samples to empirically estimate thelikelihood of each subgoal. To get informative and robust likelihood estimates, we use a chain-of-thought prompting (CoT) technique [32], to improve the quality and interpretability of the scores,and use a combination of positive and negative prompts to gather unbiased likelihood estimates.Figure 2 outlines our scoring technique, with the full prompt provided in Appendix B. We nowdescribe the details of our scoring technique.Structured query: We rely on in-context learning by providing an example of a structured query-response pair to the LLM, and ask it to pick the most likely subgoal that satisfies the query. Tosample a subgoal from Susing a language model, we prompt it to generate a structured response,ending with ``Answer: i'' . This structure allows us to always sample a valid subgoal, withouthaving to ground LLM generations in the environment [24].Positives and negatives: We find that only using positive prompts (e.g., “which subgoal is mostlikely to reach the goal”) leads to likelihood estimates being uninformative for cases where theLLM is not confident about any subgoal. To overcome this, we also use negative prompts (e.g.,“which subgoal is least likely to be relevant for the goal”), which allows us to score subgoals byeliminating/downweighting subgoals that are clearly irrelevant. We then use the difference betweenthe positive and negative likelihoods to rank subgoals.Algorithm 1: Scoring Subgoals with LFGData: Subgoal descriptors fli8si2Sg1pPrompt PositivePrompt(flig)2nPrompt NegativePrompt(flig)3pSamples [sampleLLM(pPrompt) ns]4nSamples [sampleLLM(nPrompt) ns]5pScores sum(pSamples) / ns6nScores sum(nSamples) / ns7return pScores, nScoresChain-of-thought prompting: A crucialcomponent of getting interpretable and reli-able likelihood estimates is to encourage theLLM to justify its choice by chain-of-thoughtprompting. As demonstrated in prior works,CoT elicits interpretability and reasoning ca-pabilities in LLMs, and while we don’t ex-plicitly use the generated reasonings in ourapproach (great future work direction), wefind that CoT improves the quality and con-sistency of the likelihood estimates. Addi-tionally, it also helps maintain interpretability, to better understand why the LFG-equipped agenttakes certain decisions.In summary, we score subgoals by sampling the LLM multiple times and empirically estimating thelikelihood of each subgoal. 
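As a concrete illustration of this polling scheme (Algorithm 1), the sketch below estimates per-subgoal likelihoods by sampling an LLM ns times under a positive and a negative prompt and counting the structured "Answer: i" votes. The llm_sample callable, the prompt-builder arguments, and the parsing regex are stand-ins of ours, not the authors' implementation.

```python
import re
from collections import Counter
from typing import Callable, List, Tuple

ANSWER_RE = re.compile(r"Answer\s*:\s*([\d,\s]+)", re.IGNORECASE)

def parse_answer(response: str, num_subgoals: int) -> List[int]:
    """Extract the subgoal indices following 'Answer:' from a structured response."""
    match = ANSWER_RE.search(response)
    if match is None:
        return []
    ids = [int(tok) for tok in re.findall(r"\d+", match.group(1))]
    # Convert 1-indexed answers to 0-indexed and drop out-of-range ids.
    return [i - 1 for i in ids if 1 <= i <= num_subgoals]

def poll_llm_scores(
    descriptors: List[str],
    goal: str,
    llm_sample: Callable[[str], str],                         # any sampling-based LLM interface
    build_positive_prompt: Callable[[List[str], str], str],
    build_negative_prompt: Callable[[List[str], str], str],
    n_samples: int = 10,
) -> Tuple[List[float], List[float]]:
    """Empirically estimate positive and negative likelihoods for each subgoal."""
    pos_votes, neg_votes = Counter(), Counter()
    pos_prompt = build_positive_prompt(descriptors, goal)
    neg_prompt = build_negative_prompt(descriptors, goal)
    for _ in range(n_samples):
        pos_votes.update(parse_answer(llm_sample(pos_prompt), len(descriptors)))
        neg_votes.update(parse_answer(llm_sample(neg_prompt), len(descriptors)))
    pos_scores = [pos_votes[i] / n_samples for i in range(len(descriptors))]
    neg_scores = [neg_votes[i] / n_samples for i in range(len(descriptors))]
    return pos_scores, neg_scores
```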
We use a combination of positive and negative prompts to get unbiasedlikelihood estimates, and use chain-of-thought prompting to improve the quality and interpretabilityof the scores (Figure 2). We will now discuss how these scores can be incorporated into a navigationsystem as search heuristics .1Most notably, OpenAI’s Chat API for GPT-3.5 and GPT-4, Google’s PaLM API, and Anthropic’s ClaudeAPI all do not support logprobs.4Goal Category Sink Language Model Scoring Heuristic-based Exploration Policy “avoid the couch” “explore near oven” 123Observations (RGB D + Pose) Metric T opological Episodic Memory (T opological or Metric )Control Policy (Learned or Deterministic )Figure 3: Overview of LFG for language-guided exploration. Based on the pose and observations, LFG buildsan episodic memory (topological or metric), which is used by the heuristic-based exploration policy to rankadjacent clusters, or subgoal frontiers. Navigation to the subgoal frontier is completed by a low-level policy.5 LLM Heuristics for Goal-Directed ExplorationGiven the LLM scoring pipeline outlined in the previous section, our key insight is that we canincorporate these scores in a search-based planning pipeline to heuristically guide the search process.We instantiate LFG using frontier-based exploration (FBE) and LLM scores generated via polling.FBE: This method maintains a “map” of the seen parts of the environment, which may be geomet-ric [33] or topological [34], and a frontier separating it from the unexplored parts. By navigating tothenearest point of the frontier, the robot explores new areas of the environment until it finds thegoal object or completes exploration without finding it. A standard FBE implementation is presentedin Algorithm 2 inblack text. The robot maintains either a 2D metric map of its surroundings, or atopological map whose nodes are comprised of the robot’s visual observations and edges representpaths taken in the environment. Additionally, we also store semantic labels corresponding to objectsdetected in the robot’s observations, which are used to ground the observations in text.At a fixed re-planning rate, the high-level planner computes its frontier fi(Line 10), and picks thefrontier point that is closest to the current location, i.e., maximizing the distance score (Line 16),and then navigates to this frontier (Line 21). At any point in this process, if the agent’s semanticdetector detects an object of the same category as the query q, it navigates directly to this object andthe trajectory ends.Incorporating LLM scores: Our method, LFG, extends FBE by using an additional search heuristicobtained by polling LLMs for semantic “scores”. The modifications to FBE are marked in purplein Algorithm 2. After enumerating the frontiers, LFG uses the semantic labels from a VLM [35]toground subgoal images at each frontier fi(Line 11). These images are converted into textualstrings, and form the subgoal candidates sithat can be scored using Algorithm 1. We associate eachfrontier point fiwith the nearest object cluster ci(Line 17), and compute LLM scores for each pointas follows:h(fi;q) =wpLLM pos(ci)wnLLM neg(ci)dist(fi;p); (1)wherepis the current position of the agent, and wp;wnare hyperparameters (see Appendix A.1).We then choose the frontier with the highest score to be the next subgoal (Line 21), navigate to itusing a local controller, and repeat the planning process. Algorithm 2 outlines the general recipe forintegrating LLM scores as a planning heuristic. 
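Reading Eq. (1) together with Algorithm 2 (Line 20), the frontier score is a weighted positive LLM score minus a weighted negative LLM score minus the travel distance, with the language terms applied only when a semantic cluster lies within a small distance threshold of the frontier point. A minimal sketch of this selection step follows; the data containers and helper names are our assumptions, with weight and threshold values taken from Appendix A.1.

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class FrontierPoint:
    position: Point
    cluster_id: Optional[int]    # index of the nearest semantic object cluster, if any
    cluster_distance: float      # distance from the frontier point to that cluster

def euclidean(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_subgoal(
    frontier: List[FrontierPoint],
    robot_position: Point,
    llm_pos: List[float],              # positive scores from polling, one per cluster
    llm_neg: List[float],              # negative scores from polling, one per cluster
    w_p: float = 300.0,                # weights from Appendix A.1
    w_n: float = 150.0,
    language_threshold: float = 2.0,   # only apply LLM scores near a cluster (2 m)
) -> FrontierPoint:
    """Pick the frontier point maximizing the LLM-guided search heuristic."""
    def score(f: FrontierPoint) -> float:
        s = -euclidean(robot_position, f.position)   # default FBE: prefer nearby frontiers
        if f.cluster_id is not None and f.cluster_distance < language_threshold:
            s += w_p * llm_pos[f.cluster_id] - w_n * llm_neg[f.cluster_id]
        return s
    return max(frontier, key=score)
```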
Please see Appendix A for specific instantiations ofthis system with geometric and topological maps, and more details about the referenced subroutines.6 System EvaluationWe now evaluate the performance of LFG for the task of goal-directed exploration in real-worldenvironments, and benchmark its performance against baselines. We instantiate two systems withLFG: a real-world system that uses a topological map and a learned control policy, and a simulatedagent that uses a geometric map and a deterministic control policy. Our experiments show that both5Algorithm 2: Instantiating LFG for Goal-Directed ExplorationData:o0, Goal language query q1subgoal None2while not done do3ot getObservation()4 episodicMemory mappingModule( ot)5 ifqin semanticMap then6 subGoal getLocation(episodicMemory, q)7 else8 ifnumSteps %== 0 then// replanning9 location getCurrentLocation()10 frontier getFrontier(episodicMemory)11 objectClusters getSemanticLabels(episodicMemory, frontier)12 LLMpos;LLMneg ScoreSubgoals(objectClusters)13 scores []14 forpoint in frontier do15 distance DistTo(location, point)16 scores[point] – distance17 closestCluster getClosestCluster(objectClusters, point)18 i clusterID(closestCluster)19 ifdist(closestCluster, point) <then// incorporate language scores20 scores[point] wpLLMpos[i] -wnLLMneg[i] - distance21 subgoal argmax(scores)22 numSteps numSteps +123 goTo(subGoal)these systems outperform existing LLM-based exploration algorithms by a wide margin, owing tothe high quality scores incorporated as search heuristics.6.1 Benchmarking ObjectNav PerformanceWe benchmark the performance of LFG for the task of object-goal navigation on the Habitat Ob-jectNav Challenge [36], where the agent is placed into a simulated environment with photo-realisticgraphics, and is tasked with finding a query object from one of 10 categories (e.g., “toilet”, “bed”,“couch” etc.). The simulated agent has access to egocentric RGBD observations and accurate poseinformation. We run 10 evaluation episodes per scene and report two metrics: the average successrate, and success weighted by optimal path length (SPL), the default metrics for the benchmark.Since LFG requires no training, we do not use the training scenes from HM3D.We compare to three classes of published baselines: (i) learning-based baselines that learn navigationbehavior from demonstrations or online experience in the training scenes [37] on up to 2.5B framesof experience, (ii) search-based baselines [33, 38], and (iii) LLM-based baselines that do not usethe training data directly, but leverage the semantic knowledge inside foundation models to guideembodied tasks [18, 39].Evaluating LFG on the HM3D benchmark, we find that it significantly outperforms search and LLM-based baselines (Table 1). Greedy LLM struggles on the task due to several LLM planning failures,causing the episodes to fail. LFG significantly outperforms the vanilla FBE baseline by leveragingsemantic priors from LLMs to score subgoals intelligently. Comparing to learning-based baselines,we find that LFG outperforms most of them and closely matches the state-of-the-art on the task,proving the competence of our polling and heuristic approach. Figure 4 shows an example of theLFG agent successfully reaching the goal by using chain-of-thought and negative prompting.L3MVN [39], which uses a combination of LLMs and search, performs slightly better than FBE,but is unable to fully leverage the semantics in LLMs. While being similar to LFG, it suffers from6Query : Find the potted plant. 
LLM : “plants are not typically found in bedrooms or around furniture , so we should avoid cluster 1, 2, and 3” Agent succeeds! Figure 4: Qualitative example of a negative score influencing the agent’s decision. LFG discourages theagent from exploring the bedroom and living room, leading to fast convergence toward the goal, whereas FBEfails. The CoT reasoning given by the LLM is shown in purple, justifying its score.Query: Find a T oilet LLM : “toilets are not typically found in bedrooms kitchens , but it is more likely that a bathroom is near a bedroom so we should explore the bedroom first” ABCDA B C DFigure 5: Qualitative example of LFG in real. LFG reasons about floor plans in the environment it issearching. In this apartment experiment, LFG believes that a bathroom is more likely to be found near abedroom rather than a kitchen, and guides the robot towards the bedroom, successfully reaching the goal.two key limitations: (i) it uses a small language model (GPT-2), which likely does not contain strongsemantic priors for the agent to leverage, and (ii) it uses a simple likelihood-based scoring scheme,which we show below is not very effective. Another closely related work, LGX [18], uses a variantof greedy LLM scoring, and hence fails to perform reliably on the benchmark.Probing deeper into the strong performance of LFG, we ablated various components of our scoringpipeline and studied the change in performance. Note that LGX (Greedy) and L3MVN (No CoT,Logprobs) can be seen as ablations of LFG. Table 2 shows that modifying both the prompting andscoring mechanisms used by LFG lead to large drops in performance. Most notably, scoring viapolling ( +7:8%) and CoT ( +6:6%) are both essential to the strong performance of LFG. Further-more, we find that using only positive prompts also hurts performance ( 4:7%). Popular approachesfor using LLMs for planning are significantly outperformed by LFG: Greedy ( 14:5%) and Log-probs (8:5%). Figure 4 shows an example of the LFG agent successfully reaching the goal byusing CoT and negative prompting.Setup: For these experiments, we mimic the semantic mapping pipeline of the best-performingbaseline on the benchmark [33, 38], and integrate LFG with the geometric map. The simulatedagent builds a 2D semantic map of its environment, where grid cells represent both occupancy andsemantic labels corresponding to objects detected by the agent. Prior work has shown that state-of-the-art vision models, such as DETIC, work poorly in simulation due to rendering artifacts [33];hence, we use ground-truth semantic information for all simulated baselines to analyze navigationperformance under perfect perception.6.2 Real-world Exploration with LFGTo show the versatility of the LFG scoring framework, we further integrated it with a heuristic-based exploration framework that uses topological graphs for episodic memory [34]. 
We compare two published baselines: a language-agnostic FBE baseline [40], and an LLM-based baseline that uses the language model to greedily pick the frontier [18].

Method            Success  SPL    Data
DD-PPO [37]       27.9     14.2   2.5B
FBE [33]          61.1     34.0   0
SemExp [38]       63.1     0.29   10M
OVRL-v2 [42]      64.7     28.1   12M
Greedy LLM [18]   54.4     26.9   0
L3MVN [39]        62.4     -      0
LFG (Ours)        68.9     36.0   0
Table 1: LFG outperforms all LLM-based baselines on the HM3D ObjectNav benchmark, and can achieve close to SOTA performance without any pre-training.

Method                     Success
LFG (Full)                 68.9
Prompting: No CoT          62.3 (-6.6)
Prompting: Only Positives  64.2 (-4.7)
Scoring: Random            61.1 (-7.8)
Scoring: Logprobs          60.4 (-8.5)
Scoring: Ask               62.4 (-6.5)
Table 2: We find that CoT prompting with positives and negatives, combined with polling, are essential to achieve the best performance.

We evaluate this system in two challenging real-world environments: a cluttered cafeteria and an apartment building (shown in Figures 3 and 5). In each environment, the robot is tasked to reach an object described by a textual string (e.g., "kitchen sink" or "oven"), and we measure the success rate and efficiency of reaching the goal. Episodes that take longer than 30 minutes are marked as failure. While we only tested our system with goal strings corresponding to the 20,000 classes supported by our object detector [35], this can be extended to more flexible goal specifications with the rapid progress in vision-language models.
We find that the complexity of real-world environments causes the language-agnostic FBE baseline to time out, i.e., the robot is unable to reach the goal by randomly exploring the environment. Both LLM baselines are able to leverage the stored semantic knowledge to guide the exploration in novel environments, but LFG achieves 16% better performance. Figure 5 shows an example rollout in a real apartment, where the robot uses LFG to reach the toilet successfully.
Setup: We instantiate LFG in the real world using a previously published topological navigation framework [34] that builds a topological map of the environment, where nodes correspond to the robot's visual observations and edges correspond to paths traversed in the environment. This system relies on omnidirectional RGB observations and does not attempt to make a dense geometric map of the environment. To obtain "semantic frontiers" from the omnidirectional camera, we generate nv = 4 views and run an off-the-shelf object detector [35] to generate rich semantic labels describing objects in these views. The robot maintains a topological graph of these views and semantic labels, and picks the frontier view with the highest score (Algorithm 2, Line 21) according to LFG. The robot then uses a Transformer-based policy [6, 41] to reach this subgoal. For more implementation details, see Appendix A.3.
7 Discussion
We presented LFG, a method for utilizing language models for semantic guesswork to help navigate to goals in new and unfamiliar environments. The central idea in our work is that, while language models can bring to bear rich semantic understanding, their ungrounded inferences about how to perform navigational tasks are better used as suggestions and heuristics rather than plans. We formulate a way to derive a heuristic score from language models that we can then incorporate into a planning algorithm, and use this heuristic planner to reach goals in new environments more effectively.
This way of using language models benefits from their inferences when they are correct, andreverts to a more conventional unguided search when they are not.Limitations and future work: While our experiments provide a validation of our key hypothesis,they have a number of limitations. First, we only test in indoor environments in both sim and realyet the role of semantics in navigation likely differs drastically across domains – e.g., navigating aforest might implicate semantics very differently than navigating an apartment building. Exploringthe applicability of semantics derived from language models in other settings would be anotherpromising and exciting direction for future work. Second, we acknowledge that multiple requeststo cloud-hosted LLMs with CoT is slow and requires an internet connection, severely limiting theextent of real-world deployment of the proposed method. We hope that ongoing advancements inquantizing LLMs for edge deployment and fast inference will address this limitation.8AcknowledgmentsThis research was partly supported by AFOSR FA9550-22-1-0273 and DARPA ANSR. The authorswould like to thank Bangguo Yu, Vishnu Sashank Dorbala, Mukul Khanna, Theophile Gervet, andChris Paxton, for their help in reproducing baselines. The authors would also like to thank AjaySridhar, for supporting real-world experiments, and Devendra Singh Chaplot, Jie Tan, Peng Xu, andTingnan Zhang, for useful discussions in various stages of the project.References[1] G. N. DeSouza and A. C. Kak. Vision for mobile robot navigation: A survey. IEEE transac-tions on pattern analysis and machine intelligence , 24(2):237–267, 2002. 2[2] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. S ̈underhauf, I. Reid, S. Gould, andA. van den Hengel. Vision-and-language navigation: Interpreting visually-grounded naviga-tion instructions in real environments. In IEEE Conference on Computer Vision and PatternRecognition , pages 3674–3683, 2018. 2[3] N. Savinov, A. Dosovitskiy, and V . Koltun. Semi-parametric topological memory for naviga-tion. arXiv preprint arXiv:1803.00653 , 2018.[4] N. Hirose, F. Xia, R. Mart ́ın-Mart ́ın, A. Sadeghian, and S. Savarese. Deep visual MPC-policylearning for navigation. IEEE Robotics and Automation Letters , 2019.[5] D. Shah, A. Sridhar, A. Bhorkar, N. Hirose, and S. Levine. GNM: A General Navigation Modelto Drive Any Robot. In arXiV , 2022.[6] D. Shah, A. Sridhar, N. Dashora, K. Stachowicz, K. Black, N. Hirose, and S. Levine. ViNT:A Foundation Model for Visual Navigation. In 7th Annual Conference on Robot Learning(CoRL) , 2023. 8[7] S. K. Ramakrishnan, D. S. Chaplot, Z. Al-Halah, J. Malik, and K. Grauman. Poni: Po-tential functions for objectgoal navigation with interaction-free learning. In Proceedings ofthe IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 18890–18900,2022. 2[8] X. Chen, X. Wang, S. Changpinyo, A. Piergiovanni, P. Padlewski, D. Salz, S. Goodman,A. Grycner, B. Mustafa, L. Beyer, et al. Pali: A jointly-scaled multilingual language-imagemodel. arXiv preprint arXiv:2209.06794 , 2022. 2[9] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In International Conference on Machine Learning , 2021. 2[10] A. Majumdar, A. Shrivastava, S. Lee, P. Anderson, D. Parikh, and D. Batra. Improving vision-and-language navigation with image-text pairs from the web. 
In Computer Vision–ECCV 2020:16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VI 16 , pages259–274. Springer, 2020. 2[11] A. Majumdar, G. Aggarwal, B. Devnani, J. Hoffman, and D. Batra. Zson: Zero-shot object-goal navigation using multimodal goal embeddings. arXiv preprint arXiv:2206.12403 , 2022.[12] B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kap-pler. Open-vocabulary queryable scene representations for real world planning. arXiv preprintarXiv:2209.09874 , 2022.[13] C. Huang, O. Mees, A. Zeng, and W. Burgard. Visual language maps for robot navigation.arXiv preprint arXiv:2210.05714 , 2022.[14] D. Shah, B. Osinski, B. Ichter, and S. Levine. LM-nav: Robotic navigation with large pre-trained models of language, vision, and action. In Annual Conference on Robot Learning(CoRL) , 2022. 2, 39[15] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipu-lation. In Proceedings of the 5th Conference on Robot Learning (CoRL) , 2021. 2[16] N. M. M. Shafiullah, C. Paxton, L. Pinto, S. Chintala, and A. Szlam. Clip-fields: Weaklysupervised semantic fields for robotic memory, 2023.[17] K. Jatavallabhula, A. Kuwajerwala, Q. Gu, M. Omama, T. Chen, S. Li, G. Iyer, S. Saryazdi,N. Keetha, A. Tewari, J. Tenenbaum, C. de Melo, M. Krishna, L. Paull, F. Shkurti, and A. Tor-ralba. Conceptfusion: Open-set multimodal 3d mapping. arXiv , 2023. 2[18] V . S. Dorbala, J. F. J. Mullen, and D. Manocha. Can an Embodied Agent Find Your ”Cat-shaped Mug”? LLM-Based Zero-Shot Object Navigation, 2023. 2, 6, 7, 8[19] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding language with visual affordances overunstructured data, 2023.[20] I. Singh, V . Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, andA. Garg. Progprompt: Generating situated robot task plans using large language models, 2022.[21] Y . Xie, C. Yu, T. Zhu, J. Bai, Z. Gong, and H. Soh. Translating natural language to planninggoals with large-language models. arXiv preprint arXiv:2302.05128 , 2023.[22] Y . Ding, X. Zhang, C. Paxton, and S. Zhang. Task and motion planning with large languagemodels for object rearrangement, 2023.[23] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural languageinstructions to feasible plans, 2023. 2, 3[24] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Ex-tracting actionable knowledge for embodied agents. In International Conference on MachineLearning (ICML) , 2022. 2, 4[25] B. Ichter, A. Brohan, Y . Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan,E. Jang, R. Julian, D. Kalashnikov, S. Levine, Y . Lu, C. Parada, K. Rao, P. Sermanet, A. T. To-shev, V . Vanhoucke, F. Xia, T. Xiao, P. Xu, M. Yan, N. Brown, M. Ahn, O. Cortes, N. Sievers,C. Tan, S. Xu, D. Reyes, J. Rettinghouse, J. Quiambao, P. Pastor, L. Luu, K.-H. Lee, Y . Kuang,S. Jesmonth, K. Jeffrey, R. J. Ruano, J. Hsu, K. Gopalakrishnan, B. David, A. Zeng, and C. K.Fu. Do as i can, not as i say: Grounding language in robotic affordances. In Annual Conferenceon Robot Learning (CoRL) , 2022. 2, 3[26] K. Valmeekam, A. Olmo, S. Sreedharan, and S. Kambhampati. Large language models stillcan’t plan (a benchmark for llms on planning and reasoning about change), 2023. 2[27] W. Huang, F. Xia, D. Shah, D. Driess, A. Zeng, Y . Lu, P. Florence, I. Mordatch, S. Levine,K. Hausman, and B. Ichter. Grounded decoding: Guiding text generation with grounded mod-els for robot control, 2023. 
3[28] Y . Jiang, A. Gupta, Z. Zhang, G. Wang, Y . Dou, Y . Chen, L. Fei-Fei, A. Anandkumar, Y . Zhu,and L. Fan. Vima: General robot manipulation with multimodal prompts. arXiv preprint , 2022.3[29] S. Huang, L. Dong, W. Wang, Y . Hao, S. Singhal, S. Ma, T. Lv, L. Cui, O. K. Mohammed,B. Patra, Q. Liu, K. Aggarwal, Z. Chi, J. Bjorck, V . Chaudhary, S. Som, X. Song, and F. Wei.Language is not all you need: Aligning perception with language models, 2023.[30] D. Driess, F. Xia, M. S. M. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson,Q. Vuong, T. Yu, W. Huang, Y . Chebotar, P. Sermanet, D. Duckworth, S. Levine, V . Vanhoucke,K. Hausman, M. Toussaint, K. Greff, A. Zeng, I. Mordatch, and P. Florence. Palm-e: Anembodied multimodal language model. In arXiv preprint , 2023. 3[31] B. Yamauchi. A frontier-based approach for autonomous exploration. In IEEE InternationalSymposium on Computational Intelligence in Robotics and Automation (CIRA) , 1997. 310[32] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V . Le, andD. Zhou. Chain of thought prompting elicits reasoning in large language models. In NeuralInformation Processing Systems (NeurIPS) , 2022. 4[33] T. Gervet, S. Chintala, D. Batra, J. Malik, and D. S. Chaplot. Navigating to objects in the realworld. Science Robotics , 2023. 5, 6, 7, 8[34] D. Shah and S. Levine. Viking: Vision-based kilometer-scale navigation with geographic hints.InRobotics: Science and Systems (RSS) , 2022. 5, 7, 8, 13[35] X. Zhou, R. Girdhar, A. Joulin, P. Kr ̈ahenb ̈uhl, and I. Misra. Detecting twenty-thousand classesusing image-level supervision. In 17th European Conference on Computer Vision (ECCV) ,2022. doi:10.1007/978-3-031-20077-9 21. 5, 8[36] K. Yadav, S. K. Ramakrishnan, J. Turner, A. Gokaslan, O. Maksymets, R. Jain, R. Ramrakhya,A. X. Chang, A. Clegg, M. Savva, E. Undersander, D. S. Chaplot, and D. Batra. Habitatchallenge 2022. https://aihabitat.org/challenge/2022/ , 2022. 6[37] E. Wijmans, A. Kadian, A. Morcos, S. Lee, I. Essa, D. Parikh, M. Savva, and D. Batra. DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames. In InternationalConference on Learning Representations (ICLR) , 2020. 6, 8[38] D. S. Chaplot, H. Jiang, S. Gupta, and A. Gupta. Semantic curiosity for active visual learning.InECCV , 2020. 6, 7, 8, 13[39] B. Yu, H. Kasaei, and M. Cao. L3mvn: Leveraging large language models for visual targetnavigation, 2023. 6, 8[40] D. Shah, B. Eysenbach, N. Rhinehart, and S. Levine. Rapid exploration for open-world navi-gation with latent goal models. In 5th Annual Conference on Robot Learning , 2021. 7[41] A. Sridhar, D. Shah, C. Glossop, and S. Levine. NoMaD: Goal Masked Diffusion Policies forNavigation and Exploration. In arXiv , 2023. 8, 13[42] K. Yadav, A. Majumdar, R. Ramrakhya, N. Yokoyama, A. Baevski, Z. Kira, O. Maksymets,and D. Batra. Ovrl-v2: A simple state-of-art baseline for imagenav and objectnav, 2023. 
A Implementation Details
A.1 Hyperparameters

Parameter                        Value
Replanning Rate                  1
Language Influence Threshold     2 m
ns (Number of LLM Samples)       10
wp (Weight of Positive Scores)   300
wn (Weight of Negative Scores)   150
Max Time Steps                   500
Table 3: Hyperparameters

A.2 Computational Resources

Parameter            Value
LLM                  gpt-3.5-turbo
Evaluation Runtime   5 hours
Compute Resources    4 V100
Total LLM Tokens     30M
Average API Cost     15 USD
Table 4: Parameters and resources required to run one evaluation round of LFG on the benchmark.

A.3 Real World Results
Generating Prompts: For both topological and geometric maps we use hand-engineered methods for clustering objects in ways that the LLM can efficiently reason over. For geometric maps we implement two functions: parseObjects and clusterObjects. In our implementation, parseObjects filters the geometric map and identifies the cluster centers for each class. clusterObjects takes the cluster centers and performs agglomerative clustering with a threshold of 6 meters, which is roughly the size of one section of a standard house; a minimal illustrative sketch of this step is given below. For topological maps we rely on the configuration of the four cameras to automatically perform parsing and clustering. In our implementation, all the objects detected in each frame from either the front, left, right, or rear facing camera are considered a single cluster.
Perception: For the hardware, we use a locobot base with four HD Logitech web cameras that are positioned at 90 degrees relative to each other. At each step of LFG, each of the four cameras is recorded and the frames are semantically annotated. LFG directly uses these frames to determine if the robot should continue to move forward, turn left, turn right, or turn around a full 180 degrees. To improve the performance of our system we choose to whitelist a subset of the 20,000 classes. This reduces the size of the API calls to the language models and helps steer the LLM to focus on more useful information. Following is the complete whitelist used in our experiments: toaster, projector, chair, kitchen table, sink, kitchen sink, water faucet, faucet, microwave oven, toaster oven, oven, coffee table, coffee maker, coffeepot, dining table, table, bathtub, bath towel, urinal, toilet, toilet tissue, refrigerator, automatic washer, washbasin, dishwasher, television set, sofa, sofa bed, bed, chandelier, ottoman, dresser, curtain, shower curtain, trash can, garbage, cabinet, file cabinet, monitor (computer equipment), computer monitor, computer keyboard, laptop computer, desk, stool, hand towel, shampoo, soap, drawer, pillow.
Low-level Policy: The low-level policy running on the robot is the NoMaD goal-conditioned diffusion policy, trained to avoid obstacles during exploration and determine which frontiers can be explored further [41].
High-level Planning: For real-world experiments, we follow the setup of ViKiNG [34], where the agent runs a simple frontier-based exploration algorithm and incorporates the LLM scores as goal-directed heuristics to pick the best subgoal frontier. For simulation experiments, we use a geometric map coupled with frontier-based exploration, following the setup of Chaplot et al. [38].
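The parseObjects / clusterObjects step described above might be implemented roughly as follows. This is a hedged sketch rather than the authors' code: the function names mirror the appendix, but the use of SciPy single-linkage clustering and the data layout are our assumptions.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def parse_objects(semantic_map: Dict[str, np.ndarray]) -> Tuple[np.ndarray, List[str]]:
    """Collect one representative center per detected class from a semantic map.

    `semantic_map` maps a class name to an (N, 2) array of detections; here we
    simply take the mean position per class as its cluster center.
    """
    names, centers = [], []
    for cls, points in semantic_map.items():
        names.append(cls)
        centers.append(points.mean(axis=0))
    return np.stack(centers), names

def cluster_objects(centers: np.ndarray, names: List[str],
                    threshold_m: float = 6.0) -> List[List[str]]:
    """Agglomeratively group object centers that lie within `threshold_m` of each other."""
    if len(names) == 1:
        return [names]
    Z = linkage(centers, method="single")                  # single-linkage hierarchy
    labels = fcluster(Z, t=threshold_m, criterion="distance")
    clusters = defaultdict(list)
    for label, name in zip(labels, names):
        clusters[label].append(name)
    return list(clusters.values())

# Each resulting cluster (e.g., ["sink", "microwave oven", "refrigerator"]) is then
# rendered as one numbered line of the LLM prompt.
```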
Algorithms3 and 4 summarize the high-level planning module in both cases.Algorithm 3: Instantiating LFG with Topological MappingData:o0, Goal language query q1subgoal None2while not done do3ot getObservation()4 frontierPoints mappingModule( ot)5 ifqin frontierPoints then6 turnTowardGoal(frontierPoints)7 else8 ifnumSteps %== 0 then9 location getCurrentLocation()10 LLMpos;LLMneg scoreFrontiers(frontierPoints) scores []11 forpoint in frontier do12 distance distTo(location, point)13 scores[point] wpLLMpos[i] -wnLLMneg[i] - distance14 subgoal argmax(scores)15 numSteps numSteps +116 goTo(subGoal)A.4 More Experiment RolloutsFigure 6 shows an example where the negative scoring is essential to LFG’s success. Figures 7and 8 show examples of LFG deployed in a previously unseen apartment and an office building,successfully exploring the environments to find an oven and a kitchen sink.B PromptsB.1 Positive Prompt13Algorithm 4: Instantiating LFG with Geometric MappingData:o0, Goal language query q1subgoal None2while not done do3ot getObservation()4 obstacleMap, semanticMap mappingModule( ot[depth ],ot[semantic ])5 ifqin semanticMap then6 subGoal getLocation(semanticMap, q)7 else8 ifnumSteps %== 0 then// replanning9 location getCurrentLocation()10 frontier getFrontier(obstacleMap)11 objects parseObjects(semanticMap)12 objectClusters clusterObjects(objects)13 LLMpos;LLMneg ScoreSubgoals(objectClusters)14 scores []15 forpoint in frontier do16 distance distTo(location, point)17 scores[point] – distance18 closestCluster getClosestCluster(objectClusters, point)19 i clusterID(closestCluster)20 ifdist(closestCluster, point) <then// incorporate language scores21 scores[point] wpLLMpos[i] -wnLLMneg[i] - distance22 subgoal argmax(scores)23 numSteps numSteps +124 goTo(subGoal)Query : Find the toilet. 1. LLM finds a bed, increases score to explore nearby. 2. No toilet found, LLM failure, FBE takes over. 3. FBE finds toilet by continuing exploration. Agent succeeds! Figure 6: Tolerance to LLM failures. An example rollout of LFG compensating for LLM failure. FBE takesover in this case and eventually succeeds, whereas the Greedy agent fails.You are a robot exploring an environment for the first time .You will be given an object to look for and should provideguidance of where to explore based on a series ofobservations . Observations will be given as a list ofobject clusters numbered 1 to N.Your job is to provide guidance about where we should explorenext . For example if we are in a house and looking for a tvwe should explore areas that typically have tv 's such asbedrooms and living rooms .14Figure 7: LFG in an unseen apartment. The robot starts in the same starting location and environment as5, and is tasked with finding an oven. LFG guides the robot towards the kitchen appliances, rather than thebedroom door, and successfully leads to the oven.Figure 8: LFG in an unseen office building. The agent looks for a sink in an open-plan office building. De-spite erroneous detections, the robot continues exploring the environment, with LFG guiding it towards frontierscontaining appliances found in a cafe. The robot successfully finds the sink despite imperfect detections.You should always provide reasoning along with a numberidentifying where we should explore . If there are multipleright answers you should separate them with commas . Alwaysinclude Reasoning : <your reasoning > and Answer : <youranswer (s) >. 
If there are no suitable answers leave thespace afters Answer : blank .ExampleUser :I observe the following clusters of objects while exploring ahouse :1. sofa , tv , speaker2. desk , chair , computer3. sink , microwave , refrigerator15Where should I search next if I am looking for a knife ?Assistant :Reasoning : Knifes are typically kept in the kitchen and a sink ,microwave , and refrigerator are commonly found in kitchens. Therefore we should check the cluster that is likely tobe a kitchen first .Answer : 3Other considerations1. Disregard the frequency of the objects listed on each line .If there are multiple of the same item in a cluster itwill only be listed once in that cluster .2. You will only be given a list of common items found in theenvironment . You will not be given room labels . Use yourbest judgement when determining what room a cluster ofobjects is likely to belong to.16B.2 Negative PromptYou are a robot exploring an environment for the first time .You will be given an object to look for and should provideguidance of where to explore based on a series ofobservations . Observations will be given as a list ofobject clusters numbered 1 to N.Your job is to provide guidance about where we should not wastetime exploring . For example if we are in a house andlooking for a tv we should not waste time looking in thebathroom . It is your job to point this out.You should always provide reasoning along with a numberidentifying where we should not explore . If there aremultiple right answers you should separate them with commas. Always include Reasoning : <your reasoning > and Answer : <your answer (s) >. If there are no suitable answers leave thespace afters Answer : blank .ExampleUser :I observe the following clusters of objects while exploring ahouse :1. sofa , tv , speaker2. desk , chair , computer3. sink , microwave , refrigeratorWhere should I avoid spending time searching if I am lookingfor a knife ?Assistant :Reasoning : Knifes are typically not kept in a living room oroffice space which is what the objects in 1 and 2 suggest .Therefore you should avoid looking in 1 and 2.Answer : 1,2Other considerations1. Disregard the frequency of the objects listed on each line .If there are multiple of the same item in a cluster itwill only be listed once in that cluster .2. You will only be given a list of common items found in theenvironment . You will not be given room labels . Use yourbest judgement when determining what room a cluster ofobjects is likely to belong to.17 |
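Assembling the Appendix B prompts and reading back the structured "Reasoning: ... Answer: ..." responses could look like the sketch below. Only the overall structure (a numbered list of object clusters, a positive and a negative question, and an "Answer:" field) comes from the paper; the exact string handling is our assumption.

```python
import re
from typing import List

POSITIVE_QUESTION = "Where should I search next if I am looking for a {goal}?"
NEGATIVE_QUESTION = "Where should I avoid spending time searching if I am looking for a {goal}?"

def format_clusters(object_clusters: List[List[str]]) -> str:
    """Render object clusters as the numbered list expected by the prompts."""
    return "\n".join(f"{i + 1}. {', '.join(cluster)}"
                     for i, cluster in enumerate(object_clusters))

def build_user_message(object_clusters: List[List[str]], goal: str, positive: bool) -> str:
    """Compose the user message for either the positive or the negative query."""
    question = POSITIVE_QUESTION if positive else NEGATIVE_QUESTION
    return ("I observe the following clusters of objects while exploring a house:\n\n"
            f"{format_clusters(object_clusters)}\n\n"
            f"{question.format(goal=goal)}")

def parse_structured_response(text: str, num_clusters: int) -> List[int]:
    """Return the 0-indexed cluster ids listed after 'Answer:', if any."""
    match = re.search(r"Answer\s*:\s*(.*)", text)
    if match is None:
        return []
    ids = [int(tok) for tok in re.findall(r"\d+", match.group(1))]
    return [i - 1 for i in ids if 1 <= i <= num_clusters]

# Example usage:
clusters = [["sofa", "tv", "speaker"], ["desk", "chair", "computer"],
            ["sink", "microwave oven", "refrigerator"]]
print(build_user_message(clusters, "knife", positive=True))
print(parse_structured_response("Reasoning: knives are kept in kitchens.\nAnswer: 3", 3))
```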
z3D__-nc9y | Autonomous Robotic Reinforcement Learning withAsynchronous Human FeedbackMax Balsells1* Marcel Torne2* Zihan Wang1* Samedh Desai1Pulkit Agrawal2Abhishek Gupta11University of Washington1Massachusetts Institute of Technology{balsells,avinwang,samedh,abhgupta }@cs.washington.edu{marcelto,pulkitag }@mit.eduAbstract: Ideally, we would place a robot in a real-world environment and leaveit there improving on its own by gathering more experience autonomously. How-ever, algorithms for autonomous robotic learning have been challenging to realizein the real world. While this has often been attributed to the challenge of samplecomplexity, even sample-efficient techniques are hampered by two major chal-lenges - the difficulty of providing well “shaped” rewards, and the difficulty ofcontinual reset-free training. In this work, we describe a system for real-worldreinforcement learning that enables agents to show continual improvement bytraining directly in the real world without requiring painstaking effort to hand-design reward functions or reset mechanisms. Our system leverages occasionalnon-expert human-in-the-loop feedback from remote users to learn informativedistance functions to guide exploration while leveraging a simple self-supervisedlearning algorithm for goal-directed policy learning. We show that in the ab-sence of resets, it is particularly important to account for the current “reachability”of the exploration policy when deciding which regions of the space to explore.Based on this insight, we instantiate a practical learning system - GEAR , whichenables robots to simply be placed in real-world environments and left to train au-tonomously without interruption. The system streams robot experience to a webinterface only requiring occasional asynchronous feedback from remote, crowd-sourced, non-expert humans in the form of binary comparative feedback. We eval-uate this system on a suite of robotic tasks in simulation and demonstrate its effec-tiveness at learning behaviors both in simulation and the real world. Project web-sitehttps://guided-exploration-autonomous-rl.github.io/GEAR/ .Keywords: Autonomous Learning, Reward Specification, Reset-Free Learning,Crowdsourced Human Feedback1 IntroductionRobotic reinforcement learning (RL) is a useful tool for continual improvement, particularly in un-structured real-world domains like homes or offices. The promise of autonomous RL methods forrobotics is tremendous - simply place a robotic learning agent in a new environment, and see acontinual improvement in behavior with an increasing amount of collected experience. Ideally, thiswould happen without significant environment-specific instrumentation, such as resets, or algorithmdesign choices (e.g. shaping reward functions). However, the practical challenges involved in en-abling real-world autonomous RL are non-trivial to tackle. While those challenges have often beenchalked down to just sample efficiency [1, 2], we argue that the requirement for constant humaneffort during learning is the main hindrance in autonomous real-world RL. Given the episodic nature*Denotes equal contribution7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.of most RL algorithms, human effort is required to provide constant resets to the system [3, 4, 5],and to carefully hand-design reward functions [6, 7, 8] to succeed. 
The requirement for constanthuman intervention to finetune the reward signal and provide resets [3], hinders real-world RL.The field of autonomous RL [3, 4, 5] studies this problem of enabling uninterrupted real-worldtraining of learning systems. A majority of these techniques aimed to infer reward functions fromdemonstrations or goal specifications [9, 10, 11, 12], while enabling reset-free learning by trainingan agent to reset itself [13, 9, 5]. However, these techniques can be challenging to scale to taskswith non-trivial exploration [14].Real World Autonomous ExplorationNavigation ManipulationGoal State 1 State 2Occasional Crowdsourced Comparative FeedbackGuided ExplorationThrough Goal SelectionFilter Reachable StatesSelect States Closest to Target GoalDensity ModelGoalSelectorFigure 1: Problem setting in GEAR . The robot explores the world autonomously and reset-free only usingcheap, occasional binary feedback from non-expert users to guide exploration. This allows for massive scalingof data experience and solving much more challenging tasks.To enable autonomous RL techniques to scale to complex tasks with non-trivial exploration, wenote that directly reaching a final target goal through autonomous exploration can be difficult. How-ever, achieving a promising intermediate sub-goal can be relatively simple. This process can thenbe repeated until the desired target goal is accomplished, making the overall exploration processtractable, as long as sub-goals are appropriately chosen. The important question becomes - ”Whatare promising sub-goals to reach and how can we learn how to reach them within autonomous RL?”In this work, we note that a promising sub-goal is one that satisfies two criteria: (1) it is closer tothe desired final goal than the current state of the system, and (2) it is reachable from the currentstate under the current policy. While criterion (1) is challenging to estimate without actually havingthe task solved beforehand, in this work, we show that asynchronously provided non-expert humanfeedback in the form of binary comparisons can be used to cheaply estimate state-goal proximity.This estimate of proximity can be paired with a density model for measuring reachability under thecurrent policy, thereby selecting promising intermediate sub-goals that satisfy both criteria (1) and(2). Our proposed learning system - Guided Exploration for Autonomous Reinforcement learning(GEAR ) leverages occasionally provided comparative human feedback [15, 16] for directing ex-ploration towards promising intermediate sub-goals, and self-supervised policy learning techniques[17] for learning goal-directed behavior from this exploration data. This is a step towards an au-tonomous RL system that can leverage both self-collected experience and cheap human guidance toscale to a variety of complex real-world tasks with minimal effort.2 Related WorkAutonomous Reinforcement Learning in Real World. RL has been used to learn skills in the realworld through interaction with real-world environments [18, 19, 20, 21, 22, 23]. A common limita-tion is the requirement for episodic resets, necessitating frequent human interventions. “Autonomousreinforcement learning” aims to obviate these challenges by building learning systems that can learnwith minimal interventions [4, 24, 5]. 
A large class of autonomous RL methods, involve learning aforward policy to accomplish a task and a backward policy to reset the system [24, 25, 4, 26, 27, 5].A different class of methods [13, 28, 29, 30] views the reset-free RL problem as a multi-task RLproblem. These techniques typically require a human-provided reward [5, 24, 4, 13, 28] or rely2on simple techniques like goal-classifiers to provide rewards [11, 26]. These reward mechanismsfail to solve domains with challenging elements of exploration. GEAR is able to learn autonomouslywith no manual resets, while using cheap, asynchronous human feedback to guide exploration moreeffectively.Learning from Human Feedback: To alleviate the challenges of reward function specification, webuild on the framework of learning from binary human comparisons [15, 31, 32, 16, 33, 34, 35, 36].These techniques leverage binary comparative feedback from human supervisors to learn rewardfunctions that can then be used directly with RL methods, often model-free [37]. While these tech-niques have often been effective in learning for language models [38, 39], they have seen relativelyfew applications in fully real-world autonomous RL at scale. The primary challenge is that thesemethods are too sample inefficient for real-world use, or require too much human feedback [15, 35].In this work, we build on a recently introduced technique [40] that combines self-supervised policylearning via hindsight supervised learning [17] with learning from human feedback to allow for ro-bust learning that is resilient to infrequent and incorrect human feedback. While [40] was largelyevaluated in simulation or in simple episodic tasks, we leverage insights from [40] to build RLsystems that do not require resets or careful environment setup.Some work in robotics that rely on human feedback, scale it up by means of crowdsourcing [41, 42].We show that GEAR can also work from crowdsourced feedback by using Amazon Mechanical Turkas a crowdsourcing platform, collecting annotations in the form of binary comparisons. Note thatother types of human feedback have also been leveraged in previous work, such as through physicalcontact [43, 44, 45], eye gaze [46], emergency stop [47]. Our method is agnostic to the kind offeedback as long as we can translate it into a sort of distance function to guide subgoal selection.While it hasn’t been tackled in this paper due to the scarce amount of feedback needed in the testedtasks, research done in learning when to ask for human feedback [48, 49, 50, 51, 52, 53] could beleveraged in GEAR to increase efficiency in the amount of feedback requested. Similarly, previouswork on shared autonomy and how to improve the understanding of the human/robot intentions[54, 55, 56] could also be applied to GEAR to allow for better use of human feedback.Goal-conditioned reinforcement learning: In this work, we build on the framework of goal-conditioned policy learning to learn robot behaviors. Goal-conditioned RL [17, 57, 58, 59, 60]studies the sub-class of MDPs where tasks involve reaching particular “goal” states. The key insightbehind goal-conditioned RL methods is that different tasks can be characterized directly by statesin the state space that the agent is tasked with reaching. Based on this insight, a variety of self-supervised goal-conditioned RL techniques have been proposed [58, 61, 62, 63, 59, 17, 64, 65, 66]that aim to leverage the idea of “hindsight”-relabeling to learn policies. 
These techniques typicallyrely on policy generalization to show that learning on actually achieved goals can help reach thedesired ones. As opposed to these techniques, GEAR uses self-supervised policy learning [17, 61]for acquiring goal-directed behavior while relying on human feedback [67] to direct exploration.3 PreliminariesProblem Setting. In this work, we focus on the autonomous reinforcement learning problem[4]. We model the agent’s environment as a Markov decision process (MDP), symbolized by thetuple (S,A,T,R, ρ0, γ), with standard notation [68]. The reward function is r∈ R , wherer:S×A→ R1is unknown to us, as it is challenging to specify. As we discuss in Section 4,an approximate reward ˆr(s)must be inferred from human feedback in the process of training. As inseveral autonomous RL problem settings, of particular interest is the initial state distribution ρ0(s),which is provided for evaluation, but the system is run reset-free without the ability to reset thesystem to initial states s∼ρ0(s)during training [4]. The aim is to learn a policy π:S → A ,that maximizes the expected sum of rewards Eπ,ρ0[P∞t=0γtr(st, at)]starting from the initial statedistribution ρ0(s)and executing a learned policy π.3Goal-Conditioned Reinforcement Learning. Since reward functions are challenging to define inthe most general case, goal-conditioned policy learning techniques consider a simplified class ofMDPs where rewards are restricted to the problem of reaching particular goals. The goal-reachingproblem can be characterized as (S,A,T,R, ρ0, γ,G, p(g)), with a goal space Gand a target goaldistribution p(g)in addition to the standard MDP setup. In the episodic goal-conditioned RL setting,each episode involves sampling a goal from the goal distribution g∼p(g), and attempting to reachit. The policy, π, and reward, r, are conditioned on the selected goal g∈ G. Goal-conditioned RLproblems leverage a special form of the reward function as r(s, a, g ) = 1(s=g),1when a goal isreached, and 0otherwise. As in standard RL, at evaluation time an agent is tasked with maximizingthe discounted reward based on the goal, Eg∼p(g),π,ρ 0[P∞t=0γtr(st, at, g)]. Note that, unlike mostwork in goal-conditioned RL, we are in the autonomous goal-conditioned RL setting, where we donot have access to resets during training.While goal-conditioned RL makes the reward r(st, at, g)particularly easy to specify, in continuousspaces r(st, at, g) = 1(st=g)is zero with high probability. Recent techniques have circumventedthis by leveraging the idea of hindsight relabeling [62, 58, 17]. The key idea is to note that whilewhen commanding a goal the reward will likely by 0, it can be set to 1had the states that are actuallyreached, been commanded as goals, thereby providing self-supervised learning signal [62, 58, 17].This idea of using self-supervision for policy learning has spanned both techniques for RL [58, 59]and iterated supervised learning [61, 17]. The resulting objective for supervised policy learning canbe expressed as arg max πEτ∼Eg[ ̄π(·|g),g∼p(g)]hPTt=0logπ(at|st,G(τ))i. 
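A minimal sketch of optimizing this hindsight-relabeled supervised objective is given below, assuming a discrete action space, a small MLP policy, and the final state of each trajectory as the relabeled goal G(tau); these choices are illustrative and not necessarily those of the cited methods.

```python
from typing import List, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F

class GoalConditionedPolicy(nn.Module):
    """pi(a | s, g): a small MLP over the concatenated state and goal."""
    def __init__(self, state_dim: int, goal_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, goal], dim=-1))   # action logits

def hindsight_relabel(trajectory: List[Tuple[torch.Tensor, int]]) -> torch.Tensor:
    """Use the final state actually reached as the goal G(tau) for the whole trajectory."""
    final_state, _ = trajectory[-1]
    return final_state

def supervised_update(policy: GoalConditionedPolicy, optimizer: torch.optim.Optimizer,
                      trajectory: List[Tuple[torch.Tensor, int]]) -> float:
    """One self-supervised step: maximize sum_t log pi(a_t | s_t, G(tau))."""
    goal = hindsight_relabel(trajectory)
    states = torch.stack([s for s, _ in trajectory])
    actions = torch.tensor([a for _, a in trajectory])
    goals = goal.unsqueeze(0).expand(len(trajectory), -1)
    logits = policy(states, goals)
    loss = F.cross_entropy(logits, actions)   # = -(1/T) sum_t log pi(a_t | s_t, G(tau))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```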
4 GEAR: A System for Autonomous Robotic Reinforcement Learning with Asynchronous Human Feedback

Our proposed system, Guided Exploration for Autonomous Reinforcement learning (GEAR), leverages a self-supervised policy learning scheme with occasional human feedback to learn behaviors without requiring any resets or hand-specified rewards. While typical autonomous RL methods aim to reset themselves autonomously, this comes at the cost of a challenging reward design problem: the reward function has to account for all eventualities that the system may find itself in, rather than only behaviors from a single initial state. In this work, we instantiate a practical algorithm, GEAR, that takes the perspective that cheap, asynchronous feedback in the form of binary comparisons provided remotely by humans can guide exploration in autonomous RL.

4.1 Reset-Free Learning via Goal-Conditioned Policy Learning

The problem of autonomous RL involves learning a policy to perform a task evaluated from a designated initial state distribution $\rho_0$, but without access to episodic resets during training. Typical paradigms for this problem [4, 26, 5] have alternated between training a "forward" policy to solve the task and a "reverse" policy to reset the environment. The challenge with applying this in the real world boils down to the difficulty of reward specification.

Figure 2: Depiction of autonomous exploration with GEAR: the policy alternates between trying to go to a goal state and getting back to the initial state. In doing so, the agent is commanded an intermediate sub-goal that is both proximal to the goal and reachable under the current policy. When no such sub-goal exists, the policy performs random exploration. The resulting policy learns to go back and forth while efficiently exploring the space.

To circumvent the challenge of reward specification, we can model the problem as a goal-conditioned one and leverage self-supervised learning methods [17]. This involves learning a single goal-conditioned policy $\pi(a \mid s, g)$ that can perform both forward and reverse tasks. While trying to accomplish the forward task, the policy takes $g \sim p(g)$ as its goal; once the goal is accomplished, the reverse process takes $g \sim \rho_0(s)$ as its goal, bringing the agent back to its initial state. The policy $\pi(a \mid s, g)$ can be learned by self-supervised goal-conditioned policy learning methods [17, 61]. In these approaches, states that were reached in some trajectory during training are relabeled, in hindsight, as goal states for which those trajectories serve as examples of how to reach them.
Then, supervised learning is used to learn policies from this data (Section 3).

A problem with this kind of self-supervised policy learning algorithm is that it relies purely on policy generalization for exploration: the policy $\pi$ is commanded to reach $g \sim p(g)$ or $g \sim \rho_0(s)$ even if it has never seen valid paths to reach $g$. This may result in very poor trajectories, since the pair $(s, g)$ can be out-of-distribution. This problem is especially exacerbated in autonomous RL settings.

4.2 Guided Exploration and Policy Learning via Asynchronous Human Feedback

Rather than always commanding the initial state or the target goal as described above, human feedback can help select meaningful intermediate sub-goals that command the policy to interesting states from which to perform exploration. In autonomous RL, meaningful intermediate sub-goals $g_{\text{sub}}$ are those that make progress towards reaching the currently desired goal. Given a desired goal $g$, the intermediate sub-goal $g_{\text{sub}}$ should be (1) close to $g$ in terms of dynamical distance [66], and (2) reachable from the current state $s$ under the policy $\pi$. Without knowledge of the optimal value function $V^*$, it is non-trivial to estimate both conditions (1) and (2). In GEAR, we rely on binary feedback provided by human users to estimate state-goal proximity (condition (1)) and on density estimation to compute state-goal reachability (condition (2)).

Proximity Estimation Using Human Feedback. To estimate state-goal proximity, we draw inspiration from work [40, 15, 16] on learning from human preferences. Specifically, we build directly on a recently proposed technique that uses human preferences to guide exploration in the episodic setting [40]. While [40] also guides exploration using human preferences to estimate state-goal proximities, it has a strict episodic requirement that makes it unsuitable for autonomous RL.

Techniques based on comparative feedback aim to learn reward functions from binary comparisons provided by non-expert human supervisors. In this framework, human users are asked to label which of two states $s_i$ or $s_j$ is closer to a particular desired goal $g$. These preferences can then be used to infer an (unnormalized) state-goal proximity function $d_\phi(s, g)$ by optimizing $d_\phi$ with respect to the following objective, derived from the Bradley-Terry model of paired comparisons [67, 15, 33]:
$$\mathcal{L}_{\text{rank}}(\phi) = -\mathbb{E}_{(s_i, s_j, g) \sim \mathcal{D}}\Bigg[\mathbb{1}_{i<j}\,\log\frac{\exp(-d_\phi(s_i, g))}{\exp(-d_\phi(s_i, g)) + \exp(-d_\phi(s_j, g))} + (1-\mathbb{1}_{i<j})\,\log\frac{\exp(-d_\phi(s_j, g))}{\exp(-d_\phi(s_i, g)) + \exp(-d_\phi(s_j, g))}\Bigg].$$
This objective ensures that if a state $s_i$ is preferred over $s_j$ in terms of its proximity to the goal $g$, the unnormalized distance satisfies $d_\phi(s_i, g) < d_\phi(s_j, g)$.

During exploration, we can choose a sub-goal by sampling a batch of visited states and scoring them according to $d_\phi(s, g)$. As outlined in [40], the chosen sub-goal does not need to be the one that minimizes the estimated distance to the goal; it can instead be sampled softly from the softmax distribution $p(g_{\text{sub}} \mid s, g) = \frac{\exp(-d_\phi(g_{\text{sub}}, g))}{\sum_{s' \in \mathcal{D}} \exp(-d_\phi(s', g))}$ to deal with imperfections in human comparisons.
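As a concrete illustration, the sketch below implements this ranking objective as a binary cross-entropy over the preferred state; the network interface and batch layout are assumptions made for exposition.

```python
# Minimal sketch of the Bradley-Terry style ranking loss for the proximity
# model d_phi(s, g). The network interface and batch layout are illustrative.
import torch
import torch.nn.functional as F

def ranking_loss(d_phi, s_i, s_j, g, i_preferred):
    """Cross-entropy over which of (s_i, s_j) the annotator marked closer to g.

    s_i, s_j, g: float tensors [B, dim]; i_preferred: float tensor [B] in {0, 1},
    set to 1 when the annotator judged s_i closer to the goal than s_j.
    """
    d_i = d_phi(torch.cat([s_i, g], dim=-1)).squeeze(-1)  # unnormalized distance d_phi(s_i, g)
    d_j = d_phi(torch.cat([s_j, g], dim=-1)).squeeze(-1)  # unnormalized distance d_phi(s_j, g)
    # P(s_i preferred) = exp(-d_i) / (exp(-d_i) + exp(-d_j)) = sigmoid(d_j - d_i)
    logits = d_j - d_i
    return F.binary_cross_entropy_with_logits(logits, i_preferred)
```

Minimizing this loss drives $d_\phi(s_i, g)$ below $d_\phi(s_j, g)$ whenever annotators prefer $s_i$, which is exactly the ordering property the sub-goal selection step relies on.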
Reachability Estimation Using Likelihood-Based Density Estimation. While the proximity measure above suggests which intermediate sub-goal $g_{\text{sub}}$ is closest to a particular target goal $g$, it does not provide a measure of "reachability": whether $g_{\text{sub}}$ is actually reachable by the current policy $\pi$ from the current environment state $s$. This is not a problem in an episodic setting, but in the autonomous, reset-free learning case, many states that have been visited in the data buffer may not be reachable from the current environment state $s$ using the current policy $\pi$.

In our self-supervised policy learning scheme (Section 4.1), reachability corresponds directly to marginal density: seeing $(s, g_{\text{sub}})$ pairs in the dataset is likely to indicate that $g_{\text{sub}}$ is reachable from a particular state $s$. This is a simple consequence of the fact that policies are learned via hindsight relabeling with supervised learning [17], and that a supervised learning oracle would ensure reachability for states with enough density in the training set. This suggests that the set of reachable intermediate sub-goals $g_{\text{sub}}$ can be computed by estimating and then thresholding the marginal likelihood of various $(s, g_{\text{sub}})$ pairs in the training data. To do so, a standard maximum likelihood generative modeling technique can be used to learn a density $p_\psi(s_t, g_{\text{sub}})$ [69, 70, 71, 72, 73].

The learned density model $p_\psi(s_t, g_{\text{sub}})$ can be used to select reachable goals with a simple procedure: given a batch of candidate sub-goals sampled from the states visited thus far, we first filter reachable candidates by thresholding the density, $p_\psi(s_t, g_{\text{sub}}) \geq \varepsilon$. From the set of reachable candidates, a proximal sub-goal is then sampled proportionally to the state-goal distance $d_\phi$ estimated from human feedback as described above. Note that when there are no viable reachable candidates, the policy performs random exploration. In our experimental evaluation in simulation, we estimate this density with a neural autoregressive density model [69, 70, 71] or a discretized, tabular density model.

4.3 System Overview

The overall GEAR system learns policies in the real world without needing resets or reward functions, using minimal, non-expert, crowdsourced human feedback. GEAR alternates between exploring and learning a policy $\pi_\theta(a \mid s, g)$ to reach the target goal distribution $g \sim p(g)$ in the forward process, and to reach the initial state $g \sim \rho_0(s)$ in the reverse one. In each of these processes, the agent selects intermediate sub-goals $g_{\text{sub}}$ for exploration based on the current state and the desired goal. The sub-goal selection mechanism first samples a set of visited states from the replay buffer, $\mathcal{D}_{\text{candidates}} = \{s_i\}_{i=1}^{N} \sim \mathcal{D}$. Then, it uses the density model $p_\psi$ to estimate the reachability of these states from the current one, filtering out the unreachable states. Amongst the remaining candidates $\mathcal{D}_{\text{candidates}}^{\text{filtered}}$, intermediate sub-goals $g_{\text{sub}}$ are sampled proportionally to their estimated proximity to the desired goal, according to the human-trained model $p(g_{\text{sub}} \mid s, g) = \frac{\exp(-d_\phi(g_{\text{sub}}, g))}{\sum_{s' \in \mathcal{D}_{\text{candidates}}^{\text{filtered}}} \exp(-d_\phi(s', g))}$. If no states are reachable, the agent performs random exploration. This exploration process repeats until the target goal $g \sim p(g)$ (or the start state $g \sim \rho_0(s)$) is reached; then the goal is flipped and learning continues. Periodically, the agent updates its density model $p_\psi(s_t, g_{\text{sub}})$ by likelihood-based training and incorporates occasional, asynchronous feedback from crowdsourced human supervisors to update the state-goal proximity estimate $d_\phi(g_{\text{sub}}, g)$. To warm-start training, we can add a set of (potentially suboptimal) teleoperated data to the replay buffer and pretrain our policy $\pi(a \mid s, g)$ via supervised learning on hindsight-relabeled trajectories, as described in Section 3 and in [17, 61, 40].
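The sketch below puts the two estimates together into the sub-goal selection routine described above; the helper names, shapes, and the NumPy softmax are illustrative assumptions rather than the exact implementation.

```python
# Minimal sketch of GEAR's sub-goal selection: filter candidate states with the
# density (reachability) model, then sample a sub-goal from a softmax over the
# human-trained proximity model. Helper names and shapes are assumptions.
import numpy as np

def select_subgoal(candidates, current_state, goal, density_model, proximity_model,
                   eps=1e-3):
    """candidates: array [N, state_dim] of previously visited states; returns a
    sub-goal, or None to signal that random exploration should be used instead."""
    # 1) Reachability: keep candidates g_sub with p_psi(s_t, g_sub) >= eps.
    densities = np.array([density_model(current_state, c) for c in candidates])
    reachable = candidates[densities >= eps]
    if len(reachable) == 0:
        return None
    # 2) Proximity: sample proportionally to exp(-d_phi(g_sub, goal)).
    dists = np.array([proximity_model(c, goal) for c in reachable])
    logits = -dists - np.max(-dists)                  # shift for numerical stability
    probs = np.exp(logits) / np.sum(np.exp(logits))
    idx = np.random.choice(len(reachable), p=probs)
    return reachable[idx]
```

When the function returns None, the agent falls back to random exploration, mirroring the behavior described above; the same routine is used for both the forward goal and the initial state in the reverse process.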
Detailed pseudocode and algorithm equations can be found in Appendix A.

5 Experimental Evaluation

To evaluate our proposed learning system GEAR, we aim to answer the following questions: (1) Is GEAR able to learn behaviors autonomously in simulation environments without requiring resets or careful reward specification? (2) Is GEAR able to scale to learning robotic behaviors directly in the real world using asynchronous crowdsourced human feedback and autonomous RL?

5.1 Evaluation Domains

We evaluate our proposed learning system on four domains in simulation and two in the real world. The benchmarks are depicted in Fig 3 and consist of the following environments. Pusher: a simple manipulation task in which an object must be moved across a table. Navigation: a task in which a TurtleBot has to move to a given goal location. Kitchen: another manipulation task in which the robot has to open and close a microwave. Four Rooms: another navigation task in which an agent has to move through several rooms with tight doors. Details of these environments can be found in Appendix C.

Figure 3: Evaluation domains for GEAR (Pusher, Kitchen, Navigation, Four Rooms, Navigation (real), Pusher (real)). We consider a mixture of navigation and manipulation tasks, both in simulation and the real world, for autonomous learning.

5.2 Baselines and Comparisons

To test the effectiveness of GEAR, we compare with several baselines on autonomous reinforcement learning and learning from human feedback. Details of the baselines are presented in Appendix B.

5.3 Does GEAR learn behaviors autonomously in simulation?

Figure 4: Success rate of autonomous training in simulation of GEAR as compared to baselines (evaluation success versus number of steps for Four Rooms navigation, Kitchen manipulation, Block Pusher, and LoCoBot navigation; methods compared: Oracle densities, Autoregressive model, HuGE, GCSL, VICE, Human Preferences, FBRL). We find that both autoregressive and tabular variants of GEAR successfully accomplish all tasks, more efficiently than alternative reset-free, goal-conditioned, and human-in-the-loop techniques.

As seen in Fig 4, when evaluation success rates are averaged over 4 seeds, GEAR is able to successfully learn across all environments in simulation. In particular, we see that GEAR outperforms previous work in reset-free RL such as VICE or FBRL. In the case of FBRL, this is a consequence of GEAR not relying on a carefully tailored reward function, as well as of the fact that hindsight relabeling methods are more sample efficient than PPO-based methods [40]. This, together with the more accurate reward signal that GEAR gets from comparative human feedback, also explains why GEAR outperforms VICE.

We also see that GEAR is significantly more performant than HuGE, because GEAR accounts for policy reachability when commanding subgoals. By ignoring reachability, the goal selection mechanism in HuGE often selects infeasible goals that get the agent trapped. We explore this topic further in Appendix E.1, where we show that GEAR reaches a higher percentage of commanded goals than HuGE.
Additionally, by relying on subgoals to guide exploration, GEAR also outperforms GCSL. Moreover, GEAR outperforms previous work that relies on human feedback, such as the Human Preferences method, both because it guides exploration via subgoal selection and because subgoal selection together with hindsight relabeling makes it robust to untailored reward signals (feedback) [40].

Notice that in Fig 4 we conduct experiments using both autoregressive neural density models [69, 70] and tabular densities for measuring policy reachability, except in the kitchen environment, whose higher dimensionality inhibits using tabular densities. While the comparisons in Fig 4 are done from the Lagrangian system state, we will explore scaling this to visual inputs in future work.

In order to obtain a fair comparison between all of the baselines, we leveraged a synthetic oracle instead of a real human. In Section 5.4, we show that GEAR succeeds in learning optimal policies when using human feedback in the real world. Furthermore, in Appendix D we show that GEAR can learn successful policies no matter where the feedback comes from: synthetic oracles, non-expert annotators on Amazon Mechanical Turk, or expert annotators.

In Appendix E, we provide further analysis of the hyperparameters of GEAR, and we show the effect of the amount and frequency of human feedback needed to learn optimal policies.

5.4 Does GEAR learn behaviors autonomously in the real world from crowdsourced human feedback?

Figure 5: Evaluation in the real world for the Franka pusher and TurtleBot navigation (success rate versus number of episodes, with feedback crowdsourced from Amazon Mechanical Turk), showing continuous improvement.

Next, we trained GEAR in the real world, learning from crowdsourced human feedback provided by arbitrary, non-expert users on the web. We set up two different tasks: first, a navigation task with the TurtleBot, and second, a manipulation task with the Franka arm consisting of pushing a bowl (Fig 3). The first task involves leaving the robot in a living room and allowing it to autonomously explore how to navigate the room from one location to another through obstacles, with human feedback crowdsourced from Amazon Mechanical Turk. The robot is provided with around 10 trajectory demonstrations of teleoperated seed data and is then left to improve autonomously. Human supervisors provide occasional feedback: 453 comparative labels provided asynchronously over 8 hours, from 40 different annotators. For the second task, the robot is again provided with 10 trajectories, and 200 labels from 22 annotators over the course of one hour. We observe in Fig 5 that our policies successfully learn to solve each task from minimal human supervision in a reset-free setting in the real world. A depiction of the interface can be found in Appendix G, and more details on the demographics of the crowdsourced supervision can be found in Appendix D.

6 Conclusion and Limitations

In this work, we present a framework for autonomous reinforcement learning in the real world using cheap, asynchronous human feedback.
We show how a self-supervised policy learning algorithmcan be efficiently guided using human feedback in the form of binary comparisons to select interme-diate subgoals that direct exploration and density models to account for reachability. The resultinglearning algorithm can be deployed on a variety of tasks in simulation and the real world, learningfrom self-collected data with occasional human feedback.Limitations: This work has several limitations: (1) Safety Guarantees: Real-world exploration withRL can be unsafe, leading to potentially catastrophic scenarios during exploration, (2) Limitation toBinary Comparisons: Binary comparisons provided by humans are cheap, but provide impoverishedfeedback since it only provides a single bit of information per comparison. (3) Requirement for Pre-training Demonstrations: For practical learning in the real world, the efficiency of GEAR is enhancedby using teleoperated pretraining data. This can be expensive to collect (4) Density Model as a Proxyfor Reachability: We use density models as proxies for reachability, but this is only a valid metricin a small set of quasistatic systems. More general notions of reachability can be incorporated. (5)Learning from low dimensional state: the current instantiation of GEAR learns from low dimensionalstate estimates through a visual state-estimation system, which needs considerable tuning.8AcknowledgmentsWe thank all of the participants in our human studies who gave us some of their time to providelabels. We thank the members of the Improbable AI Lab and the WEIRD Lab for their helpfulfeedback and insightful discussions.The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center forproviding HPC resources that have contributed to the research results reported within this paper.This research was supported by NSF Robust Intelligence Grant 2212310, the MIT-IBM Watson AILab, and the Sony Research Award.Author ContributionsMax Balsells led the project and the novel contributions together with Marcel and Zihan. Max andMarcel led the writing. Max led the navigation experiments in simulation and real world and theKitchen simulated experiments. Max and Marcel led work on the baselines. Max led the work onthe ablations.Marcel Torne led the project and novel contributions together with Max and Zihan. Marcel and Maxled the writing of the manuscript. Marcel led the pusher simulation and real-world experiments.Marcel led the human crowdsourcing experiments. Max and Marcel led work on the baselines.Marcel was in charge of the figures in the paper.Zihan Wang led the project and novel contributions together with Max and Marcel. Zihan helpedin the writing of the paper and conducting ablations.Samedh Desai helped to run the navigation experiments in the real world.Pulkit Agrawal provided some of the valuable resources needed to execute this project.Abhishek Gupta provided guidance throughout the development of the project, assisted with thenovel contributions of the paper, and helped in the writing of the paper.References[1] L. M. Smith, I. Kostrikov, and S. Levine. A walk in the park: Learning to walk in 20 minuteswith model-free reinforcement learning. CoRR , abs/2208.07860, 2022. doi:10.48550/arXiv.2208.07860. URL https://doi.org/10.48550/arXiv.2208.07860 .[2] P. Wu, A. Escontrela, D. Hafner, P. Abbeel, and K. Goldberg. Daydreamer: World modelsfor physical robot learning. In K. Liu, D. Kulic, and J. 
Ichnowski, editors, Conference onRobot Learning, CoRL 2022, 14-18 December 2022, Auckland, New Zealand , volume 205 ofProceedings of Machine Learning Research , pages 2226–2240. PMLR, 2022. URL https://proceedings.mlr.press/v205/wu23c.html .[3] H. Zhu, J. Yu, A. Gupta, D. Shah, K. Hartikainen, A. Singh, V . Kumar, and S. Levine. Theingredients of real world robotic reinforcement learning. In 8th International Conference onLearning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020 . OpenRe-view.net, 2020. URL https://openreview.net/forum?id=rJe2syrtvS .[4] A. Sharma, K. Xu, N. Sardana, A. Gupta, K. Hausman, S. Levine, and C. Finn. Autonomous re-inforcement learning: Formalism and benchmarking. arXiv preprint arXiv:2112.09605 , 2021.[5] B. Eysenbach, S. Gu, J. Ibarz, and S. Levine. Leave no trace: Learning to reset for safe andautonomous reinforcement learning. In 6th International Conference on Learning Representa-tions, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceed-ings. OpenReview.net, 2018. URL https://openreview.net/forum?id=S1vuO-bCW .[6] W. B. Knox, A. Allievi, H. Banzhaf, F. Schmitt, and P. Stone. Reward (mis)design for au-tonomous driving. Artif. Intell. , 316:103829, 2023. doi:10.1016/j.artint.2022.103829. URLhttps://doi.org/10.1016/j.artint.2022.103829 .9[7] A. Handa, A. Allshire, V . Makoviychuk, A. Petrenko, R. Singh, J. Liu, D. Makovi-ichuk, K. V . Wyk, A. Zhurkevich, B. Sundaralingam, Y . Narang, J. Lafleche, D. Fox, andG. State. Dextreme: Transfer of agile in-hand manipulation from simulation to reality.CoRR , abs/2210.13702, 2022. doi:10.48550/arXiv.2210.13702. URL https://doi.org/10.48550/arXiv.2210.13702 .[8] P. Agrawal. The task specification problem. In A. Faust, D. Hsu, and G. Neumann, ed-itors, Conference on Robot Learning, 8-11 November 2021, London, UK , volume 164 ofProceedings of Machine Learning Research , pages 1745–1751. PMLR, 2021. URL https://proceedings.mlr.press/v164/agrawal22a.html .[9] A. Sharma, R. Ahmad, and C. Finn. A state-distribution matching approach to non-episodicreinforcement learning. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesv ́ari, G. Niu,and S. Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23July 2022, Baltimore, Maryland, USA , volume 162 of Proceedings of Machine LearningResearch , pages 19645–19657. PMLR, 2022. URL https://proceedings.mlr.press/v162/sharma22a.html .[10] A. Sharma, A. M. Ahmed, R. Ahmad, and C. Finn. Self-improving robots: End-to-end autonomous visuomotor reinforcement learning. CoRR , abs/2303.01488, 2023. doi:10.48550/arXiv.2303.01488. URL https://doi.org/10.48550/arXiv.2303.01488 .[11] K. Xu, Z. Hu, R. Doshi, A. Rovinsky, V . Kumar, A. Gupta, and S. Levine. Dexterous manipu-lation from images: Autonomous real-world RL via substep guidance. CoRR , abs/2212.09902,2022. doi:10.48550/arXiv.2212.09902. URL https://doi.org/10.48550/arXiv.2212.09902 .[12] J. Fu, A. Singh, D. Ghosh, L. Yang, and S. Levine. Variational inverse control withevents: A general framework for data-driven reward definition. In S. Bengio, H. M.Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advancesin Neural Information Processing Systems 31: Annual Conference on Neural Informa-tion Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montr ́eal, Canada ,pages 8547–8556, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/c9319967c038f9b923068dabdf60cfe3-Abstract.html .[13] A. Gupta, J. Yu, T. Z. Zhao, V . Kumar, A. 
Rovinsky, K. Xu, T. Devlin, and S. Levine. Reset-free reinforcement learning via multi-task learning: Learning dexterous manipulation behav-iors without human intervention. In IEEE International Conference on Robotics and Automa-tion, ICRA 2021, Xi’an, China, May 30 - June 5, 2021 , pages 6664–6671. IEEE, 2021. doi:10.1109/ICRA48506.2021.9561384. URL https://doi.org/10.1109/ICRA48506.2021.9561384 .[14] K. Xu, S. Verma, C. Finn, and S. Levine. Continual learning of control primitives :Skill discovery via reset-games. In H. Larochelle, M. Ranzato, R. Hadsell, M. Bal-can, and H. Lin, editors, Advances in Neural Information Processing Systems 33: AnnualConference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual , 2020. URL https://proceedings.neurips.cc/paper/2020/hash/3472ab80b6dff70c54758fd6dfc800c2-Abstract.html .[15] P. F. Christiano, J. Leike, T. B. Brown, M. Martic, S. Legg, and D. Amodei. Deep re-inforcement learning from human preferences. In I. Guyon, U. von Luxburg, S. Ben-gio, H. M. Wallach, R. Fergus, S. V . N. Vishwanathan, and R. Garnett, editors, Ad-vances in Neural Information Processing Systems 30: Annual Conference on NeuralInformation Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA ,pages 4299–4307, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html .10[16] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welin-der, P. F. Christiano, J. Leike, and R. Lowe. Training language models to follow instructionswith human feedback. In NeurIPS , 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html .[17] D. Ghosh, A. Gupta, J. Fu, A. Reddy, C. Devin, B. Eysenbach, and S. Levine. Learning toreach goals without reinforcement learning. 2019.[18] S. Lange, M. Riedmiller, and A. V oigtl ̈ander. Autonomous reinforcement learning on rawvisual input data in a real world application. In The 2012 international joint conference onneural networks (IJCNN) , pages 1–8. IEEE, 2012.[19] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly,M. Kalakrishnan, V . Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293 , 2018.[20] H. Zhu, A. Gupta, A. Rajeswaran, S. Levine, and V . Kumar. Dexterous manipulation with deepreinforcement learning: Efficient, general, and low-cost. In 2019 International Conference onRobotics and Automation (ICRA) , pages 3651–3657. IEEE, 2019.[21] M. Bloesch, J. Humplik, V . Patraucean, R. Hafner, T. Haarnoja, A. Byravan, N. Y . Siegel,S. Tunyasuvunakool, F. Casarini, N. Batchelor, et al. Towards real robot learning in the wild:A case study in bipedal locomotion. In Conference on Robot Learning , pages 1502–1511.PMLR, 2022.[22] A. Zeng, S. Song, J. Lee, A. Rodriguez, and T. Funkhouser. Tossingbot: Learning to throwarbitrary objects with residual physics. IEEE Transactions on Robotics , 36(4):1307–1319,2020.[23] H. Bharadhwaj, Z. Wang, Y . Bengio, and L. Paull. A data-efficient framework for training andsim-to-real transfer of navigation policies. In 2019 International Conference on Robotics andAutomation (ICRA) , pages 782–788. IEEE, 2019.[24] W. Han, S. Levine, and P. Abbeel. Learning compound multi-step controllers under unknowndynamics. 
In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 6435–6442. IEEE, 2015.[25] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In 2017 IEEE Inter-national Conference on Robotics and Automation (ICRA) , pages 2786–2793. IEEE, 2017.[26] A. Sharma, A. M. Ahmed, R. Ahmad, and C. Finn. Self-improving robots: End-to-end au-tonomous visuomotor reinforcement learning. arXiv preprint arXiv:2303.01488 , 2023.[27] A. Sharma, A. Gupta, S. Levine, K. Hausman, and C. Finn. Autonomous reinforcement learn-ing via subgoal curricula. In M. Ranzato, A. Beygelzimer, Y . N. Dauphin, P. Liang, and J. W.Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conferenceon Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, vir-tual, pages 18474–18486, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/99c83c904d0d64fbef50d919a5c66a80-Abstract.html .[28] A. Gupta, C. Lynch, B. Kinman, G. Peake, S. Levine, and K. Hausman. Demonstration-bootstrapped autonomous practicing via multi-task reinforcement learning. CoRR ,abs/2203.15755, 2022. doi:10.48550/arXiv.2203.15755. URL https://doi.org/10.48550/arXiv.2203.15755 .11[29] K. Xu, S. Verma, C. Finn, and S. Levine. Continual learning of control primitives: Skilldiscovery via reset-games. Advances in Neural Information Processing Systems , 33:4999–5010, 2020.[30] Z. Zhang and L. Weihs. When learning is out of reach, reset: Generalization in autonomousvisuomotor reinforcement learning. CoRR , abs/2303.17600, 2023. doi:10.48550/arXiv.2303.17600. URL https://doi.org/10.48550/arXiv.2303.17600 .[31] E. Biyik. Learning preferences for interactive autonomy. CoRR , abs/2210.10899, 2022. doi:10.48550/arXiv.2210.10899. URL https://doi.org/10.48550/arXiv.2210.10899 .[32] D. Sadigh, A. D. Dragan, S. Sastry, and S. A. Seshia. Active preference-based learningof reward functions. In N. M. Amato, S. S. Srinivasa, N. Ayanian, and S. Kuindersma,editors, Robotics: Science and Systems XIII, Massachusetts Institute of Technology, Cam-bridge, Massachusetts, USA, July 12-16, 2017 , 2017. doi:10.15607/RSS.2017.XIII.053. URLhttp://www.roboticsproceedings.org/rss13/p53.html .[33] D. S. Brown and S. Niekum. Deep bayesian reward learning from preferences. CoRR ,abs/1912.04472, 2019. URL http://arxiv.org/abs/1912.04472 .[34] K. Lee, L. M. Smith, and P. Abbeel. PEBBLE: feedback-efficient interactive reinforcementlearning via relabeling experience and unsupervised pre-training. In M. Meila and T. Zhang,editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021,18-24 July 2021, Virtual Event , volume 139 of Proceedings of Machine Learning Research ,pages 6152–6163. PMLR, 2021. URL http://proceedings.mlr.press/v139/lee21i.html .[35] K. Lee, L. M. Smith, A. D. Dragan, and P. Abbeel. B-pref: Benchmarkingpreference-based reinforcement learning. In J. Vanschoren and S. Yeung, editors, Pro-ceedings of the Neural Information Processing Systems Track on Datasets and Bench-marks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual , 2021.URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/d82c8d1619ad8176d665453cfb2e55f0-Abstract-round1.html .[36] J. Hejna and D. Sadigh. Inverse preference learning: Preference-based rl without a rewardfunction. arXiv preprint arXiv:2305.15363 , 2023.[37] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. CoRR , abs/1707.06347, 2017. 
URL http://arxiv.org/abs/1707.06347 .[38] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal,K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback,2022. URL https://arxiv. org/abs/2203.02155 , 13, 2022.[39] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y . Babaei, N. Bashlykov, S. Batra,P. Bhargava, S. Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXivpreprint arXiv:2307.09288 , 2023.[40] M. Torne, M. Balsells, Z. Wang, S. Desai, T. Chen, P. Agrawal, and A. Gupta. Breadcrumbsto the goal: Goal-conditioned exploration from human-in-the-loop feedback. arXiv preprintarXiv:2307.11049 , 2023.[41] A. Jain, D. Das, J. K. Gupta, and A. Saxena. Planit: A crowdsourcing approach for learningto plan paths from large scale preference feedback. In 2015 IEEE International Conference onRobotics and Automation (ICRA) , pages 877–884. IEEE, 2015.[42] A. Mandlekar, Y . Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta,E. Orbay, et al. Roboturk: A crowdsourcing platform for robotic skill learning through imita-tion. In Conference on Robot Learning , pages 879–893. PMLR, 2018.12[43] A. Bobu, A. Bajcsy, J. F. Fisac, and A. D. Dragan. Learning under misspecified objectivespaces. In Conference on Robot Learning , pages 796–805. PMLR, 2018.[44] A. Bobu, M. Wiggert, C. Tomlin, and A. D. Dragan. Feature expansive reward learning:Rethinking human input. In Proceedings of the 2021 ACM/IEEE International Conference onHuman-Robot Interaction , pages 216–224, 2021.[45] D. P. Losey, A. Bajcsy, M. K. O’Malley, and A. D. Dragan. Physical interaction as communi-cation: Learning robot objectives online from human corrections. The International Journalof Robotics Research , 41(1):20–44, 2022.[46] A. Saran, E. S. Short, A. Thomaz, and S. Niekum. Understanding teacher gaze patterns forrobot learning. CoRR , abs/1907.07202, 2019. URL http://arxiv.org/abs/1907.07202 .[47] S. Ainsworth, M. Barnes, and S. Srinivasa. Mo’states mo’problems: Emergency stop mecha-nisms from observation. Advances in Neural Information Processing Systems , 32, 2019.[48] S. Tellex, R. Knepper, A. Li, D. Rus, and N. Roy. Asking for help using inverse semantics.2014.[49] C. Basu, E. Bıyık, Z. He, M. Singhal, and D. Sadigh. Active learning of reward dynamics fromhierarchical queries. In 2019 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 120–127. IEEE, 2019.[50] E. Biyik and D. Sadigh. Batch active preference-based learning of reward functions. In Con-ference on robot learning , pages 519–528. PMLR, 2018.[51] A. Z. Ren, A. Dixit, A. Bodrova, S. Singh, S. Tu, N. Brown, P. Xu, L. Takayama, F. Xia,J. Varley, et al. Robots that ask for help: Uncertainty alignment for large language modelplanners. arXiv preprint arXiv:2307.01928 , 2023.[52] A. Peng, A. Netanyahu, M. K. Ho, T. Shu, A. Bobu, J. Shah, and P. Agrawal. Diagnosis,feedback, adaptation: A human-in-the-loop framework for test-time policy adaptation. 2023.[53] A. Nanavati, C. I. Mavrogiannis, K. Weatherwax, L. Takayama, M. Cakmak, and S. S. Srini-vasa. Modeling human helpfulness with individual and contextual factors for robot planning.InRobotics: Science and Systems , 2021.[54] C. P ́erez-D’Arpino and J. A. Shah. C-learn: Learning geometric constraints from demonstra-tions for multi-step manipulation in shared autonomy. In 2017 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 4058–4065. IEEE, 2017.[55] M. Zurek, A. 
Bobu, D. S. Brown, and A. D. Dragan. Situational confidence assistance forlifelong shared autonomy. In 2021 IEEE International Conference on Robotics and Automation(ICRA) , pages 2783–2789. IEEE, 2021.[56] S. Booth, S. Sharma, S. Chung, J. Shah, and E. L. Glassman. Revisiting human-robot teachingand learning through the lens of human concept learning. In 2022 17th ACM/IEEE Interna-tional Conference on Human-Robot Interaction (HRI) , pages 147–156. IEEE, 2022.[57] M. Liu, M. Zhu, and W. Zhang. Goal-conditioned reinforcement learning: Problems andsolutions. In L. D. Raedt, editor, Proceedings of the Thirty-First International Joint Conferenceon Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022 , pages 5502–5511.ijcai.org, 2022. doi:10.24963/ijcai.2022/770. URL https://doi.org/10.24963/ijcai.2022/770 .[58] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. To-bin, O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. Advances in neuralinformation processing systems , 30, 2017.13[59] A. Levy, G. Konidaris, R. Platt, and K. Saenko. Learning multi-level hierarchies with hindsight.arXiv preprint arXiv:1712.00948 , 2017.[60] M. Plappert, M. Andrychowicz, A. Ray, B. McGrew, B. Baker, G. Powell, J. Schneider, J. To-bin, M. Chociej, P. Welinder, V . Kumar, and W. Zaremba. Multi-goal reinforcement learning:Challenging robotics environments and request for research. CoRR , abs/1802.09464, 2018.URL http://arxiv.org/abs/1802.09464 .[61] C. Lynch, M. Khansari, T. Xiao, V . Kumar, J. Tompson, S. Levine, and P. Sermanet. Learninglatent plans from play. In Conference on robot learning , pages 1113–1132. PMLR, 2020.[62] L. P. Kaelbling. Learning to achieve goals. In IJCAI , volume 2, pages 1094–8. Citeseer, 1993.[63] C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P. Florence.Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407 , 2022.[64] T. Davchev, O. Sushkov, J.-B. Regli, S. Schaal, Y . Aytar, M. Wulfmeier, and J. Scholz. Wishyou were here: Hindsight goal selection for long-horizon dexterous manipulation. arXivpreprint arXiv:2112.00597 , 2021.[65] A. Nair, V . Pong, M. Dalal, S. Bahl, S. Lin, and S. Levine. Visual reinforcement learning withimagined goals. In S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi,and R. Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Con-ference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018,Montr ́eal, Canada , pages 9209–9220, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/7ec69dd44416c46745f6edd947b470cd-Abstract.html .[66] K. Hartikainen, X. Geng, T. Haarnoja, and S. Levine. Dynamical distance learning for unsu-pervised and semi-supervised skill discovery. arXiv preprint arXiv:1907.08225 , 2019.[67] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. the method ofpaired comparisons. Biometrika , 39(3/4):324–345, 1952.[68] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. 2018.[69] B. Uria, M. C ˆot ́e, K. Gregor, I. Murray, and H. Larochelle. Neural autoregressive distributionestimation. CoRR , abs/1605.02226, 2016. URL http://arxiv.org/abs/1605.02226 .[70] A. van den Oord, N. Kalchbrenner, L. Espeholt, K. Kavukcuoglu, O. Vinyals, and A. Graves.Conditional image generation with pixelcnn decoders. In D. D. Lee, M. Sugiyama, U. vonLuxburg, I. Guyon, and R. 
Garnett, editors, Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4790–4798, 2016. URL https://proceedings.neurips.cc/paper/2016/hash/b1301141feffabac455e1f90a7de2054-Abstract.html.

[71] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In M. Balcan and K. Q. Weinberger, editors, Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 1747–1756. JMLR.org, 2016. URL http://proceedings.mlr.press/v48/oord16.html.

[72] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In Y. Bengio and Y. LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6114.

[73] L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using real NVP. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=HkpbnH9lx.

[74] J. Fu, A. Singh, D. Ghosh, L. Yang, and S. Levine. Variational inverse control with events: A general framework for data-driven reward definition. CoRR, abs/1805.11686, 2018. URL http://arxiv.org/abs/1805.11686.

[75] B. Eysenbach, S. Gu, J. Ibarz, and S. Levine. Leave no trace: Learning to reset for safe and autonomous reinforcement learning. arXiv preprint arXiv:1711.06782, 2017.

[76] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization, 2017.

Finally, we provide some more insight into our work. In particular, the appendices consist of:

• Appendix A (Algorithm) shows the pseudocode of our method.
• Appendix B (Baselines) explains the different baselines that we compare GEAR to in more detail.
• Appendix C (Benchmarks) explains the different comparison environments in more detail.
• Appendix D (Human Experiments) gives further details on the human experiments, such as the data collection procedure and annotator demographics.
• Appendix E (Ablations) studies the effect of changing key parts of our algorithm or certain hyperparameters.
• Appendix F (Hyperparameters) reports the hyperparameters used in the different runs.
• Appendix G (Interface) shows the web interface that we used for collecting human feedback.

A Algorithm

We reproduce some of the learning objectives here for posterity. The following is the objective for training the goal selector with human-provided comparative feedback:
$$\mathcal{L}_{\text{rank}}(\phi) = -\mathbb{E}_{(s_i, s_j, g) \sim \mathcal{D}}\Bigg[\mathbb{1}_{i<j}\,\log\frac{\exp(-d_\phi(s_i, g))}{\exp(-d_\phi(s_i, g)) + \exp(-d_\phi(s_j, g))} + (1-\mathbb{1}_{i<j})\,\log\frac{\exp(-d_\phi(s_j, g))}{\exp(-d_\phi(s_i, g)) + \exp(-d_\phi(s_j, g))}\Bigg] \quad (1)–(2)$$
The density model $p_\psi(s_t, g_{\text{sub}})$ can be trained on a dataset $\mathcal{D} = \{(s_t^i, g_{\text{sub}}^i)\}_{i=1}^{N}$ of relabeled $(s_t, g_{\text{sub}})$ tuples via the following objective:
$$\max_\psi \; \mathbb{E}_{(s_t, g_{\text{sub}}) \sim \mathcal{D}}\left[\log p_\psi(s_t, g_{\text{sub}})\right] \quad (3)$$
Different choices of model family for $p_\psi(s_t, g_{\text{sub}})$ yield different variants; we leverage tabular and neural autoregressive density models.
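As an illustration of the tabular variant, the sketch below maintains visit counts over discretized $(s_t, g_{\text{sub}})$ pairs; for a tabular model, normalized counts are the maximum-likelihood solution to Eq. (3). The uniform-bin discretization and class interface are illustrative assumptions.

```python
# Minimal sketch of a discretized, tabular density model for p_psi(s_t, g_sub),
# fit by counting relabeled (state, sub-goal) pairs. Normalized counts are the
# maximum-likelihood estimate for a tabular model; the uniform-bin
# discretization is an illustrative assumption.
from collections import Counter
import numpy as np

class TabularDensity:
    def __init__(self, bin_size=0.1):
        self.bin_size = bin_size
        self.counts = Counter()
        self.total = 0

    def _key(self, state, subgoal):
        pair = np.concatenate([state, subgoal])
        return tuple(np.floor(pair / self.bin_size).astype(int))

    def update(self, states, subgoals):
        """Accumulate counts over relabeled (s_t, g_sub) tuples from the buffer."""
        for s, g in zip(states, subgoals):
            self.counts[self._key(s, g)] += 1
            self.total += 1

    def prob(self, state, subgoal):
        """Empirical p_psi(s_t, g_sub); thresholding this defines the reachable set."""
        if self.total == 0:
            return 0.0
        return self.counts[self._key(state, subgoal)] / self.total
```

A neural autoregressive model can be substituted for the counts when the state is higher-dimensional, as in the kitchen environment, where tabular densities are not practical.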
Policies trained via hindsight self-supervision optimize the following objective:
$$\arg\max_{\pi} \; \mathbb{E}_{\tau \sim \mathbb{E}_g[\bar{\pi}(\cdot \mid g)], \, g \sim p(g)}\left[\sum_{t=0}^{T} \log \pi(a_t \mid s_t, \mathcal{G}(\tau))\right] \quad (4)$$
To sample goals from the learned proximity metric, we can sample $g_{\text{sub}} \sim p(g_{\text{sub}} \mid s, g)$, where
$$p(g_{\text{sub}} \mid s, g) = \frac{\exp(-d_\phi(g_{\text{sub}}, g))}{\sum_{s' \in \mathcal{D}} \exp(-d_\phi(s', g))} \quad (5)$$
Below, we show our algorithm in pseudo-code format.

Algorithm 1 GEAR
1: Input: Human H, goal g, starting position s
2: Initialize policy π, density model dθ, proximity model fθ, data buffer D, proximity model buffer G
3: while True do
4:   g ∼ p(g)
5:   Dτ ← PolicyExploration(π, G, g, D)
6:   D ← D ∪ Dτ
7:   π ← TrainPolicy(π, D)  (hindsight relabeling [17], Eq. 4)
8:   G ← G ∪ CollectFeedback(D, H)  (Sec. 4.2)
9:   fθ ← TrainGoalSelector(fθ, G)  (Eq. 1 via the Bradley-Terry model [67])
10:  dθ ← TrainDensityModel(dθ, G)  (Eq. 3, [69, 70, 71])
11: end while

Algorithm 2 PolicyExploration
1: Input: policy π, goal selector fθ, goal g, data buffer D
2: Dτ ← {}
3: s ← s0
4: for i = 1, 2, ..., N do
5:   every k timesteps:
6:     S ∼ ObtainReachableStates(dθ, s, D)  (Sec. 4.2, [69, 70])
7:     gb ∼ SampleClosestState(fθ, g, S)  (Sec. 4.2, Eq. 5)
8:   while NOT stopped do
9:     Take action a ∼ π(a | s, gb)
10:  end while
11:  Execute πrandom for H timesteps
12:  Add τ to Dτ without redundant states
13: end for
14: return Dτ

B Baselines

We compare GEAR against relevant baselines in autonomous reinforcement learning and learning from human preferences, presented next:

• (1) GCSL [17]: this baseline performs autonomous learning with purely self-supervised policy learning [17], alternating between commanding the start and the goal during exploration.
• (2) HuGE [40]: this baseline is the HuGE algorithm [40], which leverages comparative human feedback asynchronously, but without accounting for policy reachability during training.
• (3) Classifier Based Rewards (VICE) [12, 9]: this baseline [74, 26] performs forward-backward autonomous RL, going between a start position and a goal position, with the rewards provided by a classifier that is trained with goal states as positives and on-policy states as negatives, as in [26].
• (4) Learning from Human Preferences [15]: this baseline adapts the learning from human preferences paradigm [15] to the autonomous RL setting by commanding goals both forward and backward.
• (5) Forward-Backward RL (FBRL) [24, 75]: this baseline uses dense reward functions to learn a goal-conditioned policy to reach the goal and go back to the starting set.

To evaluate all methods, we follow the protocol in [4], where training proceeds reset-free but intermediate checkpoints are loaded in for evaluation from the initial state distribution.

C Evaluation Environments

We briefly discussed the evaluation environments we used to compare our method to previous work. In this section, we go through the details of each of them.

• Pointmass navigation:
This is a holonomic navigation task in an environment with four rooms, where the objective is to move between the two farthest rooms. It is a modification of a benchmark proposed in [17].
In this benchmark, the observation space consists of the position of the agent, that is, (x, y) ∈ R2, while the action space is discrete with cardinality 9. In particular, there are 8 actions corresponding to moving a fixed amount relative to the current position, along the directions parallel to the axes and their diagonals. Additionally, there is an action that encodes no movement.
The number of timesteps given to solve this task is 50.
Finally, as for a human proxy, weuse the distance to the commanded goal, taking into account the walls, i.e., we consider toshortest distance according to the restrictions of the environment.•LoCoBot navigation:This benchmark is similar to the Four Rooms one since we are also dealing with 2D nav-igation. The main difference is that we are working with a simulated robot in Mujoco,in particular a LoCoBot, in a real-life-like environment, in which there is a kitchen and aliving room, thus presenting some obstacles for the robot such as tables or a couch. Ad-ditionally, the robot works with differential driving, as a LoCoBot or Turtlebot would do.The environment tries to resemble the one we do in the real world with a TurtleBot, sothat results obtained in simulation are, to a certain extent, informative about how our robotwould perform with the different algorithms in the real world.In this environment, the goals the robot should learn how to reach are the lower right andthe upper left corners. In this environment, the state space is the absolute position of therobot, together with its angle (x, y, θ )∈R3. As we are working with differential driving,the action space is discrete encoding 4 actions: rotate clockwise, rotate counterclockwise,move forward, and no movement.The LoCoBot should reach the given goal within 40timesteps. As before, for the humanproxy, we just use the distance to the goal, accounting for obstacles.•Block Pusher:This is a robotic manipulation problem, where a Sawyer robotic arm pushes an obstacle toa given location. This benchmark is also a modification of one of the benchmarks proposedby [17]In this environment the state space consists of the position of the puck and the position ofthe arm (x1, y1, x2, y2)∈R4. The actions space is the same as in the Pointmass navigationbenchmark (i.e. discrete with 9 possible actions).The arm should push the object to the desired location in at most 75timesteps. As for thehuman proxy, the reward function we use is the following:r=max(distance puck finger, 0.05) + distance puck goal•Kitchen:This environment is a modification of one of the benchmarks in [4]. It consists of a Frankarobot arm with 7 DoF doing manipulation in a kitchen. The objective is to learn how toopen and close the microwave.The observation space consists of the position of the end-effector of the robot, together withthe angle of the microwave joint, that is (x, y, z, θ )∈R4. The action space is discrete withcardinality 7, representing moving the end-effector forwards or backwards into any of thethree axes, as well as an action encoding no movement. Note that, despite the fact that the18observation space ⊆R4, as in the Pusher, the actual range that the values can take is largerthan in other benchmarks, and in order to be able to manipulate correctly the microwave,more precision is needed. 
This is why we couldn’t run GEAR with the oracle densities.The number of timesteps that the Franka has to either open or close the microwave is 100.Finally, when using a human proxy we use the following reward signal:r=distance(arm, goal arm position) +|goal joint - current joint |, if successdistance(arm, microwave handle) + bonus , otherwise(6)Where by joint we mean the angle of the joint of the microwave, success means that thedistance between the current state and the goal state is already below a certain threshold,and the bonus can be any fixed number greater than said threshold.•TurtleBot navigation in the Real World:This benchmark is similar to the LoCoBot navigation one, the major difference betweenthe two is that this one takes place in the real world instead of a simulation.The goal is to learn how to navigate between two opposite corners in a home-lookingenvironment, with a lot of obstacles. The action and the observation space are the sameas in the LoCoBot navigation environment. That is, the action space is discrete with 4possible actions (move clockwise, counterclockwise, forward, and don’t move), while thestate space consists of the absolute position of the TurtleBot and its angle (x, y, θ )∈R3.In order to get this state, we have a top-down camera and the TurtleBot has blue and redsemispheres, whose position can be detected by the camera, thus obtaining the position ofthe TurtleBot, and its angle (by computing the direction of the vector between the blue andred semispheres of the LoCoBot). Finally, we do collision avoidance by leveraging thedepth sensor of the top-down camera.The TurtleBot should reach any goal in 25timesteps. For the human proxy, we just use theEuclidean distance to the goal.•Real World Pusher with Franka Panda: This benchmark is relatively similar to thepusher environment in simulation, except it is with a Franka Emika panda robot in the realworld. The goal is to learn how to push an object on the plane between two different cornersof an arena. The challenge here is that the pusher is a cylindrical object and planar pushingin this case needs careful feedback control, otherwise, it is quite challenging. The actionspace is 9 dimensional denoting motion in each direction, diagonals, and a no-op. The statespace consists of the position of the robot end effector and the object of interest. In order toget this state we use a calibrated camera and an OpenCV color filter, although this could bereplaced with a more sophisticated state-estimation system. The system is provided withvery occasional intervention when the object is stuck in corners, roughly one nudge every30 minutes. Success during evaluation is measured by resetting the object to one corner ofthe arena and commanding it as a goal to reach the other corner.D Human ExperimentsWe ran GEAR from real human feedback on the Four Rooms navigation environment. We comparethe performance of GEAR varying the source of human feedback, coming from a crowdsourcedpool of non-expert and expert annotators, and a single expert and non-expert annotator. Theseexperiments were done through the IRB approval of the Massachusetts Institute of Technology.Qualifications for annotators: We requested certain qualifications from the annotators, some defaultdefined by AMT: Masters qualification. This qualification indicates that the annotator is reliablesince has above a threshold of accepted responses. 
We also defined our own requirements for ac-cepting the responses:• We required a certain performance on the control task, they had to do better or equal toproviding 1 incorrect label and 1 “I don’t know” label.19• We required the response to be complete, the annotator should have responded to all ques-tions.Payment: We paid 0.50$ for a set of 12 questions which take around 2 minutes to answer, whichwould be equivalent to an hourly pay of 15$/hour.D.1 Human Experiments in SimulationWe ran GEAR from real human feedback on the Four Rooms navigation environment. These experi-ments were run in the span of 4 hours.0 100k 200k 300k 400k 500k 600k 700k00.20.40.60.81Human experiment on Four RoomsNumber of stepsEvaluation successSyntheticNon-expert (327 labels)Non-expert Crowdsourced (445 labels)Expert (341 labels)Expert Crowdsourced (752)Figure 6: Comparison of GEAR trained from different types of real human feedback. We observeGEAR can be trained with non-expert human feedback without degrading performance.D.1.1 Non-expert Crowdsourced feedbackThe experiment for this data was collected from annotators on Amazon Mechanical Turk (AMT).There was a total of 78 users who participated. Out of which 12 were discarded because of notmeeting the requirements (see Qualifications section above), meaning we collected 445 labels froma total of 66 annotators. The users were presented with the interface shown in Appendix G. Theyeach got a set of 12 questions, 4 on a control task and 8 on the target task. The 4 answers onthe control task, where we know the ground truth labels, were used to make sure the annotatorunderstood what the question was and to discard those annotators who did not understand it. As forthe demographics, the annotators could optionally respond to a demographics survey. The resultsare presented in Figure 7.MaleFemale48.6% 51.4%(a) Sex demographics24 25 30 32 33 34 35 36 42 43 44 48 55 6220181614121086420 (b) Age demographicsFigure 7: left: Crowdsource experiment for non-expert annotators sex demographics. right: Crowd-sourced experiment for non-expert annotators age demographics20D.1.2 Expert Crowdsourced FeedbackThe experiment for this data was collected with the same interface presented in Appendix G. Werecruited the expert annotators through a mailing list in our institution. The annotators were notrelated to the project, however, they are mostly experts in the fields of computer science and robotics.We collected 752 labels from 29 annotators. We asked the annotators to respond to a demographicssurvey and the results are presented in Figure 8.MaleFemaleGenderqueerPrefer not to say34.5%6.9%3.4%55.2%(a) Sex demographics19 20 21 22 23 24 25 27 28 30 33 34 40 4643210 (b) Age demographicsFigure 8: left: Crowdsourced experiment for expert annotators’ sex demographics on the simulatednavigation environment. right: Crowdsourced experiment for expert annotators’ age demographicson the simulated navigation environment.D.1.3 Non-expert feedbackWe collected the data from a single annotator, who is an acquaintance of the authors. They are notknowledgeable about either the project or the fields of robotics and computer science. We collected327 labels using the same interface in Appendix G.D.1.4 Expert feedbackWe collected data from a single annotator who is knowledgeable in the field of robotics and computerscience and is very familiar with the project and the underlying algorithm. 
We collected 341 labelsusing the interface in Appendix G.D.2 Human Experiments in the real-worldBelow we present the demographic statistics of the annotators that provided feedback on the real-world experiments. We note that we left it optional for the annotators to fill in the demographicsform, which is the reason why there are fewer data points for the demographics than actual annota-tors who helped in the experiment.D.2.1 Real-world navigationFor the experiment of the pusher in the real world, we collected 453 labels from 40 annotators onAmazon Mechanical Turk.D.2.2 Real-world pusherFor the experiment of the pusher in the real world, we collected 200 labels from 22 annotators onAmazon Mechanical Turk.21MaleFemale66,66% 33.33%(a) Sex demographics24 32 33 34 36 38 40 42 53 10 (b) Age demographicsFigure 9: left: Crowdsource experiment on the real LoCoBot Navigation task for non-expert anno-tator’s sex demographics. right: Crowdsourced experiment on the real LoCoBot Navigation task fornon-expert annotator’s age demographicsMaleFemale40.9% 59.1%(a) Sex demographics30 32 33 34 35 44181614121086420 (b) Age demographicsFigure 10: left: Crowdsource experiment on the real pusher for non-expert annotators sex demo-graphics. right: Crowdsourced experiment on the real pusher for non-expert annotators age demo-graphicsE AblationsE.1 Analysis of GEAR vs HuGEIn this section, we explore how accounting for reachability makes a crucial difference to the qualityof the commanded subgoals. In particular, in Fig 11 we see the percentage of commanded subgoalsthat are reached in HuGE and in GEAR . Notice that GEAR commands goals every 5timesteps, whileHuGE does so once per episode only, meaning that in HuGE the agent has more time to reach thegoal. Despite this, we can clearly see how subgoals in GEAR are clearly reached more consistentlythan in HuGE, thus, showing how, by accounting for reachability, we manage to command statesthat are more likely to be reached by the agent.E.2 Ablations on the amount of feedback neededIn this section, we study how the frequency at which we provide feedback affects the total amount offeedback or steps we will need to learn a successful policy. In Figure 12 we see that by decreasingthe frequency at which we give feedback, we can get a successful policy using fewer queries, thatis, less feedback overall. However, when working with low frequencies, we see that the algorithmtakes longer to start succeeding. On the other hand, by increasing the frequency at which we providefeedback, we see that the algorithm starts succeeding in terms of timesteps, but we end up needingmore annotations overall. So we see a trade-off between the time it takes to succeed and the amountof feedback that we will require.220 0.5M 1M 1.5M 2M00.050.10.150.20.25PusherNumber of stepssubgoals reachedOracle densities (Ours)HuGEFigure 11: Number of commanded subgoals reached by GEAR (Ours) and HuGE throughout train-ing in the pusher environment. 
We observe that GEAR (Ours) reaches a much higher number of commanded subgoals throughout training, which means that the goals commanded in HuGE are unreachable because of the reset-free setting, and clearly shows the effect and necessity of introducing reachable sets.
Figure 12: left: Comparison of the timesteps needed to succeed depending on the frequency at which we provide feedback. The frequency corresponds to the number of episodes we wait before giving feedback again. We see that by lowering the amount of feedback, GEAR takes longer to start reaching the goal; however, it still succeeds. right: Comparison of the labels needed to succeed depending on the frequency at which we provide feedback. In general, by lowering the frequency of feedback, we can manage to solve tasks with increasingly fewer labels, but as seen in Figure 4, if the frequency is lowered too much, it will have a negative impact on the required number of timesteps to succeed. Hence, we see a trade-off between the number of labels given and the number of timesteps needed to achieve the goal.

E.3 Analysis of Hyperparameters
To better understand which design decisions affect the performance of GEAR, we conduct ablation studies on their impact on learning progress. Specifically, we aim to understand the impact of (1) the threshold ε on the likelihood at which a state is considered "reachable", (2) the frequency at which new intermediate subgoals are sampled during exploration, (3) the removal of redundant exploration steps, for which we ablate how important this step is for performance, and (4) how much pre-training data is required for learning and how this affects learning progress.
Figure 13: Ablations of GEAR: We studied the ablations of GEAR with four different setups: (a) Reachable Threshold, (b) Sampling Frequency, (c) Buffer Initialization, (d) Redundant Steps. In (a), we modify the reachable threshold with a parameter of 1, 5, 10, 20. In (b), we verify the effects of sampling frequency. In (c), we use a different amount of offline data to initialize the buffer. The offline data are collected by driving the agent to randomly explore the environment. In (d), we evaluate the performance of our algorithm by removing and not removing the redundant exploration steps at the end of each trajectory.
We find that 1) choosing the right threshold is critical for the success of our algorithm. The best reachable threshold for the point-mass navigation task is 5. Using a larger threshold (10 or 20) or a smaller one (1) would not make the algorithm work better. 2) The ablation for sampling frequency shows that the right sampling frequency helps boost performance. We tried sampling frequencies of 1, 5, 10, and 20. The results show that 5 and 10 have similar performance. Too small (sampling frequency = 1) or too large (sampling frequency = 20) do not work well. If the sampling frequency is too small, the agent might not be able to reach the subgoal, and too-frequent subgoal selection would make the performance drop. If the sampling frequency is too large, there would be more redundant wandering steps, which make learning less efficient. 3) We found that removing the redundant steps helps training significantly. Without removing the redundant steps in the trajectory sampling, there would be stationary states when the agent is stuck in the environment, which could lead to a drop in performance. 4) More random pre-training data helps build up the reachable set and further improves performance.
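As a sketch of how the reachable threshold ε from ablation (1) can act as a filter over candidate subgoals, the snippet below assumes a density model exposing a `likelihood` method; this interface is hypothetical and only mirrors the description above, not the exact implementation.

```python
def reachable_set(candidate_states, density_model, epsilon):
    """Illustrative reachability filter: keep candidate subgoal states whose
    estimated likelihood under the density model of visited states is at
    least the reachable threshold epsilon (the hyperparameter ablated in (1))."""
    return [s for s in candidate_states
            if density_model.likelihood(s) >= epsilon]
```

Under this reading, a larger ε shrinks the reachable set toward well-visited states, while a smaller ε admits subgoals the agent may not yet be able to reach, consistent with the trade-offs observed in the ablation.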
F Hyperparameters
In this section, we state the primary hyperparameters used across the different experiments. All the values are shown in Table 1.

Parameter | Value
default (to those that apply):
  Optimizer | Adam [76]
  Learning rate | 5·10−4
  Discount factor (γ) | 0.99
  Reward model architecture | MLP(400, 600, 600, 300)
  Use Fourier features in reward model | True
  Use Fourier features in policy | True
  Use Fourier features in density model | True
  Batch size for policy | 100
  Batch size for reward model | 100
  Epochs policy | 100
  Epochs goal selector | 400
  Train policy freq | 10
  Train goal selector freq | 10
  Goal selector num samples | 1000
  Stop threshold | 0.05
LoCoBot navigation:
  Stop threshold | 0.25
TurtleBot navigation in real world:
  Stop threshold | 0.1
  Policy updates per step | 50
Oracle densities:
  Reachable threshold | 5
VICE:
  Reward model epochs | 20
Human preferences:
  Reward model epochs | 20
Autoregressive:
  Reachable threshold | 0.25
  Epochs density model | 30000
  Train autoregressive model freq | 300
  Batch size for the density model | 4096
Table 1: Hyperparameters used for GEAR.

G Web Interface for Providing Feedback
In Figure 14 we show an example interface for providing feedback for the TurtleBot navigation task; the same interface is used for the pusher task.
Figure 14: Visualization of the human supervision web interface to provide feedback asynchronously during robot execution. Users are able to label which of the two states is closer to a goal or say they are unable to judge.
uo937r5eTE | Robot Parkour LearningZiwen Zhuang*13Zipeng Fu*2Jianren Wang4Christopher Atkeson4S ̈oren Schwertfeger3Chelsea Finn2Hang Zhao151Shanghai Qi Zhi,2Stanford,3ShanghaiTech,4CMU,5Tsinghua,*project co-leadsproject website: https://robot-parkour.github.ioFigure 1: We present a framework for learning parkour skills on low-cost robots. Our end-to-end vision-basedparkour learning system enable the robot to climb high obstacles, leap over large gaps, crawl beneath lowbarriers, squeeze through thin slits and run. Videos are on the project website.Abstract: Parkour is a grand challenge for legged locomotion that requires robotsto overcome various obstacles rapidly in complex environments. Existing meth-ods can generate either diverse but blind locomotion skills or vision-based butspecialized skills by using reference animal data or complex rewards. However,autonomous parkour requires robots to learn generalizable skills that are bothvision-based and diverse to perceive and react to various scenarios. In this work,we propose a system for learning a single end-to-end vision-based parkour policyof diverse parkour skills using a simple reward without any reference motion data.We develop a reinforcement learning method inspired by direct collocation togenerate parkour skills, including climbing over high obstacles, leaping over largegaps, crawling beneath low barriers, squeezing through thin slits, and running.We distill these skills into a single vision-based parkour policy and transfer it toa quadrupedal robot using its egocentric depth camera. We demonstrate that oursystem can empower two different low-cost robots to autonomously select andexecute appropriate parkour skills to traverse challenging real-world environments.Keywords: Agile Locomotion, Visuomotor Control, Sim-to-Real Transfer1 IntroductionHumans and animals possess amazing athletic intelligence. Parkour is an examplar of athleticintelligence of many biological beings capable of moving swiftly and overcoming various obstaclesin complex environments by running, climbing, and jumping [ 1]. Such agile and dynamic movements7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 2: We illustrate the challenging obstacles that our system can solve, including climbing high obstacles of0.40m (1.53x robot height), leap over large gaps of 0.60m (1.5x robot length), crawling beneath low barriers of0.2m (0.76x robot height), squeezing through thin slits of 0.28m by tilting (less than the robot width).require real-time visual perception and memorization of surrounding environments [ 2,3], tightcoupling of perception and action [ 4,5], and powerful limbs to negotiate barriers [ 6]. One of thegrand challenges of robot locomotion is building autonomous parkour systems.Boston Dynamics Atlas robots [ 7] have demonstrated stunning parkour skills. However, the massiveengineering efforts needed for modeling the robot and its surrounding environments for predictivecontrol and the high hardware cost prevent people from reproducing parkour behaviors given areasonable budget. Recently, learning-based methods have shown robust performance on walking [ 8,9,10,11,12,13,14,15,16,17,18,19,20,12,21,22,23,24,25,26,27], climbing stairs [ 20,28,29,30,31,32,33], mimicking animals [ 34,35,36,37,38,39] and legged mobile manipulation [ 40,41,42] by learning a policy in simulation and transferring it to the real world while avoiding muchcostly engineering and design needed for robot-specific modeling. 
Can we leverage learning-basedmethods for robot parkour but only using low-cost hardware?There are several challenges for robot parkour learning. First, learning diverse parkour skills (e.g.running, climbing, leaping, crawling, squeezing through, and etc) is challenging. Existing reinforce-ment learning works craft complex reward functions of many terms to elicit desirable behaviors oflegged robots. Often each behavior requires manual tuning of the reward terms and hyper-parameters;thus these works are not scalable enough for principled generation of a wide range of agile parkourskills. In contrast, learning by directly imitating animals’ motion capture data can circumvent tediousreward design and tuning [ 34,43], but the lack of egocentric vision data and diverse animal MoCapskills prevents the robots from learning diverse agile skills and autonomously selecting skills byperceiving environment conditions. Second, obstacles can be challenging for low-cost robots ofsmall sizes, as illustrated in Figure 2. Third, beyond the challenge of learning diverse skills, visualperception is dynamical and laggy during high-speed locomotion. For example, when a robot movesat 1m/s, a short 0.2 second of signal communication delay will cause a perception discrepancy of0.2m (7.9 inches). Existing learning-based methods have not demonstrated effective high-speedagile locomotion. Lastly, parkour drives the electric motors to their maximum capacity, so proactivemeasures to mitigate potential damage to the motors must be included in the system.This paper introduces a robot parkour learning system for low-cost quadrupedal robots that canperform various parkour skills, such as climbing over high obstacles, leaping over large gaps, crawlingbeneath low barriers, squeezing through thin slits, and running. Our reinforcement learning method isinspired by direct collocation and consists of two simulated training stages: RL pre-training with softdynamics constraints and RL fine-tuning with hard dynamics constraints. In the RL pre-training stage,we allow robots to penetrate obstacles using an automatic curriculum that enforces soft dynamicsconstraints. This encourages robots to gradually learn to overcome these obstacles while minimizingpenetrations. In the RL fine-tuning stage, we enforce all dynamics constraints and fine-tune thebehaviors learned in the pre-training stage with realistic dynamics. In both stages, we only use asimple reward function that motivates robots to move forward while conserving mechanical energy.After each individual parkour skill is learned, we use DAgger [ 44,45] to distill them into a singlevision-based parkour policy that can be deployed to a legged robot using only onboard perceptionand computation power.The main contributions of this paper include:•an open-source system for robot parkour learning , offering a platform for researchers to trainand deploy policies for agile locomotion;•a two-stage RL method for overcoming difficult exploration problems, involving a pre-trainingstage with soft dynamics constraints and a fine-tuning stage with hard dynamics constraints;2Figure 3: Soft dynamics constraints and hard dynamics constraints for each skill. 
Given soft dynamics constraints,the obstacles are penetrable.•extensive experiments in simulation and the real world showing that our parkour policy enableslow-cost quadrupedal robots to autonomously select and execute appropriate parkour skills totraverse challenging environments in the open world using only onboard computation, onboardvisual sensing and onboard power, including climbing high obstacles of 0.40m (1.53x robot height),leap over large gaps of 0.60m (1.5x robot length), crawling beneath low barriers of 0.2m (0.76xrobot height), squeezing through thin slits of 0.28m by tilting (less than the robot width), andrunning;•generalization to different robots , where we demonstrate that our system with the same trainingpipeline can power two different robots, A1 and Go1.2 Related WorkAgile Locomotion. Model-based control has achieved much success in agile locomotion, from MITCheetah robots and A1 robots jumping over or onto obstacles of various heights [ 46,47,48], ETHStarlETH robots jumping vertically [ 49], CMU Unified Snake robots climbing trees [ 50], X-RHexrobots self-righting using tails [ 51], ANYmal ALMA robots opening doors [ 52], ATRIAS robotswalking over stepping stones [ 53,54], Marc Raibert’s One-Legged Hopping Machine [ 55], and BostonDynamics Atlas’ parkour skills [ 7]. Recently, learning-based methods have also demonstrated variousagile locomotion capabilities, including high-speed running [ 56,16,57,35], resetting to the standingpose from random states [ 11,38,15,58], jumping [ 59,60,61], climbing stairs [ 20,10,28,29,30,32,33], climbing over obstacles [ 62], walking on stepping stones [ 29], back-flipping [ 63], quadrupedalstanding up on rear legs [ 43], opening doors [ 40,64,65,66], moving with damaged parts [ 67],catching flying objects [ 68], balancing using a tail [ 69], playing football/soccer [ 70,71,72,73],weaving through poles [ 74] and climbing ramps [ 74]. Most of these skills are blind or rely on stateestimation, and specialized methods are designed for these individual skills. In contrast, we build asystem for learning a single end-to-end vision-based parkour policy for various parkour skills.Vision-Based Locomotion. Classical modular methods rely on decoupled visual perception andcontrol pipelines, where the elevation maps [ 75,76,77,78,79,80,81], traversability maps [ 82,83,84,85], or state estimators [ 86,87,88,89,90,91,92,93,94,95] are constructed as intermediaterepresentations for downstream foothold planning, path planning and control [ 96,97,98,99,100,101,102,103,104,105,106,107,108]. Recently, end-to-end learning-based methods have alsoincorporated visual information into locomotion, where visual perception is performed using depthsensing [ 29,61,31], elevation maps [ 28,109,110,111,112], lidar scans [ 113], RGB images [ 32],event cameras [ 68] or learned neural spaces [ 30,33], but none have demonstrated effective high-speedagile locomotion.3 Robot Parkour Learning SystemsOur goal is to build an end-to-end parkour system that directly uses raw onboard depth sensing andproprioception to control every joint of a low-cost robot to perform various agile parkour skills, suchas climbing over high obstacles, leaping over large gaps, crawling beneath low barriers, squeezingthrough thin slits, and running. Unlike prior work where different methods and training schemesare used for different locomotion skills, we aim to generate these five parkour skills automaticallyand systemically. 
To achieve this, we develop a two-stage reinforcement learning method that is inspired by direct collocation to learn these parkour skills under the same framework. In the RL pre-training stage, we allow robots to penetrate obstacles using an automatic curriculum that enforces soft dynamics constraints. We encourage robots to gradually learn to overcome these obstacles while minimizing penetrations and mechanical energy. In the RL fine-tuning stage, we fine-tune the pre-trained behaviors with realistic dynamics. In both stages, we only use a simple reward function that motivates robots to move forward while conserving mechanical energy. After each individual parkour skill is learned, we use DAgger [44, 45] to distill them into a single vision-based parkour policy that can be deployed. For robust sim-to-real deployment on a low-cost robot, we employ several pre-processing techniques for the depth images, calibrate onboard visual delays, and enforce proactive measures for motor safety.

3.1 Parkour Skills Learning via Two-Stage RL
Figure 4: We show collision points on the robot. Collision points that penetrate obstacles are in red.
Since depth images are costly to render, and directly training RL on visual data is not always stable, we use privileged visual information about the environments to help RL generate specialized parkour skills in simulation. The privileged visual information includes the distance from the robot's current position to the obstacle in front of the robot, the height of the obstacle, the width of the obstacle, and a 4-dimensional one-hot category representing the four types of obstacles. We formulate each specialized skill policy as a gated recurrent neural network (GRU [114]). The inputs to a policy other than the recurrent latent state are the proprioception $s^{\text{proprio}}_t \in \mathbb{R}^{29}$ (roll, pitch, base angular velocities, positions and velocities of joints), the last action $a_{t-1} \in \mathbb{R}^{12}$, the privileged visual information $e^{\text{vis}}_t$, and the privileged physics information $e^{\text{phy}}_t$. We use a similar approach to prior work [8, 10] to sample physics properties like terrain friction, center of mass of the robot base, motor strength, etc., to enable domain adaptation from simulation to the real world. The policy outputs the target joint positions $a_t \in \mathbb{R}^{12}$.
We train all the specialized skill policies $\pi_{\text{climb}}, \pi_{\text{leap}}, \pi_{\text{crawl}}, \pi_{\text{tilt}}, \pi_{\text{run}}$ separately on corresponding terrains shown in Figure 3 using the same reward structure. We use the formulation of minimizing mechanical energy in [35] to derive a general skill reward $r_{\text{skill}}$ suitable for generating all skills with natural motions, which consists of only three parts: a forward reward $r_{\text{forward}}$, an energy reward $r_{\text{energy}}$, and an alive bonus $r_{\text{alive}}$:
$$r_{\text{skill}} = r_{\text{forward}} + r_{\text{energy}} + r_{\text{alive}},$$
where
$$r_{\text{forward}} = -\alpha_1\,|v_x - v^{\text{target}}_x| - \alpha_2\,|v_y|^2 + \alpha_3\,e^{-|\omega_{\text{yaw}}|}, \quad r_{\text{energy}} = -\alpha_4 \sum_{j \in \text{joints}} |\tau_j \dot{q}_j|^2, \quad r_{\text{alive}} = 2.$$
Measured at every time step, $v_x$ is the forward base linear velocity, $v^{\text{target}}_x$ is the target speed, $v_y$ is the lateral base linear velocity, $\omega_{\text{yaw}}$ is the base angular yaw velocity, $\tau_j$ is the torque at joint $j$, $\dot{q}_j$ is the joint velocity at joint $j$, and $\alpha_1, \ldots, \alpha_4$ are hyperparameters. We set the target speed for all skills to around 1 m/s. We use the second power of motor power at each joint to reduce both the average and the variance of motor power across all joints.
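To make the reward structure concrete, the following is a minimal Python sketch of $r_{\text{skill}}$ as defined above. The default $\alpha$ weights and the 1 m/s target speed are placeholders for illustration; the per-skill scales are listed in the supplementary reward tables.

```python
import numpy as np

def skill_reward(v_x, v_y, omega_yaw, joint_torques, joint_velocities,
                 v_target=1.0, alphas=(1.0, 1.0, 0.05, 1e-5)):
    """Toy implementation of r_skill = r_forward + r_energy + r_alive.

    `alphas` are placeholder weights; the paper tunes them per skill.
    """
    a1, a2, a3, a4 = alphas
    r_forward = -a1 * abs(v_x - v_target) - a2 * v_y ** 2 + a3 * np.exp(-abs(omega_yaw))
    # Second power of per-joint motor power |tau_j * qdot_j|^2, summed over joints.
    r_energy = -a4 * np.sum(np.abs(joint_torques * joint_velocities) ** 2)
    r_alive = 2.0
    return r_forward + r_energy + r_alive
```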
See the supplementary for all hyperparameters.

Skill | Obstacle property | Training range ([l_easy, l_hard]) | Test range ([l_easy, l_hard])
Climb | obstacle height | [0.2, 0.45] | [0.25, 0.5]
Leap | gap length | [0.2, 0.8] | [0.3, 0.9]
Crawl | clearance | [0.32, 0.22] | [0.3, 0.2]
Tilt | path width | [0.32, 0.28] | [0.3, 0.26]
Table 1: Ranges for obstacle properties for each skill during training, measured in meters.

RL Pre-training with Soft Dynamics Constraints. As illustrated in Figure 2, the difficult learning environments for parkour skills prevent generic RL algorithms from effectively finding policies that can overcome these challenging obstacles. Inspired by direct collocation with soft constraints, we propose to use soft dynamics constraints to solve these difficult exploration problems. As shown in Figure 3, we set the obstacles to be penetrable so the robot can violate the physical dynamics in the simulation by directly going through the obstacles without getting stuck near the obstacles as a result of local minima of RL training with the realistic dynamics, i.e., hard dynamics constraints.
Figure 5: We bridge the visual gap between simulation and real world by applying pre-processing techniques. We use depth clipping, Gaussian noise and random artifacts in simulation, and depth clipping and hole-filling, spatial and temporal filters in the real world.
Similar to the Lagrangian formulation of direct collocation [115], we develop a penetration reward $r_{\text{penetrate}}$ to gradually enforce the dynamics constraints and an automatic curriculum that adaptively adjusts the difficulty of obstacles. This idea has also been explored in robot manipulation [116, 117]. As shown in Figure 4, to measure the degree of dynamics constraints' violation, we sample collision points within the collision bodies of the robot in order to measure the volume and the depth of penetration. Since the hips and shoulders of the robot contain all the motors, we sample more collision points around these volumes to enforce stronger dynamics constraints, encouraging fewer collisions of these vulnerable body parts in the real world. Denote a collision point on the collision bodies as $p$, an indicator function of whether $p$ violates the soft dynamics constraints as $\mathbb{1}[p]$, and the distance of $p$ to the penetrated obstacle surface as $d(p)$. The volume of penetration can be approximated by the sum of $\mathbb{1}[p]$ over all the collision points, and the average depth of penetration can be approximated by the sum of $d(p)$. In Figure 4, the collision points violating the soft dynamics constraints ($\mathbb{1}[p] = 1$) are in red, and those with $\mathbb{1}[p] = 0$ are in green. Concretely, the penetration reward is
$$r_{\text{penetrate}} = -\sum_{p} \left(\alpha_5\,\mathbb{1}[p] + \alpha_6\,d(p)\right) \cdot v_x,$$
where $\alpha_5$ and $\alpha_6$ are two fixed constants. We multiply both the penetration volume and the penetration depth with the forward base velocity $v_x$ to prevent the robot from exploiting the penetration reward by sprinting through the obstacles to avoid high cumulative penalties over time. In addition, we implement an automatic curriculum that adaptively adjusts the difficulty of the obstacles after a reset based on the performance of individual robots simulated in parallel in simulation. We first calculate the performance of a robot based on its penetration reward averaged over the previous episode before the reset. If the penetration reward is over a threshold, we increase the difficulty score $s$ of the obstacles that the robot will face by one unit (0.05); if lower, we decrease it by one unit. Every robot starts with a difficulty score of 0 and the maximum difficulty score is 1. We set the obstacle property for the robot based on its difficulty score by $(1-s)\,l_{\text{easy}} + s\,l_{\text{hard}}$, where $l_{\text{easy}}$ and $l_{\text{hard}}$ are the two limits of the ranges of obstacle properties corresponding to different parkour skills (shown in Table 1). We pre-train the specialized parkour skills with soft dynamics constraints using PPO [118] with the sum of the general skill reward and the penetration reward $r_{\text{skill}} + r_{\text{penetrate}}$.
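Below is a minimal sketch of the penetration penalty and the automatic curriculum described above, assuming per-point violation flags and penetration depths are provided by the simulator. The $\alpha_5$, $\alpha_6$ values and the curriculum threshold used here are illustrative defaults, not the paper's constants.

```python
import numpy as np

def penetration_reward(violates, depths, v_x, alpha5=0.01, alpha6=0.01):
    """r_penetrate = -sum_p (a5 * 1[p] + a6 * d(p)) * v_x over sampled collision points.

    `violates` is a boolean array (the indicator 1[p]); `depths` holds d(p) per point.
    """
    violates = np.asarray(violates, dtype=float)
    depths = np.asarray(depths, dtype=float)
    return -np.sum(alpha5 * violates + alpha6 * depths) * v_x

def update_difficulty(score, mean_penetration_reward, threshold=-0.1, unit=0.05):
    """Automatic curriculum: raise the difficulty score by one unit (0.05) if the
    episode-averaged penetration reward exceeds a threshold, otherwise lower it;
    the score stays in [0, 1]. The threshold value is illustrative."""
    score = score + unit if mean_penetration_reward > threshold else score - unit
    return float(np.clip(score, 0.0, 1.0))

def obstacle_property(score, l_easy, l_hard):
    """Interpolate the obstacle property as (1 - s) * l_easy + s * l_hard."""
    return (1.0 - score) * l_easy + score * l_hard
```

For example, `obstacle_property(0.5, 0.2, 0.45)` returns a 0.325 m obstacle height halfway through the climbing curriculum of Table 1.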
We set the obstacle property for therobot based on its difficulty score by (1s)leasy+slhard, whereleasyandlhardare the two limitsof the ranges of obstacle properties corresponding to different parkour skills (shown in Table 1). Wepre-train the specialized parkour skills with soft dynamics constraints using PPO [ 118] with the sumof the general skill reward and the penetration reward rskill+rpenetrate .RL Fine-tuning with Hard Dynamics Constraints. After the pre-training stage of RL is nearconvergence, we fine-tune every specialized parkour skill policy on the realistic hard dynamicsconstraints (shown in Figure 3); hence, no penetrations between the robots and obstacles are possibleat the second stage of RL. We use PPO to fine-tune the specialized skills using only the generalskill reward rskill. We randomly sample obstacle properties from the ranges listed in Table 1 duringfine-tuning. Since the running skill is trained on terrains without obstacles, we directly train therunning skill with hard dynamics constraints and skip the RL pre-training stage with soft dynamicsconstraints.3.2 Learning a Single Parkour Policy by DistillationThe learned specialized parkour skills are five policies that use both the privileged visual informationevist, and the privileged physics information ephyt. However, the ground-truth privilege informationis only available in the simulation but not in the real world. Furthermore, each specialized policycan only execute one skill and cannot autonomously execute and switch between different parkourskills based on visual perception of the environments. We propose to use DAgger [ 44,45] to distilla single vision-based parkour policy parkour using only onboard sensing from the five specializedskill policies climb;leap;crawl;tilt;run. We randomly sample obstacles types and properties fromTable 1 to form a simulation terrain consisting of 40 tracks and 20 obstacles on each track. Since we5Success Rate (%)" Average Distance (m) "Climb Leap Crawl Tilt Run Climb Leap Crawl Tilt RunBlind 0 0 13 0 100 1.53 1.86 2.01 1.62 3.6MLP 0 1 63 43 100 1.59 1.74 3.27 2.31 3.6No Distill 0 0 73 0 100 1.57 1.75 2.76 1.86 3.6RMA [8] - - - 74 - - - - 2.7 -Ours (parkour policy) 86 80 100 73 100 2.37 3.05 3.6 2.68 3.6Oracles w/o Soft Dyn 0 0 93 86 100 1.54 1.73 3.58 1.73 3.6Oracles 95 82 100 100 100 3.60 3.59 3.6 2.78 3.6Table 2: We test our method against several baselines and ablations in the simulation with a max distance of3.6m. We measure the success rates and average distances of every skill averaged across 100 trials and 3 randomseeds. Our parkour policy shows the best performance using only sensors that are available in the real world.We evaluate on the test environments with obstacles proprieties that are more difficult than the ones of trainingenvironments shown in Table 1.have full knowledge of the type of obstacle related to every state st, we can assign the correspondingspecialized skill policy specializedst to teach the parkour policy how to act at a state. For example, weassign the climb policy climb to supervise the parkour policy given a high obstacle. We parameterizethe policy as a GRU. 
The inputs except the recurrent latent state are the proprioception spropriot , theprevious action at1and a latent embedding of the depth image Ideptht processed by a small CNN.The distillation objective isarg minparkourEst;atparkour;simhDparkourspropriot;at1;Ideptht;specializedstspropriot;at1;evist;ephyti;whereparkour are the network parameters of the parkour policy, sim is the simulator with harddynamics constraints, and Dis the divergence function which is binary cross entropy loss for policynetworks with tanh as the last layer. Both polices parkour andspecializedst are stateful. More details ofthe parkour policy network are in the supplementary.3.3 Sim-to-Real and DeploymentAlthough the distillation training in Section 3.2 can bridge the sim-to-real gap in physical dynamicsproperties such as terrain friction and mass properties of the robot [ 8,10], we still need to address thesim-to-real gap in visual appearance between the rendered depth image in simulation and the onboarddepth image taken by a depth camera in the real world. Shown in Figure 5, we apply pre-processingtechniques to both the raw rendered depth image and the raw real-world depth image. We apply depthclipping, pixel-level Gaussian noise, and random artifacts to the raw rendered depth image, and applydepth clipping, hole filing, spatial smoothing and temporal smoothing to the raw real-world depthimage.The depth images in both simulation and real-world have a resolution of 48 * 64. Due to thelimited onboard computation power, the refresh rate of onboard depth image is 10Hz. Our parkourpolicy operates at 50Hz in both simulation and the real world to enable agile locomotion skills, andasynchronously fetches the latest latent embedding of the depth image processed by a small CNN.The output actions of the policy are target joint positions which are converted to torques on the orderof 1000Hz through a PD controller of Kp= 50 andKd= 1. To ensure safe deployment, we apply atorque limits of 25Nm by clipping target joint positions: clip( qtarget;(Kd_q25)=Kp+q;(Kd_q+ 25)=Kp+q).4 Experimental ResultsRobot and Simulation Setup. We use IsaacGym [ 119] as the simulator to train all the policies. Totrain the specialized parkour skills, we construct large simulated environments consisting of 40 tracksand 20 obstacles on each track. The obstacles in each track have linearly increasing difficulties basedon the obstacle property ranges in Table 1. We use a Unitree A1 and a Unitree Go1 that are equippedwith Nvidia Jetson NX for onboard computation and Intel RealSense D435 for onboard visual sensing.More details are in the supplementary.6Climb0.2 ~ 0.5mLeap0.4 ~ 0.7mCrawl0.32 ~ 0.15mTilt0.32 ~ 0.25mrobot heightrobot lengthrobot heightrobot widthFigure 6: Real-world indoor quantitative experiments. Our parkour policy can achieve the best performance,compared with a blind policy and built-in MPC controllers. We control the MPC in A1 special mode byteleoperating the robot lower down or tilt the body during crawling and tilt respectively.Baselines and Ablations. We compare our parkour policy with several baselines and ablations.The baselines include Blind ,RND [120],MLP andRMA [8]. The ablations include No Distill ,Oracles w/o Soft Dyn . 
4 Experimental Results
Robot and Simulation Setup. We use IsaacGym [119] as the simulator to train all the policies. To train the specialized parkour skills, we construct large simulated environments consisting of 40 tracks and 20 obstacles on each track. The obstacles in each track have linearly increasing difficulties based on the obstacle property ranges in Table 1. We use a Unitree A1 and a Unitree Go1 that are equipped with an Nvidia Jetson NX for onboard computation and an Intel RealSense D435 for onboard visual sensing. More details are in the supplementary.
Figure 6: Real-world indoor quantitative experiments (Climb 0.2–0.5 m vs. robot height, Leap 0.4–0.7 m vs. robot length, Crawl 0.32–0.15 m vs. robot height, Tilt 0.32–0.25 m vs. robot width). Our parkour policy achieves the best performance, compared with a blind policy and built-in MPC controllers. We control the MPC in A1 special mode by teleoperating the robot to lower or tilt the body during crawling and tilting, respectively.
Baselines and Ablations. We compare our parkour policy with several baselines and ablations. The baselines include Blind, RND [120], MLP and RMA [8]. The ablations include No Distill and Oracles w/o Soft Dyn. We also include Oracles, specialized parkour skills conditioned on privileged information in simulation, for the completeness of the comparisons.
• Blind: a blind parkour policy baseline distilled from the specialized skills, implemented by setting the depth images I_depth to zeros.
• RND: an RL exploration baseline method for training specialized skills with bonus rewards based on forward prediction errors. We train it without our RL pre-training on soft dynamics constraints.
• MLP: an MLP parkour policy baseline distilled from the specialized skills. Instead of using a GRU, it uses only the depth image, proprioception and previous action at the current time step, without any memory, to output actions.
• RMA: a domain adaptation baseline that distills a parkour policy on a latent space of environment extrinsics instead of the action space.
• No Distill: an ablation training a vision-based parkour policy with a GRU directly using PPO with our two-stage RL method but skipping the distillation stage.
• Oracles w/o Soft Dyn: an ablation training specialized skill policies using privileged information directly with hard dynamics constraints.
• Oracles (w/ Soft Dyn): our specialized skill policies using privileged information trained with our two-stage RL approach.

4.1 Simulation Experiments
Vision is crucial for learning parkour. We compare the Blind baseline with our approach. As shown in Table 2, without depth sensing and relying only on proprioception, the distilled blind policy cannot complete any climbing, leaping or tilting trials and can only achieve a 13% success rate on crawling. This is expected, as vision enables sensing of the obstacle properties and prepares the robot to execute agile skills while approaching the obstacles.
Figure 7: Comparison of specialized oracles trained with soft dynamics constraints with baselines, averaged across every skill and three trials.
RL pre-training with soft dynamics constraints enables parkour skills' learning. We compare RND, Oracles w/o Soft Dyn and ours (Oracles w/ Soft Dyn), all trained using privileged information without the distillation stage. We aim to verify that our method of RL pre-training with soft dynamics constraints can perform efficient exploration. In Figure 7, we measure the success rates of each method averaged over 100 trials across all the parkour skills that require exploration, including climbing, leaping, crawling and tilting. We trained using three random seeds for each method to measure the standard deviations. Our method using RL pre-training with soft dynamics constraints achieves much faster learning progress and a better final success rate of around 95%. We notice that RND struggles to learn meaningful behaviors in scenarios that require fine-grained maneuvers such as crawling through a thin slit, due to its tendency to reach states where future states are difficult to predict. Both RND
For example, during climbing when the robot has its front legs on theobstacles, it still needs memory about the spatial dimensions of the obstacle captured in past depthimages to control the rear legs to complete the climbing.Distillation is effective for Sim2Real. We compare the RMA baseline and the No Distill baselinewith ours. Although RMA can achieve similar performance on one skill that it is trained on, i.e. tilting,RMA fixes the network parameters of the MLP which processes the latent embeddings of the backboneGRU, and directly copies them from the specialized skill to the distilled policy. Consequently, itcannot distill multiple specialized skill policies, which have different MLP parameters, into oneparkour policy. No Distill cannot learn climbing, leaping and tilting due to the complexity of trainingdirectly from visual observations without privileged information.4.2 Real-World ExperimentsEmergent Re-trying Behaviors during Climbing. Our parkour policy has emergent re-trying be-haviors in the real world. When trying to overcoming a high obstacle but failing at the first trial,the robot will push itself away from the obstacle to ensure adequate run-up space for subsequentattempts.. Although we do not program such re-trying behaviors, they nicely emerge out of learningwith simple rewards. This behavior is also observed in simulation.Indoor Quantitative Experiments. Shown in Figure 1, we test our parkour policy in a constructedparkour terrain consisting of crawling, climbing, and leaping in sequential. We also conduct quantita-tive indoor experiments in the real world on the A1 robot. In Figure 6, we compare our vision-basedparkour policy, with Blind, MPC (A1 default controller) and MPC (A1 special mode). We show thesuccess rates of each method in every skill under varying difficulties averaged over 10 trials each.We change the skill difficulty by modifying the key obstacle properties, such as obstacle heights forclimbing and gap length for leaping. In A1 special mode, we directly teleoperate the robot to changeits state, such as lowering the body during crawling. We observe that our parkour policy can enablethe robot to climb obstacles as high as 0.40m (1.53x robot height) with an 80% success rate, to leapover gaps as large as of 0.60m (1.5x robot length) with an 80% success rate, to crawl beneath barriersas low as of 0.2m (0.76x robot height) with an 90% success rate, and to squeeze through thin slits of0.28m by tilting (less than the robot width). Our method has the best performance across all skills.Please refer to our project website for indoor experiment videos.Outdoor Experiments. Shown in Figure 1, we test our robot in the various outdoor environments.We observe that the robot controlled by our parkour policy can complete a wide range of agile parkourskills. It can leap over two disconnected stone stools by the river with a 0.6m wide gap. It cancontinuously climb several stairs of 0.42m high each. It can crawl beneath a camping cart as well ashandle slippery grass terrain. Please refer to our project website for outdoor experiment videos.5 Conclusion, Limitations and Future DirectionsWe present a parkour learning system for low-cost robots. We propose a two-stage reinforcementlearning method for overcoming difficult exploration problems for learning parkour skills. Wealso extensively test our system in both simulation and the real world and show that our systemhas robust performance for various challenging parkour skills in challenging indoor and outdoorenvironments. 
However, the current system requires the simulation environments to be manuallyconstructed. As a result, new skills can only be learned when new environments with differentobstacles and appearances are added to the simulation. This reduces how atuomatically new skillscan be learned. In the future, we hope to leverage recent advances in 3D vision and graphics toconstruct diverse simulation environments automatically from large-scale real-world data. We willalso investigate how we can train agile locomotion skills directly from RGB that contains semanticinformation instead of depth images.8AcknowledgmentsWe would like to thank Wenxuan Zhou and her Emergent Extrinsic Dexterity project [ 116] forinspiring our training pipeline allowing penetration. We would also like to thank Xiaozhu Lin,Wenqing Jiang, Fan Nie, Ruihan Yang, Xuxin Chen, Tony Z. Zhao and Unitree Robotics (YunguoCui) for their help in the real-world experiments. Zipeng Fu is supported by Stanford GraduateFellowship (Pierre and Christine Lamond Fellowship). This project is supported by Shanghai Qi ZhiInstitute and ONR grant N00014-20-1-2675.References[1]Merriam webster: Parkour. URL https://www.merriam-webster.com/dictionary/parkour .[2]A. E. Patla. Understanding the roles of vision in the control of human locomotion. Gait &posture , 1997.[3]A. A. Mohagheghi, R. Moraes, and A. E. Patla. The effects of distant and on-line visualinformation on the control of approach phase and step over an obstacle during locomotion.Experimental brain research , 2004.[4]J. M. Loomis, J. A. Da Silva, N. Fujita, and S. S. Fukusima. Visual space perceptionand visually directed action. Journal of experimental psychology: Human Perception andPerformance , 1992.[5]J. S. Matthis and B. R. Fajen. Visual control of foot placement when walking over complexterrain. Journal of experimental psychology: human perception and performance , 2014.[6]D. L. Puddle and P. S. Maulder. Ground reaction forces and loading rates associated withparkour and traditional drop landing techniques. Journal of sports science & medicine , 2013.[7] Boston dynamics: Atlas. URL https://www.bostondynamics.com/atlas .[8]A. Kumar, Z. Fu, D. Pathak, and J. Malik. RMA: Rapid Motor Adaptation for Legged Robots.InRSS, 2021.[9]J. Tan, T. Zhang, E. Coumans, A. Iscen, Y . Bai, D. Hafner, S. Bohez, and V . Vanhoucke.Sim-to-real: Learning agile locomotion for quadruped robots. In RSS, 2018.[10] J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning quadrupedal locomotionover challenging terrain. Science Robotics , Oct. 2020.[11] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V . Tsounis, V . Koltun, and M. Hutter.Learning agile and dynamic motor skills for legged robots. Science Robotics , 2019.[12] Z. Xie, X. Da, M. van de Panne, B. Babich, and A. Garg. Dynamics randomization revisited:A case study for quadrupedal locomotion. In ICRA , 2021.[13] X. Song, Y . Yang, K. Choromanski, K. Caluwaerts, W. Gao, C. Finn, and J. Tan. Rapidlyadaptable legged robots via evolutionary meta-learning. In IROS , 2020.[14] T. Haarnoja, S. Ha, A. Zhou, J. Tan, G. Tucker, and S. Levine. Learning to walk via deepreinforcement learning. arXiv preprint arXiv:1812.11103 , 2018.[15] L. Smith, J. C. Kew, X. B. Peng, S. Ha, J. Tan, and S. Levine. Legged robots that keep onlearning: Fine-tuning locomotion policies in the real world. In ICRA , 2022.[16] G. B. Margolis, G. Yang, K. Paigwar, T. Chen, and P. Agrawal. Rapid locomotion viareinforcement learning. RSS, 2022.[17] R. Yang, M. Zhang, N. Hansen, H. 
Xu, and X. Wang. Learning vision-guided quadrupedallocomotion end-to-end with cross-modal transformers. In ICLR , 2022.[18] W. Yu, V . C. V . Kumar, G. Turk, and C. K. Liu. Sim-to-real transfer for biped locomotion. InIROS , 2019.9[19] Z. Li, X. Cheng, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath. Reinforcementlearning for robust parameterized locomotion control of bipedal robots. In ICRA , 2021.[20] J. Siekmann, K. Green, J. Warila, A. Fern, and J. Hurst. Blind bipedal stair traversal viasim-to-real reinforcement learning. arXiv preprint arXiv:2105.08328 , 2021.[21] S. Ha, P. Xu, Z. Tan, S. Levine, and J. Tan. Learning to walk in the real world with minimalhuman effort. arXiv preprint arXiv:2002.08550 , 2020.[22] X. Da, Z. Xie, D. Hoeller, B. Boots, A. Anandkumar, Y . Zhu, B. Babich, and A. Garg.Learning a contact-adaptive controller for robust, efficient legged locomotion. arXiv preprintarXiv:2009.10019 , 2020.[23] Z. Fu, A. Kumar, A. Agarwal, H. Qi, J. Malik, and D. Pathak. Coupling vision and propriocep-tion for navigation of legged robots. In CVPR , 2022.[24] S. Schmidgall and J. Hays. Synaptic motor adaptation: A three-factor learning rule for adaptiverobotic control in spiking neural networks. arXiv preprint arXiv:2306.01906 , 2023.[25] S. Kareer, N. Yokoyama, D. Batra, S. Ha, and J. Truong. Vinl: Visual navigation andlocomotion over obstacles. ICRA , 2023.[26] M. Seo, R. Gupta, Y . Zhu, A. Skoutnev, L. Sentis, and Y . Zhu. Learning to walk by steering:Perceptive quadrupedal locomotion in dynamic environments. ICRA , 2023.[27] J. Truong, A. Zitkovich, S. Chernova, D. Batra, T. Zhang, J. Tan, and W. Yu. Indoorsim-to-outdoorreal: Learning to navigate outdoors without any outdoor experience. arXiv preprintarXiv:2305.01098 , 2023.[28] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning robustperceptive locomotion for quadrupedal robots in the wild. Science Robotics , Jan. 2022.[29] A. Agarwal, A. Kumar, J. Malik, and D. Pathak. Legged locomotion in challenging terrainsusing egocentric vision. In Conference on Robot Learning (CoRL) , 2022.[30] R. Yang, G. Yang, and X. Wang. Neural volumetric memory for visual locomotion control.CVPR , 2023.[31] W. Yu, D. Jain, A. Escontrela, A. Iscen, P. Xu, E. Coumans, S. Ha, J. Tan, and T. Zhang. Visual-locomotion: Learning to walk on complex terrains with vision. In 5th Annual Conference onRobot Learning , 2021.[32] A. Loquercio, A. Kumar, and J. Malik. Learning visual locomotion with cross-modal supervi-sion. arXiv preprint arXiv:2211.03785 , 2022.[33] D. Hoeller, N. Rudin, C. Choy, A. Anandkumar, and M. Hutter. Neural scene representationfor locomotion on structured terrain. IEEE Robotics and Automation Letters , 2022.[34] X. B. Peng, E. Coumans, T. Zhang, T.-W. E. Lee, J. Tan, and S. Levine. Learning agile roboticlocomotion skills by imitating animals. In RSS, 2020.[35] Z. Fu, A. Kumar, J. Malik, and D. Pathak. Minimizing energy consumption leads to theemergence of gaits in legged robots. In CoRL , 2021.[36] T. Li, J. Won, S. Ha, and A. Rai. Model-based motion imitation for agile, diverse andgeneralizable quadupedal locomotion. arXiv preprint arXiv:2109.13362 , 2021.[37] Y . Yang, T. Zhang, E. Coumans, J. Tan, and B. Boots. Fast and efficient locomotion via learnedgait transitions. In CoRL , 2021.[38] C. Yang, K. Yuan, Q. Zhu, W. Yu, and Z. Li. Multi-expert learning of adaptive leggedlocomotion. Science Robotics , 2020.[39] D. Kang, J. Cheng, M. Zamora, F. Zargarbashi, and S. Coros. 
Rl+ model-based control:Using on-demand optimal control to learn versatile legged locomotion. arXiv preprintarXiv:2305.17842 , 2023.10[40] Z. Fu, X. Cheng, and D. Pathak. Deep whole-body control: learning a unified policy formanipulation and locomotion. In Conference on Robot Learning , 2022.[41] Y . Ma, F. Farshidian, T. Miki, J. Lee, and M. Hutter. Combining learning-based locomotionpolicy with model-based manipulation for legged mobile manipulators. IEEE Robotics andAutomation Letters , 2022.[42] N. Yokoyama, A. W. Clegg, E. Undersander, S. Ha, D. Batra, and A. Rai. Adaptive skillcoordination for robotic mobile manipulation. arXiv preprint arXiv:2304.00410 , 2023.[43] L. Smith, J. C. Kew, T. Li, L. Luu, X. B. Peng, S. Ha, J. Tan, and S. Levine. Learning andadapting agile locomotion skills by transferring experience. RSS, 2023.[44] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predictionto no-regret online learning. In Proceedings of the fourteenth international conference onartificial intelligence and statistics . JMLR Workshop and Conference Proceedings, 2011.[45] D. Chen, B. Zhou, V . Koltun, and P. Kr ̈ahenb ̈uhl. Learning by cheating. In Conference onRobot Learning , 2020.[46] H.-W. Park, P. M. Wensing, S. Kim, et al. Online planning for autonomous running jumpsover obstacles in high-speed quadrupeds. RSS, 2015.[47] Q. Nguyen, M. J. Powell, B. Katz, J. Di Carlo, and S. Kim. Optimized jumping on the mitcheetah 3 robot. In 2019 International Conference on Robotics and Automation (ICRA) , 2019.[48] C. Nguyen, L. Bao, and Q. Nguyen. Continuous jumping for legged robots on stepping stonesvia trajectory optimization and model predictive control. In 2022 IEEE 61st Conference onDecision and Control (CDC) , pages 93–99. IEEE, 2022.[49] C. Gehring, S. Coros, M. Hutter, C. D. Bellicoso, H. Heijnen, R. Diethelm, M. Bloesch,P. Fankhauser, J. Hwangbo, M. Hoepflinger, et al. Practice makes perfect: An optimization-based approach to controlling agile motions for a quadruped robot. IEEE Robotics & Automa-tion Magazine , 2016.[50] C. Wright, A. Buchan, B. Brown, J. Geist, M. Schwerin, D. Rollinson, M. Tesch, and H. Choset.Design and architecture of the unified modular snake robot. In 2012 IEEE internationalconference on robotics and automation , 2012.[51] A. M. Johnson, T. Libby, E. Chang-Siu, M. Tomizuka, R. J. Full, and D. E. Koditschek. Tailassisted dynamic self righting. In Adaptive mobile robotics . 2012.[52] C. D. Bellicoso, K. Kr ̈amer, M. St ̈auble, D. Sako, F. Jenelten, M. Bjelonic, and M. Hutter. Alma-articulated locomotion and manipulation for a torque-controllable robot. In 2019 Internationalconference on robotics and automation (ICRA) , pages 8477–8483. IEEE, 2019.[53] Q. Nguyen, A. Agrawal, X. Da, W. C. Martin, H. Geyer, J. W. Grizzle, and K. Sreenath.Dynamic walking on randomly-varying discrete terrain with one-step preview. In Robotics:Science and Systems , 2017.[54] R. Antonova, A. Rai, and C. G. Atkeson. Deep kernels for optimizing locomotion controllers.InConference on Robot Learning , 2017.[55] M. H. Raibert, H. B. Brown Jr, and M. Chepponis. Experiments in balance with a 3d one-leggedhopping machine. The International Journal of Robotics Research , 1984.[56] Cassie sets a guinness world record. URL https://agilityrobotics.com/news/2022/cassie-sets-a-guinness-world-record .[57] G. Ji, J. Mun, H. Kim, and J. Hwangbo. Concurrent training of a control policy and a stateestimator for dynamic and robust legged locomotion. 
IEEE Robotics and Automation Letters ,2022.[58] Y . Ma, F. Farshidian, and M. Hutter. Learning arm-assisted fall damage reduction and recoveryfor legged mobile manipulators. arXiv preprint arXiv:2303.05486 , 2023.11[59] Z. Li, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath. Robust and ver-satile bipedal jumping control through multi-task reinforcement learning. arXiv preprintarXiv:2302.09450 , 2023.[60] Y . Yang, X. Meng, W. Yu, T. Zhang, J. Tan, and B. Boots. Continuous versatile jumping usinglearned action residuals. L4DC , 2023.[61] G. B. Margolis, T. Chen, K. Paigwar, X. Fu, D. Kim, S. Kim, and P. Agrawal. Learning tojump from pixels. CoRL , 2021.[62] N. Rudin, D. Hoeller, M. Bjelonic, and M. Hutter. Advanced skills by learning locomotionand local navigation end-to-end. In 2022 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , 2022.[63] C. Li, M. Vlastelica, S. Blaes, J. Frey, F. Grimminger, and G. Martius. Learning agile skills viaadversarial imitation of rough partial demonstrations. In Conference on Robot Learning , 2022.[64] X. Cheng, A. Kumar, and D. Pathak. Legs as manipulator: Pushing quadrupedal agility beyondlocomotion. ICRA , 2023.[65] E. Arcari, M. V . Minniti, A. Scampicchio, A. Carron, F. Farshidian, M. Hutter, and M. N.Zeilinger. Bayesian multi-task learning mpc for robotic mobile manipulation. IEEE Roboticsand Automation Letters , 2023.[66] H. Ito, K. Yamamoto, H. Mori, and T. Ogata. Efficient multitask learning with an embodiedpredictive model for door opening and entry with whole-body control. Science Robotics , 2022.[67] A. Nagabandi, I. Clavera, S. Liu, R. S. Fearing, P. Abbeel, S. Levine, and C. Finn. Learning toadapt in dynamic, real-world environments through meta-reinforcement learning. ICLR , 2019.[68] B. Forrai, T. Miki, D. Gehrig, M. Hutter, and D. Scaramuzza. Event-based agile object catchingwith a quadrupedal robot. ICRA , 2023.[69] H. Huang, A. Loquercio, A. Kumar, N. Thakkar, K. Goldberg, and J. Malik. More than anarm: Using a manipulator as a tail for enhanced stability in legged locomotion. arXiv preprintarXiv:2305.01648 , 2023.[70] T. Haarnoja, B. Moran, G. Lever, S. H. Huang, D. Tirumala, M. Wulfmeier, J. Humplik,S. Tunyasuvunakool, N. Y . Siegel, R. Hafner, et al. Learning agile soccer skills for a bipedalrobot with deep reinforcement learning. arXiv preprint arXiv:2304.13653 , 2023.[71] Y . Ji, G. B. Margolis, and P. Agrawal. Dribblebot: Dynamic legged manipulation in the wild.arXiv preprint arXiv:2304.01159 , 2023.[72] X. Huang, Z. Li, Y . Xiang, Y . Ni, Y . Chi, Y . Li, L. Yang, X. B. Peng, and K. Sreenath.Creating a dynamic quadrupedal robotic goalkeeper with reinforcement learning. arXivpreprint arXiv:2210.04435 , 2022.[73] Y . Ji, Z. Li, Y . Sun, X. B. Peng, S. Levine, G. Berseth, and K. Sreenath. Hierarchicalreinforcement learning for precise soccer shooting skills using a quadrupedal robot. In 2022IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , 2022.[74] K. Caluwaerts, A. Iscen, J. C. Kew, W. Yu, T. Zhang, D. Freeman, K.-H. Lee, L. Lee, S. Saliceti,V . Zhuang, et al. Barkour: Benchmarking animal-level agility with quadruped robots. arXivpreprint arXiv:2305.14654 , 2023.[75] P. Fankhauser, M. Bloesch, and M. Hutter. Probabilistic terrain mapping for mobile robotswith uncertain localization. IEEE Robotics and Automation Letters , 2018.[76] I.-S. Kweon, M. Hebert, E. Krotkov, and T. Kanade. Terrain mapping for a roving planetaryexplorer. 
In IEEE International Conference on Robotics and Automation , 1989.[77] I.-S. Kweon and T. Kanade. High-resolution terrain map from multiple sensor data. IEEETransactions on Pattern Analysis and Machine Intelligence , 1992.12[78] A. Kleiner and C. Dornhege. Real-time localization and elevation mapping within urban searchand rescue scenarios. Journal of Field Robotics , 2007.[79] S. Gangapurwala, M. Geisert, R. Orsolino, M. Fallon, and I. Havoutis. Real-time trajectoryadaptation for quadrupedal locomotion using deep reinforcement learning. In 2021 IEEEInternational Conference on Robotics and Automation (ICRA) , 2021.[80] D. Belter, P. Łabcki, and P. Skrzypczy ́nski. Estimating terrain elevation maps from sparseand uncertain multi-sensor data. In 2012 IEEE International Conference on Robotics andBiomimetics (ROBIO) , 2012.[81] P. Fankhauser, M. Bloesch, C. Gehring, M. Hutter, and R. Siegwart. Robot-centric elevationmapping with uncertainty estimates. In Mobile Service Robotics . 2014.[82] Y . Pan, X. Xu, Y . Wang, X. Ding, and R. Xiong. Gpu accelerated real-time traversabilitymapping. In 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO) ,2019.[83] B. Yang, L. Wellhausen, T. Miki, M. Liu, and M. Hutter. Real-time optimal navigationplanning using learned motion costs. In 2021 IEEE International Conference on Robotics andAutomation (ICRA) , 2021.[84] R. O. Chavez-Garcia, J. Guzzi, L. M. Gambardella, and A. Giusti. Learning ground traversabil-ity from simulations. IEEE Robotics and Automation letters , 2018.[85] J. Guzzi, R. O. Chavez-Garcia, M. Nava, L. M. Gambardella, and A. Giusti. Path planningwith local motion estimations. IEEE Robotics and Automation Letters , 2020.[86] S. Yang, Z. Zhang, Z. Fu, and Z. Manchester. Cerberus: Low-drift visual-inertial-leg odometryfor agile locomotion. ICRA , 2023.[87] M. Bloesch, S. Omari, M. Hutter, and R. Siegwart. Robust visual inertial odometry using adirect ekf-based approach. In 2015 IEEE/RSJ international conference on intelligent robotsand systems (IROS) , pages 298–304. IEEE, 2015.[88] D. Wisth, M. Camurri, and M. Fallon. Vilens: Visual, inertial, lidar, and leg odometry forall-terrain legged robots. IEEE Transactions on Robotics , 2022.[89] D. Wisth, M. Camurri, S. Das, and M. Fallon. Unified multi-modal landmark tracking fortightly coupled lidar-visual-inertial odometry. IEEE Robotics and Automation Letters , 6(2):1004–1011, 2021.[90] R. Buchanan, M. Camurri, F. Dellaert, and M. Fallon. Learning inertial odometry for dynamiclegged robot state estimation. In Conference on Robot Learning , pages 1575–1584. PMLR,2022.[91] S. Yang, H. Kumar, Z. Gu, X. Zhang, M. Travers, and H. Choset. State estimation for leggedrobots using contact-centric leg odometry. arXiv preprint arXiv:1911.05176 , 2019.[92] D. Wisth, M. Camurri, and M. Fallon. Robust legged robot state estimation using factor graphoptimization. IEEE Robotics and Automation Letters , 2019.[93] S. Omari, M. Bloesch, P. Gohl, and R. Siegwart. Dense visual-inertial navigation system formobile robots. In 2015 IEEE International Conference on Robotics and Automation (ICRA) ,pages 2634–2640. IEEE, 2015.[94] C. Forster, M. Pizzoli, and D. Scaramuzza. Svo: Fast semi-direct monocular visual odometry.In2014 IEEE international conference on robotics and automation (ICRA) , 2014.[95] D. Scaramuzza and F. Fraundorfer. Visual odometry [tutorial]. IEEE robotics & automationmagazine , 2011.[96] M. DeDonato, V . Dimitrov, R. Du, R. Giovacchini, K. Knoedler, X. Long, F. Polido, M. 
A.Gennert, T. Padır, S. Feng, et al. Human-in-the-loop control of a humanoid robot for disasterresponse: a report from the darpa robotics challenge trials. Journal of Field Robotics , 2015.13[97] F. Jenelten, T. Miki, A. E. Vijayan, M. Bjelonic, and M. Hutter. Perceptive locomotion inrough terrain–online foothold optimization. IEEE Robotics and Automation Letters , 2020.[98] D. Kim, D. Carballo, J. Di Carlo, B. Katz, G. Bledt, B. Lim, and S. Kim. Vision aideddynamic exploration of unstructured terrain with a small-scale quadruped robot. In 2020 IEEEInternational Conference on Robotics and Automation (ICRA) , 2020.[99] M. Wermelinger, P. Fankhauser, R. Diethelm, P. Kr ̈usi, R. Siegwart, and M. Hutter. Navigationplanning for legged robots in challenging terrain. In 2016 IEEE/RSJ International Conferenceon Intelligent Robots and Systems (IROS) , 2016.[100] A. Chilian and H. Hirschm ̈uller. Stereo camera based navigation of mobile robots on roughterrain. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems , 2009.[101] C. Mastalli, I. Havoutis, A. W. Winkler, D. G. Caldwell, and C. Semini. On-line and on-boardplanning and perception for quadrupedal locomotion. In 2015 IEEE International Conferenceon Technologies for Practical Robot Applications (TePRA) , 2015.[102] A. Agrawal, S. Chen, A. Rai, and K. Sreenath. Vision-aided dynamic quadrupedal locomotionon discrete terrain using motion libraries. In 2022 International Conference on Robotics andAutomation (ICRA) , 2022.[103] J. Z. Kolter, M. P. Rodgers, and A. Y . Ng. A control architecture for quadruped locomotionover rough terrain. In 2008 IEEE International Conference on Robotics and Automation , 2008.[104] M. Kalakrishnan, J. Buchli, P. Pastor, and S. Schaal. Learning locomotion over rough terrainusing terrain templates. In 2009 IEEE/RSJ International Conference on Intelligent Robots andSystems , 2009.[105] L. Wellhausen and M. Hutter. Rough terrain navigation for legged robots using reachabilityplanning and template learning. In 2021 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , 2021.[106] C. Mastalli, M. Focchi, I. Havoutis, A. Radulescu, S. Calinon, J. Buchli, D. G. Caldwell, andC. Semini. Trajectory and foothold optimization using low-dimensional models for roughterrain locomotion. In 2017 IEEE International Conference on Robotics and Automation(ICRA) , 2017.[107] O. A. V . Magana, V . Barasuol, M. Camurri, L. Franceschi, M. Focchi, M. Pontil, D. G.Caldwell, and C. Semini. Fast and continuous foothold adaptation for dynamic locomotionthrough cnns. IEEE Robotics and Automation Letters , 2019.[108] F. L. G. Bermudez, R. C. Julian, D. W. Haldane, P. Abbeel, and R. S. Fearing. Performanceanalysis and terrain classification for a legged robot over rough terrain. In 2012 IEEE/RSJInternational Conference on Intelligent Robots and Systems , pages 513–519. IEEE, 2012.[109] V . Tsounis, M. Alge, J. Lee, F. Farshidian, and M. Hutter. Deepgait: Planning and control ofquadrupedal gaits using deep reinforcement learning. IEEE Robotics and Automation Letters ,2020.[110] X. B. Peng, G. Berseth, K. Yin, and M. Van De Panne. Deeploco: Dynamic locomotion skillsusing hierarchical deep reinforcement learning. ACM Transactions on Graphics (TOG) , 2017.[111] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massivelyparallel deep reinforcement learning. In Conference on Robot Learning , 2022.[112] D. Jain, A. Iscen, and K. Caluwaerts. 
From pixels to legs: Hierarchical learning of quadrupedlocomotion. arXiv preprint arXiv:2011.11722 , 2020.[113] A. Escontrela, G. Yu, P. Xu, A. Iscen, and J. Tan. Zero-shot terrain generalization for visuallocomotion policies. arXiv preprint arXiv:2011.05513 , 2020.[114] J. Chung, C. Gulcehre, K. Cho, and Y . Bengio. Empirical evaluation of gated recurrent neuralnetworks on sequence modeling. arXiv preprint arXiv:1412.3555 , 2014.14[115] R. Tedrake. Trajectory optimization. Underactuated Robotics . URL http://underactuated.mit.edu/trajopt.html .[116] W. Zhou and D. Held. Learning to grasp the ungraspable with emergent extrinsic dexterity. InConference on Robot Learning , 2022.[117] I. Mordatch, Z. Popovi ́c, and E. Todorov. Contact-invariant optimization for hand manipulation.InProceedings of the ACM SIGGRAPH/Eurographics symposium on computer animation ,2012.[118] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[119] V . Makoviychuk, L. Wawrzyniak, Y . Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin,A. Allshire, A. Handa, et al. Isaac gym: High performance gpu-based physics simulation forrobot learning. arXiv preprint arXiv:2108.10470 , 2021.[120] Y . Burda, H. Edwards, A. Storkey, and O. Klimov. Exploration by random network distillation.arXiv preprint arXiv:1810.12894 , 2018.15A Experiment VideosWe perform thorough real-world analysis of our system. Indoor and outdoor experiment videos canbe found at https://robot-parkour.github.io .B Details of Training in SimulationSpecialized Skills. A specialize skill policy consists of a GRU followed by a MLP that outputs thetarget joint positions. We concatenate all the observations including proprioception, last action,recurrent latent state of the GRU, privileged visual information and privileged physics informationas a flattened vector. It is passed to a one-layer GRU of 256 hidden sizes, followed by an MLPof hidden dimensions of [512, 256, 128]. We use ELU as the activation. The final layer outputs a12-dimensional vector and be fed to tanh activation function. The action ranges from 1to1, whichis scaled by a constant action scale: 0.4 for hip joints, and 0.6 for thigh and knee joints.Rewards, Environments and PPO. We follow the insights from [ 10,8,35] that use fractal noisesto generate terrains, which enforces the foot contact clearance. We use the reward terms for eachspecialized policies as listed in Table 3 to 7 in the supplementary. We use these parameters to trainall five specialized policies, in either RL pre-training with soft dynamics constraint or fine-tuningthe with hard dynamics constraints. The key parameters that related to the difficulties of the tasksare shown in the Table 1 of the main paper. Other parameters of the obstacles are set to constants:the obstacles for climbing is 0.8m wide and 0.8m long along the +x direction. The obstacles for theleaping task are gaps of 0.8m wide and 0.8m depth. For crawling, the obstacle is 0.8m wide and0.3m in the +x direction. For tilting, the length along the +x direction is 0.6m. We use a set of fixedvelocity commands vtargetx for each specialized skill during training. We list them in Table 8 of thesupplementary. We sample environment randomizations on the robot mass, center of mass of therobot base, motor strength, terrain friction, depth perception latency, camera position, field of viewand proprioception delay for each robot during training. 
The detailed environment randomization parameters are listed in Table 9. The detailed parameters of the PPO algorithm are listed in Table 10 of the supplementary.

Table 3: Reward Scales for Climbing
  Purposes             Hyperparameter Variable    Value
  x velocity           1                          1.
  y velocity           2                          1.
  angular velocity     3                          0.1
  energy               4                          2e6
  penetration depth    5                          1e2
  penetration volume   6                          1e2

Table 4: Reward Scales for Leaping
  Purposes             Hyperparameter Variable    Value
  x velocity           1                          1.
  y velocity           2                          1.
  angular velocity     3                          0.05
  energy               4                          2e6
  penetration depth    5                          4e3
  penetration volume   6                          4e3

Table 5: Reward Scales for Crawling
  Purposes             Hyperparameter Variable    Value
  x velocity           1                          1.
  y velocity           2                          1.
  angular velocity     3                          0.05
  energy               4                          2e5
  penetration depth    5                          6e2
  penetration volume   6                          6e2

Table 6: Reward Scales for Tilting
  Purposes             Hyperparameter Variable    Value
  x velocity           1                          1.
  y velocity           2                          1.
  angular velocity     3                          0.05
  energy               4                          1e5
  penetration depth    5                          3e3
  penetration volume   6                          3e3

Table 7: Reward Scales for Running
  Purposes             Hyperparameter Variable    Value
  x velocity           1                          1.
  y velocity           2                          1.
  angular velocity     3                          0.05
  energy               4                          1e5
  penetration depth    5                          0.
  penetration volume   6                          0.

Table 8: Velocity Commands for each Specialized Policy
  Skills      v_x^target (m/s)
  Running     0.8
  Climbing    1.2
  Leaping     1.5
  Crawling    0.8
  Tilting     0.5

Table 9: Environment Randomizations (x ± y: Gaussian distribution; [x, y]: uniform distribution)
  Parameters                 Distributions
  Added Mass                 [1.0, 3.0] (kg)
  Center of Mass (x)         [-0.05, 0.15] (m)
  Center of Mass (y)         [-0.1, 0.1] (m)
  Center of Mass (z)         [-0.05, 0.05] (m)
  Friction Coefficient       [0.5, 1.0]
  Motor Strength             [0.9, 1.1]
  Forward Depth Latency      [0.2, 0.26] (s)
  Camera Position (x)        0.27 ± 0.01 (m)
  Camera Position (y)        0.0075 ± 0.0025 (m)
  Camera Position (z)        0.033 ± 0.0005 (m)
  Camera Pitch               [0.0, 5.0] (deg)
  Field of View              [85, 88] (deg)
  Proprioception Latency     [0.0375, 0.0475] (s)

Table 10: PPO Hyperparameters
  PPO clip range                                   0.2
  GAE λ                                            0.95
  Learning rate                                    1e-4
  Reward discount factor                           0.99
  Minimum policy std                               0.2
  Number of environments                           4096
  Number of environment steps per training batch   24
  Learning epochs per training batch               5
  Number of mini-batches per training batch        4

Parkour Policy. The parkour policy consists of a CNN encoder, a GRU, and an MLP. The visual embedding from the CNN encoder is concatenated with the rest of the observation (proprioception, last action, and the recurrent latent state of the GRU) and fed to the GRU, whose output is then processed by the MLP module. The detailed parameters of the network structure are listed in Table 11 of the supplementary. An illustration of the parkour training environment in simulation is shown in Figure 8.

Table 11: Parkour Policy structure
  CNN channels         [16, 32, 32]
  CNN kernel sizes     [5, 4, 3]
  CNN pooling layer    MaxPool
  CNN stride           [2, 2, 1]
  CNN embedding dims   128
  RNN type             GRU
  RNN layers           1
  RNN hidden dims      256
  MLP hidden sizes     512, 256, 128
  MLP activation       ELU

Figure 8: Parkour training environment in simulation during distillation.

Figure 9: Comparison of our method with Oracles w/o Soft Dyn and RND. Panels (a)-(d) plot success rate for climb (0.45m), leap (0.7m), crawl (0.3m), and tilt (0.305m), comparing Oracles w/ Soft Dyn (Ours), Oracles w/o Soft Dyn, and RND, with the point where RL fine-tuning starts for Ours marked. For our method, the RL fine-tuning stage starts at the late stages of training.
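The appendix specifies the parkour policy only through Table 11, so a compact sketch may help make the data flow concrete. Below is a minimal PyTorch sketch of a CNN + GRU + MLP network using the layer sizes from Table 11; the exact layer ordering, the placement of the max-pooling layer, the 48×64 depth input, and the proprioception dimension (53 here) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ParkourPolicy(nn.Module):
    """Sketch of a CNN + GRU + MLP policy following Table 11 (unspecified sizes assumed)."""
    def __init__(self, prop_dim=53, act_dim=12, embed_dim=128, rnn_hidden=256):
        super().__init__()
        # CNN encoder for the 48x64 depth image: channels [16, 32, 32],
        # kernels [5, 4, 3], strides [2, 2, 1], with a max-pooling layer.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ELU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ELU(),
            nn.Flatten(),
            nn.LazyLinear(embed_dim), nn.ELU(),
        )
        self.gru = nn.GRU(embed_dim + prop_dim, rnn_hidden, num_layers=1, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(rnn_hidden, 512), nn.ELU(),
            nn.Linear(512, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, act_dim), nn.Tanh(),  # actions in [-1, 1], later scaled per joint
        )

    def forward(self, depth, prop, hidden=None):
        # depth: (B, 1, 48, 64), prop: (B, prop_dim)
        z = self.cnn(depth)                            # (B, embed_dim)
        x = torch.cat([z, prop], dim=-1).unsqueeze(1)  # single-step sequence for the GRU
        out, hidden = self.gru(x, hidden)
        return self.mlp(out.squeeze(1)), hidden

policy = ParkourPolicy()
action, h = policy(torch.zeros(1, 1, 48, 64), torch.zeros(1, 53))
```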
We use binary cross-entropy loss for the parkour policy during distillation. The output of both the specialized skills and the parkour policy ranges from −1 to 1:

D(a_parkour, a_specialized) = −[ (1 + a_specialized)/2 · log((1 + a_parkour)/2) + (1 − a_specialized)/2 · log((1 − a_parkour)/2) ],

where a_specialized is the action from the corresponding specialized skill and a_parkour is the action from the parkour policy.

C Details of Simulation Setup
We use IsaacGym Preview 4 for simulation. During the training of the specialized policies, we generate a large static terrain map before each training run. The terrain consists of 800 tracks arranged in a 20-by-40 grid. We set the difficulty of each track following a linear curriculum. The tracks in the same row have the same difficulty but differ in non-essential configurations. The tracks in each column are connected end to end, so that whenever the robot finishes the current track, it keeps moving forward (+x direction) onto a more difficult track. We train each specialized policy in soft dynamics on one Nvidia 3090 computer for 12 hours and fine-tune it in hard dynamics for 6 hours. For distillation, we use 4 computers, each equipped with one Nvidia 3090 GPU, sharing the same NFS file system. We use 3 computers to load the current training model and collect the parkour policy's trajectories as well as the specialized-policy supervision, and the remaining computer to load the latest trajectories and train the parkour policy.

D Details of Robot Setup
We use a Unitree A1 for our real-world experiments, equipped with an onboard Nvidia Jetson NX. The robot has 12 joints, each driven by a motor with 33.5 Nm instantaneous maximum torque. It also has a built-in Intel RealSense D435 camera at the front of the robot that uses infrared stereo to provide depth images. We use ROS1 on Ubuntu 18.04, running on the onboard Jetson NX. We use a ROS package based on the Unitree SDK to send and receive the robot states as well as the policy command at 100 Hz. The ROS package also provides a roll/pitch limit, an estimated torque limit, and an emergency-stop mechanism using the remote control as protection for the robot. To run the policy, we use two Python scripts: a CNN script that runs the visual encoder asynchronously and a main script that runs the rest of the networks. We use the Python wrapper of librealsense to capture depth images at a resolution of 240 × 424. We apply the hole-filling, spatial, and temporal filters from the librealsense utilities. We crop 60 pixels on the left and 46 pixels on the right before down-sampling the depth image to 48 × 64 resolution. The visual embedding is sent to the main script via a ROS message at 10 Hz. We fix the policy inference frequency to 50 Hz. In each loop, we update the robot proprioception and the visual embedding using ROS and compute the policy output actions. We then clip the action to a range computed from the current joint position and velocity at a maximum torque of 25 Nm, and send the position control command to the ROS package with Kp = 50.0 and Kd = 1.0.

E Detailed Comparison Studies on RL Pre-Training with Soft Dynamics Constraints
We compare our method with RND and the Oracles w/o Soft Dyn. Our method trained with soft dynamics constraints is the only method that can complete the climbing and leaping skills. As shown in Figure 9 of the supplementary, except for crawling, RND fails to learn successful maneuvers to achieve climbing, leaping, and tilting.
The Oracles w/o Soft Dyn learn the crawl and tilt skills but fail to learn to climb and leap, which are the most difficult of all the skills in this paper.
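To make the binary cross-entropy distillation objective defined above concrete, here is a minimal PyTorch sketch; the epsilon clamp inside the logarithms and the mean reduction over the batch are implementation assumptions not stated in the paper.

```python
import torch

def distillation_loss(a_parkour, a_specialized):
    """Binary cross-entropy between tanh-bounded actions in [-1, 1]: both the
    student (parkour) and teacher (specialized) actions are mapped to [0, 1]
    before the BCE terms, as in the distillation objective above."""
    eps = 1e-6  # assumed numerical guard, not from the paper
    p = (a_parkour + 1.0) / 2.0       # parkour policy action mapped to [0, 1]
    q = (a_specialized + 1.0) / 2.0   # specialized (teacher) action mapped to [0, 1]
    return -(q * torch.log(p.clamp(min=eps)) +
             (1.0 - q) * torch.log((1.0 - p).clamp(min=eps))).mean()

loss = distillation_loss(torch.tanh(torch.randn(4, 12)), torch.tanh(torch.randn(4, 12)))
```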
Simultaneous Learning of Contact and Continuous Dynamics
Bibit Bianchini, Mathew Halm, and Michael Posa
GRASP Laboratory, University of Pennsylvania
{bibit, mhalm, posa}@seas.upenn.edu
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Abstract: Robotic manipulation can greatly benefit from the data efficiency, robustness, and predictability of model-based methods if robots can quickly generate models of novel objects they encounter. This is especially difficult when effects like complex joint friction lack clear first-principles models and are usually ignored by physics simulators. Further, numerically-stiff contact dynamics can make common model-building approaches struggle. We propose a method to simultaneously learn contact and continuous dynamics of a novel, possibly multi-link object by observing its motion through contact-rich trajectories. We formulate a system identification process with a loss that infers unmeasured contact forces, penalizing their violation of physical constraints and laws of motion given current model parameters. Our loss is unlike prediction-based losses used in differentiable simulation. Using a new dataset of real articulated object trajectories and an existing cube toss dataset, our method outperforms differentiable simulation and end-to-end alternatives with more data efficiency. See our project page for code, datasets, and media: https://sites.google.com/view/continuous-contact-nets/home

Keywords: system identification, dynamics learning, contact-rich manipulation

Figure 1: Our method for learning dynamics of an unknown object. Left: A Franka Panda automates data collection by tossing an object onto a table as the object's configuration is recorded. Middle: Our violation-based implicit loss without explicit simulation trains simulator parameters and a residual δ that augments the learned continuous acceleration, encouraging the residual to learn smooth accelerations characteristic of continuous dynamics, while contact-related parameters implicitly define stiffer contact dynamics. Right: The trained model can be used with any simulator (contact solver) during inference for performing dynamics predictions.

1 Introduction
In the future of robotic manipulation off the assembly line, robots will encounter new objects in their environment and be expected to perform useful tasks with them, such as cooking with kitchen utensils, using tools, opening doors, and packing items. Model-based control methods work increasingly well in contact-rich scenarios [1, 2], but rely on models of the manipulated objects. Unlike factory settings where everything can be precisely modeled, or locomotion where the robot itself is typically the only dynamic agent, a challenge of manipulation in the wild lies in the unknown properties of the objects to be manipulated. Model-free methods are viable, though potentially require prohibitive amounts of data [3]. Building models on the fly could enable model-based control and result in more generalizable and robust performance, but is only realistic if model-building is fast.
Manipulation is fundamentally contact-rich, and the resulting discontinuous dynamics can make model construction particularly challenging [4]. Standard system identification methods work well for identifying smooth continuous dynamics parameters even in the presence of stiff contact [5, 6], though assuming contact-related parameters are known. Pfrommer et al.
[7] developed a physics-based method which circumvented numerical difficulties directly by leveraging the physical struc-ture to derive a smooth, violation-based loss, though assuming continuous dynamics are known. Itis the aim of this work to learn both continuous and contact dynamics simultaneously.The challenge of jointly learning continuous and contact dynamics is that the overall dynamics in-herit the stiffness of contact, whose impacts can overpower the smaller, smooth, albeit importantcontinuous accelerations. Thus, separating continuous dynamics from contact events, while de-sirable, is not straightforward. We extend Pfrommer et al. [7] which handles the contact-relatedportions, then combine optimization-friendly inertial parameterizations [8] with common deep neu-ral network (DNN) practices of encouraging smoothness to handle the continuous dynamics viaresidual physics, all while enjoying the data efficiency of an implicit model with suitable loss [9].1.1 Contributions and OutlineWe make the following contributions in this work:• Present and make available a dataset of over 500 real toss trajectories of an articulated object,whose state-dependent continuous dynamics are more complicated than single rigid bodies.• Extend prior work on learning contact dynamics [7] by simultaneously learning continuous dy-namics via a combination of model-based parameterization and DNN residual physics.• Demonstrate effective performance of our method on two real datasets and two simulation sce-narios with imposed actual-to-modeled continuous dynamics gaps. We provide comparisons withdifferentiable simulation and end-to-end learning as alternatives.We ground our motivations and methods in §2 with related background. §3 details model represen-tations, followed by loss formulations in §4. With experimental setup in §5, we present and discussthe results in §6, followed by a conclusion in §7 and discussion of limitations and future work in §8.2 Background and Related WorkChallenges of contact-rich dynamics. Detecting contact events is extremely difficult in manypractical scenarios. Many model building works solve simpler variations, e.g. utilizing a contactdetection oracle [10], assuming knowledge of contact distances [5, 6, 11], or operating on simplegeometries like spheres or 2D interactions [6]. Our work builds off [7]’s contact-related parameterlearning that performs automatic segmentation of contact/non-contact effects, without access to anoracle to identify contact events. Our work extends this contact-implicit model building beyond con-tact dynamics, to building full dynamics models, without impractical contact detection assumptions.Implicit representations for discontinuous functions. Recent works have employed implicit ap-proaches to represent sharp functions smoothly, whether those functions represent discontinuouscontrol policies [12] or contact models [13]. Other works demonstrating differentiation throughthese implicit representations make them viable for use in learning [14, 15]. These techniques relyon smooth parameters to implicitly encode signals for discontinuous or extremely stiff events, whichaccurately characterizes contact dynamics. However, the implicit model representation can be datainefficient to optimize when combined with explicit losses [9]. Implicit model representations incombination with an informative loss can successfully learn contact parameters [7].Differentiable simulation. 
Widely used for policy learning and control, differentiable simulators are also useful for system identification [5, 16, 17]. Differentiable simulators use governing dynamics that can be explicitly differentiated, and often compare simulated predictions with observed motion, using the difference to supervise model training. However, because they use prediction-based losses, differentiable simulators notoriously can have difficult-to-optimize loss landscapes when identifying contact-related parameters [18]. Differentiable simulation can be combined with artificially soft contact models to improve optimization [19], at the cost of model accuracy [4].

Inertial parameterizations. The inertia of a rigid body is completely described by 10 parameters: the mass, center of mass (3), and moments/products of inertia (6). Learning these directly can be problematic since many members of R^10 are physically infeasible inertia vectors. There are several previously developed mappings from θ_inertia ∈ R^10 to physically feasible sets of inertial parameters [8, 20, 21], and thus learning θ_inertia to indirectly yield inertial properties becomes a well-posed optimization problem. We use the Rucker and Wensing [8] parameterization in this work.

Residual physics. While model-based structures typically boast data efficiency compared to model-free approaches [3], they fundamentally suffer from inaccuracies of the model on which they are based. Residual physics [22, 23, 24, 25, 26] mitigates this by learning an expressive residual that fills a data-efficient but possibly insufficient structured model's sim-to-real gap. We use a residual physics DNN in this work to specifically augment the continuous dynamics of our structured model.

3 Model Representations
We consider a discrete dynamics model f parameterized by a set of learnable parameters θ that takes in some state x(k) and set of control inputs u(k) and performs a single simulation step,

x(k+1) = f_θ(x(k), u(k)). (1)

This makes no assumptions about the structure of f or what the learned parameters θ represent. In an unstructured case, f could be learned as a DNN where θ is the weights and biases of the network. In a more structured case (e.g., rigid body dynamics), θ represents physical parameters of the system.
Numerical methods commonly simulate contact by introducing an optimization problem to search for contact impulses λ(k) from a feasible set of contact impulses Λ over the time step,

x(k+1) = g_θ(x(k), u(k), λ(k)), (2a)
where λ(k) = arg min_{λ∈Λ} h_θ(x(k), u(k), x(k+1), λ), (2b)

where h_θ measures violation of contact constraints. This generic formulation underpins many common simulators, where the embedded optimization problem may be a linear complementarity problem (LCP) [27, 28], a second-order cone program [29], or some more generic structure [30].

3.1 Measuring Violation of Rigid Body Contact Dynamics
Inspired by the LCP formulation from Stewart and Trinkle [28], we follow standard methods for conversion to an equivalent optimization problem form in (2), introduced by Pfrommer et al. [7]. First, we let Λ describe a Coulomb friction cone,

λ ∈ Λ ⇔ ‖λ_{t,i}‖ ≤ μ_i λ_{n,i}, ∀ i = 1, . . . , p, (3)

for a system with p contacts.
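As a small illustration of the feasible impulse set in (3), the following NumPy sketch checks cone membership per contact; representing the tangential impulse with two components and using a numerical tolerance are assumptions for the example.

```python
import numpy as np

def in_friction_cone(lam_t, lam_n, mu, tol=1e-9):
    """Check the feasible-impulse condition of Eq. (3) for each of p contacts.

    lam_t: (p, 2) tangential impulse components per contact
    lam_n: (p,)   normal impulses
    mu:    (p,)   friction coefficients
    """
    return np.linalg.norm(lam_t, axis=-1) <= mu * lam_n + tol

# Example: two contacts; the second violates its cone.
lam_t = np.array([[0.1, 0.0], [1.0, 0.0]])
lam_n = np.array([1.0, 1.0])
mu = np.array([0.5, 0.5])
print(in_friction_cone(lam_t, lam_n, mu))  # [ True False ]
```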
Then we use penalty terms to describe violation of force complementarity, energy dissipation, and geometric penetration for each contact i,

h^θ_{comp,i}(k) = λ_{n,i}(k) φ_i(k+1), (4a)
h^θ_{diss,i}(k) = λ_i^T(k) [ μ_i ‖J_{t,i}(k) v(k+1)‖ ; J_{t,i}(k) v(k+1) ], (4b)
h^θ_{pen,i}(k) = min(0, φ_i(k+1))^2, (4c)

where φ is the set of signed distances, μ is the set of friction coefficients, J = [J_n; J_t] is the stacked normal and tangential contact Jacobian, and x = [q; v] is the state of system configuration and velocity. Thus, with relative weighting between the terms, h_θ becomes

h_θ(k) = Σ_{i=1}^{p} Σ_{j∈{comp, diss, pen}} w_j h^θ_{j,i}(k). (5)

With the contact dynamics described by h in (5) and the constraint in (3), the function g is a discretized version of Newton's third law to update system velocities, and an implicit Euler step to update the configuration. This g is a function of the implicit variable λ, and can be written as

v(k+1) = v(k) + a_continuous ∆t + M^{-1} J^T λ(k), (6a)
q(k+1) = q(k) + Γ v(k+1) ∆t, (6b)

where a_continuous is the acceleration of the system due to continuous dynamics, and Γ maps velocity space to configuration space (e.g., mapping angular velocity to the time derivative of an orientation quaternion). See Appendix A.1 for the selection of the introduced hyperparameter weights in h_θ.

3.2 Learnable Parameters
With the above model structure, the learnable parameters include the following. Geometry determines J and φ, both functions of the system configuration, i.e., J(q(k)), φ(q(k)). We parameterize J and φ by a set of vertices whose 3D locations are learnable. Friction, via μ, determines the permissible set of contact impulses per contact point. In this work, the friction is parameterized by a single scalar μ. Inertia affects the model's forward predictions in (6), where M and a_continuous appear. With articulation, M is a function of the system configuration, i.e., M(q(k)), and a_continuous of the full state, a_continuous(x(k)). We map learnable parameters in R^10 to a physically feasible set of 10 inertia parameters, per Rucker and Wensing [8]. Under autonomous dynamics, the mass of a system is unobservable if contact forces are not measured [6]. Thus, we keep the total system mass fixed, then learn the remaining moments and products proportionally as well as the center of mass.

3.3 Residual Network
In this work, we use a residual physics DNN to compensate for inaccuracies in the model structure in (2). Since rigid body contact solvers like [28] work reasonably well to capture real inelastic contact dynamics [31], we encourage the residual to fill gaps in the continuous dynamics. We add components to the continuous acceleration of the system,

a_continuous(x(k)) = a_continuous,model(x(k)) + δ_θ(x(k)), (7)

where δ_θ ∈ R^{n_vel} is the output of a residual network whose input is the state of the system. See Appendix A.2 for network architecture details. Adding costs on the norm of the network's output and on its weights encourages the residual to be small and smooth, respectively, as continuous dynamics are in comparison to contact dynamics.
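The update in (6)–(7) can be summarized in a few lines. The NumPy sketch below is illustrative only: it assumes the contact impulses λ and the continuous acceleration are already available, and the shapes are placeholders rather than the paper's implementation.

```python
import numpy as np

def step(q, v, a_model, delta, M, J, lam, Gamma, dt):
    """One update in the style of Eqs. (6)-(7): the learned residual `delta`
    augments the modeled continuous acceleration, and contact impulses `lam`
    enter through the contact Jacobian.

    q: (nq,) configuration        v: (nv,) velocity
    a_model, delta: (nv,)         M: (nv, nv) mass matrix
    J: (nc, nv) contact Jacobian  lam: (nc,) contact impulses
    Gamma: (nq, nv) velocity-to-configuration-rate map
    """
    a_continuous = a_model + delta                                   # Eq. (7)
    v_next = v + a_continuous * dt + np.linalg.solve(M, J.T @ lam)   # Eq. (6a)
    q_next = q + Gamma @ v_next * dt                                 # Eq. (6b)
    return q_next, v_next

# Toy call with placeholder shapes (nq = 7, nv = 6, nc = 3).
q_next, v_next = step(np.zeros(7), np.zeros(6), np.zeros(6), np.zeros(6),
                      np.eye(6), np.zeros((3, 6)), np.zeros(3),
                      np.zeros((7, 6)), 1e-2)
```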
For end-to-end alternatives which aim to capture both continuous and contact dynamics in one network, weight regularization is no longer beneficial since it leads to unrealistically soft contact dynamics. To capture the stiffness of contact, the result is an end-to-end network with extreme input sensitivity. In contrast, incorporating our residual into the continuous acceleration allows the inherent stiffness of contact to be implicitly learned via geometric and frictional properties, leaving the residual in a smooth domain better suited for standard DNN approaches [4].

4 Loss Formulation
Our specific model structure alone does not affect the generalization capabilities of a model, and the choice of loss function is also of vital importance for stiff function classes [9]. Fig. 2 diagrammatically illustrates the differences between the losses presented in this section and how they relate to explicit versus implicit model usage for simulation.

4.1 Prediction Loss
Standard approaches in model-building or system identification [32, 17, 23] use a prediction-based loss that penalizes the error in a candidate model's predictions,

L_prediction = ‖x(k+1) − f_θ(x(k), u(k))‖^2. (8)

Figure 2: Top row: Using an explicit model (left) versus an implicit model (right) for performing dynamics predictions. Explicit models aim to predict next states directly from current states and inputs. Implicit models instead parameterize and leverage contact solvers, which produce next states as a result of an optimization problem. Bottom row: A common way to train an explicit model is via a prediction-based loss (left). Implicit models can also be trained with a prediction-based loss (middle), requiring performing and differentiating through the contact solver. Our approach trains an implicit model with a violation-based loss (right), avoiding simulation during training time and producing smoother, more informative gradients.

Differentiable simulators typically employ the implicit optimization problem as in (2) to solve for contact impulses, so the loss becomes (equivalently)

L_prediction = ‖x(k+1) − g_θ(x(k), u(k), λ(k))‖^2, (9a)
such that λ(k) = arg min_{λ∈Λ} h_θ(x(k), u(k), x(k+1), λ). (9b)

Both explicit approaches (8) and implicit approaches (9) functionally result in simulating a candidate model and penalizing the difference between its prediction and the true dynamics observation.

4.2 Violation-Based Implicit Loss
Despite the increasing prevalence of implicit approaches, prediction-based losses inhibit the generalization benefits of implicit models [9]. A violation-based implicit loss of the form

L_violation = min_{λ∈Λ} [ ‖x(k+1) − g_θ(x(k), u(k), λ)‖^2 + h_θ(x(k), u(k), x(k+1), λ) ] (10)

uses h as a soft constraint and, as a result, boasts greater data efficiency [9]. This loss itself is an optimization problem that solves for the set of contact impulses λ that balances 1) explaining the observed motion and 2) matching the learned contact dynamics model. Thus minimizing this loss function through the training process is a bilevel optimization problem. For full details, see [9, 7]. This loss performs inference over contact mode, a key enabling technique for contact-implicit planning and control [1, 33].
The exact form of the prediction error term ‖x(k+1) − g_θ(x(k), u(k), λ)‖^2 employed herein penalizes errors in velocity space, since configurations are an affine function of velocity predictions (6b). A natural way to combine mixed linear and angular terms is to convert all into energy units via

ℓ^θ_{pred,energy}(k) = ‖M ∆v(k) + J^T λ‖^2_{M^{-1}}, (11)
where ∆v(k) = −v(k+1) + v(k) + a_continuous ∆t. (12)
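To show how the inner minimization in (10) differs from simulate-then-compare losses, here is a hedged sketch that treats the contact impulses as decision variables of a generic solver. Using SciPy's L-BFGS-B with a cone-violation penalty, and dropping the control input u (the experiments are autonomous tosses), are simplifications for illustration rather than the structured convex program used in practice.

```python
import numpy as np
from scipy.optimize import minimize

def violation_loss(x, x_next, g_theta, h_theta, lam_dim, cone_penalty):
    """Sketch of Eq. (10): an inner minimization over contact impulses that
    trades off (i) explaining the observed next state against (ii) violating
    the learned contact model, with the cone constraint handled by a penalty."""
    def objective(lam):
        pred_err = np.sum((x_next - g_theta(x, lam)) ** 2)
        return pred_err + h_theta(x, x_next, lam) + cone_penalty(lam)
    res = minimize(objective, np.zeros(lam_dim), method="L-BFGS-B")
    return res.fun

# Toy usage with placeholder models (purely illustrative).
x, x_next = np.zeros(3), np.array([0.0, 0.0, 0.1])
g = lambda x, lam: x + lam                                          # toy "dynamics"
h = lambda x, xn, lam: 0.1 * float(lam @ lam)                       # toy violation term
pen = lambda lam: 100.0 * float(np.sum(np.minimum(lam, 0.0) ** 2))  # keep impulses nonnegative
print(violation_loss(x, x_next, g, h, 3, pen))
```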
5 Experimental Setup
For all experiments, we consider a system autonomously falling under gravity and colliding with a flat plane. In addition to the cube toss dataset contributed by Pfrommer et al. [7], we contribute one new real dataset and two simulated scenarios. We used a Franka Emika Panda 7 degree-of-freedom robotic arm to automate the toss data collection of a two-link articulated object. Pose information of each link is tracked using TagSLAM [34]. These two body poses are combined into minimal coordinates via an optimization problem that minimizes the pose offset of both links. While we keep the system mass fixed, our model can freely decide the mass distribution across the two links.

Figure 3: Our experimental systems. Left: A real two-link articulated object, with each link as 5cm by 5cm by 10cm. Left middle: A real cube of width 10cm, whose dataset was contributed by [7]. Right: A simulated 6-vertex asymmetric object from two views. The volume of the asymmetric object is similar to the volume of one of the articulated object links.

We add a vortex simulation of an asymmetric object example, simulating dynamics with a spatially-varying force field pulling towards and swirling around a fixed vertical line. The initialized model is unaware of this continuous dynamics augmentation. We test in this scenario with an asymmetric object with 6 vertices. Lastly, we include a gravity simulation of the articulated object. This scenario features simulated dynamics of the articulated object from our new dataset with typical gravitational acceleration of 9.81 m/s^2. We test at a fixed training set size of 256 tosses from poor initial guesses, at some fraction in [0, 2] of this simulated gravity. All simulation data was generated using Drake [35] and features a significant gap between the model's believed and the simulator's actual dynamics. See Fig. 3 for visuals of these systems.

Parameter Initializations. All experiments were run a minimum of 9 times each with a random parameter initialization using a process described in Appendix A.4. Shaded regions in the results plots indicate 5%/95% normal t-score confidence intervals.

Comparisons and Evaluation Metrics. See Table 1 for the five approaches we tested. Anitescu dynamics [36] were selected as a reasonable differentiable simulation baseline, as it forms the basis of many widely-used, modern simulators, notably including MuJoCo [30] and Drake [35]. We present prediction errors for all approaches and parameter errors for the structured approaches. Prediction errors are the average norm error of all bodies' position or orientation over the course of a trajectory. Defining V as the set of points inside a body's geometry and I = [m, p_x, p_y, p_z, I_xx, I_yy, I_zz, I_xy, I_xz, I_yz] as the set of body inertial parameters, the parameter errors quantifying the geometry, friction, and inertial properties for a system with n bodies are

e_volume = (1/n) Σ_{i=1}^{n} Vol((V_{i,actual} \ V_{i,learned}) ∪ (V_{i,learned} \ V_{i,actual})) / Vol(V_{i,actual}), (13a)
e_friction = ‖[μ_1, . . . , μ_n]_learned − [μ_1, . . . , μ_n]_actual‖, (13b)
e_inertia = ‖[s · I_1, . . . , s · I_n]_learned − [s · I_1, . . . , s · I_n]_actual‖. (13c)

The vector s is akin to a "characteristic length" that is effectively normalized by the inertia of the true object.
See more details on the volume and inertia metrics in Appendices B.1 and B.2, respectively.

  Name          Parameterization    Loss                  Residual
  CCN (ours)    Structured          Violation implicit
  CCN-R (ours)  Structured          Violation implicit    ✓
  DiffSim       Structured          Prediction error
  DiffSim-R     Structured          Prediction error      ✓
  End-to-end    DNN                 Prediction error      N/A

Table 1: Tested approaches. CCN stands for our extension of Continuous dynamics learning plus the contact dynamics learning in ContactNets [7]. DiffSim is Differentiable Simulation using differentiable contact dynamics defined in Anitescu [36]. The -R modifier indicates residual physics is included. DiffSim ablates our violation implicit loss function, and the End-to-end baseline ablates the physical structure imposed by the rigid body model-based parameterization. See Appendix A.3 for End-to-end network details.

Figure 4: Results from the four experiments. Shaded regions indicate normal t-score 95% confidence intervals. Left column: The real articulated object featured rotations that were difficult for any of the methods to capture over a long time horizon (middle), yet our CCN approaches outperform DiffSim on geometry error (bottom) and all alternatives on positional error (top). Left middle column: For every metric on the real cube experiments, our CCN approaches outperform DiffSim and End-to-end. Right middle column: While every method achieved low geometry error on the asymmetric object in simulated vortex dynamics, our CCN approaches performed the best in rotational error, and only our approach with residual (CCN-R) was able to achieve low positional error. Right column: The x-axis for the gravity experiments swept over an initial modeled gravitational acceleration. Despite poor model discrepancy, only our approach with residual, CCN-R, is able to maintain good performance across all metrics at different model discrepancies.

Figure 5: Friction (Left, Left middle) and inertia (Right middle, Right) parameter errors for the vortex and gravity simulated experiments.

6 Results
We test the methods across challenging datasets featuring collisions through contact-rich trajectories. While the contact dynamics are prominent in all the example trajectories, we built the articulated system in particular for its continuous dynamics: non-trivial due to state-dependent Coriolis and centrifugal effects and unmodeled joint friction, damping, or backlash.

Observations. Both real experiments (articulated object and cube in the left and left middle columns of Fig. 4, respectively) show separation between CCN, DiffSim, and End-to-end methods, with CCN matching or outperforming alternatives along all metrics, especially with more data. On the cube dataset, CCN and CCN-R consistently converge to <10% volume error while DiffSim struggles to improve even with more data. The residual does not significantly help with real data but does in simulated examples, where CCN-R in the vortex scenario (right middle column in Fig. 4) improves its positional trajectory error significantly, achieving consistently 5x better performance than other methods at the largest dataset size. On the same metric, DiffSim-R sees no improvement beyond DiffSim. Fig. 5 shows CCN and CCN-R nearly always outperform DiffSim and DiffSim-R, except for the vortex scenario where all methods perform well. In the gravity scenario (right column in Fig. 4), the residual helps CCN-R maintain good performance achieved at the correct gravitational acceleration, across all initial models.
In contrast, DiffSim-R outperforms DiffSim at everyinitial gravitational acceleration model. Since this gravity scenario swept over different modeledgravitational accelerations, End-to-end is unaffected since its representation is unstructured, and itsconsistent performance over the x-axis of these plots are included for reference against CCN andDiffSim. Parameter errors in Fig. 5 indicate DiffSim and DiffSim-R struggled to capture both thefriction and inertial terms in the gravity scenario to levels attained by CCN and CCN-R.Implications. The residual’s lack of effect on real data is in alignment with prior works that foundthe rigid body model already performs well in contact-rich scenarios with simple systems [31].The residual shows the most merit in the more extreme simulation examples, though its effect isn’trealized until larger dataset sizes – unsurprising for a DNN. The residual helps DiffSim to a muchlesser extent because our method better separates continuous and contact dynamics and allows theresidual to identify the smooth nature of the unmodeled vortex dynamics. Relatedly, there is betterperformance for DiffSim and DiffSim-R at overestimated gravitational accelerations rather thanunderestimated, where contact is less often predicted. Without contact, prediction losses experiencea lack of informative parameter gradients, in which case DiffSim-R outperforms DiffSim.7 ConclusionWe demonstrate with real experiments that our violation implicit loss trains models that outperformprediction loss-based structured and unstructured models. Our approach leverages the structureof contact versus continuous dynamics to learn both simultaneously, with physically meaningfulparameters driving separate contact and continuous dynamics with a DNN residual to augment.8 Limitations and Future Work.The articulated object is the most challenging system presented herein for dynamics learning. Whileour methods outperformed alternatives in all other metrics, there is still a significant gap betweenground truth and our models’ trajectory predictions, and the rotational error showed lackluster per-formance from all methods. Further closing this gap remains for future exploration, and we areopen-sourcing our articulated object dataset for the community to contribute their own methods. Thescalability of the method in this paper has not yet been demonstrated on large-scale systems (e.g. arobotic arm) or in multi-object settings. It remains to be seen whether the advantages demonstratedhere will extend as scope increases. While we tested one version of differentiable simulation usingAnitescu [36] dynamics, future studies will compare alternatives against each other and our viola-tion implicit loss in performing system identification. Our approach encouraged the residual to fillgaps in continuous dynamics while relying on a rigid body contact dynamics model to handle con-tact. Other works have demonstrated improved prediction capability by learning the contact model[37], though integrating this with system identification remains future work. While other works havelearned articulated structures from scratch [38, 39, 40], we assumed access to kinematic structurein this paper, leaving joint kinematics/dynamics learning for future studies. Lastly, we relied onAprilTags to estimate poses, which are more challenging to obtain via perception [41, 42, 43].8AcknowledgmentsWe thank our anonymous reviewers, who provided thorough and fair feedback. 
This work wassupported by a National Defense Science and Engineering Graduate Fellowship, an NSF GraduateResearch Fellowship under Grant No. DGE-1845298, and an NSF CAREER Award under GrantNo. FRR-2238480.References[1] A. Aydinoglu, A. Wei, and M. Posa. Consensus complementarity control for multi-contactmpc. arXiv preprint arXiv:2304.11259 , Apr. 2023.[2] A. Aydinoglu and M. Posa. Real-time multi-contact model predictive control via admm. In2022 International Conference on Robotics and Automation (ICRA) , pages 3414–3421. IEEE,2022.[3] L. Ljung. Perspectives on system identification. Annual Reviews in Control , 34(1):1–12, 2010.[4] M. Parmar, M. Halm, and M. Posa. Fundamental challenges in deep learning for stiff contactdynamics. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 5181–5188. IEEE, 2021.[5] F. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter. End-to-enddifferentiable physics for learning and control. Advances in neural information processingsystems , 31:7178–7189, 2018.[6] N. Fazeli, R. Kolbert, R. Tedrake, and A. Rodriguez. Parameter and contact force estimationof planar rigid-bodies undergoing frictional contact. The International Journal of RoboticsResearch , 36(13-14):1437–1454, 2017.[7] S. Pfrommer, M. Halm, and M. Posa. ContactNets: Learning Discontinuous Contact Dynamicswith Smooth, Implicit Representations. In The Conference on Robot Learning (CoRL) , 2020.URL https://proceedings.mlr.press/v155/pfrommer21a.html .[8] C. Rucker and P. M. Wensing. Smooth parameterization of rigid-body inertia. IEEE Roboticsand Automation Letters , 7(2):2771–2778, 2022.[9] B. Bianchini, M. Halm, N. Matni, and M. Posa. Generalization bounded implicit learningof nearly discontinuous functions. In Learning for Dynamics and Control Conference , pages1112–1124. PMLR, 2022.[10] A. Hochlehnert, A. Terenin, S. Sæmundsson, and M. Deisenroth. Learning contact dynamicsusing physically structured neural networks. In International Conference on Artificial Intelli-gence and Statistics , pages 2152–2160. PMLR, 2021.[11] K. R. Allen, Y . Rubanova, T. Lopez-Guevara, W. Whitney, A. Sanchez-Gonzalez, P. Battaglia,and T. Pfaff. Learning rigid dynamics with face interaction graph networks. arXiv preprintarXiv:2212.03574 , 2022.[12] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mor-datch, and J. Tompson. Implicit behavioral cloning. In Conference on Robot Learning , pages158–168. PMLR, 2022.[13] N. Fazeli, S. Zapolsky, E. Drumwright, and A. Rodriguez. Learning data-efficient rigid-bodycontact models: Case study of planar impact. In Conference on Robot Learning , pages 388–397. PMLR, 2017.[14] A. Agrawal, S. Barratt, S. Boyd, E. Busseti, and W. M. Moursi. Differentiating through a coneprogram. arXiv preprint arXiv:1904.09043 , 2019.9[15] B. Amos and J. Z. Kolter. Optnet: Differentiable optimization as a layer in neural networks.InInternational Conference on Machine Learning , pages 136–145. PMLR, 2017.[16] S. Le Cleac’h, M. Schwager, Z. Manchester, V . Sindhwani, P. Florence, and S. Singh. Single-level differentiable contact simulation. IEEE Robotics and Automation Letters , 2023.[17] T. A. Howell, S. L. Cleac’h, J. Z. Kolter, M. Schwager, and Z. Manchester. Dojo: A differen-tiable simulator for robotics. arXiv preprint arXiv:2203.00806 , 2022.[18] R. Antonova, J. Yang, K. M. Jatavallabhula, and J. Bohg. Rethinking optimization with dif-ferentiable simulation from a global perspective. 
In Conference on Robot Learning , pages276–286. PMLR, 2023.[19] R. Tedrake. Underactuated Robotics . 2023. URL https://underactuated.csail.mit.edu.[20] G. Sutanto, A. Wang, Y . Lin, M. Mukadam, G. Sukhatme, A. Rai, and F. Meier. Encodingphysical constraints in differentiable newton-euler algorithm. In Learning for Dynamics andControl , pages 804–813. PMLR, 2020.[21] C. G. Atkeson, C. H. An, and J. M. Hollerbach. Estimation of inertial parameters of manipu-lator loads and links. The International Journal of Robotics Research , 5(3):101–119, 1986.[22] J. Wong, V . Makoviychuk, A. Anandkumar, and Y . Zhu. Oscar: Data-driven operationalspace control for adaptive and robust robot manipulation. In 2022 International Conference onRobotics and Automation (ICRA) , pages 10519–10526. IEEE, 2022.[23] E. Heiden, D. Millard, E. Coumans, Y . Sheng, and G. S. Sukhatme. Neuralsim: Augmentingdifferentiable simulators with neural networks. In 2021 IEEE International Conference onRobotics and Automation (ICRA) , pages 9474–9481. IEEE, 2021.[24] P. Abbeel, M. Quigley, and A. Y . Ng. Using inaccurate models in reinforcement learning. InProceedings of the 23rd international conference on Machine learning , pages 1–8, 2006.[25] A. Ajay, J. Wu, N. Fazeli, M. Bauza, L. P. Kaelbling, J. B. Tenenbaum, and A. Rodriguez.Augmenting physical simulators with stochastic neural networks: Case study of planar pushingand bouncing. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 3066–3073. IEEE, 2018.[26] A. Zeng, S. Song, J. Lee, A. Rodriguez, and T. Funkhouser. Tossingbot: Learning to throwarbitrary objects with residual physics. IEEE Transactions on Robotics , 36(4):1307–1319,2020.[27] M. Anitescu and F. A. Potra. Formulating dynamic multi-rigid-body contact problems withfriction as solvable linear complementarity problems. Nonlinear Dynamics , 14(3):231–247,1997.[28] D. E. Stewart and J. C. Trinkle. An implicit time-stepping scheme for rigid body dynamicswith inelastic collisions and coulomb friction. International Journal for Numerical Methodsin Engineering , 39(15):2673–2691, 1996.[29] A. M. Castro, F. N. Permenter, and X. Han. An unconstrained convex formulation of compliantcontact. IEEE Transactions on Robotics , 2022.[30] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In2012 IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026–5033. IEEE, 2012.[31] B. Acosta, W. Yang, and M. Posa. Validating robotics simulators on real-world impacts. IEEERobotics and Automation Letters , 7(3):6471–6478, 2022.10[32] A. Nagabandi, K. Konolige, S. Levine, and V . Kumar. Deep dynamics models for learningdexterous manipulation. In Conference on Robot Learning , pages 1101–1112. PMLR, 2020.[33] M. Posa, C. Cantu, and R. Tedrake. A direct method for trajectory optimization of rigid bodiesthrough contact. The International Journal of Robotics Research , 33(1):69–81, 2014.[34] B. Pfrommer and K. Daniilidis. Tagslam: Robust slam with fiducial markers. arXiv preprintarXiv:1910.00679 , 2019.[35] R. Tedrake and the Drake Development Team. Drake: Model-based design and verification forrobotics, 2019. URL https://drake.mit.edu .[36] M. Anitescu. Optimization-based simulation of nonsmooth rigid multibody dynamics. Math-ematical Programming , 105:113–143, 2006.[37] K. R. Allen, T. L. Guevara, Y . Rubanova, K. Stachenfeld, A. Sanchez-Gonzalez, P. Battaglia,and T. Pfaff. Graph network simulators can learn discontinuous, rigid contact dynamics. 
In Conference on Robot Learning, pages 1157–1167. PMLR, 2023.
[38] J. Sturm, C. Stachniss, and W. Burgard. A probabilistic framework for learning kinematic models of articulated objects. Journal of Artificial Intelligence Research, 41:477–526, 2011.
[39] L. Ma, J. Meng, S. Liu, W. Chen, J. Xu, and R. Chen. Sim2real2: Actively building explicit physics model for precise articulated object manipulation. arXiv preprint arXiv:2302.10693, 2023.
[40] N. Heppert, M. Z. Irshad, S. Zakharov, K. Liu, R. A. Ambrus, J. Bohg, A. Valada, and T. Kollar. Carto: Category and joint agnostic reconstruction of articulated objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21201–21210, 2023.
[41] B. Wen and K. Bekris. Bundletrack: 6d pose tracking for novel objects without instance or category-level 3d models. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 8067–8074. IEEE, 2021.
[42] H. Chen, F. Manhardt, N. Navab, and B. Busam. Texpose: Neural texture learning for self-supervised 6d object pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4841–4852, 2023.
[43] Y. Liu, Y. Wen, S. Peng, C. Lin, X. Long, T. Komura, and W. Wang. Gen6d: Generalizable model-free 6-dof object pose estimation from rgb images. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXII, pages 298–315. Springer, 2022.

A Learning Details

  Hyperparameter    CCN, real    CCN, sim    DiffSim, real    DiffSim, sim
  w_comp            0.001        0.001       N/A              N/A
  w_diss            0.1          0.1         N/A              N/A
  w_pen             100          100         N/A              N/A
  w_res             1            0.001       1000             1
  w_res,w           0.1          0           1                0

Table 2: Tuned hyperparameters. Rows for residual norm (w_res) and weight (w_res,w) regularization only apply for -R variations. Real versus simulated experiments performed best with different residual regularization weights since the simulations featured larger model-to-actual dynamics gaps.

A.1 Model-Based Parameter Learning
For our CCN and CCN-R methods, we performed a hyperparameter search to determine the most effective set of weights for balancing the loss terms in (5). See Table 2 for these sets of weights.

A.2 Residual Network Architecture and Regularization
The residual network featured in both CCN-R and DiffSim-R has the same architecture. The first layer takes in the full state of the system and converts the quaternion orientation representation into a 9-vector of the elements of the corresponding rotation matrix, letting the remaining state positions and velocities pass through to layer 2. Beyond the first layer, the network is a fully-connected multi-layer perceptron (MLP) with two hidden layers of size 128. The last layer outputs values in the acceleration space of the system. All activations are ReLU.
We regularized the residual via both output norm regularization and weight regularization, with associated weight hyperparameters w_res and w_res,w, respectively. See Table 2 for the optimal values. Since the simulation examples were specifically designed to test the capabilities of the residual network, we found the optimal weights for the residual terms were much lower for simulated examples than for the real data. We also note that the optimal residual weights were much higher for DiffSim than for CCN. This is a direct result of the DiffSim residual's attempts to explain some of the contact dynamics, whose accelerations are orders of magnitude larger than the continuous accelerations. Our CCN method avoids this by better containing its residual in the continuous domain, and thus could use lower residual regularization weights.
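A minimal PyTorch sketch of the residual described in A.2, together with regularizers in the spirit of w_res and w_res,w, is given below; it assumes a single floating-base quaternion in the state and treats the squared-weight form of the weight regularizer as an implementation choice, not a detail taken from the paper.

```python
import torch
import torch.nn as nn

def quat_to_rotmat(q):
    """Convert unit quaternions (w, x, y, z), shape (B, 4), to flattened 3x3
    rotation matrices, shape (B, 9)."""
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y),
        2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x),
        2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)], dim=-1)

class Residual(nn.Module):
    """Sketch of the residual in A.2: the quaternion part of the state is
    expanded to its rotation-matrix elements, the rest of the state passes
    through, and an MLP with two hidden layers of 128 (ReLU) outputs an
    acceleration-space correction."""
    def __init__(self, n_rest, n_vel):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(9 + n_rest, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_vel))

    def forward(self, quat, rest):
        return self.mlp(torch.cat([quat_to_rotmat(quat), rest], dim=-1))

def residual_regularizer(model, delta, w_res=1.0, w_res_w=0.1):
    # Output-norm and weight regularization in the spirit of w_res / w_res,w.
    weight_norm = sum((p ** 2).sum() for p in model.parameters())
    return w_res * (delta ** 2).mean() + w_res_w * weight_norm

net = Residual(n_rest=16, n_vel=6)
q = torch.nn.functional.normalize(torch.randn(8, 4), dim=-1)
delta = net(q, torch.randn(8, 16))
reg = residual_regularizer(net, delta)
```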
A.3 End-to-End Network Architecture
The best performing network for the End-to-end baseline is an MLP with 4 hidden layers, each of size 256, with Tanh activation. Its input is the full state of the system, and its output is the next velocity. The next configuration is obtained from the predicted next velocity with an Euler step (6b).

A.4 Parameter Initializations
All learned parameters are randomly initialized within pre-specified ranges. Geometric parameters are initialized between 0.5 and 1.5 times their true lengths, and friction coefficients between 0.5 and 1.5 times their approximate true values. Inertial parameters are initialized to a set of physically feasible values via the following procedure for each link in the body:
1. A virtual link is sized via a random set of three length scales l_x, l_y, l_z, chosen between 0.5 and 1.5 times the link's true dimensions.
2. The center of mass of the link is initialized to be somewhere within the inner half of this virtual link's geometry.
3. A random mass m_rand is selected from the range between 0.5 m_urdf and 1.5 m_urdf.
4. Principal axis moments of inertia are computed using the assumption of uniform density throughout the randomly-sized virtual link, via e.g. for I_xx:

I_xx,principal axis = (m_rand / 12)(l_y^2 + l_z^2). (14)

5. Rotate the inertia matrix from its principal axis definition by a random rotation in SO(3). The link's initialized moments and products of inertia are derived from this rotated version.

B Evaluation Metric Details

B.1 Volume Evaluation Metric
The volume error metric defined in Equation (13a) is computed as the fraction of volume that the learned geometry incorrectly included or incorrectly excluded. To compute this, we used the identity

Vol(A \ B) = Vol(A) − Vol(A ∩ B). (15)

The numerator of (13a) can therefore be computed as

Vol(V_{i,actual}) + Vol(V_{i,learned}) − 2 Vol(V_{i,actual} ∩ V_{i,learned}). (16)

V_{i,learned}, V_{i,actual}, and their intersection are all convex hulls of a finite number of vertices. Therefore, the intersection operation as well as the volume calculation can be conducted with a standard convex hull or halfspace intersection library, such as qhull.

B.2 Inertia Evaluation Metric
A body's set of inertial parameters is I = [m, p_x, p_y, p_z, I_xx, I_yy, I_zz, I_xy, I_xz, I_yz]. Since true inertia parameter vectors feature values at wildly different scales, the vector s is selected to normalize I to more equally evaluate all inertial parameter errors. For example, the true inertial parameters for the simulated asymmetric object used in the vortex example are

I_asym = [0.25, 0, 0, 0, 0.00081, 0.00081, 0.00081, 0, 0, 0]. (17)

Choosing 3.5 cm as a reasonable center of mass location distance, the associated s_asym normalizer is

s_asym = [1/0.25, 1/0.035, 1/0.035, 1/0.035, 1/0.00081, 1/0.00081, 1/0.00081, 1/0.00081, 1/0.00081, 1/0.00081]. (18)
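B.1 notes that the intersection and volume computations reduce to convex-hull operations (e.g., via qhull). Below is a sketch of one way to compute the intersection volume with SciPy's qhull bindings; the Chebyshev-center LP used to find a strictly interior point is an assumed implementation detail, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial import ConvexHull, HalfspaceIntersection

def intersection_volume(points_a, points_b):
    """Volume of the intersection of two convex hulls, in the spirit of B.1.
    Halfspaces of both hulls are stacked; a Chebyshev-center LP finds the
    strictly interior point required by HalfspaceIntersection. Returns 0 if
    the intersection is empty or degenerate."""
    halfspaces = np.vstack([ConvexHull(points_a).equations,
                            ConvexHull(points_b).equations])  # rows [n, d]: n.x + d <= 0
    A, b = halfspaces[:, :-1], -halfspaces[:, -1]
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    # Maximize the inscribed radius r subject to A x + ||A_i|| r <= b.
    res = linprog(c=np.r_[np.zeros(A.shape[1]), -1.0],
                  A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    if not res.success or res.x[-1] <= 1e-9:
        return 0.0
    inter = HalfspaceIntersection(halfspaces, res.x[:-1])
    return ConvexHull(inter.intersections).volume

# Two unit cubes offset by 0.5 along x overlap in half a unit of volume.
cube = lambda c: np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)]) + c
print(intersection_volume(cube(np.zeros(3)), cube(np.array([0.5, 0.0, 0.0]))))  # ~0.5
# e_volume (13a) then follows from inclusion-exclusion (15)-(16):
# (vol_a + vol_b - 2 * intersection_volume(pts_a, pts_b)) / vol_a.
```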
VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models
Wenlong Huang¹, Chen Wang¹, Ruohan Zhang¹, Yunzhu Li¹,², Jiajun Wu¹, Li Fei-Fei¹
¹Stanford University  ²University of Illinois Urbana-Champaign

Abstract: Large language models (LLMs) are shown to possess a wealth of actionable knowledge that can be extracted for robot manipulation in the form of reasoning and planning. Despite the progress, most still rely on pre-defined motion primitives to carry out the physical interactions with the environment, which remains a major bottleneck. In this work, we aim to synthesize robot trajectories, i.e., a dense sequence of 6-DoF end-effector waypoints, for a large variety of manipulation tasks given an open-set of instructions and an open-set of objects. We achieve this by first observing that LLMs excel at inferring affordances and constraints given a free-form language instruction. More importantly, by leveraging their code-writing capabilities, they can interact with a vision-language model (VLM) to compose 3D value maps to ground the knowledge into the observation space of the agent. The composed value maps are then used in a model-based planning framework to zero-shot synthesize closed-loop robot trajectories with robustness to dynamic perturbations. We further demonstrate how the proposed framework can benefit from online experiences by efficiently learning a dynamics model for scenes that involve contact-rich interactions. We present a large-scale study of the proposed method in both simulated and real-robot environments, showcasing the ability to perform a large variety of everyday manipulation tasks specified in free-form natural language. Project website: voxposer.github.io.

Keywords: Manipulation, Large Language Models, Model-based Planning

Figure 1: VOXPOSER extracts language-conditioned affordances and constraints from LLMs and grounds them to the perceptual space using VLMs, using a code interface and without additional training to either component. The composed map is referred to as a 3D value map, which enables zero-shot synthesis of trajectories for large varieties of everyday manipulation tasks with an open-set of instructions and an open-set of objects. (The figure shows example instructions such as "Take out bread from toaster", "Sweep trash into dustpan", "Hang towel on rack", and "Open the top drawer, and watch out for that vase!" flowing through an LLM-generated code interface and a VLM into a 3D value map and motion planning.)

Correspondence to Wenlong Huang <wenlongh@stanford.edu>.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

1 Introduction
Language is a compressed medium through which humans distill and communicate their knowledge and experience of the world. Large language models (LLMs) have emerged as a promising approach to capture this abstraction, learning to represent the world through projection into language space [1–4]. While these models are believed to internalize generalizable knowledge as text, it remains a question about how to use it to enable embodied agents to physically act in the real world.
We look at the problem of grounding abstract language instructions (e.g., "set up the table") in robot actions [5]. Prior works have leveraged lexical analysis to parse the instructions [6–8], while more recently language models have been used to decompose the instructions into a textual sequence of steps [9–11].
However, to enable physical interactions with the environment, existing approachestypically rely on a repertoire of pre-defined motion primitives (i.e., skills) that may be invoked byan LLM or a planner, and this reliance on individual skill acquisition is often considered a majorbottleneck of the system due to the lack of large-scale robotic data. The question then arises: how canwe leverage the wealth of internalized knowledge of LLMs at the even fine-grained action level forrobots, without requiring laborious data collection or manual designs for each individual primitive?In addressing this challenge, we first note that it is impractical for LLMs to directly output controlactions in text, which are typically driven by high-frequency control signals in high-dimensionalspace. However, we find that LLMs excel at inferring language-conditioned affordances andcon-straints , and by leveraging their code-writing capabilities, they can compose dense 3D voxel mapsthat ground them in the visual space by orchestrating perception calls (e.g., via CLIP [12] or open-vocabulary detectors [13–15]) and array operations (e.g., via NumPy [16]). For example, given aninstruction “open the top drawer and watch out for the vase”, LLMs can be prompted to infer: 1)the top drawer handle should be grasped, 2) the handle needs to be translated outwards, and 3) therobot should stay away from the vase. By generating Python code to invoke perception APIs, LLMscan obtain spatial-geometric information of relevant objects or parts and then manipulate the 3Dvoxels to prescribe reward or cost at relevant locations in observation space (e.g., the handle regionis assigned high values while the surrounding of the vase is assigned low values). Finally, the com-posed value maps can serve as objective functions for motion planners to directly synthesize robottrajectories that achieve the given instruction1, without requiring additional training data for eachtask or for the LLM. An illustration diagram and a subset of tasks we considered are shown in Fig. 1.We term this approach VOXPOSER , a formulation that extracts affordances and constraints fromLLMs to compose 3D value maps in observation space for guiding robotic interactions. Rather thanrelying on robotic data that are often of limited amount or variability, the method leverages LLMsforopen-world reasoning and VLMs for generalizable visual grounding in a model-based planningframework that directly enables physical robot actions. We demonstrate its zero-shot generalizationforopen-set instructions with open-set objects for various everyday manipulation tasks. We fur-ther showcase how V oxPoser can also benefit from limited online interactions to efficiently learn adynamics model that involves contact-rich interactions.2 Related WorksGrounding Language Instructions. Language grounding has been studied extensively both interms of intelligent agents [19–22] and of robotics [23, 6, 24, 25, 5, 7, 26], where language can beused as a tool for compositional goal specification [5, 27–33], semantic anchor for training multi-modal representation [12, 34, 35], or as an intermediate substrate for planning and reasoning [36–38, 9, 10, 39, 40]. Prior works have looked at using classical tools such as lexical analysis, formallogic, and graphical models to interpret language instructions [27, 7, 6, 26]. 
More recently, end-to-end approaches, popularized by successful applications to offline domains [41–43, 1], have beenapplied to directly ground language instructions in robot interactions by learning from data with1The approach also bears resemblance and connections to potential field methods in path planning [17] andconstrained optimization methods in manipulation planning [18].2language annotations, spanning from model learning [44], imitation learning [45, 46, 30, 47–54], toreinforcement learning [55–57]. Most closely related to our work is Sharma et al. [50], where anend-to-end cost predictor is optimized via supervised learning to map language instructions to 2Dcostmaps, which are used to steer a motion planner to generate preferred trajectories in a collision-free manner. In contrast, we rely on pre-trained language models for their open-world knowledgeand tackle the more challenging robotic manipulation in 3D.Language Models for Robotics. Leveraging pre-trained language models for embodied applica-tions is an active area of research, where a large body of works focus on planning and reasoningwith language models [9–11, 58, 31, 39, 59–72, 36, 73, 74]. To allow language models to perceivethe physical environments, textual descriptions of the scene [39, 11, 59] or perception APIs [75] canbe given, vision can be used during decoding [67] or can be directly taken as input by multi-modallanguage models [68, 2]. In addition to perception, to truly bridge the perception-action loop, anembodied language model must also know how to act, which typically is achieved by a library ofpre-defined primitives. Liang et al. [75] showed that LLMs exhibit behavioral commonsense thatcan be useful for low-level control. Despite the promising signs, hand-designed motion primitivesare still required, and while LLMs are shown to be capable of composing sequential policy logic, itremains unclear whether composition can happen at spatial level. A related line of works has alsoexplored using LLMs for reward specification in the context of reward design [76] and explorationin reinforcement learning [77–80], and human preference learning [81]. In contrast, we focus exclu-sively on grounding the reward generated by LLMs in the 3D observation space of the robot, whichwe identify as most useful for manipulation tasks.Learning-based Trajectory Optimization. Many works have explored leveraging learning-basedapproaches for trajectory optimization. While the literature is vast, they can be broadly categorizedinto those that learn the models [82–90] and those that learn the cost/reward or constraints [91–94, 50, 95], where data are typically collected from in-domain interactions. To enable generaliza-tion in the wild, a parallel line of works has explored learning task specification from large-scaleoffline data [96–98, 35, 34, 44, 99, 100, 54], particularly egocentric videos [101, 102], or leverag-ing pre-trained foundation models [103–105, 33, 106, 107]. The learned cost functions are thenused by reinforcement learning [103, 100, 108], imitation learning [98, 97], or trajectory optimiza-tion [96, 35] to generate robot actions. In this work, we leverage LLMs for zero-shot in-the-wildcost specification with superior generalization. 
Compared to prior works that leverage foundation models, we ground the cost directly in 3D observation space with real-time visual feedback, which makes VoxPoser amenable to closed-loop MPC that is robust in execution.

3 Method

We first provide the formulation of VoxPoser as an optimization problem (Sec. 3.1). Then we describe how VoxPoser can be used as a general zero-shot framework to map language instructions to 3D value maps (Sec. 3.2). We subsequently demonstrate how trajectories can be synthesized in closed-loop for robotic manipulation (Sec. 3.3). While zero-shot in nature, we also demonstrate how VoxPoser can learn from online interactions to efficiently solve contact-rich tasks (Sec. 3.4).

3.1 Problem Formulation

Consider a manipulation problem given as a free-form language instruction L (e.g., "open the top drawer"). Generating robot trajectories according to L can be very challenging because L may be arbitrarily long-horizon or under-specified (i.e., it requires contextual understanding). Instead, we focus on individual phases (sub-tasks) l_i of the problem that distinctively specify a manipulation task (e.g., "grasp the drawer handle", "pull open the drawer"), where the decomposition L → (l_1, l_2, ..., l_n) is given by a high-level planner, e.g., an LLM or a search-based planner. (Note that the decomposition and sequencing of these sub-tasks are also done by LLMs in this work, though we do not investigate this aspect extensively as it is not the focus of our contributions.)

[Figure 2: Overview of VoxPoser, for the instruction "Open the top drawer. Please also watch out for that vase!". Given the RGB-D observation of the environment and a language instruction, LLMs generate code (e.g., the example affordance_map and constraint_map programs shown in the figure), which interacts with VLMs, to produce a sequence of 3D affordance maps and constraint maps (collectively referred to as value maps) grounded in the observation space of the robot (a). The composed value maps then serve as objective functions for motion planners to synthesize trajectories for robot manipulation (b). The entire process does not involve any additional training.]

The central problem investigated in this work is to generate a motion trajectory τ_i^r for robot r and each manipulation phase described by instruction l_i. We represent τ_i^r as a sequence of dense end-effector waypoints to be executed by an Operational Space Controller [109], where each waypoint consists of a desired 6-DoF end-effector pose, end-effector velocity, and gripper action. However, it is worth noting that other representations of trajectories, such as joint-space trajectories, can also be used. Given the i-th sub-task described by l_i, we formulate an optimization problem defined as follows:

$$\min_{\tau_i^r} \left\{ \mathcal{F}_{task}(\mathbf{T}_i, l_i) + \mathcal{F}_{control}(\tau_i^r) \right\} \quad \text{subject to} \quad \mathcal{C}(\mathbf{T}_i) \tag{1}$$

where T_i is the evolution of the environment state and τ_i^r ⊆ T_i is the robot trajectory. F_task scores the extent to which T_i completes the instruction l_i, while F_control specifies the control costs, e.g., to encourage τ_i^r to minimize total control effort or total time. C(T_i) denotes the dynamics and kinematics constraints, which are enforced by the known model of the robot and a physics-based or learning-based model of the environment. By solving this optimization for each sub-task l_i, we obtain a sequence of robot trajectories that collectively achieve the overall task specified by the instruction L.
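To make the structure of Equation 1 concrete for the discretized waypoint representation above, the following sketch scores a candidate trajectory as a task term (the value-map accumulation formalized in Sec. 3.2) plus a control term. Using path length as the control-effort proxy and the weighting factor are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def task_cost(value_map, waypoints):
    """F_task ~= -sum_j V(p_j) over the discretized entity trajectory (Sec. 3.2)."""
    p = np.asarray(waypoints, dtype=int)          # (T, 3) voxel coordinates
    return -value_map[p[:, 0], p[:, 1], p[:, 2]].sum()

def control_cost(waypoints, weight=0.1):
    """Illustrative proxy for control effort: weighted total path length."""
    p = np.asarray(waypoints, dtype=float)
    return weight * np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

def objective(value_map, waypoints):
    """The bracketed term of Eq. 1 for one candidate trajectory."""
    return task_cost(value_map, waypoints) + control_cost(waypoints)
```

A trajectory optimizer would then minimize this objective over candidate waypoint sequences that respect the constraints C.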
3.2 Grounding Language Instruction via VoxPoser

Calculating F_task with respect to free-form language instructions is extremely challenging, not only because of the rich space of semantics language can convey but also because of the lack of robot data labeled with T and l. However, we make a critical observation that a large number of tasks can be characterized by a voxel value map V ∈ R^{w×h×d} in the robot's observation space, which guides the motion of an "entity of interest" in the scene, such as the robot end-effector, an object, or an object part. For example, consider the task "open the top drawer" and its first sub-task "grasp the top drawer handle" (inferred by LLMs) in Fig. 2. The "entity of interest" is the robot end-effector, and the voxel value map should reflect the attraction toward the drawer handle. By further commanding "watch out for the vase", the map can also be updated to reflect the repulsion from the vase. We denote the "entity of interest" as e and its trajectory as τ^e. Using this voxel value map for a given instruction l_i, F_task can be approximated by accumulating the values of e traversing through V_i, formally calculated as

$$\mathcal{F}_{task} = -\sum_{j=1}^{|\tau_i^e|} V(p_j^e),$$

where p_j^e ∈ N^3 is the discretized (x, y, z) position of e at step j.

Notably, we observe that large language models, by being pre-trained on Internet-scale data, exhibit capabilities not only to identify the "entity of interest" but also to compose value maps that accurately reflect the task instruction by writing Python programs. Specifically, when an instruction is given as a comment in the code, LLMs can be prompted to 1) call perception APIs (which invoke vision-language models (VLMs) such as an open-vocabulary detector [13–15]) to obtain spatial-geometric information of relevant objects, 2) generate NumPy operations to manipulate 3D arrays, and 3) prescribe precise values at relevant locations. We term this approach VoxPoser. Concretely, we aim to obtain a voxel value map V_i^t = VoxPoser(o_t, l_i) by prompting an LLM and executing the code via a Python interpreter, where o_t is the RGB-D observation at time t and l_i is the current instruction.

[Figure 3: Visualization of composed 3D value maps and rollouts in real-world environments, for the tasks "sweep the paper trash to the blue dustpan", "push close the top drawer", "turn on the lamp", "open the vitamin bottle on the right", and "take out the bread from the toaster and put it flat on the wooden plate". The top row demonstrates tasks where the "entity of interest" is an object or part, and the value maps guide them toward target positions. The bottom two rows showcase tasks where the "entity of interest" is the robot end-effector. The bottom-most task involves two phases, which are also orchestrated by LLMs.]

Additionally, because V is often sparse, we densify the voxel maps via smoothing operations, as they encourage smoother trajectories optimized by motion planners.

Additional Trajectory Parametrization. The above formulation of VoxPoser uses LLMs to compose V: N^3 → R to map from discretized coordinates in voxel space to a real-valued "cost", which we can use to optimize a path consisting only of the positional terms. To extend to SE(3) poses, we can also use LLMs to compose rotation maps V_r: N^3 → SO(3) at coordinates relevant to the task objectives (e.g., "end-effector should face the support normal of the handle"). Similarly, we further compose gripper maps V_g: N^3 → {0, 1} to control gripper open/close and velocity maps V_v: N^3 → R to specify target velocities. Note that while these additional trajectory parametrizations are not mapped to a real-valued "cost", they can also be factored into the optimization procedure (Equation 1) to parametrize the trajectories.
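For concreteness, the listing below is a cleaned-up, self-contained rendering of the two LLM-generated programs sketched in Figure 2 for this example: an affordance map attracting toward the top drawer handle and a constraint map repelling from the vase. The detect stub, its object fields, and the Gaussian choice for smooth are illustrative assumptions; in VoxPoser these are supplied by the perception APIs and smoothing operations described later.

```python
import numpy as np
from dataclasses import dataclass
from scipy.ndimage import gaussian_filter

@dataclass
class SceneObject:
    pos: tuple              # (x, y, z) center in voxel coordinates
    occupancy_grid: tuple   # voxel indices occupied by the object

def detect(name):
    # Stub: a real system would query an open-vocabulary detector here.
    if name == 'handle':
        return [SceneObject((50, 50, 20), ()), SceneObject((50, 50, 60), ())]
    return [SceneObject((20, 80, 30),
                        (np.array([20]), np.array([80]), np.array([30])))]

def smooth(voxel_map, sigma=2.0):
    # Densify a sparse map so values decay smoothly around the targets.
    return gaussian_filter(voxel_map, sigma=sigma)

def affordance_map():
    m = np.zeros((100, 100, 100))
    handles = sorted(detect('handle'), key=lambda h: h.pos[2])
    x, y, z = handles[-1].pos        # topmost handle along z
    m[x, y, z] = 1                   # attract the entity of interest here
    return smooth(m)

def constraint_map():
    m = np.zeros((100, 100, 100))
    vase = detect('vase')[0]
    m[vase.occupancy_grid] = -1      # repel from voxels occupied by the vase
    return smooth(m)
```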
3.3 Zero-Shot Trajectory Synthesis with VoxPoser

After obtaining the task cost F_task, we can now approach the full problem defined in Equation 1 to plan a motion trajectory. We use simple zeroth-order optimization by randomly sampling trajectories and scoring them with the proposed objective. The optimization is implemented in a model predictive control framework that iteratively replans the trajectory at every step using the current observation to robustly execute the trajectories even under dynamic disturbances, where either a learned or a physics-based model can be used. (Although an LLM is involved in the loop, closed-loop execution is possible because the generated code remains the same throughout task l_i, which allows us to cache its output for the current task.) However, because VoxPoser effectively provides "dense rewards" in the observation space and we are able to replan at every step, we surprisingly find that the overall system can already achieve a large variety of the manipulation tasks considered in this work even with simple heuristics-based models. Since some value maps are defined over the "entity of interest", which may not necessarily be the robot, we also use the dynamics model to find the robot trajectory needed to minimize the task cost (i.e., what interactions between the robot and the environment achieve the desired object motions).

3.4 Efficient Dynamics Learning with Online Experiences

While Sec. 3.3 presents a zero-shot framework for synthesizing trajectories for robot manipulation, VoxPoser can also benefit from online experiences by efficiently learning a dynamics model. Consider the standard setup where a robot interleaves between 1) collecting environment transition data (o_t, a_t, o_{t+1}), where o_t is the environment observation at time t and a_t = MPC(o_t), and 2) training a dynamics model g_θ parametrized by θ by minimizing the L2 loss between the predicted next observation ô_{t+1} and o_{t+1}. A critical component that determines the learning efficiency is the action sampling distribution P(a_t | o_t) in MPC, which is typically a random distribution over the full action space A. This is often inefficient when the goal is to solve a particular task, such as opening a door, because most actions do not interact with the relevant objects in the scene (i.e., the door handle), nor do they necessarily interact with the objects in a meaningful way (i.e., pressing down the door handle). Since VoxPoser synthesizes robot trajectories with LLMs, which have a wealth of commonsense knowledge, the zero-shot synthesized trajectory τ_0^r can serve as a useful prior to bias the action sampling distribution P(a_t | o_t, τ_0^r), which can significantly speed up the learning process. In practice, this can be implemented by only sampling actions in the vicinity of τ_0^r, adding small noise ε to encourage local exploration instead of exploring in the full action space A.
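The listing below is a minimal sketch of how the zeroth-order optimization of Sec. 3.3 and the local-exploration prior of Sec. 3.4 fit together: candidate trajectories are sampled by perturbing the zero-shot trajectory τ_0^r and scored by the accumulated voxel values. The sample count and noise scale are illustrative assumptions rather than the actual implementation.

```python
import numpy as np

def random_shooting(value_map, tau_r0, n_samples=256, sigma=2.0, seed=0):
    """Perturb the zero-shot waypoints tau_r0, score each candidate by the
    accumulated voxel values, and return the best trajectory and its cost."""
    rng = np.random.default_rng(seed)
    base = np.asarray(tau_r0, dtype=float)        # (T, 3) voxel-space waypoints
    upper = np.array(value_map.shape) - 1
    best_traj, best_cost = None, np.inf
    for _ in range(n_samples):
        cand = base + rng.normal(0.0, sigma, size=base.shape)   # local exploration
        cand = np.clip(np.round(cand), 0, upper).astype(int)
        cost = -value_map[cand[:, 0], cand[:, 1], cand[:, 2]].sum()
        if cost < best_cost:
            best_traj, best_cost = cand, cost
    return best_traj, best_cost
```

In the model predictive control loop described above, only the first waypoint of the returned trajectory would be executed before replanning from the latest observation.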
4 Experiments and Analysis

We first discuss our implementation details. Then we validate VoxPoser for real-world everyday manipulation (Sec. 4.1). We also study its generalization in simulation (Sec. 4.2). We further demonstrate how VoxPoser enables efficient learning of more challenging tasks (Sec. 4.3). Finally, we analyze its sources of error and discuss how improvements can be made (Sec. 4.4).

LLMs and Prompting. We follow the prompting structure of Liang et al. [75], which recursively calls LLMs using their own generated code, where each language model program (LMP) is responsible for a unique functionality (e.g., processing perception calls). We use GPT-4 [2] from the OpenAI API. For each LMP, we include 5-20 example queries and corresponding responses as part of the prompt. An example can be found in Fig. 2 (simplified for clarity). Full prompts are in the Appendix.

VLMs and Perception. Given an object/part query from LLMs, we first invoke the open-vocabulary detector OWL-ViT [15] to obtain a bounding box, then feed it into Segment Anything [110] to obtain a mask, and finally track the mask using the video tracker XMEM [111]. The tracked mask is used with the RGB-D observation to reconstruct the object/part point cloud.

Value Map Composition. We define the following types of value maps: affordance, avoidance, end-effector velocity, end-effector rotation, and gripper action. Each type uses a different LMP, which takes in an instruction and outputs a voxel map of shape (100, 100, 100, k), where k differs for each value map (e.g., k = 1 for affordance and avoidance as they specify cost, and k = 4 for rotation as it specifies SO(3)). We apply a Euclidean distance transform to affordance maps and Gaussian filters to avoidance maps. On top of the value map LMPs, we define two high-level LMPs to orchestrate their behaviors: planner takes the user instruction L as input (e.g., "open drawer") and outputs a sequence of sub-tasks l_{1:N}, and composer takes in sub-task l_i and invokes relevant value map LMPs with detailed language parameterization.

Motion Planner. We consider only affordance and avoidance maps in the planner optimization, which finds a sequence of collision-free end-effector positions p_{1:N} ∈ R^3 using greedy search. Then we enforce the other parametrizations at each p using the remaining value maps (e.g., rotation map, velocity map). The cost map used by the motion planner is computed as the negative of the weighted sum of the normalized affordance and avoidance maps with weights 2 and 1. After a 6-DoF trajectory is synthesized, the first waypoint is executed, and then a new trajectory is re-planned at 5 Hz.

Dynamics Model. We use the known robot dynamics model in all tasks, where it is used in motion planning for the end-effector to follow the waypoints. For the majority of our considered tasks, where the "entity of interest" is the robot, no environment dynamics model is used (i.e., the scene is assumed to be static), but we replan at every step to account for the latest observation. For tasks in which the "entity of interest" is an object, we study only a planar pushing model parametrized by contact point, push direction, and push distance. We use a heuristic-based dynamics model that translates an input point cloud along the push direction by the push distance. We use MPC with random shooting to optimize for the action parameters. Then a pre-defined pushing primitive is executed based on the action parameters. However, we note that a primitive is not necessary when action parameters are defined over the end-effector or joint space of the robot, which would likely yield smoother trajectories but take more time for optimization. We also explore the use of a learning-based dynamics model in Section 4.3, which enables VoxPoser to benefit from online experiences.
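As a rough sketch of the value-map post-processing and cost composition described above: affordance maps are densified with a Euclidean distance transform, avoidance maps with a Gaussian filter, and the planner cost is the negative weighted sum (weights 2 and 1) of the normalized maps. The normalization and the sign convention for avoidance values (taken negative around obstacles, as in the constraint map of Fig. 2) are assumptions made for illustration, not the exact implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def densify_affordance(aff):
    # Voxels closer to any target voxel (aff > 0) receive higher values.
    dist = distance_transform_edt(aff <= 0)     # distance to nearest target voxel
    return 1.0 / (1.0 + dist)

def densify_avoidance(avoid, sigma=3.0):
    # Spread obstacle penalties smoothly into the surrounding voxels.
    return gaussian_filter(avoid, sigma=sigma)

def _unit(x):
    m = np.abs(x).max()
    return x / m if m > 0 else x

def planner_cost_map(aff, avoid, w_aff=2.0, w_avoid=1.0):
    # Low cost near targets, high cost near obstacles (assuming avoidance
    # values are negative around obstacles, as in the Fig. 2 constraint map).
    return -(w_aff * _unit(densify_affordance(aff)) +
             w_avoid * _unit(densify_avoidance(avoid)))
```

A greedy planner would then repeatedly step to the neighboring voxel with the lowest cost, as in the collision-free search described above.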
Table 1: Success rate in the real-world domain. VoxPoser performs everyday manipulation tasks with high success and is more robust to disturbances than the baseline using action primitives.

                    LLM + Prim. [75]         VoxPoser
Task                Static      Dist.        Static      Dist.
Move & Avoid        0/10        0/10         9/10        8/10
Set Up Table        7/10        0/10         9/10        7/10
Close Drawer        0/10        0/10         10/10       7/10
Open Bottle         5/10        0/10         7/10        5/10
Sweep Trash         0/10        0/10         9/10        8/10
Total               24.0%       0.0%         88.0%       70.0%

Table 2: Success rate in the simulated domain. "SI" and "UI" are seen and unseen instructions. "SA" and "UA" are seen and unseen attributes. VoxPoser outperforms both baselines across 13 tasks from two categories on both seen and unseen tasks and maintains similar success rates.

                                  U-Net          Language Models
Train/Test   Category             MP [50]        Prim. [75]     MP (Ours)
SI SA        Object Int.          21.0%          41.0%          64.0%
SI SA        Composition          53.8%          43.8%          77.5%
SI UA        Object Int.          3.0%           46.0%          60.0%
SI UA        Composition          3.8%           25.0%          58.8%
UI UA        Object Int.          0.0%           17.5%          65.0%
UI UA        Composition          0.0%           25.0%          76.7%

4.1 VoxPoser for Everyday Manipulation Tasks

We study whether VoxPoser can zero-shot synthesize robot trajectories to perform everyday manipulation tasks in the real world. Details of the environment setup can be found in Appendix A.3. While the proposed method can generalize to an open set of instructions and an open set of objects as shown in Fig. 1, we pick 5 representative tasks to provide quantitative evaluations in Table 1. Qualitative results, including environment rollouts and value map visualizations, are shown in Fig. 3.

We find that VoxPoser can effectively synthesize robot trajectories for everyday manipulation tasks with a high average success rate. Due to its fast replanning capabilities, it is also robust to external disturbances, such as moving targets/obstacles and pulling the drawer open after it has been closed by the robot. We further compare to a variant of Code as Policies [75] that uses LLMs to parameterize a pre-defined list of simple primitives (e.g., move_to_pose, open_gripper). We find that, compared to chaining sequential policy logic, the ability to compose spatially while considering other constraints under a joint optimization scheme is a more flexible formulation, unlocking the possibility for more manipulation tasks and leading to more robust execution.

4.2 Generalization to Unseen Instructions and Attributes

To provide rigorous quantitative evaluations on generalization, we set up a simulated environment that mirrors our real-world setup [112] but features 13 highly-randomizable tasks with 2766 unique instructions. Each task comes with a templated instruction (e.g., "push [obj] to [pos]") that contains randomizable attributes chosen from a pre-defined list. Details are in Appendix A.4. Seen instructions/attributes may appear in the prompt (or in the training data for supervised baselines). The tasks are grouped into 2 categories, where "Object Interactions" are tasks that require interactions with objects, and "Spatial Composition" are tasks involving spatial constraints (e.g., moving slower near a particular object). For baselines, we ablate the two components of VoxPoser, the LLM and the motion planner, by comparing to a variant of [75] that combines an LLM with primitives and to a variant of [50] that learns a U-Net [113] to synthesize costmaps for motion planning. Table 2 shows the success rates averaged across 20 episodes per task.
We find that VoxPoser exhibits superior generalization in all scenarios. Compared to learned cost specification, LLMs generalize better by explicitly reasoning about affordances and constraints. On the other hand, grounding LLM knowledge in robot perception through value map composition, rather than directly specifying primitive parameters, offers more flexibility that generalizes beyond the prompt examples.

4.3 Efficient Dynamics Learning with Online Experiences

As discussed in Sec. 3.4, we investigate how VoxPoser can optionally benefit from online experiences for tasks that involve more intricacies of contact, such as opening doors, fridges, and windows, in a simulated environment. Specifically, we first synthesize k zero-shot trajectories using VoxPoser, each represented as a sequence of end-effector waypoints, that act as priors for exploration (e.g., "the handle needs to be pressed down first in order to open a door"). Then an MLP dynamics model is learned through an iterative procedure where the agent alternates between data collection and model learning. During data collection, we add ε ∼ N(0, σ²) to each waypoint in τ_0^r to encourage local exploration. As shown in Tab. 3, we find that zero-shot synthesized trajectories are typically meaningful but insufficient. However, we can learn an effective dynamics model with less than 3 minutes of online interactions by using these trajectories as an exploration prior, leading to high eventual success rates. In comparison, all runs that explore without the prior exceed the maximum 12-hour limit.

Table 3: VoxPoser enables efficient dynamics learning by using zero-shot synthesized trajectories as a prior. TLE (time limit exceeded) means exceeding 12 hours. Results are reported over 3 runs with different seeds.

            Zero-Shot         No Prior                      w/ Prior
Task        Success           Success        Time (s)       Success          Time (s)
Door        6.7% ± 4.4%       58.3% ± 4.4%   TLE            88.3% ± 1.67%    142.3 ± 22.4
Window      3.3% ± 3.3%       36.7% ± 1.7%   TLE            80.0% ± 2.9%     137.0 ± 7.5
Fridge      18.3% ± 3.3%      70.0% ± 2.9%   TLE            91.7% ± 4.4%     71.0 ± 4.4

[Figure 4: Error breakdown of components. VoxPoser significantly reduces specification error.]
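The procedure in Secs. 3.4 and 4.3 can be summarized by the following minimal sketch: actions are sampled in the vicinity of the zero-shot waypoints, and an MLP dynamics model is fit with an L2 loss on the collected transitions. PyTorch, the env_step interface, and all hyperparameters here are assumptions for illustration rather than the actual implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def collect_transitions(env_step, obs, tau_r0, sigma=0.01):
    """Sample actions near the zero-shot waypoints tau_r0 (local exploration)
    and record (o_t, a_t, o_{t+1}) transitions; env_step is an assumed
    environment interface that applies an action and returns the next observation."""
    transitions = []
    for waypoint in np.asarray(tau_r0, dtype=float):
        action = waypoint + np.random.normal(0.0, sigma, size=waypoint.shape)
        next_obs = env_step(action)
        transitions.append((obs, action, next_obs))
        obs = next_obs
    return transitions

class MLPDynamics(nn.Module):
    """Predicts the next observation from the current observation and action."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def train_step(model, optimizer, batch):
    """One gradient step on the L2 loss between predicted and observed next states."""
    obs, act, next_obs = (torch.as_tensor(np.stack(x), dtype=torch.float32)
                          for x in zip(*batch))
    loss = ((model(obs, act) - next_obs) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```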
4.4 Error Breakdown

In this section, we analyze the errors resulting from each component of VoxPoser and how the overall system can be further improved. We conduct experiments in simulation, where we have access to ground-truth perception and a ground-truth dynamics model (i.e., the simulator). "Dynamics error" refers to errors made by the dynamics model (LLM + Primitives [75] does not use model-based planning and thus has no dynamics module). "Perception error" refers to errors made by the perception module (U-Net + MP [50] maps RGB-D to costmaps using a U-Net [113, 114] and thus has no separate perception module; its errors are attributed to "specification error"). "Specification error" refers to errors made by the module specifying cost or parameters for the low-level motion planner or primitives. Examples for each method include 1) noisy predictions by the U-Net, 2) incorrect parameters specified by the LLM, and 3) incorrect value maps specified by the LLM. As shown in Fig. 4, VoxPoser achieves the lowest "specification error" due to its generalization and flexibility. We also find that having access to a more robust perception pipeline and a physically-realistic dynamics model can contribute to better overall performance. This observation aligns with our real-world experiments, where most errors are from perception. For example, we find that the detector is sensitive to the initial poses of objects and is less robust when detecting object parts.

5 Conclusion, Limitations, & Future Works

In this work, we present VoxPoser, a general framework for extracting affordances and constraints, grounded in 3D perceptual space, from LLMs and VLMs for everyday manipulation tasks in the real world, offering significant generalization advantages for open-set instructions and objects. Despite compelling results, VoxPoser has several limitations. First, it relies on external perception modules, which is limiting in tasks that require holistic visual reasoning or understanding of fine-grained object geometries. Second, while applicable to efficient dynamics learning, a general-purpose dynamics model is still required to achieve contact-rich tasks with the same level of generalization. Third, our motion planner considers only end-effector trajectories, while whole-arm planning is also feasible and likely a better design choice [115–117]. Finally, manual prompt engineering is required for LLMs. We also see several exciting avenues for future work. For instance, the recent success of multi-modal LLMs [68, 2, 118] can be directly translated into VoxPoser for direct visual grounding. Methods developed for alignment [119, 120] and prompting [121–124] may be used to alleviate the prompt engineering effort. Finally, more advanced trajectory optimization methods can be developed that best interface with the value maps synthesized by VoxPoser.

Acknowledgments

We would like to thank Andy Zeng, Igor Mordatch, and the members of the Stanford Vision and Learning Lab for the fruitful discussions. This work was in part supported by AFOSR YIP FA9550-23-1-0127, ONR MURI N00014-22-1-2740, ONR MURI N00014-21-1-2801, ONR N00014-23-1-2355, the Stanford Institute for Human-Centered AI (HAI), JPMC, and Analog Devices. Wenlong Huang is partially supported by the Stanford School of Engineering Fellowship. Ruohan Zhang is partially supported by the Wu Tsai Human Performance Alliance Fellowship.

References

[1] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[2] OpenAI. Gpt-4 technical report. arXiv, 2023.
[3] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[4] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
[5] S. Tellex, N. Gopalan, H. Kress-Gazit, and C. Matuszek. Robots that use language. Annual Review of Control, Robotics, and Autonomous Systems, 2020.
[6] S. Tellex, T. Kollar, S. Dickerson, M. Walter, A. Banerjee, S. Teller, and N. Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 25, pages 1507–1514, 2011.
[7] T. Kollar, S. Tellex, D. Roy, and N. Roy.
Toward understanding natural language directions.In2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI) , pages259–266. IEEE, 2010.[8] M. Bollini, S. Tellex, T. Thompson, N. Roy, and D. Rus. Interpreting and executing recipeswith a cooking robot. In Experimental Robotics , pages 481–495. Springer, 2013.[9] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners:Extracting actionable knowledge for embodied agents. In International Conference on Ma-chine Learning . PMLR, 2022.[10] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J. Ruano,K. Jeffrey, S. Jesmonth, N. Joshi, R. Julian, D. Kalashnikov, Y . Kuang, K.-H. Lee, S. Levine,Y . Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes, P. Ser-manet, N. Sievers, C. Tan, A. Toshev, V . Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu, and M. Yan.Do as i can and not as i say: Grounding language in robotic affordances. In arXiv preprintarXiv:2204.01691 , 2022.[11] A. Zeng, A. Wong, S. Welker, K. Choromanski, F. Tombari, A. Purohit, M. Ryoo, V . Sind-hwani, J. Lee, V . Vanhoucke, et al. Socratic models: Composing zero-shot multimodal rea-soning with language. arXiv preprint arXiv:2204.00598 , 2022.[12] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language super-vision. In International Conference on Machine Learning , pages 8748–8763. PMLR, 2021.9[13] X. Gu, T.-Y . Lin, W. Kuo, and Y . Cui. Open-vocabulary object detection via vision andlanguage knowledge distillation. arXiv preprint arXiv:2104.13921 , 2021.[14] A. Kamath, M. Singh, Y . LeCun, G. Synnaeve, I. Misra, and N. Carion. Mdetr-modulateddetection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF Inter-national Conference on Computer Vision , pages 1780–1790, 2021.[15] M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Ma-hendran, A. Arnab, M. Dehghani, Z. Shen, et al. Simple open-vocabulary object detectionwith vision transformers. arXiv preprint arXiv:2205.06230 , 2022.[16] C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau,E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk,M. Brett, A. Haldane, J. F. del R ́ıo, M. Wiebe, P. Peterson, P. G ́erard-Marchant, K. Sheppard,T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, and T. E. Oliphant. Array programming withNumPy. Nature , 585(7825):357–362, Sept. 2020. doi:10.1038/s41586-020-2649-2. URLhttps://doi.org/10.1038/s41586-020-2649-2 .[17] Y . K. Hwang, N. Ahuja, et al. A potential field approach to path planning. IEEE transactionson robotics and automation , 8(1):23–32, 1992.[18] M. Toussaint, J. Harris, J.-S. Ha, D. Driess, and W. H ̈onig. Sequence-of-constraints mpc:Reactive timing-optimal control of sequential manipulation. In 2022 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 13753–13760. IEEE, 2022.[19] J. Andreas, D. Klein, and S. Levine. Learning with latent language. arXiv preprintarXiv:1711.00482 , 2017.[20] R. Zellers, A. Holtzman, M. Peters, R. Mottaghi, A. Kembhavi, A. Farhadi, and Y . Choi.Piglet: Language grounding through neuro-symbolic interaction in a 3d world. arXiv preprintarXiv:2106.00188 , 2021.[21] R. Zellers, X. Lu, J. Hessel, Y . Yu, J. S. 
Park, J. Cao, A. Farhadi, and Y . Choi. Merlot:Multimodal neural script knowledge models. Advances in Neural Information ProcessingSystems , 2021.[22] V . Shwartz, P. West, R. L. Bras, C. Bhagavatula, and Y . Choi. Unsupervised commonsensequestion answering with self-talk. arXiv preprint arXiv:2004.05483 , 2020.[23] T. Winograd. Procedures as a representation for data in a computer program for understandingnatural language. Technical report, MASSACHUSETTS INST OF TECH CAMBRIDGEPROJECT MAC, 1971.[24] V . Blukis, R. A. Knepper, and Y . Artzi. Few-shot object grounding and mapping for naturallanguage robot instruction following. arXiv preprint arXiv:2011.07384 , 2020.[25] S. Tellex, R. Knepper, A. Li, D. Rus, and N. Roy. Asking for help using inverse semantics.Robotics: Science and Systems Foundation , 2014.[26] T. Kollar, S. Tellex, D. Roy, and N. Roy. Grounding verbs of motion in natural languagecommands to robots. In Experimental robotics , pages 31–47. Springer, 2014.[27] J. Thomason, S. Zhang, R. J. Mooney, and P. Stone. Learning to interpret natural languagecommands through human-robot dialog. In Twenty-Fourth International Joint Conference onArtificial Intelligence , 2015.[28] J. Thomason, A. Padmakumar, J. Sinapov, N. Walker, Y . Jiang, H. Yedidsion, J. Hart, P. Stone,and R. Mooney. Jointly improving parsing and perception for natural language commandsthrough human-robot dialog. Journal of Artificial Intelligence Research , 67:327–374, 2020.10[29] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn.Bc-z: Zero-shot task generalization with robotic imitation learning. In Conference on RobotLearning , pages 991–1002. PMLR, 2021.[30] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world controlat scale. arXiv preprint arXiv:2212.06817 , 2022.[31] D. Shah, B. Osinski, B. Ichter, and S. Levine. Lm-nav: Robotic navigation with large pre-trained models of language, vision, and action. arXiv preprint arXiv:2207.04429 , 2022.[32] Y . Cui, S. Karamcheti, R. Palleti, N. Shivakumar, P. Liang, and D. Sadigh. ” no, to the right”–online language corrections for robotic manipulation via shared autonomy. arXiv preprintarXiv:2301.02555 , 2023.[33] A. Stone, T. Xiao, Y . Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich,F. Xia, C. Finn, et al. Open-world object manipulation using pre-trained vision-languagemodels. arXiv preprint arXiv:2303.00905 , 2023.[34] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. arXiv preprint arXiv:2203.12601 , 2022.[35] Y . J. Ma, V . Kumar, A. Zhang, O. Bastani, and D. Jayaraman. Liv: Language-image repre-sentations and rewards for robotic control. arXiv e-prints , 2023.[36] P. A. Jansen. Visually-grounded planning without vision: Language models infer detailedplans from high-level instructions. arXiv preprint arXiv:2009.14259 , 2020.[37] V . Micheli and F. Fleuret. Language models are few-shot butlers. arXiv preprintarXiv:2104.07972 , 2021.[38] P. Sharma, A. Torralba, and J. Andreas. Skill induction and planning with latent language.arXiv preprint arXiv:2110.01517 , 2021.[39] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mor-datch, Y . Chebotar, P. Sermanet, N. Brown, T. Jackson, L. Luu, S. Levine, K. Hausman, andB. Ichter. Inner monologue: Embodied reasoning through planning with language models. 
InarXiv preprint arXiv:2207.05608 , 2022.[40] B. Z. Li, W. Chen, P. Sharma, and J. Andreas. Lampp: Language models as probabilisticpriors for perception and action. arXiv e-prints , pages arXiv–2302, 2023.[41] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-ScaleHierarchical Image Database. In CVPR09 , 2009.[42] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolu-tional neural networks. Communications of the ACM , 60(6):84–90, 2017.[43] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models areunsupervised multitask learners. OpenAI blog , 1(8):9, 2019.[44] S. Nair, E. Mitchell, K. Chen, S. Savarese, C. Finn, et al. Learning language-conditionedrobot behavior from offline data and crowd-sourced annotation. In Conference on RobotLearning , pages 1303–1315. PMLR, 2022.[45] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manip-ulation. In Conference on Robot Learning , pages 894–906. PMLR, 2022.[46] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In Proceedings of the 6th Conference on Robot Learning (CoRL) , 2022.11[47] S. Li, X. Puig, Y . Du, C. Wang, E. Akyurek, A. Torralba, J. Andreas, and I. Mordatch. Pre-trained language models for interactive decision-making. arXiv preprint arXiv:2202.01771 ,2022.[48] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imita-tion learning. arXiv preprint arXiv:2204.06252 , 2022.[49] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding language with visual affordances overunstructured data. arXiv preprint arXiv:2210.01911 , 2022.[50] P. Sharma, B. Sundaralingam, V . Blukis, C. Paxton, T. Hermans, A. Torralba, J. An-dreas, and D. Fox. Correcting robot plans with natural language feedback. arXiv preprintarXiv:2204.05186 , 2022.[51] W. Liu, C. Paxton, T. Hermans, and D. Fox. Structformer: Learning spatial structure forlanguage-guided semantic rearrangement of novel objects. In 2022 International Conferenceon Robotics and Automation (ICRA) , pages 6322–6329. IEEE, 2022.[52] C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data.Robotics: Science and Systems , 2021. URL https://arxiv.org/abs/2005.07648 .[53] C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P. Florence.Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407 , 2022.[54] L. Shao, T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg. Concept2robot: Learning manip-ulation concepts from instructions and human demonstrations. The International Journal ofRobotics Research , 40(12-14):1419–1434, 2021.[55] J. Luketina, N. Nardelli, G. Farquhar, J. N. Foerster, J. Andreas, E. Grefenstette, S. Whiteson,and T. Rockt ̈aschel. A survey of reinforcement learning informed by natural language. InIJCAI , 2019.[56] J. Andreas, D. Klein, and S. Levine. Modular multitask reinforcement learning with policysketches. ArXiv , abs/1611.01796, 2017.[57] Y . Jiang, S. S. Gu, K. P. Murphy, and C. Finn. Language as an abstraction for hierarchicaldeep reinforcement learning. Advances in Neural Information Processing Systems , 32, 2019.[58] B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler.Open-vocabulary queryable scene representations for real world planning. arXiv preprintarXiv:2209.09874 , 2022.[59] I. Singh, V . Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. 
Thomason, andA. Garg. Progprompt: Generating situated robot task plans using large language models.arXiv preprint arXiv:2209.11302 , 2022.[60] C. Huang, O. Mees, A. Zeng, and W. Burgard. Visual language maps for robot navigation.arXiv preprint arXiv:2210.05714 , 2022.[61] S. S. Raman, V . Cohen, E. Rosen, I. Idrees, D. Paulius, and S. Tellex. Planning with largelanguage models via corrective re-prompting. arXiv preprint arXiv:2211.09935 , 2022.[62] C. H. Song, J. Wu, C. Washington, B. M. Sadler, W.-L. Chao, and Y . Su. Llm-planner: Few-shot grounded planning for embodied agents with large language models. arXiv preprintarXiv:2212.04088 , 2022.[63] B. Liu, Y . Jiang, X. Zhang, Q. Liu, S. Zhang, J. Biswas, and P. Stone. Llm+ p: Empoweringlarge language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477 ,2023.12[64] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor. Chatgpt for robotics: Design principlesand model abilities. 2023 , 2023.[65] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural languageinstructions to feasible plans. arXiv preprint arXiv:2303.12153 , 2023.[66] Y . Ding, X. Zhang, C. Paxton, and S. Zhang. Task and motion planning with large languagemodels for object rearrangement. arXiv preprint arXiv:2303.06247 , 2023.[67] W. Huang, F. Xia, D. Shah, D. Driess, A. Zeng, Y . Lu, P. Florence, I. Mordatch, S. Levine,K. Hausman, et al. Grounded decoding: Guiding text generation with grounded models forrobot control. arXiv preprint arXiv:2303.00855 , 2023.[68] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson,Q. Vuong, T. Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprintarXiv:2303.03378 , 2023.[69] H. Yuan, C. Zhang, H. Wang, F. Xie, P. Cai, H. Dong, and Z. Lu. Plan4mc: Skill reinforce-ment learning and planning for open-world minecraft tasks. arXiv preprint arXiv:2303.16563 ,2023.[70] Y . Xie, C. Yu, T. Zhu, J. Bai, Z. Gong, and H. Soh. Translating natural language to planninggoals with large-language models. arXiv preprint arXiv:2302.05128 , 2023.[71] Y . Lu, P. Lu, Z. Chen, W. Zhu, X. E. Wang, and W. Y . Wang. Multimodal procedural planningvia dual text-image prompting. arXiv preprint arXiv:2305.01795 , 2023.[72] D. Patel, H. Eghbalzadeh, N. Kamra, M. L. Iuzzolino, U. Jain, and R. Desai. Pretrainedlanguage models as visual planners for human assistance. arXiv preprint arXiv:2304.09179 ,2023.[73] G. Wang, Y . Xie, Y . Jiang, A. Mandlekar, C. Xiao, Y . Zhu, L. Fan, and A. Anandku-mar. V oyager: An open-ended embodied agent with large language models. arXiv preprintarXiv:2305.16291 , 2023.[74] J. Yang, W. Tan, C. Jin, B. Liu, J. Fu, R. Song, and L. Wang. Pave the way to graspanything: Transferring foundation models for universal pick-place robots. arXiv preprintarXiv:2306.05716 , 2023.[75] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code aspolicies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753 ,2022.[76] M. Kwon, S. M. Xie, K. Bullard, and D. Sadigh. Reward design with language models. arXivpreprint arXiv:2303.00001 , 2023.[77] A. Tam, N. Rabinowitz, A. Lampinen, N. A. Roy, S. Chan, D. Strouse, J. Wang, A. Banino,and F. Hill. Semantic exploration from language abstractions and pretrained representations.Advances in Neural Information Processing Systems , 35:25377–25389, 2022.[78] J. Mu, V . Zhong, R. Raileanu, M. Jiang, N. Goodman, T. Rockt ̈aschel, and E. 
Grefenstette.Improving intrinsic exploration with language abstractions. arXiv preprint arXiv:2202.08938 ,2022.[79] C. Colas, T. Karch, N. Lair, J.-M. Dussoux, C. Moulin-Frier, P. Dominey, and P.-Y . Oudeyer.Language as a cognitive tool to imagine goals in curiosity driven exploration. Advances inNeural Information Processing Systems , 33:3761–3774, 2020.13[80] Y . Du, O. Watkins, Z. Wang, C. Colas, T. Darrell, P. Abbeel, A. Gupta, and J. Andreas.Guiding pretraining in reinforcement learning with large language models. arXiv preprintarXiv:2302.06692 , 2023.[81] H. Hu and D. Sadigh. Language instructed reinforcement learning for human-ai coordination.arXiv preprint arXiv:2304.07297 , 2023.[82] I. Lenz, R. A. Knepper, and A. Saxena. Deepmpc: Learning deep latent features for modelpredictive control. In Robotics: Science and Systems , volume 10. Rome, Italy, 2015.[83] L. Hewing, K. P. Wabersich, M. Menner, and M. N. Zeilinger. Learning-based model pre-dictive control: Toward safe learning in control. Annual Review of Control, Robotics, andAutonomous Systems , 3:269–296, 2020.[84] M. B. Chang, T. Ullman, A. Torralba, and J. B. Tenenbaum. A compositional object-basedapproach to learning physical dynamics. arXiv preprint arXiv:1612.00341 , 2016.[85] P. Battaglia, R. Pascanu, M. Lai, D. Jimenez Rezende, et al. Interaction networks for learningabout objects, relations and physics. Advances in neural information processing systems , 29,2016.[86] Z. Xu, J. Wu, A. Zeng, J. B. Tenenbaum, and S. Song. Densephysnet: Learning dense physicalobject representations via multi-step dynamic interactions. arXiv preprint arXiv:1906.03853 ,2019.[87] A. Byravan and D. Fox. Se3-nets: Learning rigid body motion using deep neural networks.In2017 IEEE International Conference on Robotics and Automation (ICRA) , pages 173–180.IEEE, 2017.[88] A. Nagabandi, K. Konolige, S. Levine, and V . Kumar. Deep dynamics models for learningdexterous manipulation. In Conference on Robot Learning , pages 1101–1112. PMLR, 2020.[89] A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell,and P. Battaglia. Graph networks as learnable physics engines for inference and control. InInternational Conference on Machine Learning , pages 4470–4479. PMLR, 2018.[90] Y . Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba. Learning particle dynamics formanipulating rigid bodies, deformable objects, and fluids. arXiv preprint arXiv:1810.01566 ,2018.[91] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control viapolicy optimization. In International conference on machine learning , pages 49–58. PMLR,2016.[92] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adversarial inverse reinforcementlearning. arXiv preprint arXiv:1710.11248 , 2017.[93] D. Driess, O. Oguz, J.-S. Ha, and M. Toussaint. Deep visual heuristics: Learning feasibility ofmixed-integer programs for manipulation planning. In 2020 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 9563–9569. IEEE, 2020.[94] B. Amos, I. Jimenez, J. Sacks, B. Boots, and J. Z. Kolter. Differentiable mpc for end-to-endplanning and control. Advances in neural information processing systems , 31, 2018.[95] M. Mittal, D. Hoeller, F. Farshidian, M. Hutter, and A. Garg. Articulated object interactionin unknown scenes with whole-body mobile manipulation. In 2022 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 1647–1654. IEEE, 2022.[96] S. Bahl, A. Gupta, and D. Pathak. 
Human-to-robot imitation in the wild. arXiv preprintarXiv:2207.09450 , 2022.14[97] S. Bahl, R. Mendonca, L. Chen, U. Jain, and D. Pathak. Affordances from human videosas a versatile representation for robotics. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 13778–13790, 2023.[98] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y . Zhu, and A. Anandkumar.Mimicplay: Long-horizon imitation learning by watching human play. arXiv preprintarXiv:2302.12422 , 2023.[99] H. Bharadhwaj, A. Gupta, S. Tulsiani, and V . Kumar. Zero-shot robot manipulation frompassive human videos. arXiv preprint arXiv:2302.02011 , 2023.[100] Y . J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V . Kumar, and A. Zhang. Vip: Towardsuniversal visual reward and representation via value-implicit pre-training. arXiv preprintarXiv:2210.00030 , 2022.[101] D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti,J. Munro, T. Perrett, W. Price, et al. Scaling egocentric vision: The epic-kitchens dataset. InProceedings of the European Conference on Computer Vision (ECCV) , pages 720–736, 2018.[102] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger,H. Jiang, M. Liu, X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video.InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 18995–19012, 2022.[103] Y . Cui, S. Niekum, A. Gupta, V . Kumar, and A. Rajeswaran. Can foundation models performzero-shot task specification for robot manipulation? In Learning for Dynamics and ControlConference , pages 893–905. PMLR, 2022.[104] T. Yu, T. Xiao, A. Stone, J. Tompson, A. Brohan, S. Wang, J. Singh, C. Tan, J. Peralta,B. Ichter, et al. Scaling robot learning with semantically imagined experience. arXiv preprintarXiv:2302.11550 , 2023.[105] Z. Mandi, H. Bharadhwaj, V . Moens, S. Song, A. Rajeswaran, and V . Kumar. Cacti: Aframework for scalable multi-task multi-scene visual imitation learning. arXiv preprintarXiv:2212.05711 , 2022.[106] T. Xiao, H. Chan, P. Sermanet, A. Wahid, A. Brohan, K. Hausman, S. Levine, and J. Tompson.Robotic skill acquisition via instruction augmentation with vision-language models. arXivpreprint arXiv:2211.11736 , 2022.[107] C. Wang, D. Xu, and L. Fei-Fei. Generalizable task planning through representation pretrain-ing. IEEE Robotics and Automation Letters , 7(3):8299–8306, 2022.[108] C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Mart ́ın-Mart ́ın, C. Wang, G. Levine,M. Lingelbach, J. Sun, et al. Behavior-1k: A benchmark for embodied ai with 1,000 everydayactivities and realistic simulation. In Conference on Robot Learning , pages 80–93. PMLR,2023.[109] O. Khatib. A unified approach for motion and force control of robot manipulators: Theoperational space formulation. IEEE Journal on Robotics and Automation , 3(1):43–53, 1987.[110] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead,A. C. Berg, W.-Y . Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643 , 2023.[111] H. K. Cheng and A. G. Schwing. Xmem: Long-term video object segmentation with anatkinson-shiffrin memory model. In Computer Vision–ECCV 2022: 17th European Con-ference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVIII , pages 640–658.Springer, 2022.15[112] F. Xiang, Y . Qin, K. Mo, Y . Xia, H. Zhu, F. Liu, M. Liu, H. Jiang, Y . Yuan, H. Wang, et al.Sapien: A simulated part-based interactive environment. 
In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 11097–11107, 2020.[113] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical imagesegmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings,Part III 18 , pages 234–241. Springer, 2015.[114] ̈O. C ̧ ic ̧ek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger. 3d u-net: learn-ing dense volumetric segmentation from sparse annotation. In Medical Image Computingand Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens,Greece, October 17-21, 2016, Proceedings, Part II 19 , pages 424–432. Springer, 2016.[115] L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars. Probabilistic roadmaps forpath planning in high-dimensional configuration spaces. IEEE transactions on Robotics andAutomation , 12(4):566–580, 1996.[116] N. D. Ratliff, J. Issac, D. Kappler, S. Birchfield, and D. Fox. Riemannian motion policies.arXiv preprint arXiv:1801.02854 , 2018.[117] T. Marcucci, M. Petersen, D. von Wrangel, and R. Tedrake. Motion planning around obstacleswith convex optimization. arXiv preprint arXiv:2205.04422 , 2022.[118] J. Li, D. Li, S. Savarese, and S. Hoi. Blip-2: Bootstrapping language-image pre-training withfrozen image encoders and large language models. arXiv preprint arXiv:2301.12597 , 2023.[119] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agar-wal, K. Slama, A. Ray, et al. Training language models to follow instructions with humanfeedback. arXiv preprint arXiv:2203.02155 , 2022.[120] Y . Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirho-seini, C. McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprintarXiv:2212.08073 , 2022.[121] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thoughtprompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903 ,2022.[122] Y . Wang, Y . Kordi, S. Mishra, A. Liu, N. A. Smith, D. Khashabi, and H. Hajishirzi.Self-instruct: Aligning language model with self generated instructions. arXiv preprintarXiv:2212.10560 , 2022.[123] T. Kojima, S. S. Gu, M. Reid, Y . Matsuo, and Y . Iwasawa. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916 , 2022.[124] S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y . Cao, and K. Narasimhan. Treeof thoughts: Deliberate problem solving with large language models. arXiv preprintarXiv:2305.10601 , 2023.[125] J. Wei, Y . Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma,D. Zhou, D. Metzler, et al. Emergent abilities of large language models. arXiv preprintarXiv:2206.07682 , 2022.[126] T. Gupta and A. Kembhavi. Visual programming: Compositional visual reasoning with-out training. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 14953–14962, 2023.[127] D. Sur ́ıs, S. Menon, and C. V ondrick. Vipergpt: Visual inference via python execution forreasoning. arXiv preprint arXiv:2303.08128 , 2023.16[128] Y . Zhu, A. Joshi, P. Stone, and Y . Zhu. Viola: Imitation learning for vision-based manipulationwith object proposal priors. 
6th Annual Conference on Robot Learning, 2022.

A Appendix

A.1 Emergent Behavioral Capabilities

[Figure 5: Emergent behavioral capabilities by VoxPoser inherited from the language model, including behavioral commonsense reasoning (top left), fine-grained language correction (top right), multi-step visual program (bottom left), and estimating physical properties of objects (bottom right). Example commands shown: "Which block is heavier?", "I am left-handed.", "You're off by 1cm to the left.", "Open the drawer precisely by half."]

Emergent capabilities refer to unpredictable phenomena that are only present in large models [125]. As VoxPoser uses pre-trained LLMs as its backbone, we observe similar embodied emergent capabilities driven by the rich world knowledge of LLMs. In particular, we focus our study on the behavioral capabilities that are unique to VoxPoser. We observe the following capabilities:

• Behavioral Commonsense Reasoning: During a task where the robot is setting the table, the user can specify behavioral preferences such as "I am left-handed", which requires the robot to comprehend its meaning in the context of the task. VoxPoser decides that it should move the fork from the right side of the bowl to the left side.

• Fine-grained Language Correction: For tasks that require high precision, such as "covering the teapot with the lid", the user can give precise instructions to the robot such as "you're off by 1cm". VoxPoser similarly adjusts its action based on the feedback.

• Multi-step Visual Program [126, 127]: Given a task "open the drawer precisely by half", where there is insufficient information because object models are not available, VoxPoser can come up with multi-step manipulation strategies based on visual feedback that first open the drawer fully while recording the handle displacement, then close it back to the midpoint to satisfy the requirement.

• Estimating Physical Properties: Given two blocks of unknown mass, the robot is tasked to conduct physics experiments using an existing ramp to determine which block is heavier. VoxPoser decides to push both blocks off the ramp and choose the block traveling the farthest as the heavier block. Interestingly, this mirrors a common human oversight: in an ideal, frictionless world, both blocks would traverse the same distance under the influence of gravity. This serves as a lighthearted example that language models can exhibit limitations similar to human reasoning.

A.2 APIs for VoxPoser

Central to VoxPoser is an LLM generating Python code that is executed by a Python interpreter. Besides exposing NumPy [16] and the Transforms3d library to the LLM, we provide the following environment APIs that LLMs can choose to invoke:

detect(obj_name): Takes in an object name and returns a list of dictionaries, where each dictionary corresponds to one instance of the matching object, containing its center position, occupancy grid, and mean normal vector.

execute(movable, affordance_map, avoidance_map, rotation_map, velocity_map, gripper_map): Takes in an "entity of interest" as "movable" (a dictionary returned by detect) and (optionally) a list of value maps, and invokes the motion planner to execute the trajectory. Note that in MPC settings, "movable" and the input value maps are functions that can be re-evaluated to reflect the latest environment observation.

cm2index(cm, direction): Takes in a desired offset distance in centimeters along "direction" and returns a 3-dim vector reflecting the displacement in voxel coordinates.

index2cm(index, direction): Inverse of cm2index. Takes in an integer "index" and a "direction" vector and returns the distance in centimeters in world coordinates displaced by the "integer" in voxel coordinates.
pointat2quat(vector): Takes in a desired pointing direction for the end-effector and returns a satisfying target quaternion.

set_voxel_by_radius(voxel_map, voxel_xyz, radius_cm, value): Assigns "value" to voxels within "radius_cm" of "voxel_xyz" in "voxel_map".

get_empty_affordance_map(): Returns a default affordance map initialized with 0, where a high value attracts the entity.

get_empty_avoidance_map(): Returns a default avoidance map initialized with 0, where a high value repulses the entity.

get_empty_rotation_map(): Returns a default rotation map initialized with the current end-effector quaternion.

get_empty_gripper_map(): Returns a default gripper map initialized with the current gripper action, where 1 indicates "closed" and 0 indicates "open".

get_empty_velocity_map(): Returns a default velocity map initialized with 1, where the number represents a scale factor (e.g., 0.5 for half of the default velocity).

reset_to_default_pose(): Resets the robot to its rest pose.

A.3 Real-World Environment Setup

We use a Franka Emika Panda robot with a tabletop setup. We use an Operational Space Controller with impedance from Deoxys [128]. We mount two RGB-D cameras (Azure Kinect) at two opposite ends of the table: bottom right and top left from the top-down view. At the start of each rollout, both cameras start recording and return real-time RGB-D observations at 20 Hz.

For each task, we evaluate each method in two settings: without and with disturbances. For tasks with disturbances, we apply three kinds of disturbances to the environment, a sequence of which we pre-select at the start of the evaluation: 1) random forces applied to the robot, 2) random displacement of task-relevant and distractor objects, and 3) reverting task progress (e.g., pulling the drawer open while it is being closed by the robot). We only apply the third disturbance to tasks where the "entity of interest" is an object or object part.

We compare to a variant of Code as Policies [75] as a baseline that uses an LLM with action primitives. The primitives include: move_to_pos, rotate_by_quat, set_vel, open_gripper, close_gripper. We do not provide primitives such as pick-and-place as they would be tailored for a particular suite of tasks that we do not constrain to in our study (similar to the control APIs for VoxPoser specified in Sec. A.2).

A.3.1 Tasks

Move & Avoid: "Move to the top of [obj1] while staying away from [obj2]", where [obj1] and [obj2] are randomized everyday objects selected from the list: apple, banana, yellow bowl, headphones, mug, wood block.

Set Up Table: "Please set up the table by placing utensils for my pasta".

Close Drawer: "Close the [deixis] drawer", where [deixis] can be "top" or "bottom".

Open Bottle: "Turn open the vitamin bottle".

Sweep Trash: "Please sweep the paper trash into the blue dustpan".

A.4 Simulated Environment Setup

We implement a tabletop manipulation environment with a Franka Emika Panda robot in SAPIEN [112]. The controller takes as input a desired end-effector 6-DoF pose, calculates a sequence of interpolated waypoints using inverse kinematics, and finally follows the waypoints using a PD controller. We use a set of 10 colored blocks and 10 colored lines in addition to an articulated cabinet with 3 drawers. They are initialized differently depending on the specific task. The lines are used as visual landmarks and are not interactable.
For perception, a total of 4 RGB-D cameras are mounted at each end of the table pointing at the center of the workspace.

A.4.1 Tasks

We create a custom suite of 13 tasks shown in Table 4. Each task comes with a templated instruction (shown in Table 4) where there may be one or multiple attributes randomized from the pre-defined list below. At reset time, a number of objects are selected (depending on the specific task) and are randomized across the workspace while making sure that the task is not completed at reset and that task completion is feasible. A complete list of attributes can be found below, divided into "seen" and "unseen" categories:

Seen Attributes:
• [pos]: ["back left corner of the table", "front right corner of the table", "right side of the table", "back side of the table"]
• [obj]: ["blue block", "green block", "yellow block", "pink block", "brown block"]
• [preposition]: ["left of", "front side of", "top of"]
• [deixis]: ["topmost", "second to the bottom"]
• [dist]: [3, 5, 7, 9, 11]
• [region]: ["right side of the table", "back side of the table"]
• [velocity]: ["faster speed", "a quarter of the speed"]
• [line]: ["blue line", "green line", "yellow line", "pink line", "brown line"]

Unseen Attributes:
• [pos]: ["back right corner of the table", "front left corner of the table", "left side of the table", "front side of the table"]
• [obj]: ["red block", "orange block", "purple block", "cyan block", "gray block"]
• [preposition]: ["right of", "back side of"]
• [deixis]: ["bottommost", "second to the top"]
• [dist]: [4, 6, 8, 10]
• [region]: ["left side of the table", "front side of the table"]
• [velocity]: ["slower speed", "3x speed"]
• [line]: ["red line", "orange line", "purple line", "cyan line", "gray line"]

A.4.2 Full Results on Simulated Environments

Table 4: Full experimental results in simulation on seen tasks and unseen tasks. "SA" indicates seen attributes and "UA" indicates unseen attributes. Each entry represents the success rate averaged across 20 episodes.

                                                                      U-Net + MP       LLM + Prim.      VoxPoser
Task                                                                  SA      UA       SA      UA       SA      UA
move to the [preposition] the [obj]                                   95.0%   0.0%     85.0%   60.0%    90.0%   55.0%
move to the [pos] while staying on the [preposition] the [obj]        100.0%  10.0%    80.0%   30.0%    95.0%   50.0%
move to the [pos] while moving at [velocity] when within [dist]cm from the obj   80.0%   0.0%     10.0%   0.0%     100.0%  95.0%
close the [deixis] drawer by pushing                                  0.0%    0.0%     60.0%   60.0%    80.0%   80.0%
push the [obj] along the [line]                                       0.0%    0.0%     0.0%    0.0%     65.0%   30.0%
grasp the [obj] from the table at [velocity]                          35.0%   0.0%     75.0%   70.0%    65.0%   40.0%
drop the [obj] to the [pos]                                           70.0%   10.0%    60.0%   100.0%   60.0%   100.0%
push the [obj] while letting it stay on [region]                      0.0%    5.0%     10.0%   0.0%     50.0%   50.0%
move to the [region]                                                  5.0%    0.0%     100.0%  95.0%    100.0%  100.0%
move to the [pos] while staying at least [dist]cm from the [obj]      0.0%    0.0%     15.0%   20.0%    85.0%   90.0%
move to the [pos] while moving at [velocity] in the [region]          0.0%    0.0%     90.0%   45.0%    85.0%   85.0%
push the [obj] to the [pos] while staying away from [obstacle]        0.0%    0.0%     0.0%    10.0%    45.0%   55.0%
push the [obj] to the [pos]                                           0.0%    0.0%     20.0%   25.0%    80.0%   75.0%

A.5 Prompts

Prompts used in Sec. 4.1 and Sec. 4.2 can be found below.
4.2 can be found below.planner : Takes in a user instruction Land generates a sequence of sub-tasks liwhich is fed into“composer” (Note that planner is not used in simulation as the evaluated tasks consist of a singlemanipulation phase).real-world: voxposer.github.io/prompts/real planner prompt.txt.composer : Takes in sub-task instruction liand invokes necessary value map LMPs to composeaffordance maps and constraint maps.simulation: voxposer.github.io/prompts/sim composer prompt.txt.real-world: voxposer.github.io/prompts/real composer prompt.txt.parse query obj: Takes in a text query of object/part name and returns a list of dictionaries, whereeach dictionary corresponds to one instance of the matching object containing center position, oc-cupancy grid, and mean normal vector.simulation: voxposer.github.io/prompts/sim parse query objprompt.txt.real-world: voxposer.github.io/prompts/real parse query objprompt.txt.getaffordance map: Takes in natural language parametrization from composer and returns aNumPy array for task affordance map.simulation: voxposer.github.io/prompts/sim getaffordance map prompt.txt.real-world: voxposer.github.io/prompts/real getaffordance map prompt.txt.getavoidance map: Takes in natural language parametrization from composer and returns aNumPy array for task avoidance map.simulation: voxposer.github.io/prompts/sim getavoidance map prompt.txt.real-world: voxposer.github.io/prompts/real getavoidance map prompt.txt.getrotation map: Takes in natural language parametrization from composer and returns a NumPyarray for end-effector rotation map.simulation: voxposer.github.io/prompts/sim getrotation map prompt.txt.real-world: voxposer.github.io/prompts/real getrotation map prompt.txt.getgripper map: Takes in natural language parametrization from composer and returns a NumPyarray for gripper action map.simulation: voxposer.github.io/prompts/sim getgripper map prompt.txt.real-world: voxposer.github.io/prompts/real getgripper map prompt.txt.getvelocity map: Takes in natural language parametrization from composer and returns a NumPyarray for end-effector velocity map.simulation: voxposer.github.io/prompts/sim getvelocity map prompt.txt.real-world: voxposer.github.io/prompts/real getvelocity map prompt.txt.23 |
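Putting the pieces above together, the following sketch shows the call chain implied by the prompts: the planner turns an instruction into sub-tasks, the composer parametrizes the value map LMPs for each sub-task, and the resulting maps are handed to a motion planner (as noted above, the planner stage is only used in the real-world setting). The callables and the dictionary-based interface are hypothetical stand-ins for the prompt-generated LMPs; only the overall data flow follows the descriptions above.

# Schematic sketch of the LMP hierarchy; the callables are hypothetical stand-ins
# for the prompt-generated language model programs whose prompts are listed above.
def run_instruction(instruction, planner, composer, value_map_lmps, motion_planner):
    for sub_task in planner(instruction):      # planner: instruction L -> sub-tasks l_i
        params = composer(sub_task)            # composer: natural-language parametrization
        maps = {
            'affordance': value_map_lmps['affordance'](params),  # high value attracts
            'avoidance':  value_map_lmps['avoidance'](params),   # high value repulses
            'rotation':   value_map_lmps['rotation'](params),    # end-effector quaternions
            'gripper':    value_map_lmps['gripper'](params),     # 1 = closed, 0 = open
            'velocity':   value_map_lmps['velocity'](params),    # speed scale factors
        }
        motion_planner(maps)                   # synthesize and execute a trajectory over the maps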
X7okQlJz9M | Seeing-Eye Quadruped Navigation with ForceResponsive Locomotion ControlDavid DeFazio Eisuke Hirota Shiqi ZhangBinghamton University{ddefazi1,ehirota1,zhangs }@binghamton.eduAbstract:Seeing-eye robots are very useful tools for guiding visually impaired people, po-tentially producing a huge societal impact given the low availability and highcost of real guide dogs. Although a few seeing-eye robot systems have al-ready been demonstrated, none considered external tugs from humans, whichfrequently occur in a real guide dog setting. In this paper, we simultaneouslytrain a locomotion controller that is robust to external tugging forces via Rein-forcement Learning (RL), and an external force estimator via supervised learn-ing. The controller ensures stable walking, and the force estimator enables therobot to respond to the external forces from the human. These forces are usedto guide the robot to the global goal, which is unknown to the robot, while therobot guides the human around nearby obstacles via a local planner. Experi-mental results in simulation and on hardware show that our controller is robustto external forces, and our seeing-eye system can accurately detect force di-rection. We demonstrate our full seeing-eye robot system on a real quadrupedrobot with a blindfolded human. The video can be seen at our project page:https://bu-air-lab.github.io/guide_dog/Before Tug During Tug After Tug Tug Left Figure 1: Tugs on the seeing-eye robot are used as navigation cues during blindfolded navigation.1 IntroductionIn the typical seeing-eye dog (also known as guide dog) setting, a human holds a rigid handleattached to a harness on a dog. The dog will safely guide the human around nearby obstacles, whilethe human can tug on the dog to indicate which general direction to move in. Thus, the human hassome idea of where they are, and is capable of making high-level navigation decisions, while thedog can better sense its immediate surroundings for obstacle avoidance and locomotion. Guide dogshave been shown to improve the lives of visually impaired people, via increasing independence,confidence, companionship, and mobility [1]. Unfortunately, guide dogs need roughly two years oftraining, and cost over $50,000USD per dog [2]. Despite efforts to reduce the production cost ofguide dogs [3], their supply is still significantly lower than demand.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.To better meet this demand, and potentially improve performance, seeing-eye robot systems arebeing developed [4, 5, 6, 7, 8, 9]. There is a growing interest in the development of systems of thistype, which have undergone various human studies in order to measure the extent to which they arecompatible with visually impaired humans, and their level of societal acceptance [10, 11, 12]. Whilethese seeing-eye robot systems have successfully demonstrated navigation tasks, none of them haveconsidered settings involving human tugs, which is very common for seeing-eye dogs. It’s importantfor seeing-eye robots to be robust to human forces, as the human is constantly holding a rigid handledirectly attached to the robot, and large forces can cause the robot to stray from the optimal path, oreven fall over. Furthermore, a typical method of communication between a human and a real guidedog is through tugs. 
This makes force estimation useful for a seeing-eye robot, as knowledge of the direction and magnitude of the applied force can be used to better facilitate navigation according to the human's intentions (see Figure 1).

To address the above-mentioned issues in human-robot communication, we develop a novel seeing-eye robot system which is robust to external forces, and estimates the magnitude and direction of these forces to determine which navigation actions to take for human-robot co-navigation. We achieve this by simultaneously training a locomotion controller via RL, and a force estimator via supervised learning. Our locomotion policy is trained over simulated tugs by sampling different base velocities which are suddenly applied to the robot [13]. These tugs serve as labels for training our force estimator, and are estimated during deployment.

While the controller is running on a real robot, the force estimator estimates the direction and magnitude of the force the human applies. These force estimates are computed exclusively from sensors onboard the robot (joint encoders and IMU). From the force estimates, the robot detects when and in what direction human tugs occur. This informs the robot which direction to go at a global level, while a local planner using information from a LIDAR sensor is used to navigate the immediate environment. Different from existing seeing-eye robot systems that require major hardware upgrades, e.g., a customized traction device [6] or a button interface [7], our seeing-eye system is compatible with any attachable leash or handle.

Our main contributions include the following:
1. The first seeing-eye robot system which takes directional cues via human tugs, while also safely navigating the immediate environment.
2. A force-tolerant locomotion controller, jointly trained with a force estimator which can estimate the magnitude and direction of human forces.
3. Experimental results in simulation and on hardware to evaluate the robustness of our locomotion controller and the accuracy of our force estimator.
4. Demonstration of our seeing-eye robot system in an indoor environment with a blindfolded human.

2 Related Work
Various seeing-eye robot systems have been demonstrated for the task of blindfolded navigation. Among these works, one considers an MPC-based motion planner for a wheeled robot [4]. Another considers an optimization-based approach for path planning, which models a taut or slack leash [5]. A third system designs an adjustable leash and optimizes for human comfort during navigation [6]. These works all stray from real-world guide dog settings, in part because they assume the robot has full knowledge of the destination beforehand, and that the human does not need to communicate with the robot during navigation. In real-world settings, the human must decide on which high-level navigation actions to take, and communicate these actions to the robot.

1 The term "seeing-eye robots" has been used by researchers to refer to quadruped robots that guide visually-challenged people [26]. Although our robot doesn't use vision, it serves as a seeing-eye platform through its Lidar sensors for navigation and obstacle avoidance.
Figure 2: Overview of our approach. Our locomotion controller (circled in red) contains a velocity estimator, force estimator, and locomotion policy, all of which are trained in simulation. The base velocity estimator and force estimator are trained via supervised learning, using privileged information from the simulator as labels. The locomotion policy is trained via RL, and outputs target joint angles to a PD controller which converts them to joint torques that are directly applied to the robot. During deployment, our locomotion controller estimates the external force at each time step. Force direction is derived from peaks in the estimated force signal. The direction of force determines the next local navigation goal for our navigation system, which returns velocity commands to our controller.

More similar to our work, other approaches consider settings where the human communicates high-level directions to the robot during navigation [7, 8, 9]. However, the medium of communication in these works is either a custom-designed handle with buttons [7], predefined directional actions specified prior to navigation [8], or verbal commands [9]. In our work, the human communicates directional cues via tugging, and our system is compatible with any rigid connection to the robot. Additionally, none of these works consider human forces being applied to the robot, making their systems unresponsive or susceptible to failure upon navigation under human forces.

Numerous quadruped locomotion controllers have been developed, many of which are robust to external forces [14, 15, 16, 17, 18, 19, 20, 21, 13, 22, 23, 24]. These works do not explicitly estimate the applied forces, as their focus is to develop controllers that are generally robust to varying environments. In our work, we explicitly estimate the forces generated by human tugs, in order to communicate the human's navigation intentions to the robot.

3 Method
In this section, we present our seeing-eye robot system that is robust and responsive to external forces from human tugs. Figure 2 presents an overview of how we train our locomotion controller and deploy it for seeing-eye navigation.

3.1 Locomotion Controller
We train our locomotion controller via RL, which models environments as a Markov Decision Process (MDP).
An MDP is defined as M= (S, A, T, R, γ ), where Sis the set of states, Ais the set ofactions, T:S×A×S→[0,1]is the transition function which outputs the probability of reachingstates′given state sand action a,R:S×A×S→Ris the reward function which returns feedbackfrom taking action afrom state sand ending up in state s′, andγ∈[0,1]is the discount factor whichdetermines how valuable future reward should be considered in comparison to immediate reward.We define a robot state at time tasxt= (c, ̇ v,q, ̇ q,at−1), where c= (cx, cy, cω)is the commandedbase linear and angular velocity, ̇ vis the base acceleration, qand ̇ qare the joint angles and velocitiesrespectively, and at−1is the action taken at time t−1. Actions are target joint angles, which are3Table 1: All terms of the reward function our locomotion policy is trained on. vrefers to basevelocity, crefers to commanded linear and angular base velocity, ωrefers to base angular velocity,τrefers to joint torques, ̇ qrefers to joint velocities, tairrefers to each foots air time, arefers to anaction, and dtrefers to the simulation time step.Term Description Definition ScaleLinear Velocity x, y exp(−∥cx,y−vx,y∥2/0.25) 1.0dtLinear Velocity z v2z −2.0dtAngular Velocity x, y ∥ωx,y∥2−0.05dtAngular Velocity z exp(−(cω−ωz)2/0.25) 0.5dtJoint Torques ∥τ∥2−0.0002dtJoint Accelerations ∥( ̇ qlast− ̇ q)/dt∥2−2.5e−7dtFeet Air TimeP4f=1(tair,f−0.5) 1.0dtAction Rate ∥alast−a∥2−0.01dtconverted to torques via a PD controller. The reward function encourages tracking commands cwhile minimizing energy consumption and large action changes [13] – fully defined in Table 1.Our locomotion policy also takes the base velocity and external force vector as input, to more easilylearn to track commands cand respond to external forces. These variables are not easily estimatedthrough robot sensors. Thus, we train state estimators via supervised learning over privileged infor-mation, which has been shown to be more effective than classical methods, e.g., Kalman Filters [25].It is a common setting to learn with privileged information in simulation, and insert a correspondingstate estimator in real-world deployment. In line with those systems, we train a velocity estimatorand force estimator for the real robot to bridge the sim-to-real gap in locomotion policy learning.These estimators are trained jointly with the locomotion policy, where v= (vx, vy, vz)is groundtruth base velocity, and F= (Fx, Fy, Fz)is ground truth external force, both obtained as privilegedinformation from the simulator.Two variations of forces are applied to the robot during training. One variation is small and frequentbackward pushes, which are designed to simulate a human following the robot with a taut leash,where a human incidentally applies frequent small backward forces on the robot as they are beingguided. The other variation is larger, less frequent pushes occurring in any direction. These pushesare designed to simulate human directional tugs, in which the human intentionally tugs the robot tocommunicate the direction they want to move in. The force estimator is only trained on data fromthe second variation of tugs, as we do not want to detect the small, incidental backward tugs whichnaturally occur during guided navigation.The base velocity estimator is a multilayer perceptron (MLP), whose parameters are updated overthe same data as the locomotion policy. To estimate forces, we find it helpful to access a historyof states, to better capture the robot’s behavior over the duration of the applied force. 
Thus, similarto training adaptation modules [17], our force estimator uses 1-D convolutional layers to capturetemporal relationships between states. Force estimator parameters are updated less frequently thanthe base velocity estimator and locomotion policy, because most time steps do not include externalforces. Thus, most of the labels for our force estimator includes zero vectors, indicating that noforce was applied at those time steps. This causes imbalanced training data, which we resolve byonly training the force estimator when the training data contains nonzero forces. We then furtherre-balance this to ensure at most 20% of the force estimator’s training samples include zero vectorsas labels.3.2 Seeing-eye Robot NavigationTo perform navigation tasks with our seeing-eye robot, we need to estimate when and in what generaldirection a force is being applied. This is done by running peak detection [26] on the previous 200time steps of the estimated force signal Fy, at a rate of 2Hz. In this work, we only analyze Fy,to determine whether a left or right tug has occurred. A peak in the signal indicates that the forceestimator detected a significant external force applied to the robot. Thus, if a peak is detected within4Start EndNavigate through narrow doorway Move around person Turn left Figure 3: Map of our navigation environment. Yellow circles correspond to decision points , wherethe human needs to decide which direction to move in via tugging. The blue lines indicate the paththe human took in our demonstration.the past 50 time steps, then we consider it as a recent tug applied by the human. Positive peakscorrespond to left tugs, while negative peaks correspond to right tugs.Local navigation goals are then selected based on the robot’s location, orientation, and tug direction.A domain expert manually labels a map with decision points , where the human must decide whichdirection to go in next, based on the direction they came from, and their tug direction. A labelledmap of the hallway domain we demonstrate our system on can be seen in Figure 3.The local navigation goal is then sent to our navigation system, which uses a LIDAR sensor tolocalize itself on the map via AMCL [27], and compute local plans via DWA [28]. The local plannerreturns velocity commands cto our controller, which our controller tracks.4 Implementation DetailsWe use Isaac Gym [29] physics simulator, and train our controller with 2048 robots in parallel [13].Our locomotion policy is trained via PPO [30], while our base velocity estimator and force estimatorare trained via supervised learning, with Mean Squared Error loss. Our locomotion policy andbase velocity estimator are both MLPs with three and two hidden layers respectively, while ourforce estimator is a convolutional neural network with three 1-D convolutional layers, which makespredictions over the past 25 time steps. The PD controller converts target joint angles to torqueswith proportional gain set to 20 and derivative gain set to 0.5. The policy is queried at 50Hz, andcontrol signals are sent at 200Hz. New velocity commands are sampled after each episode, wherecx,cy, andcωare sampled uniformly from [-1,1]. An episode terminates when any link other than afoot touches the ground, the base height is below 0.25 meters, or the episode has lasted 20 seconds.To better facilitate sim-to-real transfer, we train over RANDOM UNIFORM TERRAIN which increasesin difficulty based on a curriculum [13]. 
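As a concrete reference for the tug-detection step of Sec. 3.2, the sketch below classifies a recent tug from the last 200 samples of the estimated lateral force signal F_y, as would be done at 2 Hz during deployment. The paper uses the peakdetect library [26]; the sketch substitutes scipy.signal.find_peaks as a stand-in, and the peak-height threshold is a hypothetical value rather than one reported above.

import numpy as np
from scipy.signal import find_peaks

def classify_recent_tug(fy_history, peak_height=0.3, recent_window=50):
    # fy_history: the previous 200 estimated F_y samples, newest last.
    # peak_height: minimum |F_y| to count as a deliberate tug (hypothetical threshold).
    # Returns 'LEFT', 'RIGHT', or 'NONE'.
    fy = np.asarray(fy_history)[-200:]
    pos_peaks, _ = find_peaks(fy, height=peak_height)    # positive peaks -> left tugs
    neg_peaks, _ = find_peaks(-fy, height=peak_height)   # negative peaks -> right tugs

    recent_start = len(fy) - recent_window               # only count peaks in the last 50 steps
    recent_pos = pos_peaks[pos_peaks >= recent_start]
    recent_neg = neg_peaks[neg_peaks >= recent_start]

    if recent_pos.size == 0 and recent_neg.size == 0:
        return 'NONE'
    # If both occur, take whichever peak is most recent.
    last_pos = recent_pos[-1] if recent_pos.size else -1
    last_neg = recent_neg[-1] if recent_neg.size else -1
    return 'LEFT' if last_pos > last_neg else 'RIGHT'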
We also include noise in observations, and domain randomization over different surface frictions.

We add random external forces in our environment, to improve the robustness of our locomotion policy and to collect data to train our force estimator. The small and frequent backward pushes occur every 0.6 seconds, have a duration of 0.1 seconds, and set the base velocity to 0.25 m/s backward. Meanwhile, the large and infrequent pushes used to train the force estimator occur every 3 seconds, have a duration sampled from [0.24, 0.48] seconds, and set the base velocity to a vector sampled from Fx ∈ [−0.75, 0.75], Fy ∈ [−0.75, 0.75], and Fz ∈ [0, 0.1]. Note that forces from F are implemented as spontaneous updates in base velocity.

5 Experiments
We design experiments to evaluate the robustness of our controller and the accuracy of our force estimator. Although we are able to learn a force estimator in simulation, we could not directly evaluate it in the real world due to the missing ground-truth values. Instead, we chose to evaluate tug detection (LEFT, RIGHT, and NONE) on the real robot, where the participants followed our instructions, and hence ground truth was available. We then demonstrate our full seeing-eye system via guiding a blindfolded human.

Table 2: Our learned controllers fell significantly less frequently, and better tracked velocity commands under external forces when compared to an MPC controller. Including the output of the force estimator in the state marginally improved robustness. The large variance in drift is caused by the large range of force strengths and directions we sample from.
Controller | Proportion Fell | Drift from Trajectory
MPC | 0.5990 | 1.1256 ± 0.5908
Learned No Est | 0.1904 | 0.6824 ± 0.5447
Learned Est | 0.1762 | 0.6790 ± 0.5309

5.1 Force Tolerance Evaluation
To evaluate whether our learned force controller is actually robust to external forces, we run experiments in simulation where we apply random forces to the base of the robot. In each trial, a single force in a random direction is applied, whose duration is sampled from [0.25, 0.5] seconds and whose strength is sampled from [25, 100] Newtons. Meanwhile, the robot is commanded to walk forward at 0.5 meters/second, for a duration of five seconds.

We run this experiment on three different types of locomotion controllers, for 1000 trials each. The controllers include a commonly used MPC controller [31], a variant of our learned controller which does not consider estimated force in its state (referred to as Learned No Est), and our controller described in Section 3.1 (referred to as Learned Est). All controllers are deployed in PyBullet [32]. We measure how frequently the robot fell across all trials (Proportion Fell), and how far the external force caused the robot to drift from its current trajectory on average (Drift from Trajectory). We consider a robot to have fallen if a non-foot part of the robot touches the ground. Results are reported in Table 2, which indicate that our learned controllers fell significantly less frequently and better maintained velocity tracking under external forces than the MPC controller. Including estimated forces in the state appears to marginally improve robustness.
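For reference, the force-tolerance protocol of Sec. 5.1 can be summarized by the per-trial sketch below. The env wrapper, control time step, and the lateral-deviation proxy for "Drift from Trajectory" are assumptions for illustration; the sampled force strength and duration and the 0.5 m/s forward command follow the text.

import numpy as np

def run_force_tolerance_trial(env, rng, trial_seconds=5.0, dt=0.005):
    # One evaluation trial: a single random push while the robot tracks a 0.5 m/s
    # forward command. `env` is a hypothetical wrapper around the PyBullet rollout
    # exposing command_base_velocity(), step(), fell(), and base_xy().
    strength = rng.uniform(25.0, 100.0)                  # Newtons
    duration = rng.uniform(0.25, 0.5)                    # seconds
    angle = rng.uniform(0.0, 2.0 * np.pi)                # random push direction
    force = strength * np.array([np.cos(angle), np.sin(angle), 0.0])
    push_start = rng.uniform(0.0, trial_seconds - duration)

    env.command_base_velocity(vx=0.5, vy=0.0, wz=0.0)
    drift, fell = 0.0, False
    for t in np.arange(0.0, trial_seconds, dt):
        apply_push = push_start <= t < push_start + duration
        env.step(external_force=force if apply_push else None)
        drift = max(drift, abs(env.base_xy()[1]))        # lateral deviation from the
                                                         # straight commanded path (proxy)
        fell = fell or env.fell()                        # non-foot link touched the ground
    return fell, drift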
Note that for our two types of learning-based controllers, we train over five different random seeds each and average the results.

5.2 Force Estimation Evaluation
We evaluate the accuracy of our force estimator through experiments in simulation, and on hardware.

5.2.1 Simulation
Figure 4: We report the accuracy and false positive rate of our force estimators, given forces of varied strength. The shaded region indicates the standard deviation between the five policies trained over five different random seeds.

We deploy our learned controllers in PyBullet with a velocity command of 0.5 meters/second, and a single external push per trial, for 1000 trials. Each push has a force whose x-component is sampled from [-50, 50] Newtons, and a y-component which is at a fixed magnitude, and random direction (either left or right). There are three possible classes the force estimator can predict over: {LEFT, RIGHT, NONE}. Each trial includes 150 time steps and takes 3 seconds. In each trial, the force estimator is queried every 25 time steps, or six total queries per trial. Thus, in each trial, the force estimator makes a total of six predictions. A trial is deemed correct if one of the six queries matches the force direction being applied in the simulator.

In order to evaluate whether force direction can be estimated through only a history of base velocities, we train a baseline force estimator which only makes force estimates based on a history of ground-truth base velocity and base velocity commands. We refer to this baseline as only vel, while our force estimator trained over the full state and described in Section 3.1 is referred to as full state.

In Figure 4, we report the accuracy and false positive rates of our force estimators over varying force strengths. Accuracy is computed by dividing the number of trials in which the force estimator predicted the correct force direction at any of the six queries in the trial, by the number of total trials. Computing accuracy alone in this manner is not informative enough, because it is possible to achieve high accuracy by predicting LEFT at one point in the trial, and RIGHT at a different point within the same trial.

Thus, we also consider the false positive rate, which we compute by dividing the number of extra forces (LEFT or RIGHT predicted when ground truth is NONE) predicted during the trial, by the number of times the force estimator is queried (every 25 time steps). A high false positive rate corresponds to the force estimator oftentimes predicting forces when they do not occur. As force strength increases, our estimators achieve a higher accuracy while maintaining a relatively low false positive rate. Results indicate that knowledge of the full state is significantly beneficial in estimating force direction, when compared to a force estimator which is only trained over base velocity information.

5.2.2 Hardware
When a human tugs our seeing-eye robot with sufficient force, the base of the robot will momentarily accelerate in the direction of the tug. Thus, one might wonder why we do not consider accelerometer signals to detect tug direction, rather than train a force estimator. In this sub-section, we validate the usefulness of our estimated force signals, which we compare to accelerometer readings.

Figure 5: Measured acceleration (top) and estimated force (bottom) during a single trial.
Tugs are denoted by red boxes.In this experiment, we command a real Unitree A1robot to move forward at 0.5m/s, while a human par-ticipant tugs left after a few seconds of forward lo-comotion, followed by a right tug after another fewseconds of locomotion. This trial is performed byfour human participants, three of which have no priorexperience in operating this tugging interface. Eachparticipant is a robotics researcher in a university lab.The three participants with no prior experience withthis system were given a demonstration of an exampletrial, before completing their own trials. In total, 42trials were conducted, and data was collected from40 trials (two trials were removed due to the robotfalling over and data not being saved). Of these 40trials, each participant performed ten of them.Forces are detected every 25 timesteps, and can bepredicted as one of three classes. We compute the ac-curacy and false positive rate for force detectors usingthe estimated force signal, compared to force detec-tion using the signal directly from the accelerometeron the robot. Accuracy is computed as the percentageof trials which contain a LEFT force prediction be-fore the halfway point of the trial, and a RIGHT forceprediction after the halfway point of the trial. False positive rate is the average percentage of falsepositive forces being predicted across all trials. A force prediction is considered as a false positiveif it is either LEFT or RIGHT, and does not correspond to the first expected LEFT force or laterexpected RIGHT force.Results are reported in Table 3, where we find the force estimator more accurately predicted thecorrect forces, with fewer false positives compared to predictions from raw accelerometer signals.Both methods of detecting tug direction (accelerometer and force estimator) perform worse for be-7Table 3: Force estimation via accelerometer readings vs force estimator signal.Participant Type Method Accuracy False Positive RateExpert (10 trials)Accelerometer 0.20 0.2149Force Estimator 1.00 0.0442Beginner (30 trials)Accelerometer 0.13 0.1906Force Estimator 0.70 0.0498ginner participants, indicating that adding diverse tugging styles is important for a more completeevaluation of force estimators.In Figure 5, we plot the raw signals from the accelerometer and force estimator from a single trial.We find the accelerometer signal is more noisy than the estimated force signal, which we hypothe-size is because the force estimator has access to other sensor information along with accelerometerreadings. Note that our force estimator is co-learned with the locomotion policy. The locomotionpolicy takes the estimated force as input during training, and the states generated from the policyare used to train the force estimator. We believe this co-learning mechanism leverages the additionalsensor information to help the force estimator outperform the tug detection from raw accelerometerreadings.6 Hardware DemonstrationWe demonstrate our full seeing-eye robot system on a Unitree A1 robot, in an indoor hallway envi-ronment (see link in Abstract). Our locomotion policy and force estimator are deployed on hardwarewithout any additional fine-tuning.In this demonstration, a human is blindfolded, and holding a taut leash attached to the robot. Thehuman desires to reach some particular goal location, and chooses the rout to take by tugging therobot at decision points, while the robot autonomously avoids obstacles (including boxes, narrowdoorways, and another human) along the way. 
Decision points occur at intersections in the hallway,where the human needs to decide in which direction they want to go. Similar to the setting in [7], theblindfolded human is not familiar with localizing himself without vision, so another sighted humanverbally indicates when a decision point is approaching.This demonstration indicates that our locomotion policy and force detector are transferable to hard-ware, and can cooperate with a local planner to enable blindfolded navigation. Thus, successfulhuman-robot communication occurred, such that the human avoided all obstacles while the robotnavigated to the desired goal location through detecting human tugs at decision points.7 DiscussionLimitations and Future Work While our seeing-eye system can avoid obstacles and select routesat a course level in indoor hallway environments, real guide dog settings include outdoor environ-ments, and situations with many possible directions to navigate in. In future work, researchers canleverage methods to determine which paths are traversable [33], and train force detectors which canestimate the direction of force at a finer level. Another future direction is to incorporate intelligentdisobedience, which refers to rejecting a human’s navigation decision if unsafe [34]. Additionally,The evaluation can be further improved by replacing sighted people with those with visual impair-ments in the experiments. Finally, our robot’s locomotion speed is relatively low. It might be thehuman tugs, the learned locomotion policy, or both causing the low speed. Further investigation intothose factors can potentially lead to very interesting future research and higher-speed seeing-eyerobot systems.Conclusion We train a locomotion controller which is tolerant to human tugs, and can estimatethe direction of external forces. We evaluate the robustness of our controller, and accuracy of ourforce estimator through experiments in simulation and on hardware. Finally, we demonstrate ourcontroller on a real robot for the task of blindfolded navigation, where a blindfolded human issuccessfully guided to a destination, while giving directional cues through tugging on the robot.8AcknowledgementThis work has taken place at the Autonomous Intelligent Robotics (AIR) Group, SUNY Bing-hamton. AIR research is supported in part by grants from the National Science Foundation (NRI-1925044), Ford Motor Company, OPPO, and SUNY Research Foundation.References[1] L. Whitmarsh. The benefits of guide dog ownership. Visual impairment research , 7(1):27–42,2005.[2] C. Morita. How much does a guide dog cost? https://puppyintraining.com/how-much-does-a-guide-dog-cost/#: ~:text=Initial%20cost%20for%20Guide%20Dog,for%20a%20guide%20dog%20%3D%20%2459%2C600 .[3] L. M. Tomkins, P. C. Thomson, and P. D. McGreevy. Behavioral and physiological predictorsof guide dog success. Journal of Veterinary Behavior , 6(3):178–187, 2011.[4] L. Wang, J. Zhao, and L. Zhang. Navdog: robotic navigation guide dog via model predictivecontrol and human-robot modeling. In Proceedings of the 36th Annual ACM Symposium onApplied Computing , pages 815–818, 2021.[5] A. Xiao, W. Tong, L. Yang, J. Zeng, Z. Li, and K. Sreenath. Robotic guide dog: Leading ahuman with leash-guided hybrid physical interaction. In 2021 IEEE International Conferenceon Robotics and Automation (ICRA) , pages 11470–11476. IEEE, 2021.[6] Y . Chen, Z. Xu, Z. Jian, G. Tang, Y . Yangli, A. Xiao, X. Wang, and B. Liang. Quadrupedguidance robot for the visually impaired: A comfort-based approach. 
arXiv preprintarXiv:2203.03927 , 2022.[7] H. Hwang, T. Xia, I. Keita, K. Suzuki, J. Biswas, S. I. Lee, and D. Kim. System configurationand navigation of a guide dog robot: Toward animal guide dog-level guiding work. arXivpreprint arXiv:2210.13368 , 2022.[8] J. T. Kim, W. Yu, J. Tan, G. Turk, and S. Ha. How to train your guide dog: Wayfinding and safenavigation with human-robot modeling. In Companion of the 2023 ACM/IEEE InternationalConference on Human-Robot Interaction , pages 221–225, 2023.[9] J. T. Kim, W. Yu, Y . Kothari, J. Tan, G. Turk, and S. Ha. Transforming a quadruped intoa guide robot for the visually impaired: Formalizing wayfinding, interaction modeling, andsafety mechanism. arXiv preprint arXiv:2306.14055 , 2023.[10] Q. Chen, L. Wang, Y . Zhang, Z. Li, T. Yan, F. Wang, G. Zhou, and J. Gong. Can quadrupednavigation robots be used as guide dogs? arXiv preprint arXiv:2210.08727 , 2022.[11] Y . Zhang, Z. Li, H. Guo, L. Wang, Q. Chen, W. Jiang, M. Fan, G. Zhou, and J. Gong. ” iam the follower, also the boss”: Exploring different levels of autonomy and machine formsof guiding robots for the visually impaired. In Proceedings of the 2023 CHI Conference onHuman Factors in Computing Systems , pages 1–22, 2023.[12] P. Chonkar, G. Hemkumar, H. Wang, D. Dua, S. Gupta, Y .-C. Chan, J. Hart, E. Hauser,R. Mirsky, J. Biswas, et al. Look to my lead: How does a leash affect perceptions of aquadruped robot?[13] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massivelyparallel deep reinforcement learning. In Conference on Robot Learning , pages 91–100. PMLR,2022.9[14] N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion.InIEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA’04.2004 , volume 3, pages 2619–2624. IEEE, 2004.[15] J. Di Carlo, P. M. Wensing, B. Katz, G. Bledt, and S. Kim. Dynamic locomotion in the mitcheetah 3 through convex model-predictive control. In 2018 IEEE/RSJ international confer-ence on intelligent robots and systems (IROS) , pages 1–9. IEEE, 2018.[16] J. Tan, T. Zhang, E. Coumans, A. Iscen, Y . Bai, D. Hafner, S. Bohez, and V . Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. arXiv preprint arXiv:1804.10332 ,2018.[17] A. Kumar, Z. Fu, D. Pathak, and J. Malik. Rma: Rapid motor adaptation for legged robots.arXiv preprint arXiv:2107.04034 , 2021.[18] S. Chen, B. Zhang, M. W. Mueller, A. Rai, and K. Sreenath. Learning torque control forquadrupedal locomotion. arXiv preprint arXiv:2203.05194 , 2022.[19] L. Campanaro, S. Gangapurwala, W. Merkt, and I. Havoutis. Learning and deploying robustlocomotion policies with minimal dynamics randomization. arXiv preprint arXiv:2209.12878 ,2022.[20] G. B. Margolis and P. Agrawal. Walk these ways: Tuning robot control for generalization withmultiplicity of behavior. arXiv preprint arXiv:2212.03238 , 2022.[21] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V . Koltun, and M. Hutter. Learning robust per-ceptive locomotion for quadrupedal robots in the wild. Science Robotics , 7(62):eabk2822,2022.[22] A. Agarwal, A. Kumar, J. Malik, and D. Pathak. Legged locomotion in challenging terrainsusing egocentric vision. In Conference on Robot Learning , pages 403–415. PMLR, 2023.[23] Z. Zhuang, Z. Fu, J. Wang, C. Atkeson, S. Schwertfeger, C. Finn, and H. Zhao. Robot parkourlearning. arXiv preprint arXiv:2309.05665 , 2023.[24] X. Cheng, K. Shi, A. Agarwal, and D. Pathak. Extreme parkour with legged robots. 
arXivpreprint arXiv:2309.14341 , 2023.[25] G. Ji, J. Mun, H. Kim, and J. Hwangbo. Concurrent training of a control policy and a stateestimator for dynamic and robust legged locomotion. IEEE Robotics and Automation Letters ,7(2):4630–4637, 2022.[26] M. Dede. peakdetect. https://github.com/avhn/peakdetect , 2022.[27] B. P. Gerkey. Amcl, 2013. URL http://wiki.ros.org/amcl .[28] D. Fox, W. Burgard, and S. Thrun. The dynamic window approach to collision avoidance.IEEE Robotics & Automation Magazine , 4(1):23–33, 1997.[29] V . Makoviychuk, L. Wawrzyniak, Y . Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin,A. Allshire, A. Handa, et al. Isaac gym: High performance gpu-based physics simulation forrobot learning. arXiv preprint arXiv:2108.10470 , 2021.[30] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[31] X. B. Peng, E. Coumans, T. Zhang, T.-W. E. Lee, J. Tan, and S. Levine. Learning agilerobotic locomotion skills by imitating animals. In Robotics: Science and Systems , 07 2020.doi:10.15607/RSS.2020.XVI.064.10[32] E. Coumans and Y . Bai. Pybullet, a python module for physics simulation for games, roboticsand machine learning. 2016.[33] J. Frey, D. Hoeller, S. Khattak, and M. Hutter. Locomotion policy guided traversability learningusing volumetric representations of complex environments. In 2022 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 5722–5729. IEEE, 2022.[34] R. Mirsky and P. Stone. The seeing-eye robot grand challenge: Rethinking automated care.InProceedings of the 20th International Conference on Autonomous Agents and MultiagentSystems (AAMAS 2021) , 2021.11 |
rxlokRzNWRq | ManiCast: Collaborative Manipulationwith Cost-Aware Human ForecastingKushal KediaCornell UniversityPrithwish DanCornell UniversityAtiksh BhardwajCornell UniversitySanjiban ChoudhuryCornell UniversityAbstract: Seamless human-robot manipulation in close proximity relies on ac-curate forecasts of human motion. While there has been significant progress inlearning forecast models at scale, when applied to manipulation tasks, these mod-els accrue high errors at critical transition points leading to degradation in down-stream planning performance. Our key insight is that instead of predicting themost likely human motion, it is sufficient to produce forecasts that capture howfuture human motion would affect the cost of a robot’s plan. We present M ANI-CAST, a novel framework that learns cost-aware human forecasts and feeds themto a model predictive control planner to execute collaborative manipulation tasks.Our framework enables fluid, real-time interactions between a human and a 7-DoFrobot arm across a number of real-world tasks such as reactive stirring, object han-dovers, and collaborative table setting. We evaluate both the motion forecasts andthe end-to-end forecaster-planner system against a range of learned and heuristicbaselines while additionally contributing new datasets. We release our code anddatasets at https://portal-cornell.github.io/manicast/ .Keywords: Collaborative Manipulation, Forecasting, Model Predictive Control1 IntroductionSeamless collaboration between humans and robots requires the ability to accurately anticipate hu-man actions. Consider a shared manipulation task where a human and a robot collaborate to cooka soup – as the robot stirs the pot, the human adds in vegetables. Such close proximity interactionsrequire fluid adaptions to the human partner while staying safe. To do this, the robot must predictor forecast the human’s arm movements, and plan with such forecasts. This paper addresses theproblem of generating human motion forecasts that enable seamless collaborative manipulation.Recent works have made considerable progress in training human motion forecast models by lever-aging large-scale human activity datasets such as AMASS [1] and Human 3.6M [2]. However,directly applying such models for human-robot collaboration presents several challenges. First, thespace of all possible human motions is very large and pre-trained forecast models typically averageout their performance over the distribution of activities seen at train time. Second, although fine-tuning with task-specific data helps improve overall forecasting accuracy, it does not necessarilylead to better performance for downstream planning. This because these models are usually veryaccurate at predicting frequent and predictable events in the data, e.g. continuing to stir a ladle.However, they struggle with rare and more unpredictable transitions, e.g. pulling the ladle back as ahuman hand enters the workspace. Such transitions are critical for seamless collaboration and errorsat such data points have a substantial impact on the overall performance of the robot.Our key insight is that instead of predicting the most likely human motion, it is sufficient toproduce forecasts that capture how future human motion would affect the cost of a robot’s plan.For instance, in the cooking task, the robot’s planned trajectory has high cost if it comes close tothe human arm, and low cost otherwise. 
While trying to accurately predict how the human arm maymove at any given moment is difficult, it is much easier to predict whether that movement resultsin a high cost. We achieve this by modifying the forecast training objective to match the cost offkk837, pd337, ab2635, sc2582 g@cornell.edu7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: Closed-loop, real-time collaborative human-robot manipulation across three differentkitchen tasks by combining learned human pose forecasts with model predictive control.ground truth future motions rather than the exact motion itself. We propose a novel framework,MANICAST(Manipulation Forecast), that learns cost-aware human motion forecasts and plans withsuch forecasts for collaborative manipulation tasks. At train time, we fine-tune pre-trained humanmotion forecasting models on task specific datasets by upsampling transition points and upweightingjoint dimensions that dominate the cost of the robot’s planned trajectory. At inference time, wefeed these forecasts into a model predictive control (MPC) planner to compute robot plans that arereactive and keep a safe distance from the human. To the best of our knowledge, this is the first paperto leverage large-scale human motion data and state-of-the-art pre-trained human forecast models tointegrate with a real-time MPC planner for collaborative human-robot manipulation tasks. Our keycontributions are:1. A method to train human motion forecast models to be used with a downstream planner forcollaborative human-robot manipulation.2. A new dataset of human-human collaborative manipulation on 3 kitchen tasks.3. Real-world evaluation of a human-robot team on each of the 3 tasks and comparison against arange of learned and heuristic baselines.2 Related WorkNavigation with Human Forecasts. Navigating around humans has been a long-standing challengein areas such as self-driving [3–8] and social robotics [9–15]. In the context of self-driving, there’sa rich history in trajectory forecasting of agents observed by the autonomous vehicle (A V) [16]ranging from physics-based methods (e.g. Constant Velocity/Acceleration [17–21], Kalman Fil-ter [22–24]) to machine learning-based methods (e.g. Gaussian Processes [25, 26], Hidden MarkovModels [27, 28]) to inverse reinforcement learning [29, 30]. In recent years, the rise of sequencemodels in deep learning has led to great advances in accurately predicting future states of trafficparticipants. Parallel to the developments in autonomous driving, social navigation in crowd en-vironments has similarly converged on sequence models [31–35] to forecast trajectories within acrowd. In addition to making predictions about trajectories, many recent works have used forecastsas inputs to motion planners [36–38] or jointly forecasted and planned [39, 40]. The locations sweptout by forecasts are considered to be “unsafe” collision regions that the planner avoids. Conse-quently, evaluation of trajectory forecasting has shifted from accuracy-based metrics [41] to moretask-aware metrics that focus on the performance of a downstream planner. However, all of theseworks approach forecasting as a 2D problem, whereas we look to plan 7D robot manipulation tra-jectories by forecasting 21D human pose, which is higher dimensional and much more complex.Human Pose Forecasting. Human pose forecasting involves predicting how a human pose evolvesin both space and time. 
Several works have leveraged Recurrent Neural Networks (RNN) [42, 43] and Transformer Networks [44] to model time, while graph-based models such as Graph Convolutional Networks (GCN) [45, 46] have been used to model the interaction of joints in space. Mao et al. [45] note that human motion over a time horizon is smooth and periodic. They exploit this observation by encoding the context of an agent with a Discrete Cosine Transform (DCT) and then apply an attention module to capture the similarities between current and historical motion sub-sequences. STS-GCN [46] learns the adjacency matrix between different joints and across different timesteps separately to bottleneck the cross-talk interactions between joints across time, and uses a Temporal Convolutional Network (TCN) for decoding the predictions over a fixed time horizon. Recent works have extended forecasting from single-person to multi-person forecasting [47, 48]. Although there have been many advances in pose forecasting, to the best of our knowledge, our work is the first to integrate an entire upper-body forecast into a robot manipulation planner.

Close-Proximity Human-Robot Collaboration. For close-proximity human-robot tasks where fluidity is important, forecasting human pose is essential. While there are various works that forecast human pose in the context of robotics [49–53], they don't integrate such predictions into robot planners. Research focused on robot manipulation planning [54–56] around humans has modeled humans in some form at planning time, whether that be assuming static poses [56, 57], tracking the current pose [58, 59], or making predictions only about the motion of specific joints such as the wrist or head [60–63]. Mainprice et al. [64] predict single-arm reaching motion using motion capture of two humans performing a collaborative task in a shared workspace and learn a collision avoidance cost function using Inverse Optimal Control (IOC). He et al. [65] introduce a hierarchical approach with a high- and low-level controller to generate feasible trajectories and ensure safety. Scheele et al. [60] use an unsupervised online learning algorithm to build a model that predicts the remainder of a human reaching motion given the start of it. Ling et al. [61] use the human's head pose, wrist position, and wrist speed as inputs to their forecasting Long Short-Term Memory (LSTM) model to predict future wrist positions, which are used as an input to their robot motion planner. Oguz et al. [66] propose a framework to detect and classify interactions during close-proximity human-robot interactions. Prasad et al. [67] propose a framework to plan humanoid motion, representing human motion as a latent variable. In contrast to these methods, we utilize forecasting models pre-trained on larger, more diverse data to solve human-robot collaboration tasks. We focus on the efficacy of the forecaster, especially in extended interaction settings where performance in transition windows is critical but forms a small part of our collected dataset.

3 Problem Formulation
Notation. The human's state at timestep t, s_t^H ∈ R^{J×3}, gives the 3-D coordinates of J upper-body joints. Let the context^2, φ = {s_{-k+1}^H, ..., s_0^H}, be the history of human states over the past k timesteps. Let the future human and robot trajectories over a horizon T be x^H = {s_1^H, s_2^H, ..., s_T^H} and x^R = {s_1^R, s_2^R, ..., s_T^R}, respectively. Let C(x^R | x^H) denote the cost of the robot trajectory given the human trajectory.
Let P_θ(x^H | φ) ∝ exp(−‖x^H − μ_θ(φ)‖) be the learnt forecast model, modelled as a Gaussian, where μ_θ(φ) is the predicted mean trajectory and θ are the learned parameters.

Planning. The goal of the planner is to compute the optimal robot trajectory x^R given the human trajectory x^H. At inference time, x^H is unknown, and the planner must rely on a forecast x̂^H ~ P_θ(· | φ) of the human trajectory generated by the model. The planner then solves for the lowest-cost robot trajectory given the forecast — argmin_{x^R} E_{x̂^H ~ P_θ(· | φ)} C(x^R | x̂^H).

Cost-Aware Human-Pose Forecasting. Typically, the forecast model is trained by maximizing the log-likelihood (MLE) of the ground truth human trajectory x^H, i.e., max_θ log P_θ(x^H | φ), which for a Gaussian model corresponds to an L2 loss. However, low log-loss can still result in a high mismatch between the costs C(· | x^H) and C(· | x̂^H), which can result in poor downstream planner performance. For instance, in Fig. 4, the "base model" minimizing log-loss has low errors everywhere except at critical transition points, which has a significant impact on the cost function. Instead, we choose a loss function ℓ(θ) such that the costs calculated with the forecasts match the costs calculated with the ground truth for any robot plan x^R:

ℓ(θ) = E_φ [ E_{x^H ~ P(· | φ)} C(x^R | x^H) − E_{x̂^H ~ P_θ(· | φ)} C(x^R | x̂^H) ]    (1)

This loss is equivalent to matching moments of a cost function [68, 69]. However, directly implementing this as a loss function is challenging in practice for a number of reasons. First, the cost function has non-differentiable components like collision check modules. Second, the cost function may not be convex and may have poor convergence rates. Finally, the cost function may not be known upfront at train time, and is likely to be tuned frequently. We address these challenges next.

2 The context can have other factors like the history of the robot's past states, which we exclude for simplicity.

Figure 2: Overview of the training and inference pipeline for ManiCast. During training, we pre-train ManiCast on a large human activity database and finetune on CoMaD using a cost-aware loss by upsampling transitions and upweighting wrist joints. During inference, we use a sampling-based MPC, STORM, to roll out multiple robot trajectories, evaluate their costs with ManiCast forecasts, and select the best plan.

4 Approach
We present ManiCast, a framework that learns cost-aware human forecasts and plans with such forecasts for collaborative manipulation tasks. Fig. 2 shows both the training-time and inference-time process. At train time, we pre-train a state-of-the-art forecast model (STS-GCN [46]) on large-scale human activity data (AMASS [1]) and fine-tune the forecasts on our Collaborative Manipulation Dataset (CoMaD), with a focus on optimizing downstream planning performance. At inference time, we detect the human pose at 120 Hz using an OptiTrack motion capture system, generate forecasts at 120 Hz, and feed the forecasts into an MPC planner (STORM [70]) to compute robot plans at 50 Hz.

4.1 Train Time: Fine-tune Forecasts with Cost-Aware Loss
We first pre-train the model using the standard MLE loss on a large-scale human activity dataset to get a baseline forecast model. We then fine-tune this model on our own Collaborative Manipulation Dataset (CoMaD), which has data of two humans collaboratively performing manipulation tasks. Since directly optimizing the cost-aware loss in (1) is challenging, we propose two strategies to approximately optimize the loss without significant changes to the model training pipeline.

Strategy 1: Importance Sampling.
While the MLE loss measures errors uniformly on the distri-bution P(f), the cost-aware loss `(q)(1) is sensitive to errors at contexts where costs are generallyhigher. Notably, for many tasks, transition points where the human comes into the robot’s workspaceare typically high cost and hence dominate the loss. Since transition points are infrequent, the fine-tuned model has higher errors on them. Our strategy is to assign greater importance to transitionpoints by importance sampling and then finetuning on a new distribution, which we formalize below.LetCmax(f)be the maximum cost of a robot trajectory that a given context can induce, which wecompute from collected data. We defined a transition distribution asPT(f)μP(f)I(Cmax(f)d),i.e., the distribution over contexts that induce a cost higher than a percentile threshold d. Instead ofminimizing the loss on P(f), we choose a new distribution Q(f) =0:5P(f)+0:5PT(f)that mixesthe original distribution with the transition distribution, effectively up-sampling the transitions. Weprove the following performance bounds.Lemma 1. For a model qwith bounded loss of eon P(f), the final loss is bounded as `(q)Cmaxe,where C max=max fCmax(f). In contrast, for a model with bounded loss of eon the new distributionQ(f), the final loss is bounded by `(q)2max (d;CmaxEP(f)[I(Cmax(f)d)])eWe refer the reader to the Appendix for the proof and interpretation of the bound. For our tasks, wechoose a small d(10%) that works well in practice.4Figure 3: Forecasting Metrics (All Joints FDE, T-All Joints FDE, T-Wrist FDE) across all tasks in CoMaD.Strategy 2: Dimension Weighting. While the MLE or L2 loss sets equal weights to all joint dimen-sions J, the cost-aware loss `(q)(1) is sensitive to errors along certain dimensions. For example fora handover task, a small error in predicting the wrist position can have a large impact on the cost.We empirically tune weights w2RJon the MLE loss to upweight joints that are explicitly used asterms in the cost functions we define.We combine these two strategies to optimize the following proxy loss:ˆ`(q) =E0:5P(f)+0:5PT(f)"ExHP(:jf)JåjwjlogPq(xjHjf)#(2)4.2 Inference Time: Sampling-Based Model Predictive Control with Learned ForecastsAt inference time, we invoke a sampling based model predictive control (STORM [70]) to computeplans for a 7-DoF robot arm. We designed a set of three cost functions, one for each collaborativemanipulation task, that take as input a candidate robot plan and the forecasts generated by M AN-ICAST and computes a cost. At every timestep, given a context f, the model predictive controlsamples plans, evaluates the cost of each plan and updates the sampling distribution till conver-gence, returning the minimum cost plan. The robot takes a step along the plan and replans. Weprovide details on the cost function and the MPC planner in the Appendix.5 Experiments5.1 Experimental SetupCollaborative Manipulation Dataset (CoMaD). We design 3 collaborative manipulation tasksshown in Fig 1 (1) Reactive Stirring : Robot stirs a pot while making way for human pouring invegetables (2) Object Handovers : Robot moves to receive an object behind handed over by the hu-man (3) Collaborative Table Setting : robot and the human manipulate objects on a table while notgetting in each other’s way. To train our forecasting model, we collect a dataset of two humansexecuting these tasks. It contains 19 episodes of reactive stirring, 27 episodes of handovers, and 15episodes of collaborative table setting. 
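The transition-upsampling and joint-weighting strategies of Sec. 4.1 amount to a weighted MLE objective over a mixture distribution. The sketch below illustrates one way this could look in PyTorch; the tensor shapes, batch construction, and reduction are assumptions for illustration and are not taken from the released code.

import torch

def cost_aware_finetune_loss(model, batch_full, batch_transition, joint_weights):
    # Weighted-L2 (Gaussian MLE) objective on a 50/50 mixture of regular contexts
    # and upsampled transition contexts. Each batch is a (context, future) pair with
    # future of shape [B, T, J, 3]; joint_weights is a tensor of shape [J]
    # (e.g., wrist joints upweighted 5x, as in ManiCast-W).
    losses = []
    for context, future in (batch_full, batch_transition):
        pred = model(context)                              # predicted mean trajectory, [B, T, J, 3]
        per_joint = ((pred - future) ** 2).sum(-1)         # squared error per joint, [B, T, J]
        losses.append((per_joint * joint_weights).mean())  # weight joints, average over batch/time
    return 0.5 * losses[0] + 0.5 * losses[1]               # mix P(phi) and P_T(phi) equally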
We refer to Appendix for more details.Large Human-Activity Databases. The AMASS (Archive of Motion Capture As SurfaceShapes) [1] dataset is a large and diverse collection of human motion. It consists of over 40 hours ofsingle-human motion spanning over 300 subjects. Hence we pre-train models on AMASS.Forecast Models. We run the planner with different forecasts. (1) Ground Truth : CUR assumesthe current human pose remains constant over the planning horizon, and FUT uses the real fu-5Model ! GROUND TRUTH BASELINES LEARNING -BASEDMetric# CUR CVM F INETUNED MANICAST-T M ANICAST MANICAST-WREACTStop Time (ms) 0 ( 0) 367.8 (50:9) 203.3 (22:3)271.1 (25:6) 246.7 (25:8) 290.0 (29:9)Restart Time (ms) 0 ( 0) 235.6 (18:2) 441.1 (30:8)454.4 (36:7) 455.6 (33:5) 496.7 (27:9)FDR 0% 67% 0% 11% 0% 11%HAND .Goal Detection (ms) 0 ( 0) 246.0 (146:0) 189.3 (47:8)473.3 (123:7)450.0 (124:8)488.4 (133:4)Correct Goal Rate 100% 20% 100% 100% 100% 100%Path Length (mm) 459.0 ( 37:0) 485.0 (85:0) 432.0 (39:0) 428 (42:0) 404.0 (45:0) 436.0 (49:4)Time to Goal (s) 4.59 ( 0:37) 3.77 (0:86) 3.87 (0:28) 3.91 (0:32) 3.96(0:54) 4.17 (0:45)Table 1: We integrate forecasts of different models into STORM for the reactive stirring and object handovertasks. The standard error for each planning metric is shown inside parentheses.tures obtained from playing back CoMaD’s episodes. While the latter can not be used for real-worldclosed-loop planning, it’s the gold standard for comparing other models. (2) Baselines : The ConstantVelocity Model (CVM) calculates future human pose by integrating a constant velocity, calculatedfrom the immediate 0.4s history of each joint. The W ORST case model is a conservative model thatconstructs two large spheres (arm length radius) centered at each shoulder joint. (3) L EARNING -BASED : We train our forecasting models using the STS-GCN [46] architecture. The B ASEmodel istrained only on AMASS data, whereas the S CRATCH model is trained only on CoMaD. The F INE-TUNED model pre-trains on AMASS, but is finetuned on CoMaD only using the MLE objective.MANICAST is finetuned using the loss objective in Eq.2 with the weights wj=1. M ANICAST-T isfinetuned only on transition data in CoMaD. M ANICAST-W upweights loss for wrist joints by 5x.Figure 4: (Top) The x-position of the reaching human’s wrist in a Reactive Stirring test set episode.x0:4 indicates the wrist is near the pot. (Bottom) B ASE model’s forecasts have high errors dur-ing transitions and lag behind the current pose. M ANICAST, trained on CoMaD by upsamplingtransitions, predicts the reaching human’s pose faster than tracking the current pose.Forecasting Metrics. We measure the Average Displacement Error (ADE) across all 25 timestepsof prediction and the Final Displacement Error (FDE) at the final timestep of prediction. Metrics aremeasured separately for 7 upper body joints and wrist joints. We also report the wrist errors insidethe transition windows for the stirring and handover tasks.Planning Metrics. For quantitative results, we ran a 7-DoF Franka Emika Research 3 in the real-world and played back a recording of a human partner from CoMaD. Reactive Stirring : Test set had9 human movements into the pot. We report average time required by the robot to stop and restartmotion compared to using the current position for planning. We also report the False Detection Rate(FDR) for the human coming into the robot’s workspace. Object Handover : Test set had 10 han-dovers. 
We report the average gain in Goal Detection time compared to using the current pose andthe percentage of times when the correct goal is detected by the forecaster. Among instances wherethe correct goal is detected, we report the average time the robot arm took to reach the handoverlocation and the total path distance moved by its end-effector. For qualitative closed-loop evaluation(Fig 1, Fig 5, Fig 6), we run our entire pipeline with two new human subjects not part of CoMaD .6Figure 5: Integrating forecasting models with MPC for real-world human-robot handovers. M AN-ICAST forecasts allow the robot to track a shorter path to the handover location in less time thanfollowing the current pose or the unrealistic constant velocity model’s predictions.Planning with current pose tracking Planning with M ANICAST forecastsFigure 6: Collision avoidance in Collaborative Table Setting. Following the current pose, the robotarm nearly collides with the human due to delayed prediction. M ANICAST avoids this by slowingdown when the human reaches in and speeding up when the human retracts their arm.5.2 Results and AnalysisO1. MANICAST forecasts outperform current pose tracking. Figure 2 shows that M ANICASTmodels have low FDE in forecasting human pose with improved planning performance (Table 1).In the reactive stirring task, M ANICAST models predict both the arrival and departure of humansbefore the current pose. Fig 4 showcases M ANICAST predictions over an entire episode from theCoMaD’s test set. In the object handover task, current pose tracking predicts the goal locationslower than the M ANICAST models. As Fig 5 demonstrates, the robot’s end effector chases thecurrent wrist pose towards its eventual final location, leading to a longer trajectory that takes moretime to execute. Further, planning with forecasts leads to safe motion as collisions can be preventedwith future human positions (Fig 6).O2.BASELINE models predict dynamically infeasible forecasts leading to suboptimal planningperformance. As seen in Fig 5, since the CVM’s predictions are not dynamically constrained, itsforecasts overpredict future human positions making the robot arm deviate from the optimal pathduring handovers. On the other hand, as seen in Table 1 in the Appendix, the robot arm remainsretracted during the reactive stirring task and does not approach the human’s wrist for handoversfollowing the extremely conservative W ORST case model.O3. Training on AMASS or CoMaD alone is not sufficient to achieve optimal performance.The B ASE and S CRATCH models produce erroneous forecasts on CoMaD (Fig 3). In most cases,they have larger prediction errors than simply tracking the current pose. While the B ASE modelproduces the most accurate forecasts on AMASS (Tab 2 in Appendix), the activities in CoMaDcontain reaching arm motions and abrupt transitions that are not captured by the model. Fig 4 showsthat its predictions lag behind current pose, when predicting the arm to retract. Whereas, S CRATCH7is only trained on CoMaD. While this model is exposed to task-specific data, its training does notconverge as forecasting 21D human pose is a data-intensive complex task. Consequently, planningwith both of these forecasters leads to worse performance than just tracking the current pose (Table1).O4.By upsampling transition points, M ANICAST improves the accuracy and efficiency of taskcompletion. 
Observing Fig 3, we note that M ANICASTmodels have higher displacement errors onCoMaD’s test set than the F INETUNED model trained on just the MLE objective. However, since theMANICAST models upsample transitions, they have lower prediction errors in transition windowsrelevant to planning performance. In the reactive stirring task, they detect the arrival and departureof the human in the robot’s workspace quicker than F INETUNED . In the object handover task, thisdifference is pronounced as the M ANICAST models predict the final handover location more than 2times faster than F INETUNED (Table 1). M ANICAST-T is trained on only transitions and planningwith it can lead to erratic performance in some cases. For example, in the reactive stirring task, itfalsely predicts the human’s arrival into the robot’s workspace in a test set episode.O5.Upweighting wrist joints in the loss function can improve performance in planning tasks.For the collaborative manipulation tasks considered in this work, we note that predicting the locationof the wrist joints has higher priority than the other upper body joints. M ANICAST-W assigns 5times more weight to the wrist joints in its loss function. This leads to lower forecasting errors forthe wrist joint in CoMaD compared to all other models (Fig 3). In most cases (Table 1), this modelhas the best planning performance with faster detection of human pose in the robot’s workspaceduring both the reactive stirring and handover tasks. In some cases, we note that its predictions canbe unstable leading to one false detection in the reactive stirring task and larger end-effector pathmovement in the handover task.6 DiscussionThis work considers the problem of collaborative robot manipulation in the presence of humans inthe workspace. We present M ANICAST, a novel framework for seamless human-robot collaborationthat generates cost-aware forecasts and plans with them in real-time. Our approach was thoroughlytested on three collaborative tasks with a human partner in a real-world setting. M ANICAST lever-ages large databases of human activity . Producing dynamically consistent 21D human pose is acomplex and high-dimensional problem. However, by training on a large dataset, ManiCast is ableto learn the statistical regularities of human motion. M ANICASToptimizes planning performanceby using an approximate cost-aware loss function. We finetune the forecasting model on a novelCollaborative Manipulation Dataset (CoMaD) by upsampling transition points and upweighting rel-evant joint dimensions. Our system can track a human user’s pose, forecast their movements, andcontrol a robotic arm in real-time and at high speed . Our system’s forecasting module runs at120Hz, used by an MPC at 50Hz. Fast replanning enables robots to collaborate safely with humansby allowing them to adjust their motion in response to abrupt changes in their environment. In fu-ture directions of work, we plan to extend our framework to produce conditional forecasts of humanmotion given robot trajectories. Further, we will attempt to directly optimize the cost function for atask instead of approximating the forecasting loss function.7 LimitationsSeveral reasons limit our approach’s deployment to real-world human users. Firstly, our methodrelies on a motion capture system consisting of 10 cameras to track the history of a human’s pose.Further, the user is required to wear a motion capture suit which can be cumbersome. Future workwill attempt to detect human skeletons using an ego-centric camera view. 
Secondly, CoMaD cap-tures just two human subjects across its entire dataset. While we qualitatively show generalizationto a new human subject, we do not extensively test it on a range of users. Additionally, the fore-caster may not generalize to users with movement styles not represented in the data. Future datasetsshould encompass multiple subjects with diverse movement styles for widespread usage of human-pose forecasting in personal robotics. This must be accompanied by a comprehensive user study totest the overall pipeline. Finally, we only considered the upper body skeleton for forecasting motionas there was significant occlusion in detecting the lower body. Our forecaster can not be directlyapplied to robotics applications in which the human’s lower body movement is of interest.8AcknowledgmentsThis work was partially funded by NSF RI (#2312956).References[1] N. Mahmood, N. Ghorbani, N. F. Troje, G. Pons-Moll, and M. J. Black. AMASS: Archiveof motion capture as surface shapes. In International Conference on Computer Vision , pages5442–5451, Oct. 2019.[2] C. Ionescu, D. Papava, V . Olaru, and C. Sminchisescu. Human3. 6m: Large scale datasetsand predictive methods for 3d human sensing in natural environments. IEEE transactions onpattern analysis and machine intelligence , 36(7):1325–1339, 2013.[3] Z. Huang, H. Liu, J. Wu, and C. Lv. Differentiable integrated motion prediction and planningwith learnable cost function for autonomous driving. arXiv preprint arXiv:2207.10422 , 2022.[4] A. Sadat, M. Ren, A. Pokrovsky, Y .-C. Lin, E. Yumer, and R. Urtasun. Jointly learnablebehavior and trajectory planning for self-driving vehicles. In 2019 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 3949–3956. IEEE, 2019.[5] Z. Huang, H. Liu, J. Wu, and C. Lv. Conditional predictive behavior planning with inversereinforcement learning for human-like autonomous driving. IEEE Transactions on IntelligentTransportation Systems , 2023.[6] K. Mangalam, E. Adeli, K.-H. Lee, A. Gaidon, and J. C. Niebles. Disentangling human dy-namics for pedestrian locomotion forecasting with noisy supervision. In Proceedings of theIEEE/CVF Winter Conference on Applications of Computer Vision , pages 2784–2793, 2020.[7] A. Monti, A. Bertugli, S. Calderara, and R. Cucchiara. Dag-net: Double attentive graph neuralnetwork for trajectory forecasting. In 2020 25th International Conference on Pattern Recog-nition (ICPR) , pages 2551–2558. IEEE, 2021.[8] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone. Trajectron++: Dynamically-feasibletrajectory forecasting with heterogeneous data. In Computer Vision–ECCV 2020: 16th Eu-ropean Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII 16 , pages683–700. Springer, 2020.[9] P. Kothari and A. Alahi. Safety-compliant generative adversarial networks for human trajectoryforecasting. IEEE Transactions on Intelligent Transportation Systems , 2023.[10] H. Nishimura, B. Ivanovic, A. Gaidon, M. Pavone, and M. Schwager. Risk-sensitive sequentialaction control with multi-modal human trajectory forecasting for safe crowd-robot interaction.In2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages11205–11212. IEEE, 2020.[11] C. Chen, Y . Liu, S. Kreiss, and A. Alahi. Crowd-robot interaction: Crowd-aware robot navi-gation with attention-based deep reinforcement learning. In 2019 international conference onrobotics and automation (ICRA) , pages 6015–6022. IEEE, 2019.[12] S. Poddar, C. Mavrogiannis, and S. S. Srinivasa. 
From crowd motion prediction to robotnavigation in crowds. arXiv preprint arXiv:2303.01424 , 2023.[13] S. Liu, P. Chang, Z. Huang, N. Chakraborty, W. Liang, J. Geng, and K. Driggs-Campbell.Intention aware robot crowd navigation with attention-based interaction graph. arXiv preprintarXiv:2203.01821 , 2022.[14] K. Li, M. Shan, K. Narula, S. Worrall, and E. Nebot. Socially aware crowd navigation withmultimodal pedestrian trajectory prediction for autonomous vehicles. In 2020 IEEE 23rd In-ternational Conference on Intelligent Transportation Systems (ITSC) , pages 1–8. IEEE, 2020.[15] C. Mavrogiannis, K. Balasubramanian, S. Poddar, A. Gandra, and S. S. Srinivasa. Windingthrough: Crowd navigation via topological invariance. IEEE Robotics and Automation Letters ,8(1):121–128, 2022.9[16] Y . Huang, J. Du, Z. Yang, Z. Zhou, L. Zhang, and H. Chen. A survey on trajectory-predictionmethods for autonomous driving. IEEE Transactions on Intelligent Vehicles , 7(3):652–674,2022.[17] A. Polychronopoulos, M. Tsogas, A. J. Amditis, and L. Andreone. Sensor fusion for predictingvehicles’ path for collision avoidance systems. IEEE Transactions on Intelligent Transporta-tion Systems , 8(3):549–562, 2007.[18] R. Schubert, E. Richter, and G. Wanielik. Comparison and evaluation of advanced motionmodels for vehicle tracking. In 2008 11th international conference on information fusion ,pages 1–6. IEEE, 2008.[19] N. Kaempchen, B. Schiele, and K. Dietmayer. Situation assessment of an autonomous emer-gency brake for arbitrary vehicle-to-vehicle collision scenarios. IEEE Transactions on Intelli-gent Transportation Systems , 10(4):678–687, 2009.[20] R. Pepy, A. Lambert, and H. Mounier. Reducing navigation errors by planning with realisticvehicle model. In 2006 IEEE Intelligent Vehicles Symposium , pages 300–307. IEEE, 2006.[21] C.-F. Lin, A. G. Ulsoy, and D. J. LeBlanc. Vehicle dynamics and external disturbance esti-mation for vehicle path prediction. IEEE Transactions on Control Systems Technology , 8(3):508–518, 2000.[22] V . Lefkopoulos, M. Menner, A. Domahidi, and M. N. Zeilinger. Interaction-aware motionprediction for autonomous driving: A multiple model kalman filtering scheme. IEEE Roboticsand Automation Letters , 6(1):80–87, 2020.[23] B. Jin, B. Jiu, T. Su, H. Liu, and G. Liu. Switched kalman filter-interacting multiple modelalgorithm based on optimal autoregressive model for manoeuvring target tracking. IET Radar,Sonar & Navigation , 9(2):199–209, 2015.[24] N. Kaempchen, K. Weiss, M. Schaefer, and K. C. Dietmayer. Imm object tracking for highdynamic driving maneuvers. In IEEE Intelligent Vehicles Symposium, 2004 , pages 825–830.IEEE, 2004.[25] J. Joseph, F. Doshi-Velez, A. S. Huang, and N. Roy. A bayesian nonparametric approach tomodeling motion patterns. Autonomous Robots , 31:383–400, 2011.[26] Q. Tran and J. Firl. Online maneuver recognition and multimodal trajectory prediction forintersection assistance using non-parametric regression. In 2014 ieee intelligent vehicles sym-posium proceedings , pages 918–923. IEEE, 2014.[27] Q. Deng and D. S ̈offker. Improved driving behaviors prediction based on fuzzy logic-hiddenmarkov model (fl-hmm). In 2018 IEEE Intelligent Vehicles Symposium (IV) , pages 2003–2008.IEEE, 2018.[28] H. Berndt, J. Emmert, and K. Dietmayer. Continuous driver intention recognition with hiddenmarkov models. In 2008 11th International IEEE Conference on Intelligent TransportationSystems , pages 1189–1194. IEEE, 2008.[29] D. S. Gonz ́alez, O. Erkent, V . Romero-Cano, J. Dibangoye, and C. Laugier. 
Modeling driverbehavior from demonstrations in dynamic environments using spatiotemporal lattices. In 2018IEEE International Conference on Robotics and Automation (ICRA) , pages 3384–3390, 2018.doi:10.1109/ICRA.2018.8460208.[30] N. Deo and M. M. Trivedi. Trajectory forecasts in unknown environments conditioned ongrid-based plans, 2021.[31] P. Kothari, S. Kreiss, and A. Alahi. Human trajectory forecasting in crowds: A deep learn-ing perspective. IEEE Transactions on Intelligent Transportation Systems , 23(7):7386–7400,2021.[32] A. Alahi, K. Goel, V . Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese. Social lstm:Human trajectory prediction in crowded spaces. In Proceedings of the IEEE conference oncomputer vision and pattern recognition , pages 961–971, 2016.10[33] D. Varshneya and G. Srinivasaraghavan. Human trajectory prediction using spatially awaredeep attention models. arXiv preprint arXiv:1705.09436 , 2017.[34] F. Bartoli, G. Lisanti, L. Ballan, and A. Del Bimbo. Context-aware trajectory prediction. In2018 24th International Conference on Pattern Recognition (ICPR) , pages 1941–1946. IEEE,2018.[35] T. Fernando, S. Denman, S. Sridharan, and C. Fookes. Soft+ hardwired attention: An lstmframework for human trajectory prediction and abnormal event detection. Neural networks ,108:466–478, 2018.[36] L. Chen, L. Platinsky, S. Speichert, B. Osinski, O. Scheel, Y . Ye, H. Grimmett, L. D. Pero,and P. Ondruska. What data do we need for training an av motion planner? 2021 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 1066–1072, 2021.[37] A. Kendall, J. Hawke, D. Janz, P. Mazur, D. Reda, J. M. Allen, V .-D. Lam, A. Bewley, andA. Shah. Learning to drive in a day. 2019 International Conference on Robotics and Automa-tion (ICRA) , pages 8248–8254, 2018.[38] D. S. Gonz ́alez, J. P ́erez, V . M. Montero, and F. Nashashibi. A review of motion planningtechniques for automated vehicles. IEEE Transactions on Intelligent Transportation Systems ,17:1135–1145, 2016.[39] J. L. V ́azquez, A. Liniger, W. Schwarting, D. Rus, and L. V . Gool. Deep interactive motion pre-diction and planning: Playing games with motion prediction models. ArXiv , abs/2204.02392,2022.[40] J. Ngiam, V . Vasudevan, B. Caine, Z. Zhang, H.-T. L. Chiang, J. Ling, R. Roelofs, A. Bewley,C. Liu, A. Venugopal, D. J. Weiss, B. Sapp, Z. Chen, and J. Shlens. Scene transformer:A unified architecture for predicting future trajectories of multiple agents. In InternationalConference on Learning Representations , 2022.[41] B. Ivanovic and M. Pavone. Rethinking trajectory forecasting evaluation, 2021.[42] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik. Recurrent network models for humandynamics. In Proceedings of the IEEE international conference on computer vision , pages4346–4354, 2015.[43] A. Jain, A. Zamir, S. Savarese, and A. Saxena. Deep learning on spatio-temporal graphs. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas,NV , USA , pages 27–30, 2016.[44] Y . Cai, L. Huang, Y . Wang, T.-J. Cham, J. Cai, J. Yuan, J. Liu, X. Yang, Y . Zhu, X. Shen, et al.Learning progressive joint propagation for human motion prediction. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings,Part VII 16 , pages 226–242. Springer, 2020.[45] W. Mao, M. Liu, and M. Salzmann. History repeats itself: Human motion prediction viamotion attention. In European Conference on Computer Vision , 2020.[46] T. Sofianos, A. Sampieri, L. Franco, and F. Galasso. 
Space-time-separable graph convolutionalnetwork for pose forecasting. 2021 IEEE/CVF International Conference on Computer Vision(ICCV) , pages 11189–11198, 2021.[47] E. Vendrow, S. Kumar, E. Adeli, and H. Rezatofighi. Somoformer: Multi-person pose fore-casting with transformers. arXiv preprint arXiv:2208.14023 , 2022.[48] J. Wang, H. Xu, M. G. Narasimhan, and X. Wang. Multi-person 3d motion prediction withmulti-range transformers. In Neural Information Processing Systems , 2021.[49] L. Gui, K. Zhang, Y .-X. Wang, X. Liang, J. M. F. Moura, and M. M. Veloso. Teaching robotsto predict human motion. 2018 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 562–567, 2018.11[50] J. Laplaza, A. Pumarola, F. Moreno-Noguer, and A. Sanfeliu. Attention deep learning basedmodel for predicting the 3d human body pose using the robot human handover phases. In2021 30th IEEE International Conference on Robot & Human Interactive Communication(RO-MAN) , pages 161–166. IEEE, 2021.[51] J. Zhang, H. Liu, Q. Chang, L. Wang, and R. X. Gao. Recurrent neural network for motiontrajectory prediction in human-robot collaborative assembly. CIRP annals , 69(1):9–12, 2020.[52] J. Laplaza, F. Moreno-Noguer, and A. Sanfeliu. Context attention: Human motion predictionusing context information and deep learning attention models. In ROBOT2022: Fifth IberianRobotics Conference: Advances in Robotics, Volume 1 , pages 102–112. Springer, 2022.[53] Q. Li, G. Chalvatzaki, J. Peters, and Y . Wang. Directed acyclic graph neural network for humanmotion prediction. 2021 IEEE International Conference on Robotics and Automation (ICRA) ,pages 3197–3204, 2021. URL https://api.semanticscholar.org/CorpusID:239039533 .[54] M. Faroni, M. Beschi, and N. Pedrocchi. An mpc framework for online motion planning inhuman-robot collaborative tasks. In 2019 24th IEEE International Conference on EmergingTechnologies and Factory Automation (ETFA) , pages 1555–1558. IEEE, 2019.[55] M. V . Minniti, R. Grandia, K. F ̈ah, F. Farshidian, and M. Hutter. Model predictive robot-environment interaction control for mobile manipulation tasks. In 2021 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 1651–1657. IEEE, 2021.[56] W. Yang, B. Sundaralingam, C. Paxton, I. Akinola, Y .-W. Chao, M. Cakmak, and D. Fox.Model predictive control for fluid human-to-robot handovers. In 2022 International Confer-ence on Robotics and Automation (ICRA) , pages 6956–6962. IEEE, 2022.[57] E. A. Sisbot and R. Alami. A human-aware manipulation planner. IEEE Transactions onRobotics , 28(5):1045–1057, 2012.[58] P. A. Lasota, G. F. Rossano, and J. A. Shah. Toward safe close-proximity human-robot inter-action with standard industrial robots. In 2014 IEEE International Conference on AutomationScience and Engineering (CASE) , pages 339–344. IEEE, 2014.[59] H. Liu and L. Wang. Collision-free human-robot collaboration based on context awareness.Robotics and Computer-Integrated Manufacturing , 67:101997, 2021.[60] S. Scheele, P. Howell, and H. Ravichandar. Fast anticipatory motion planning for close-proximity human-robot interaction. arXiv preprint arXiv:2305.11978 , 2023.[61] H. Ling, G. Liu, L. Zhu, B. Huang, F. Lu, H. Wu, G. Tian, and Z. Ji. Motion planning combineshuman motion prediction for human-robot cooperation. In 2022 12th International Conferenceon CYBER Technology in Automation, Control, and Intelligent Systems (CYBER) , pages 672–677. IEEE, 2022.[62] V . Unhelkar, P. A. Lasota, Q. Tyroller, R.-D. Buhai, L. Marceau, B. Deml, and J. A. 
Shah.Human-aware robotic assistant for collaborative assembly: Integrating human motion predic-tion with planning in time. IEEE Robotics and Automation Letters , 3:2394–2401, 2018.[63] J. Mainprice, R. Hayne, and D. Berenson. Predicting human reaching motion in collabora-tive tasks using inverse optimal control and iterative re-planning. 2015 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 885–892, 2015.[64] J. Mainprice, R. Hayne, and D. Berenson. Goal set inverse optimal control and iterative re-planning for predicting human reaching motions in shared workspaces. IEEE Transactions onRobotics , 32(4):897–908, 2016.[65] S. He, W. Zhao, C. Hu, Y . Zhu, and C. Liu. A hierarchical long short term safety frame-work for efficient robot manipulation under uncertainty. Robotics and Computer-IntegratedManufacturing , 82:102522, 2023. ISSN 0736-5845.12[66] O. S. Oguz, W. Rampeltshammer, S. Paillan, and D. Wollherr. An ontology for human-humaninteractions and learning interaction behavior policies. ACM Transactions on Human-RobotInteraction (THRI) , 8(3):1–26, 2019.[67] V . Prasad, D. Koert, R. M. Stock-Homburg, J. Peters, and G. Chalvatzaki. Mild: Multi-modal interactive latent dynamics for learning human-robot interaction. 2022 IEEE-RAS 21stInternational Conference on Humanoid Robots (Humanoids) , pages 472–479, 2022. URLhttps://api.semanticscholar.org/CorpusID:253098854 .[68] B. D. Ziebart, A. L. Maas, J. A. Bagnell, A. K. Dey, et al. Maximum entropy inverse reinforce-ment learning. In Aaai , volume 8, pages 1433–1438. Chicago, IL, USA, 2008.[69] G. Swamy, S. Choudhury, J. A. Bagnell, and S. Wu. Of moments and matching: A game-theoretic framework for closing the imitation gap. In International Conference on MachineLearning , pages 10022–10032. PMLR, 2021.[70] M. Bhardwaj, B. Sundaralingam, A. Mousavian, N. D. Ratliff, D. Fox, F. Ramos, and B. Boots.Fast joint space model-predictive control for reactive manipulation. In Conference on RobotLearning , 2021.[71] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou.Information theoretic mpc for model-based reinforcement learning. 2017 IEEE InternationalConference on Robotics and Automation (ICRA) , pages 1714–1721, 2017.13A AppendixWe run additional experiments, especially focusing on the importance of wrist prediction. We alsoprovide the proof for lemma 1 and include additional details about the MPC planner, the collabora-tive manipulation tasks, our dataset, and model implementation.A.1 Additional BaselinesWe report forecasting metrics across AMASS and CoMaD datasets in Table 2 and planning metrics,including more baselines in Table 3. Section 5.1 explains the different model implementations.A.2 Focusing on Wrist PredictionsIn this section, we experiment with different wrist weights for M ANICAST-W models. Further, wecompare with a variant of the F INETUNED that upweights wrist dimensions, F INETUNED -W (MLE+ Wrist Weighting).Figure 7: (Reactive Stirring) Forecasting Errors (with error bars) for different wrist weights.Observing the graphs in Fig 7, we note that all M ANICAST-W models that upsample transition datahave lower all joints and wrists ADE in the transition periods than the F INETUNED -W model thatupweights wrist errors on the MLE loss. We observe a trade-off exists between all joints’ and wrists’forecasting errors. An increase in the wrist weight in M ANICAST-W reduces prediction errors onthe wrist joints, but at the same time, the prediction errors on all joints increase. 
Furthermore,the decrease in wrist errors eventually plateaus around a wrist weight of 5, which justifies thishyperparameter choice for the M ANICAST-W model presented in the main paper.A.3 Fitting a Typical Pose to a Wrist Only ForecastWe consider the simpler problem of only predicting the wrist joint position in the Reactive StirringTask. We still utilize the entire upper body history as input to the forecasting model which wename W RIST ONLY. We construct the rest of the upper body by assuming the upper body at the lastobservable timestep remains still in the future. We consider a variant of the model that upsamplestransition points, naming it W RIST ONLY-T.Figure 8: (Reactive Stirring) Comparing Forecasting Errors with T YPICAL POSE modelIn Fig 8, we observe that wrist forecasting errors for the W RIST ONLY and W RIST ONLY-T are sim-ilar to the F INETUNED -W and M ANICAST-T models that also predict the rest of the human upperbody. We will still find that upsampling transition data points helps reduce the forecasting error.14Metrics (mm)# BASE SCRATCH FINETUNED MANICAST-T M ANICAST MANICAST-WAMASSAll Joints ADE 63.0 (0:1)181.2 (0:2)99.0 (0:1) 103.5 (0:1)103.5 (0:1) 99.1 (0:1)All Joints FDE 92.1 (0:2)196.7 (0:2)136.8 (0:2)141.6 (0:2)141.5 (0:2) 135.3 (0:2)Wrists ADE 103.9 (0:2)263.0 (0:3)157.5 (0:3)160.0 (0:3)157.2 (0:3) 156.0 (0:3)Wrists FDE 154.8 (0:4)275.4 (0:4)218.6 (0:4)219.7 (0:4)212.8 (0:4) 214.4 (0:4)REACTIVE STIRAll Joints ADE 67.7 ( 1:0) 67.4 (0:9) 44.8 (0:7) 50.6 (0:6) 45.1 (0:6) 45.2 (0:6)All Joints FDE 110.8 ( 2:0)102.1 (1:7)81.0 (1:6) 91.3 (1:5) 80.5 (1:5) 81.7 (1:5)Wrists ADE 94.8 ( 1:4) 93.6 (1:3) 64.7 (1:1) 75.9 (1:0) 67.1 (1:0) 62.9 (1:0)Wrists FDE 154.2 ( 2:8)137.9 (2:3)113.2 (2:3)131.7 (2:1)115.0 (2:2) 110.2 (2:1)T-All Joints ADE 92.2( 2:1) 84.7 (1:7) 63.3 (1:5) 60.7 (1:3) 58.4 (1:3) 60.7 (1:3)T-All Joints FDE 163.8 ( 4:1)143.1 (3:5)124.9 (3:2)120.3 (3:0)115.4 (3:0) 120.1 (3:0)T-Wrists ADE 134.3 ( 3:3)129.2 (2:7)94.9 (2:4) 91.1 (2:1) 89.5 (2:2) 85.3 (2:1)T-Wrists FDE 233.8 ( 6:0)203.5 (4:9)179.5 (4:9)171.3 (4:4)168.9 (4:6) 163.9 (4:5)HANDOVERAll Joints ADE 45.2 ( 0:3) 49.9 (0:3) 31.8 (0:3) 34.1 (0:3) 32.5 (0:3) 32.4 (0:3)All Joints FDE 70.4 ( 0:7) 68.3 (0:6) 54.6 (0:6) 57.5 (0:6) 54.7 (0:6) 54.8 (0:6)Wrists ADE 62.0 ( 0:5) 68.4 (0:5) 46.8 (0:5) 50.7 (0:4) 48.2 (0:4) 45.8 (0:4)Wrists FDE 100.1 ( 1:0)92.6 (0:8) 79.0 (0:9) 84.5 (0:8) 79.4 (0:8) 77.2 (0:8)T-All Joints ADE 55.3 ( 0:7) 61.7 (0:6) 42.1 (0:6) 41.3 (0:5) 41.0 (0:5) 40.7 (0:5)T-All Joints FDE 91.9 ( 1:3) 87.2 (1:0) 76.9 (1:1) 74.8 (1:1) 73.7 (1:0) 73.0 (1:0)T-Wrists ADE 84.8 ( 1:2) 91.2 (1:0) 67.6 (1:0) 64.8 (0:9) 65.8 (0:9) 62.6 (0:9)T-Wrists FDE 146.7 ( 2:1)130.7 (1:6)124.5 (1:9)118.3 (1:7)118.7 (1:7) 113.4 (1:7)TABLE SETAll Joints ADE 80.4 ( 1:3) 90.2 (1:3) 58.6 (1:0) 57.6 (1:3) 53.6 (0:7) 56.1 (1:0)All Joints FDE 137.7 ( 2:1)134.3 (1:9)110.5 (1:7)106.7 (2:0)101.7 (1:5) 105.1 (1:6)Wrists ADE 107.3 ( 1:5)114.8 (1:3)82.5 (1:2) 79.6 (1:4) 76.7 (1:0) 74.8 (1:0)Wrists FDE 185.5 ( 2:6)165.0 (2:2)151.1 (2:2)143.1 (2:3)140.9 (2:1) 138.8 (2:0)Table 2: Average forecast metrics (in mm) for all models across all datasets.Model ! 
(Columns are forecast models, grouped as GROUND TRUTH: CUR, FUT; BASELINES: CVM, WORST; LEARNING-BASED: BASE, SCRATCH, FINETUNED, MANICAST-T, MANICAST, MANICAST-W.)

Metric | CUR | FUT | CVM | WORST | BASE | SCRATCH | FINETUNED | MANICAST-T | MANICAST | MANICAST-W
Reactive Stirring:
  Stop Time (ms)      | 0 (0) | 1000 (0) | 367.8 (50.9) | 1000 | -138.9 (18.8) | -237.5 (54.4) | 203.3 (22.3) | 271.1 (25.6) | 246.7 (25.8) | 290.0 (29.9)
  Restart Time (ms)   | 0 (0) | 1000 (0) | 235.6 (18.2) | 0 (0) | 256.7 (45.6) | 235.0 (16.8) | 441.1 (30.8) | 454.4 (36.7) | 455.6 (33.5) | 496.7 (27.9)
  FDR                 | 0% | 0% | 67% | 100% | 11% | 11% | 0% | 11% | 0% | 11%
Handover:
  Goal Detection (ms) | 0 (0) | 1000 (0) | 246.0 (146.0) | - | -127.0 (16.6) | -61.6 (23.6) | 189.3 (47.8) | 473.3 (123.7) | 450.0 (124.8) | 488.4 (133.4)
  Correct Goal Rate   | 100% | 100% | 20% | 0% | 90% | 80% | 100% | 100% | 100% | 100%
  Path Length (mm)    | 459.0 (37.0) | 381.1 (39.8) | 485.0 (85.0) | - | 488.9 (56.2) | 466.2 (33.2) | 432.0 (39.0) | 428 (42.0) | 404.0 (45.0) | 436.0 (49.4)
  Time to Goal (s)    | 4.59 (0.37) | 2.87 (0.24) | 3.77 (0.86) | - | 4.32 (0.48) | 4.60 (0.68) | 3.87 (0.28) | 3.91 (0.32) | 3.96 (0.54) | 4.17 (0.45)

Table 3: We integrate forecasts of different models into STORM for the reactive stirring and object handover tasks. The standard error for each planning metric is shown inside parentheses.

WRIST ONLY-T has lower wrist ADE than WRIST ONLY during transition windows. As expected, All Joints ADE for these models is significantly higher than for FINETUNED and MANICAST. While this might be fine for the reactive stirring and handover tasks, we cannot use this model for the collaborative table-setting task. Such an approach would generally be limited to tasks solely relying on wrist forecasting.

A.4 Proof for Lemma 1

Proof. We first analyze the performance of a model trained on $P(\phi)$. Assume that a model trained with the MLE loss on $P(\phi)$ bounds the average L1 distance between the ground-truth distribution $P(x_H \mid \phi)$ and the learned distribution $P_\theta(x_H \mid \phi)$ on $P(\phi)$ by $\epsilon$, i.e.
$$\sum_\phi P(\phi) \sum_{x_H} \left| P(x_H \mid \phi) - P_\theta(x_H \mid \phi) \right| \leq \epsilon.$$
Then, for any given robot trajectory $x_R$, the final loss $\ell(\theta)$, i.e. the expected cost difference of $x_R$ under the ground-truth distribution and the forecast model, can be expressed as:
$$
\begin{aligned}
\ell(\theta) &= \sum_\phi P(\phi)\left(\sum_{x_H} C(x_R \mid x_H)\, P(x_H \mid \phi) - \sum_{x_H} C(x_R \mid x_H)\, P_\theta(x_H \mid \phi)\right)\\
&= \sum_\phi P(\phi)\left(\sum_{x_H} C(x_R \mid x_H)\left(P(x_H \mid \phi) - P_\theta(x_H \mid \phi)\right)\right)\\
&\leq \sum_\phi P(\phi)\left(\left\|C(x_R \mid x_H, \phi)\right\|_\infty \sum_{x_H} \left|P(x_H \mid \phi) - P_\theta(x_H \mid \phi)\right|\right) \quad \text{(H\"older's ineq.)}\\
&\leq \sum_\phi P(\phi)\left(C_{max}(\phi) \sum_{x_H} \left|P(x_H \mid \phi) - P_\theta(x_H \mid \phi)\right|\right)\\
&\leq \max_\phi C_{max}(\phi) \sum_\phi P(\phi)\left(\sum_{x_H} \left|P(x_H \mid \phi) - P_\theta(x_H \mid \phi)\right|\right)\\
&\leq C_{max}\,\epsilon,
\end{aligned}
$$
where $C_{max}(\phi) = \|C(x_R \mid x_H, \phi)\|_\infty$ is the maximum cost of a robot trajectory given a context, and $C_{max} = \max_\phi C_{max}(\phi)$ is the maximum cost across all contexts. $C_{max}$ can be high in general, resulting in an inflated bound for the model above.

Now assume we train a model to minimize the loss on the new distribution $Q(\phi) = 0.5P(\phi) + 0.5P_T(\phi)$ and get the following bound:
$$\sum_\phi Q(\phi) \sum_{x_H} \left|P(x_H \mid \phi) - P_\theta(x_H \mid \phi)\right| \leq \epsilon.$$
Then the loss can be expressed as:
$$
\begin{aligned}
\ell(\theta) &= \sum_\phi P(\phi)\left(\sum_{x_H} C(x_R \mid x_H)\, P(x_H \mid \phi) - \sum_{x_H} C(x_R \mid x_H)\, P_\theta(x_H \mid \phi)\right)\\
&\leq \sum_\phi P(\phi)\left(C_{max}(\phi) \sum_{x_H} \left|P(x_H \mid \phi) - P_\theta(x_H \mid \phi)\right|\right)\\
&\leq \sum_\phi Q(\phi)\,\frac{P(\phi)\, C_{max}(\phi)}{Q(\phi)} \sum_{x_H} \left|P(x_H \mid \phi) - P_\theta(x_H \mid \phi)\right|\\
&\leq \max_\phi \frac{P(\phi)\, C_{max}(\phi)}{Q(\phi)} \sum_\phi Q(\phi) \sum_{x_H} \left|P(x_H \mid \phi) - P_\theta(x_H \mid \phi)\right|\\
&\leq \max_\phi \frac{P(\phi)\, C_{max}(\phi)}{Q(\phi)}\,\epsilon.
\end{aligned}
$$
For $Q(\phi) = 0.5P(\phi) + 0.5P_T(\phi)$, we need to bound the ratio
$$\max_\phi \frac{P(\phi)\, C_{max}(\phi)}{Q(\phi)} = \max_\phi \frac{P(\phi)\, C_{max}(\phi)}{0.5P(\phi) + 0.5P_T(\phi)}.$$
There are two cases to consider.

Case 1: $C_{max}(\phi) \leq \delta$. Then $P_T(\phi) = 0$, and the ratio is bounded by
$$\frac{P(\phi)\, C_{max}(\phi)}{0.5P(\phi) + 0.5P_T(\phi)} \leq \frac{P(\phi)\,\delta}{0.5P(\phi)} \leq 2\delta.$$

Case 2: $C_{max}(\phi) \geq \delta$. Then the ratio is maximized when $C_{max}(\phi)$ is maximized:
$$\frac{P(\phi)\, C_{max}(\phi)}{0.5P(\phi) + 0.5P_T(\phi)} \leq \frac{P(\phi)\, C_{max}(\phi)}{0.5P_T(\phi)} \leq \frac{P(\phi)\, C_{max} \sum_{\phi'} P(\phi')\,\mathbb{I}(C_{max}(\phi') \geq \delta)}{0.5P(\phi)} \leq 2\,C_{max}\,\mathbb{E}_{P(\phi)}\left[\mathbb{I}(C_{max}(\phi) \geq \delta)\right].$$

Combining these cases, we can bound the ratio as
$$\max_\phi \frac{P(\phi)\, C_{max}(\phi)}{Q(\phi)} \leq 2\max\left(\delta,\; C_{max}\,\mathbb{E}_{P(\phi)}\left[\mathbb{I}(C_{max}(\phi) \geq \delta)\right]\right).$$
The ratio above can be no worse than $C_{max}$ by a factor of 2, and can be much smaller based on the choice of $\delta$.
Intuitively, setting $\delta$ very high makes the transition distribution $P_T(\phi)$ peaky, driving down the second term, while making $\delta$ small makes the transition distribution close to the original distribution, driving down the first term.

A.5 MPC Planner Details

We use the open-sourced STORM codebase (https://github.com/NVlabs/storm) to implement sampling-based model-predictive control on a 7-DOF Franka Research 3 robot arm. At every timestep, the planner samples robot trajectories and evaluates the cost function with MANICAST forecasts. The robot executes the first action from the lowest-cost plan and updates its sampling distribution for the next timestep using the MPPI [71] algorithm. The manipulation components of the cost function that are independent of the human remain unchanged. We additionally introduce a collaborative task-specific cost component $T(x_R \mid \hat{x}_H)$ that depends on the future human trajectory. The cost function optimized by the planner is laid out in Eq. 3. Self-collisions are checked by training the jointNERF model introduced by Bhardwaj et al. [70].
$$C(x_R \mid \hat{x}_H) = \alpha_s \hat{C}_{stop}(x_R) + \alpha_j \hat{C}_{joint}(x_R) + \alpha_m \hat{C}_{manip}(x_R) + \alpha_c \hat{C}_{coll}(x_R) + \alpha_a T(x_R \mid \hat{x}_H) \quad (3)$$

A.6 Tasks for Collaborative Manipulation

We describe three collaborative manipulation tasks that focus on household cooking activities.

Reactive Stirring: In this cooking task, the human and robot share a common workspace. While the robot arm is performing a stirring motion, the human may add vegetables into the pot. The robot arm preemptively predicts the arrival of the human arm and retracts to give the human arm sufficient space to reach into the pot. The task-specific component of the cost function is:
$$T(x_R \mid \hat{x}_H) = \sum_{t=1}^{T} \mathbb{I}\left[D(\hat{s}^H_t, s_{pot}) \leq \epsilon\right]\left\|s^R_t - s_{rest}\right\| + \mathbb{I}\left[D(\hat{s}^H_t, s_{pot}) > \epsilon\right]\left\|s^R_t - x^{stir}_t\right\| \quad (4)$$
The cost function checks whether the human's position ($\hat{s}^H_t$) is close to the pot's position ($s_{pot}$) and decides whether to move to a pre-defined resting position ($s_{rest}$) or to continue stirring in a circular trajectory ($x^{stir}$) starting from the current state of the robot ($s^R_0$). A cost-aware forecasting model for this task should be able to predict the arrival and departure of the human ahead of time.

Human-Robot Handovers: Handovers of objects are an important task in the kitchen. When a human is handing over an object, a robot arm should move towards the intended handover location. The task-specific component can be described as:
$$T(x_R \mid \hat{x}_H) = \sum_{t=1}^{T} \mathbb{I}\left[\mathrm{IsObjectInHand}(s^H_0)\right]\,\hat{C}_{pose}\!\left(X^{ee}_t,\; \mathrm{GraspPose}\!\left(X^{ee}_0, \hat{X}^{H_{wrist}}_T\right)\right) \quad (5)$$
Similar to prior work [56], the robot motion is initiated when the human arm has picked up the handover object. The robot's end-effector ($X^{ee}_t$) moves towards a grasp location that is computed using the final wrist position of the human ($\hat{X}^{H_{wrist}}_T$). The orientation of the grasp pose is calculated by drawing a straight line from the current end-effector position ($X^{ee}_t$) to the grasp location.

Collaborative Table Setting: Movements on top of a table in the presence of a human in the workspace are a common collaborative manipulation task. Motion planners should not only avoid collision at the current timestep but also be able to forecast future motion and preemptively avoid collisions with the human body. The cost function is simply given by:
$$T(x_R \mid \hat{x}_H) = \sum_{t=1}^{T} \hat{C}_{pose}\!\left(X^{ee}_t, X^G_t\right) + \beta\, \hat{C}_{coll}\!\left(s^R_t, s^H_t\right) \quad (6)$$
Here, $\beta$ is the relative weight given to the collision-avoidance component compared to the goal-reaching component.
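As a rough illustration of how an Eq. (6)-style cost scores a batch of sampled plans against a ManiCast forecast, here is a minimal NumPy sketch. The function names, array shapes, the weight beta, and the sphere radius are illustrative assumptions, and the point-to-joint penalty is only a toy stand-in for the actual collision model used on the robot (described next).

```python
import numpy as np

def collision_sphere_penalty(robot_states, human_forecast, radius=0.15):
    """Toy stand-in for C_coll: penalize robot points within `radius` of any forecasted joint."""
    ee = robot_states[..., :3]                                                  # (N, T, 3)
    d = np.linalg.norm(ee[:, :, None, :] - human_forecast[None], axis=-1)       # (N, T, J)
    return np.maximum(radius - d, 0.0).sum(axis=(1, 2))                         # (N,)

def table_setting_cost(ee_plans, goal_traj, robot_states, human_forecast, beta=10.0):
    """
    Eq. (6)-style score for N sampled plans:
      ee_plans:       (N, T, 3) candidate end-effector positions X^ee_t
      goal_traj:      (T, 3)    goal poses X^G_t
      robot_states:   (N, T, D) sampled robot states s^R_t (first 3 dims taken as position here)
      human_forecast: (T, J, 3) forecasted human joints s^H_t from the forecaster
    """
    pose_cost = np.linalg.norm(ee_plans - goal_traj, axis=-1).sum(axis=1)       # (N,)
    coll_cost = collision_sphere_penalty(robot_states, human_forecast)          # (N,)
    return pose_cost + beta * coll_cost                                         # lower is better
```

Inside an MPPI-style loop, such a term would be added to the manipulation costs of Eq. (3), and the first action of the lowest-cost plan executed before replanning.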
Collisions are checked between the human body and robot arm by representingthem as a pack of sphere and cuboid rigid bodies.A.7 Collaborative Manipulation Dataset (CoMaD)Similar to a real-world collaborative activity, in much of the episode, both humans perform theirrespective cooking tasks in isolation. Episodes of reactive stirring and handovers contain 3-5 close-proximity interactions, each of which are short (4-5 seconds) compared to the length of the overallepisode (30-60 seconds). Often, these interactions are initiated by verbal requests or subtle facialgestures. Collaborative table setting consists almost entirely of close-proximity fast human armmovements. We collect an RGB visual view of the scene containing audio along with motion capturedata of both humans’ upper bodies. We also annotate transition windows for interactions in eachepisode.A.8 Model Implementational DetailsWe train our forecasting models using the STS-GCN [46] architecture on an upper body skele-ton consisting of 7 joints (Wrists, Elbows, Shoulders, and Upper Back). The last 0.4 seconds (10timesteps) of motion is input to the models and the next 1 second (25 timesteps) of motion is pre-dicted. We pretrain for 50 epochs on AMASS (1 hour) and finetune on CoMaD for 50 epochs (5minutes). We divided the episodes in CoMaD into train, validation, and test sets (8:1:1).18 |
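To make the input/output interface of A.8 concrete, the sketch below shows the tensor shapes only: 0.4 s (10 frames) of 7-joint upper-body history in, 1 s (25 frames) of future motion out. The small MLP is merely a placeholder for the STS-GCN [46] architecture actually used, and all module names are hypothetical.

```python
import torch
import torch.nn as nn

class UpperBodyForecaster(nn.Module):
    """Placeholder forecaster: maps (B, 10, 7, 3) histories to (B, 25, 7, 3) futures."""
    def __init__(self, t_in=10, t_out=25, joints=7, dims=3, hidden=256):
        super().__init__()
        self.t_out, self.joints, self.dims = t_out, joints, dims
        self.net = nn.Sequential(
            nn.Flatten(),                                   # (B, 10*7*3)
            nn.Linear(t_in * joints * dims, hidden),
            nn.ReLU(),
            nn.Linear(hidden, t_out * joints * dims),
        )

    def forward(self, history):
        out = self.net(history)
        return out.view(-1, self.t_out, self.joints, self.dims)

# pretrain on AMASS windows, then finetune on CoMaD with the Eq. (2) objective:
model = UpperBodyForecaster()
future = model(torch.zeros(8, 10, 7, 3))                    # -> torch.Size([8, 25, 7, 3])
```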
qVc7NWYTRZ6 | An Unbiased Look at Datasets for Visuo-MotorPre-TrainingSudeep DasariCMUMohan Kumar SriramaCMUUnnat JainyFAIR at MetaAbhinav GuptayCMUhttps://data4robotics.github.ioAbstract: Visual representation learning hold great promise for robotics, but isseverely hampered by the scarcity and homogeneity of robotics datasets. Recentworks address this problem by pre-training visual representations on large-scalebut out-of-domain data (e.g., videos of egocentric interactions) and then transfer-ring them to target robotics tasks. While the field is heavily focused on developingbetter pre-training algorithms, we find that dataset choice is just as important tothis paradigm’s success. After all, the representation can only learn the structuresor priors present in the pre-training dataset. To this end, we flip the focus on algo-rithms, and instead conduct a dataset centric analysis of robotic pre-training. Ourfindings call into question some common wisdom in the field. We observe thattraditional vision datasets (like ImageNet, Kinetics and 100 Days of Hands) aresurprisingly competitive options for visuo-motor representation learning, and thatthe pre-training dataset’s image distribution matters more than its size. Finally, weshow that common simulation benchmarks are not a reliable proxy for real worldperformance and that simple regularization strategies can dramatically improvereal world policy learning.Keywords: Visual Representation Learning, Datasets, Robotic Manipulation1 Introduction?What Data Should RobotsBe Pre-Trained On?Figure 1: Due to the scarcity of di-verse, large-scale robotic data, visuo-motor representations – which are nec-essary to solve tasks (e.g., put breadin toaster) from visual inputs – mustbe learned from external datasets [1].But which datasets contain the best pri-ors for robotics? Surprisingly, we findthat simply pre-training on standard vi-sion datasets (e.g., ImageNet) can out-perform SOTA baseline representationsfrom the robot learning community, de-spite using roughly 5x less data.Consider a robot that must perform a manipulation task inan unstructured environment: e.g., toasting a bread slice.To accomplish this, the robot must locate the target ob-jects (bread, toaster, etc.) in the scene and reason abouttheir physical properties (e.g., Center-of-Mass, etc.), us-ing RGB camera inputs. However, the real world hasinnumerable objects, lightning conditions, and environ-ments that a robot may run into. This incredible range ofscenarios makes hand-engineering a vision pipeline im-possible. Thankfully, the computer vision and representa-tion learning communities have highlighted a successfulparadigm to overcome this challenge: learn end-to-endneural representations directly from data [2, 3], which canthen be used for downstream vision tasks. We seek to dothe same for policy learning.But what data should these representations be trained on?In an ideal world, we would leverage task-specific roboticdata (i.e., trajectories) to jointly learn a visual representa-tion and a controller, using end-to-end reinforcement orimitation learning [4, 5, 6, 7]. Unfortunately, learning vi-sual representations in conjunction with action policies isfrequently intractable [8, 9] or requires a large amount ofdata, that may be too expensive to collect on real hard-ware. 
Furthermore, the homogeneity of robotics dataFirst/Corresponding Author: sdasari@cs.cmu.eduyDenotes equal advising7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.(collected in single lab) hinders generalization to novel scenarios, which is the motivation for learn-ing in the first place! To overcome this, the field has trended towards pre-training visual represen-tations on large-scale, unlabeled vision datasets, using self-supervised learning algorithms – e.g.,Masked Auto-Encoders [10] (MAEs), contrastive learning [11, 12, 13], etc. These pre-trained rep-resentations decouple policy learning from perception, allowing us to learn behaviors with far lessrobotic data [1, 14, 15, 16, 17].The key insight is that self-supervised visual pre-training can learn useful priors from out-of-domaindata that will be useful for robotics. Recent work in robot manipulation [14, 1, 18, 15] has investi-gated different neural architectures and algorithms for learning these priors. However, there is oneimportant commonality – all these methods train (primarily) on the same dataset, Ego4D [19]. In-deed, this seems like an intuitive choice, because: (1) Ego4D contains first-person camera views,which are analogous to the robot’s camera; (2) Ego4D focuses on human-object interaction – i.e., itis aligned with the downstream manipulation task; and (3) the dataset offers thousands of hours ofvideo frames to train on. But is this intuitive bias empirically tested?In this paper, we empirically investigate these research questions from the perspective of roboticmanipulation tasks. Specifically, we pre-train a total of 15 representations on various datasets usingMAEs [10], a state-of-the-art (SOTA) self-supervised learning algorithm. We then fine-tune each ofthese representations to solve various manipulation tasks in simulated and real settings via BehaviorCloning (w/50demonstrations). Our experiments reveal that many intuitive biases and commonassumptions in our field need to be revisited . Surprisingly, we find that standard image datasetsbased on curated internet data (e.g., ImageNet [2], Kinetics [20], 100 Days of Hands [21]) can learnstronger visuo-motor representations than egocentric human interaction data (of Ego4D)! In fact,pre-training on the ImageNet compares favorably against SOTA (visuo-motor) baseline representa-tions, which were trained on far more data (e.g. MVP [22] was trained on 2M+ frames) using theexact same algorithm and hyperparameters. This leads us to an importance insight – the pre-trainingimage distribution is far more crucial for effective representation learning than naively increasingthe number of images to train on. Building on this, we investigate various schemes for scalingpre-training dataset size while creating a broader image distribution. Our best model improves per-formance by 30% (v.s. SOTA baselines [15, 22]) on real world robotics tasks and is the direct resultof this search. Finally, we show how simple implementation details (like using dropout [23] duringevaluation) can have a significant impact on policy performance, and how these trends are poorlycaptured in simulation studies. Our project code and models are released publicly, and we encouragethe reader to view our website for added context3.2 Related WorksLearning Actionable Representations The robotics field has long focused on learning actionablerepresentations, which focus on task relevant details and are maximally predictive of the actionsthe robot should take. 
These representations can be learned end-to-end as part of policy learning,using data collected by expert demonstrations (e.g., Imitation Learning [24]) or the robot itself (e.g.,Reinforcement Learning [4]). This paradigm has been successfully applied to a wide range of taskslike in-the-wild grasping [25], bin-picking [26, 27, 28], insertion [5], pick-place [29], and evenself-driving [30, 31, 8]. Prior work also added tertiary optimization objectives (e.g. observationreconstruction [32], inverse modeling [33], dynamics modeling [34], etc.) on top of policy learning,in order to make representation learning more efficient. However, all of these techniques share thesame flaw: they require a wealth of task-specific robotic data for learning representations.Self-Supervised Visual Pre-Training Thus, the robotics community has trended towards pre-training representations on out-of-domain, vision datasets, (which are both larger and more diverse)and transferring them to robotics tasks. Prior works [1, 18, 22, 14, 15] all seem to follow a commonformula: representations are trained using SOTA self-supervised vision algorithms (e.g., contrastivelearning [11, 13, 12], masked image modeling [10, 35], etc.) on frames (primarily) sampled from theEgo4D dataset [19]. These representations are then evaluated mostly in sim [36], using a commonpolicy learning framework [1, 15]. These choices may seem reasonable (see Sec. 1), but there issurprisingly little evidence backing them. Importantly, R3M [1] and MVP [14] compared onlywith supervised ImageNet representations but not apples-to-apples with self-supervised ImageNetrepresentations [10] . Our investigation fills in these critical gaps. We find that representations3https://data4robotics.github.io2KineticsRoboNet100 DoHEgo4DImageNetDatasetsPre-TrainingEvaluationBehavior Cloning TasksMasked Auto-EncoderEncodersDecoderEncoderSimRealBlock StackingPouringToastingFranka KitchenMeta-WorldRoboMimicFigure 2: Our investigation considers 5 standard datasets from both the computer vision and robotics:ImageNet [2], 100 Days of Hands [21] (DoH), Ego4D [19], Kinetics [20], and RoboNet [37] (left).For each dataset, we pre-train a visual representation on it using the Masked Auto-Encoders (MAE)algorithm [10]. This masked image modeling method works by randomly masking patches in theimage, and training an encoder/decoder to reconstruct them (center). Once pre-training is concluded,we fine-tune the representation to various robotics tasks, both in sim and in the real world (right).learned on standard image datasets (like ImageNet) are surprisingly applicable to the robotics space,and that common evaluation/experimental techniques can give a misleading sense of progress.3 Experimental MethodsOur investigation follows a simple formula (see Fig. 2). Step 1: We pre-train visual representationson various datasets using the same self-supervised algorithm (masked image modeling). Step 2:We fine-tune each representation for downstream manipulation tasks in both simulated and real(via behavior cloning). For evaluation of representation quality, we rate based on performance ondownstream tasks, with emphasis on performance in the real world.Visual Pre-Training This project requires a scalable representation learning algorithm that canseamlessly operate on heterogeneous data sources, with the highest possible performance. 
We choseto use Masked Auto-Encoders (MAEs), a self-supervised representation learning algorithm withSOTA performance on various vision [10, 35, 38, 39, 40, 41, 42], multimodal [43, 44], and roboticstasks [15, 22]. The MAE encoder ( E) is a Vision Transformer (ViT) network [45] that produces anembedding vector to represent an input image I: i.e.E(I)2R768. During training Eis taskedto represent I, using only 25% random patches sampled from the image. Then a decoder network(D) attempts to reconstruct Iin its entirety (see Fig. 1, middle). Both EandDare trained end-to-end, minimizing the MSE reconstruction objective: jjD(E(I))Ijj2. During training, the visualencoder learns to reason spatially: i.e., it learns how patches relate to each other, and how they cancome together to form the final image. Thus, Elearns a highly efficient image descriptor that can betransferred to downstream tasks without any algorithmic changes (e.g. no masking needed duringtransfer). The MAE hyperparameters are described in Appendix A. Note that they are directly copiedfrom the original MAE work by He et al. [10] and shared by prior works in robot learning [15, 22].Fine-Tuning w/ Behavior Cloning Pre-trained visual representations are fine-tuned to solvedownstream tasks of robotic manipulation. To this end, we adopt the paradigm of Learning fromDemonstration (LfD) [46, 47, 24, 48, 49, 50]. Our goal is to learn a policy that uses the givenobservation otto predict an optimal action distribution for the task: at(jot). Note that theactionsatare commands sent to the robot controller, while the observations consist of the currentimage and robot joint information: ot= [it;jt]. The policy must be learned given a set of expertdemonstrations (D=f1;:::;ng), where each demonstration i= [(o0;a0);:::; (oT;aT)]is atrajectory with optimal observation-action tuples (i.e. collected by a proficient agent).3is parameterized using a 2-layer network ( p), built atop the pre-trained encoder E. The forwardpass works as follows: first, the observation image is encoded E(it); thenjtis concatenated to theencoding and passed to the policy network – p(E(it);jt).ppredicts a policy distribution, whichin our case is a Gaussian Mixture Model [51, 52]. During test time, actions are sampled from thisdistribution and then executed on the robot. The entire policy network (both pandE) is fine-tunedend-to-end (using Adam [53] for 50Kiterations) to maximize the log probability of actions from theexpert demonstrations: minlog((atjE(it);jt)). This procedure was designed to closely matchprior work with two important modifications: we apply dropout to the policy network p, and weapply data augmentation to itbefore passing it to the encoder. Both of these deviations are validatedin our experiments (see Sec. 5). Please refer to Appendix B for the exact hyperparameters.Evaluation Procedure To evaluate a representation, we apply the above fine-tuning stack sepa-rately on 13 sim tasks and 3 real tasks. Each policy’s final checkpoint is evaluated on Ntest rolloutsfor every task, with novel initializations (e.g., test objects, new initial positions, etc.). All evaluationhyperparameters (e.g., demonstration set, number of test-time rollouts, initial positions, test objects,BC hyperparameters, etc.) are kept fixed within a task. 
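For reference, the masking-and-reconstruction recipe described above can be sketched in a few lines. Here `encoder`, `decoder`, and `patchify` are caller-supplied placeholders rather than the MAE reference implementation, and the loss follows the text's $||D(E(I)) - I||^2$ formulation.

```python
import torch

def random_visible_indices(num_patches, keep_ratio=0.25, device="cpu"):
    """Indices of the 25% of patches the encoder gets to see."""
    keep = int(num_patches * keep_ratio)
    return torch.randperm(num_patches, device=device)[:keep]

def mae_step(encoder, decoder, patchify, images):
    """One pre-training step: encode visible patches, reconstruct all patches, MSE loss."""
    patches = patchify(images)                              # (B, N, patch_dim)
    vis = random_visible_indices(patches.shape[1], device=images.device)
    latent = encoder(patches[:, vis], vis)                  # ViT encoder on visible tokens only
    recon = decoder(latent, vis, patches.shape[1])          # predict every patch, (B, N, patch_dim)
    # NOTE: the reference MAE computes this loss only on the masked patches; we follow the
    # whole-image MSE stated in the text for simplicity.
    return ((recon - patches) ** 2).mean()
```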
This allows for maximally fair evaluation.Simulation Tasks: Our simulated tasks set spans a set of 3 MuJoCo [36] environments – Meta-World [54], RoboSuite [51], Franka Kitchen [55] – that are frequently used by the robot learningcommunity, and the exact setups (e.g., task rewards/success criteria, camera positioning, object sets,demonstration trajectories, etc.) were directly taken from prior work [15, 1, 51] (fully documentedin Appendix C). As a result, our simulated results should be very accessible to the community.Real World Tasks: While simulation is a useful tool, there is a significant sim2real gap in ma-nipulation. Thus, we designed 3 distinct tasks for real world validation on a Franka Panda Robot(visualized in Fig. 2).(1)Block Stacking requires the robot to pick up the red block and place it on the green block. Thisis the simplest of three tasks as the robot only has to adapt to new object configurations during testtime. However, the robot still needs to precisely localize and grasp the (small) red block.(2)Pouring requires the robot to lift the cup and pour almonds in the target bowl. At test time,the cup and target bowls are both novel objects (unseen during training), and are placed in randompositions, requiring the robot to generalize to new visual inputs.(3)Toasting is our most challenging task, and it requires the robot to pick up the object, place itin the toaster, and then shut the toaster. At test time, we use a novel object and randomize boththe object’s initial pose and the toaster’s initial orientation. Toasting requires the robot to execute amulti-stage manipulation strategy, while also generalizing to new visual scenarios.Each of the three tasks use a shared action space: Cartesian velocity control; and a shared ob-servation space: proprioceptive inputs and 3rd person camera view (visualized in Fig. 3, left).We collectn= 50 tele-op demonstrations per task. Please refer to Appendix C for all other taskhyperparameters and our website for task videos: https://data4robotics.github.io.4 Probing Dataset BiasesIn our empirical study, we evaluate 5 widely-used datasets as pre-training candidates (seeFig. 2, left): ImageNet [2], Ego4D [19], 100 Days of Hands [21] (DoH), Kinetics [20], andRoboNet [37] (see Sec. 4.1 for descriptions). We apply our experimental methodology (from Sec. 3)on various sub-samplings/combinations of these datasets. First, we conduct single dataset pre-training: i.e., we evaluate a dataset’s performance in isolation to empirically determine which ismost suited for our diverse downstream manipulation tasks (Sec. 4.1). In our second suite of ex-periments, we analyze how well various combinations of the data perform (Sec. 4.2). Finally, weinvestigate dataset scaling for pre-training, and find that the pre-training image distributions mattermost (Sec. 4.3).4.1 Comparing Datasets Apples-to-ApplesBefore diving into the details, let’s take a step back and add context about the datasets we probe.(1)ImageNet (ImageNet-1K) contains 1000 train images for each of its 1000 classes. ImageNetis a popular and classical computer vision dataset, i.e., curated carefully from internet images. Thebroad image distribution may result in more expressive representations (as observed in purely visualtasks like classification). However, ImageNet is focus on centered, single-object, and high-qualityinternet images. 
As a result of this domain gap, many believe that ImageNet is an ill fit for robotics.44 Frames Sampled Randomly from ImageNet-1M4 Frames Sampled Randomly from Ego4D-1MObservationsfromTrainDemo(PouringTask)Robot Observations vs Pre-Train Image DistributionsFigure 3: Observations from our pouring task (left) are compared against random pre-training im-ages from ImageNet-1M/Ego4D-1M (right). Note that all the pre-train images are very differentfrom the evaluation task. Nonetheless, the curated, single-object images from ImageNet-1M yieldstronger visuo-motor representations than the Ego4D-1M frames do (see Table 1).Single Dataset Models (1M Images) BaselinesTask ImageNet Ego4D Kinetics 100 DoH RoboNet Scratch VC-1 [15] MVP [22]SimRoboSuite [51] 62% 61% 65% 52% 58% 2% 63% 51%MetaWorld [54] 90% 93% 87% 86% 74% 72% 70% 83%Franka Kitchen [55] 63% 68% 55% 62% 62% 40% 61% 61%Average (Sim) 72% 74% 69% 67% 65% 38% 65% 65%RealBlock Stacking 68%9:5% 64%9:5% 72%9:8% 55%9:2% 76%8:7% 0%0% 60%10% 52%10%Pouring 44%12% 19%9:0% 50%9:0% 19%12% 13%7:6% 0%0% 25%10% 13%7:6%Toasting 10%16% 0%0% 10%16% 40%17% 0%0% 0%0% 10%10% 10%10%Average (Real) 41%7:1% 28%6:9% 44%7:1% 38%6:9% 30%7:0% 0%0% 33%6:9% 25%6:6%Table 1: Comparing Datasets Apples-to-Apples. We compare pretrained representations, learnedon 1M images from five datasets. We report success rates after finetuning representations withBC, and the real world evaluations also include standard error (i.e., Success %Std. Err. %). Foradditional context, we benchmark SOTA baselines [15, 22] and a “Scratch” representation with nopretaining. We find that visual representations learned on standard vision datasets with internetimages and curation (e.g., ImageNet) provide surprisingly strong performance in the real world.(2)Ego4D is a modern, ego-centric, and in-the-wild video dataset, with 3.6K hrs of video collectedby humans performing daily tasks. It is conjectured that Ego4D is well suited for robot learning asit contains realistic images that a robot may observe in real-world environments. However, framescollected within the same video tend to look similar to each other, and the lack of curation mean thatsome object classes (e.g. blocks, cups, etc.) rarely appear.(3)DoH is focussed on human hands and contains curated YouTube videos of people manipulatingvarious household objects (e.g. in cooking video). The curation ensures that action classes arebalanced and that videos look distinct from each other. Furthermore, the focus on manipulation mayhelp the representations pick useful cues (e.g. where objects can be grasped). However, YouTubevideos look quite different from robot’s visual observations, so its an open question if priors learnedfrom DoH would benefit downstream tasks in robotics.(4)Kinetics (-700) is similar to DoH in that it’s curated from YouTube, but its videos contain a muchwider distribution of human actions (e.g. with objects and/or other humans) instead of the focus ofDoH on hands and manipulation.(5)RoboNet contains 13M+ image observations of robots randomly interacting with objects placedin a bin in front of them. RoboNet could be invaluable for pre-training, since its images are highlydomain-specific for our use case. But robot data is collected in sterile lab setting, which could causethe representations to overfit to only those specific settings.It is clear that these five datasets have complex trade-offs that may affect their usability for robotics.However, none of them clearly match the robot observation space (see Fig. 3). 
The only way to settlethe question is to undertake an unbiased and empirical study, comparing them apples-to-apples.Thus, we apply our evaluation methodology to 1 Million frames sampled randomly from everydataset. This is easy to do in ImageNet, since it has 1M balanced train images. But for the video5Soup-1M + 1M Extra Frames (2M Total)Task Soup-1M Soup 2M ImageNet Ego4D Kinetics 100 DoH RoboNetSimRoboSuite [51] 64 64 52 53 59 67 58MetaWorld [54] 87 89 92 86 87 92 88Franka Kitchen [55] 66 67 56 61 64 62 60Average (Sim) 72 73 67 67 70 74 69RealBlock Stacking 76%8:7% 44%10% 72%9:2% 60%10% 76%8:7% 92%5:5% 76%8:7%Pouring 38%13% 3813% 32%12% 38%13% 32%12% 38%13% 32%12%Toasting 10%10% 40%16% 0%0% 10%10% 22%13% 50%17% 0%0%Average (Real) 41%7:1% 41%7:0% 35%7:1% 36%7:1% 43%7:1% 60%6:7% 36%7:1%Table 2: Marginal Value of Each Dataset. Soup-f1M,2Mgmodels are trained f1M,2Mgimageswithf200K,400Kgimages from each of the five target datasets. The models on the right are trainedwith the Soup 1M images and an additional 1M frames from the target dataset. We find that imagedistribution matters more than the number of images trained on: Soup-2M does not improve onSoup-1M, but Soup-1M + 1M DoH does. Results are reported as success rates for each task, and thereal world evaluations also report standard error (i.e., Success %Std. Err. %).datasets (Ego4D/Kinetics/DoH), we first processed them into frames (sub-sampled at 3FPS) andthen randomly select 1M images from the whole set. For RoboNet, we followed a similar procedureas used for the video datasets, but randomly sampled 1M image observations instead. Visual pre-training on each of these results in 5 representations that we evaluate on our task suite (via BC). Foradditional context, we also evaluate a ‘ Scratch ’ model with no pre-training, i.e., randomly initializedweights before BC. We also evaluate pre-trained weights downloaded from two SOTA baselines(VC-1 [15], MVP [22]). Note that both MVP and VC-1 share our vision transformer architectureand pre-training recipe of masked image modeling, but were trained on significantly more # offrames (2.5M+), sampled primarily from Ego4D. The results are presented in Table 1.Our first observation is that performance trends from popular simulation benchmarks do not transferto the real world at all (see Sec. 5 for more). Thus, we focus the rest of our analysis on the realworld trends, since that is the primary focus of this work. The real robot experiments reveal that Im-ageNet/Kinetics/DoH representations all perform better than those trained on RoboNet/Ego4D(roughly 40% v.s.30% success rate). Critically, this result goes beyond just MAE pre-training. Aswe show in Appendix D, our finding that ImageNet/Kinetics/DoH performs best also holds withcontrastive pre-training [13]! This is surprising and important since both Ego4D and RoboNetseem like better matches to the downstream tasks (e.g., RoboNet entirely contains images of robotinteractions) and as more works in the research community implicitly assume/expect Ego4D to dobetter. Note that ImageNet/Kinetics/DoH were all sampled and curated from the internet (usingYouTube/image search), so they contain cleaner images with a much greater range of content (e.g.,1000 classes [2] vs 4 robot labs [37]). 
These unbiased, empirical results strongly suggest that thepre-training image distribution is far more important than the images’ content .4.2 Combining Data from Different SourcesAnother surprising result from Table 1 is that the baselines representations perform worse than theImageNet/Kinetics/DoH representations, despite being trained on significantly more images. Forexample, VC-1 pre-trains on ImageNet alongside 2.5M+ images from Ego4D, while using the exactsame pre-training strategy that we do. A possible explanation for this discrepancy is that VC-1’srepresentation is functionally very similar to our only-Ego4D ablation, since the majority of its pre-training frames come from Ego4D. Consequently, each batch the encoder sees during pre-trainingprimarily consists of Ego4D frames. The key insight here is that distribution of the pre-training setmatters more than the sheer number of frames trained on. We experimentally test this hypothesis,and find that VC-1’s representation performs only marginally better than Ego4D’s ( 33% vs28%).The natural next question is, “ How does one optimally combine datasets for visuo-motor pre-training? ” A simple idea is to proportionally mix the datasets so that the model is pre-trained on anequal number of frames from each dataset. Particularly, we create a “Soup-1M” containing 200Kimages randomly sampled from each of the 5datasets. We then evaluate this model on our test suite(see Table 2, left). Note that the Soup-1M model performs about the same as the ImageNet/Kinetic-s/DoH models ( 41%), even though it was trained on a significant amount of Ego4D/RoboNet frames(recall, these performed lowest in Sec. 4.1). This suggests that scaling to multiple datasets canincrease robustness, so long as the datasets are kept carefully balanced during pre-training.64.3 Analyzing the Marginal Value of Each DatasetSoup-1M provides a sensible first step for combining datasets for visuo-motor pre-training – keep-ing training set size to 1M using equal proportions of data sources. This leads to the natural nextquestion: “how can we effectively scale dataset size to improve performance?” To answer this ques-tion, we’ll also need to understand the marginal value of adding additional data to the soup. Toanswer this, we undertake another empirical study. Particularly, we obtain visual representationson pre-training sets containing the aforementioned Soup-1M along with 1M images from each ofthe five subject datasets (e.g. Soup-1M + 1M ImageNet frames). In effect, this both scales the sizeof pre-training dataset, while shifting the train distribution towards that of the subject dataset. Forfair comparison, we also train a Soup-2M model (identical to Soup-1M but with 400Kimages perdataset) that tests a naive scaling of the Soup-1M model. All six models are evaluated and resultsare present in Table 2.As reported in Table 2 (left), we find that Soup-2M model performs marginally better in simulationthan Soup-1M, and performs exactly the same (on average) in the real world. That is, data scal-ing is more nuanced than naively increasing the number of frames . In contrast, the strongestmodel, Soup-1M + 1M DoH, is able to perform 20% better than Soup-1M (and 30% better than thestrongest baseline) on the real world tasks (bold results in Table 2)! Finally, the Soup-1M + 1MfEgo4D/ImageNet/RoboNet gmodels perform slightly worse than Soup-1M, whereas the Soup-1M+ 1M Kinetics model performs slightly better. These results are mostly in line with our expectationsfrom Sec. 4.1 (e.g. 
adding more RoboNet data reduces performance, while adding more Kinetics/-DoH increases performance).5 Ablating our Experimental SetupThis section presents some insights from our real-world experiments. We find: (1) that old-schooldropout regularization is highly effective; and (2) sim evaluation does not transfer to real world.Regularizing Policies w/ Dropout Early in our physical robot evaluations, we noticed that thepolicies often produced jerky motions that could damage the robot and its environment. Thus, wesearched for a simple fix that could improve robustness in the real world. We found that addingdropout [23] to the policy network (w/ p= 0:2) significantly improved the robot’s qualitative be-havior: the commanded motions became smoother, with improved generalization to new scenarios.We quantify this with ablations on the Block Stacking task, i.e., fine-tuning the five 1M models (seeSec. 4.1) with and without dropout. Note how adding dropout to visuo-motor policies almostconsistently improves policy performance on the physical robot (see Fig. 4, orange bars). How-ever, the opposite effect is observed in simulation. This indicates that adopting sim benchmarks asa (fast) proxy to make policy design choices may warrant caution and a healthy doze of skepticism.We further test the regularization effect of dropout in this setting using a new task: Block StackingRobust . In this task, a human adversary pushes the cube out of the robot’s gripper in the middleof the episode (i.e. right before the robot grasps). This forces the robot to dynamically replan itsactions, and adapt to a scenario it never saw during fine-tuning with BC (in the demonstrations).We find that the success rate on Block Stacking Robust (average across all models) increases to 24%from 10% thanks to this regularization.Analyzing Sim-to-Real Transfer On one hand the sim2real gap in manipulation is well knownand on the other it’s still very common practice in prior work [1, 15, 22, 18, 56] to draw infer-ences about pre-trained representations using simulated benchmarks (e.g. CortexBench [15], FrankaKitchen [55], Isaac Gym [22], etc.) In several unbiased experiments we undertook, we have foundthat trends in simulation are almost entirely disconnected from their real world performance. First ,the simulation suite predicts that Ego4D is the best representation for robotics, but the real worldresults consistently disagree with that assessment (see Table 1). Second , key implementation detailsin the real world (like Dropout) can actually hurt performance in simulation (see how Dropout hurtssim performance in Fig. 4). To objectively investigate this, we plot the sim performance vs realperformance for all our models (trained on dataset configurations detailed in Sec. 4) in Fig. 5. Wefind that sim and real performance are almost entirely uncorrelated (a very low R2= 32%).Even if we were to ‘cherry-pick’ the two most similar sim/real tasks (RoboMimic’s block-lift [51]vs our stacking task) the correlation is still very low: R2= 34% .7ImageNet Ego4D Kinetics 100 DoH RoboNet MVP VC1Representation10010203040% Change in PerformanceEffect of Dropout on Real/Sim PerformanceTaskSimReal (Block Stacking)Figure 4: Effect of adding dropout [23] insimvs.real(block stacking tasks). Dropout fre-quently harms performance in simulation (blue)but consistently improves real world (orange)success rate. 
Positive values on Y axis indicateimprovement by adding dropout and vice-versa.66 68 70 72 74Simulated Performance (Average)2530354045505560Real World Performance (Average)Real vs Simulated Performance for All ModelsFigure 5: Sim vs. real performance acrosspretraing datasets. We plot average model per-formance, in sim and real, for all the modelstested in our study. Note how the sim scoresare only weakly predictive of real world perfor-mance (R2= 32% ).6 DiscussionIn this paper, we investigated how dataset biases and implementation choices can affect visual pre-training for robotics. With the modus operandi of masked image modeling [10], our experimentsanalyzed pre-trained visuo-motor representations trained on 15different data distributions, sourcedfrom 5common computer vision and robotics datasets. These models were evaluated on standardsim environments, alongside 3 unique and challenging real world tasks (each with 50+ robot eval-uations, for rigor). We find that traditional computer vision datasets (e.g. ImageNet) provide sur-prisingly strong performance for robotic pre-training. In fact, our simple ImageNet representationsoutperform both Ego4D representations and representative baselines in the field. The key insight isthat the image distribution matters much more than the sheer number of images during pre-training.Guided by this insight, we explore various strategies for scaling data distribution, by carefully mix-ing data from different sources. As part of this investigation, we train a final model (Soup-1M +1M DoH) that exhibits a 30% improvement over the baselines on real world robotic tasks. Finally,we analyze our experimental methodology, and show how simple regularization techniques (e.g.dropout [23]) can boost real world performance, and conclude that trends in simulation do not cor-relate to real world deployment. We hope that our unbiased empirical probes and associated findingswill inspire others in the field to study how various sources of offline data can transfer to roboticstasks. To enable future efforts, we released all of our pre-trained representations and evaluation codeon our website4.Limitations and Future Work While our experiments were extensive, there are some limitationsthat should be addressed by future work. First, there is no simple theory or experimental test thatcan predict if a representation will actually work well on the robot after pre-training. In fact, ourexperiments show that the easiest evaluation technique, i.e. the proxy of simulation, may give amisleading sense of progress in the real world! Thus, it is vital for our community to find fasterways to evaluate representations, and share reproducible results. One possibility is a standardizedcloud robotics benchmark [57, 58] that could greatly reduce the load for researchers. Next, ourexperiments heavily focused on Behavior Cloning combined with MAE pre-training (though we didexplore SimCLR pre-training in Appendix D). Finally, it would be valuable to extend our study inmore scenarios (e.g., Reinforcement Learning), and on other robotic tasks, like visual navigationand grasping.4https://data4robotics.github.io8AcknowledgmentsWe’d like to recognize our CMU colleagues – Shubham Tulsiani, Jason Y . Zhang, Shikhar Bahl,Russell Mendonca, Yufei Ye, Alex Li, and Mihir Prabhudesai – whose feedback and commentshelped strengthen the paper. We are grateful to Xinlei Chen for guidance towards MAE pre-trainingand self-supervised learning adaptations to the pycls codebase [59, 60, 61]. 
Finally, SD’s PhDresearch is generously supported by the NDSEG fellowship.References[1] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. arXiv preprint arXiv:2203.12601 , 2022.[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierar-chical image database. In 2009 IEEE conference on computer vision and pattern recognition ,pages 248–255. Ieee, 2009.[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutionalneural networks. Communications of the ACM , 60(6):84–90, 2017.[4] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction . MIT press, 2018.[5] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies.The Journal of Machine Learning Research , 17(1):1334–1373, 2016.[6] S. Ross and D. Bagnell. Efficient reductions for imitation learning. In AISTATS , 2010.[7] S. Haldar, J. Pari, A. Rai, and L. Pinto. Teach a robot to fish: Versatile imitation from oneminute of demonstrations. arXiv preprint arXiv:2303.01497 , 2023.[8] D. Chen, B. Zhou, V . Koltun, and P. Kr ̈ahenb ̈uhl. Learning by cheating. In CoRL , 2020.[9] U. Jain, I.-J. Liu, S. Lazebnik, A. Kembhavi, L. Weihs, and A. G. Schwing. Gridtopix: Trainingembodied agents with minimal supervision. In ICCV , 2021.[10] K. He, X. Chen, S. Xie, Y . Li, P. Doll ́ar, and R. Girshick. Masked autoencoders are scalablevision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 16000–16009, 2022.[11] A. v. d. Oord, Y . Li, and O. Vinyals. Representation learning with contrastive predictive coding.arXiv preprint arXiv:1807.03748 , 2018.[12] K. He, H. Fan, Y . Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visualrepresentation learning. In Proceedings of the IEEE/CVF conference on computer vision andpattern recognition , pages 9729–9738, 2020.[13] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learningof visual representations. In International conference on machine learning , pages 1597–1607.PMLR, 2020.[14] I. Radosavovic, T. Xiao, S. James, P. Abbeel, J. Malik, and T. Darrell. Real-world robotlearning with masked visual pre-training. CoRL , 2022.[15] A. Majumdar, K. Yadav, S. Arnaud, Y . J. Ma, C. Chen, S. Silwal, A. Jain, V .-P. Berges,P. Abbeel, J. Malik, et al. Where are we in the search for an artificial visual cortex for embodiedintelligence? arXiv preprint arXiv:2303.18240 , 2023.[16] A. Khandelwal, L. Weihs, R. Mottaghi, and A. Kembhavi. Simple but effective: Clip embed-dings for embodied ai. In CVPR , 2022.[17] S. Y . Gadre, M. Wortsman, G. Ilharco, L. Schmidt, and S. Song. Cows on pasture: Baselinesand benchmarks for language-driven zero-shot object navigation. In CVPR , 2023.[18] Y . J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V . Kumar, and A. Zhang. Vip: Towardsuniversal visual reward and representation via value-implicit pre-training. arXiv preprintarXiv:2210.00030 , 2022.9[19] K. Grauman, A. Westbury, E. Byrne, Z. Chavis, A. Furnari, R. Girdhar, J. Hamburger, H. Jiang,M. Liu, X. Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages18995–19012, 2022.[20] L. Smaira, J. Carreira, E. Noland, E. Clancy, A. Wu, and A. Zisserman. A short note on thekinetics-700-2020 human action dataset. arXiv preprint arXiv:2010.10864 , 2020.[21] D. 
Shan, J. Geng, M. Shu, and D. Fouhey. Understanding human hands in contact at internetscale. In CVPR , 2020.[22] T. Xiao, I. Radosavovic, T. Darrell, and J. Malik. Masked visual pre-training for motor control.arXiv preprint arXiv:2203.06173 , 2022.[23] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simpleway to prevent neural networks from overfitting. JMLR , 2014.[24] S. Schaal. Is imitation learning the route to humanoid robots? Trends in cognitive sciences , 3(6):233–242, 1999.[25] A. Gupta, A. Murali, D. P. Gandhi, and L. Pinto. Robot learning in homes: Improving gen-eralization and reducing dataset bias. Advances in neural information processing systems , 31,2018.[26] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly,M. Kalakrishnan, V . Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293 , 2018.[27] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordinationfor robotic grasping with deep learning and large-scale data collection. The Internationaljournal of robotics research , 37(4-5):421–436, 2018.[28] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700robot hours. In 2016 IEEE international conference on robotics and automation (ICRA) , pages3406–3413. IEEE, 2016.[29] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. Joshi, R. Ju-lian, D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manjunath,I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao,M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran,V . Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich. Rt-1: Robotics transformer for real-world control at scale. In arXiv preprint arXiv:2212.06817 ,2022.[30] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel,M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXivpreprint arXiv:1604.07316 , 2016.[31] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances inneural information processing systems , 1, 1988.[32] A. V . Nair, V . Pong, M. Dalal, S. Bahl, S. Lin, and S. Levine. Visual reinforcement learningwith imagined goals. Advances in neural information processing systems , 31, 2018.[33] S. Dasari and A. Gupta. Transformers for one-shot visual imitation. In Conference on RobotLearning , pages 2071–2084. PMLR, 2021.[34] W. Whitney, R. Agarwal, K. Cho, and A. Gupta. Dynamics-aware embeddings. arXiv preprintarXiv:1908.09357 , 2019.[35] H. Bao, L. Dong, S. Piao, and F. Wei. BEit: BERT pre-training of image transformers. InICLR , 2022.10[36] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In 2012IEEE/RSJ international conference on intelligent robots and systems , pages 5026–5033. IEEE,2012.[37] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, andC. Finn. Robonet: Large-scale multi-robot learning. arXiv preprint arXiv:1910.11215 , 2019.[38] Z. Tong, Y . Song, J. Wang, and L. Wang. Videomae: Masked autoencoders are data-efficientlearners for self-supervised video pre-training. NeurIPS , 2022.[39] L. Wang, B. Huang, Z. Zhao, Z. 
Tong, Y . He, Y . Wang, Y . Wang, and Y . Qiao. Videomae v2:Scaling video masked autoencoders with dual masking. In CVPR , 2023.[40] M. Singh, Q. Duval, K. V . Alwala, H. Fan, V . Aggarwal, A. Adcock, A. Joulin, P. Doll ́ar,C. Feichtenhofer, R. Girshick, et al. The effectiveness of mae pre-pretraining for billion-scalepretraining. arXiv preprint arXiv:2303.13496 , 2023.[41] D.-K. Nguyen, V . Aggarwal, Y . Li, M. R. Oswald, A. Kirillov, C. G. Snoek, and X. Chen.R-mae: Regions meet masked autoencoders. arXiv preprint arXiv:2306.05411 , 2023.[42] C. Feichtenhofer, Y . Li, K. He, et al. Masked autoencoders as spatiotemporal learners.NeurIPS , 2022.[43] P.-Y . Huang, V . Sharma, H. Xu, C. Ryali, H. Fan, Y . Li, S.-W. Li, G. Ghosh, J. Malik, andC. Feichtenhofer. Mavil: Masked audio-video learners. arXiv preprint arXiv:2212.08071 ,2022.[44] R. Bachmann, D. Mizrahi, A. Atanov, and A. Zamir. Multimae: Multi-modal multi-taskmasked autoencoders. In ECCV , 2022.[45] X. Zhai, A. Kolesnikov, N. Houlsby, and L. Beyer. Scaling vision transformers. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 12104–12113, 2022.[46] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning fromdemonstration. Robotics and autonomous systems , 57(5):469–483, 2009.[47] A. Billard, S. Calinon, R. Dillmann, and S. Schaal. Survey: Robot programming by demon-stration. Handbook of robotics , 59(BOOK CHAP), 2008.[48] B. Kang, Z. Jie, and J. Feng. Policy optimization with demonstrations. In ICML . PMLR, 2018.[49] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan,A. Sendonaris, I. Osband, et al. Deep q-learning from demonstrations. In AAAI , 2018.[50] L. Weihs, U. Jain, I.-J. Liu, J. Salvador, S. Lazebnik, A. Kembhavi, and A. Schwing. Bridgingthe imitation gap by adaptive insubordination. NeurIPS , 2021.[51] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrationsfor robot manipulation. In Conference on Robot Learning (CoRL) , 2021.[52] R. Rahmatizadeh, P. Abolghasemi, A. Behal, and L. B ̈ol ̈oni. From virtual demonstration toreal-world manipulation using lstm and mdn. In Proceedings of the AAAI Conference on Arti-ficial Intelligence , volume 32, 2018.[53] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.[54] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: Abenchmark and evaluation for multi-task and meta reinforcement learning. In CoRL , 2019.[55] A. Gupta, V . Kumar, C. Lynch, S. Levine, and K. Hausman. Relay policy learning: Solvinglong horizon tasks via imitation and reinforcement learning. Conference on Robot Learning(CoRL) , 2019.11[56] N. Hansen, Z. Yuan, Y . Ze, T. Mu, A. Rajeswaran, H. Su, H. Xu, and X. Wang. On pre-training for visuo-motor control: Revisiting a learning-from-scratch baseline. In InternationalConference on Machine Learning (ICML) , 2023.[57] S. Dasari, J. Wang, J. Hong, S. Bahl, Y . Lin, A. S. Wang, A. Thankaraj, K. S. Chahal, B. Calli,S. Gupta, et al. Rb2: Robotic manipulation benchmarking with a twist. In Thirty-fifth Confer-ence on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2) ,2021.[58] G. Zhou, V . Dean, M. K. Srirama, A. Rajeswaran, J. Pari, K. Hatch, A. Jain, T. Yu, P. Abbeel,L. Pinto, et al. 
Train offline, test online: A real robot learning benchmark. arXiv preprintarXiv:2306.00942 , 2023.[59] I. Radosavovic, J. Johnson, S. X. W.-Y . Lo, and P. Doll ́ar. On network design spaces for visualrecognition. In ICCV , 2019.[60] I. Radosavovic, R. P. Kosaraju, R. Girshick, K. He, and P. Doll ́ar. Designing network designspaces. In CVPR , 2020.[61] P. Doll ́ar, M. Singh, and R. Girshick. Fast and accurate model scaling. In CVPR , 2021.[62] Y . Zhu, J. Wong, A. Mandlekar, R. Mart ́ın-Mart ́ın, A. Joshi, S. Nasiriany, and Y . Zhu. robo-suite: A modular simulation framework and benchmark for robot learning. In arXiv preprintarXiv:2009.12293 , 2020.[63] B. Chen, S. Song, H. Lipson, and C. V ondrick. Visual hide and seek. In Artificial Life Confer-ence Proceedings . MIT Press, 2020.[64] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CVPR ,2016.12Hyperparameter ValueMAE Pretrainingoptimizer AdamW [53]base learning rate 1e-4weight decay 0.05optimizer momentum 1;2= 0:9;0:95batch size 4096learning rate schedule cosine decaytotal batches or iterations 249600warmup iterations 1/8 total iterationsaugmentation RandomResizedCrop#GPU 64 V100 (32 gb)Wall-clock time 36 hoursEncoder ViT Architecture#layers 12#MHSA heads 12hidden dim 768class token yespositional encoding sin cosDecoder ViT Architecture#layers 8#MHSA heads 16hidden dim 512class token used yespositional encoding sin cosTable 3: Training and architectural hyperparameters for MAE pretraining.A MAE HyperparametersWe list key hyperparameters for the MAE training loop in Table 3. Note that these parameterswere employed directly from original MAE paper [10] and are actually shared by relevant roboticsbaselines [15, 22]. Consistent with the terminology in [10], the employed learning rate is the baselearning rate scaled by (total batch size / 256). For a head-on comparison with prior work [10,15], we train the ViT for iterations equivalent of 800 epochs over ImageNet dataset. This rigorousbenchmarking took # GPUs wall clock time# data ablations = 641:512 = 1152 GPU days.B BC HyperparametersThe following section describes the hyperparameters used in our behavior cloning loop. As dis-cussed in Sec. 3, the BC policy begins by taking in the image and passing it through the pre-trainedencoder to get a representation, E(it). That representation is then concatenated to the joint in-formation to get a policy input, xt= [E(it);jt]. The policy input is fed through a 2-layer mlpnetwork, with a batchnorm preceding the first layer, ReLU activations [3], and hidden dimensionsof[512;512]. Additionally, we add dropout [23] to the two mlp layers w/ probability p= 0:2afterthe ReLU activations. The result of the top layer is then passed to 2 linear layers, that predict themean (), mixing parameters ( ), and standard deviation ( ) of a Gaussian Mixture Model (GMM)distribution w/ mmodes:p(x) = mi=1iN(xji;i)The choice of GMM was based on prior work [51, 52] that showed it could dramatically improveperformance. After some tuning, we used m= 5on the RoboSuite tasks (note their benchmark [51]usedm= 5) and the real world tasks, since it worked best. However, for Franka Kitchen andMetaWorld, we found no significant difference. As a result, we used m= 1(i.e. standard Gaussiandistribution) for those tasks to maximize comparability with prior benchmarks [15, 55].13The policy was optimized for 50000 iterations using the ADAM optimizer [53], with a learning rateof0:0001 and a L2 weight decay of 0:0001 . 
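Before the remaining training details, here is a minimal PyTorch sketch of the policy head described above: a batchnorm on the input, a [512, 512] MLP with ReLU and dropout (p = 0.2) after each hidden layer, and a GMM output with m modes. The use of three separate linear output heads (means, log-standard-deviations, mixing logits) and all class and argument names are our assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.distributions as D

class GMMPolicy(nn.Module):
    """BC policy: x_t = [E(i_t), j_t] -> Gaussian mixture over actions."""

    def __init__(self, in_dim, act_dim, hidden=512, modes=5, p_drop=0.2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.BatchNorm1d(in_dim),                               # batchnorm before first layer
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mu = nn.Linear(hidden, modes * act_dim)              # component means
        self.log_sigma = nn.Linear(hidden, modes * act_dim)       # component std devs (log)
        self.logits = nn.Linear(hidden, modes)                    # mixing weights alpha_i
        self.modes, self.act_dim = modes, act_dim

    def forward(self, x):
        h = self.trunk(x)
        mu = self.mu(h).view(-1, self.modes, self.act_dim)
        sigma = self.log_sigma(h).view(-1, self.modes, self.act_dim).exp()
        mix = D.Categorical(logits=self.logits(h))
        comp = D.Independent(D.Normal(mu, sigma), 1)
        return D.MixtureSameFamily(mix, comp)

# BC objective: negative log-likelihood of the demonstrated actions, e.g.
#   policy = GMMPolicy(in_dim=768 + 7, act_dim=7)   # dims are placeholders
#   loss = -policy(x_batch).log_prob(action_batch).mean()
```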
In addition, we applied data augmentation (randomcrops and random blur) to the input image it, before passing it E. This was based on recommenda-tions for best practices from Hansen et. al. [56]. The full code for this setup is open-sourced on ourwebsite: https://data4robotics.github.io.C Task HyperparametersThis section describes the hyperparameters made while setting up both sim and real world tasks. Allcode (for robot/sim environments and BC training) is open sourced: https://data4robotics.github.io.Simulation We evaluate on 5 tasks from Metaworld [54] (BinPick, ButtonPress, Hammering,Drawer Opening, and Assembly), 5 tasks from Franka Kitchen [55] (Knob Turning, Door Opening,Light Switch, Microwave, and Sliding Door), and 3 tasks from RoboSuite [62, 51] (Lift, Can, andSquare). These environments are frequently used by the robot learning community, and the exactsetups (e.g., camera positioning, object sets, demonstration trajectories, etc.) were directly takenfrom prior work [15, 1, 51]. As a result, our simulated results should be very accessible to thecommunity.The training demonstrations for these tasks were collected by previous work (CortexBench [15],Relay Policy Learning [55], RoboMimic [51] respectively). We fine-tune on n= 25 demos forMetaWorld/Franka Kitchen, and n= 200 demos on RoboSuite (again to stay consistent with olderpapers). Task success is measured by the environments themselves, and we get numbers by estimat-ing success rates empirically using 50test trajectories. Note that we only evaluate the policy at theend of training (unlike some prior work that evaluated multiple times over the course of training).This was done to ensure the sim evaluation setup matched the real world (i.e. we can’t evaluate realpolicies multiple times during training).Real World As discussed in Sec. 3, our real world tasks were built using a Franka Panda robot,and we collected 50demonstrations for each task using a VR tele-op setup. We heavily encouragethe reader to get a feel for the training data and tasks by viewing the supplemental video on ourwebsite: https://data4robotics.github.io.The following section expands on our real world task descriptions from Sec. 3, and provides someadditional details:•Block stacking requires the robot to pick up the red block and place it on the green block.This is the simplest task, since the robot only has to adapt to new object configurationsduring test time, but it still requires the robot to precisely localize and grasp the (small) redblock.We evaluated agents on this task using 25test positions for the red/green block. These testpositions were kept fixed for all policies to ensure maximum reproducibility.•Pouring requires the robot to lift the cup and pour almonds in the target bowl. Duringtest time the cup and target bowls are both novel objects (unseen during training), and areplaced in random locations. Thus, this task forces the robot to generalize to new visualinputs.We evaluated 3 separate cup/target bowl pairs in 5 positions each (so 15trials total). Notethat none of these objects or positions were seen during test time. Again, the object andposition combinations were kept fixed across every model tested.•Toasting is the final task, and it requires the robot to pick up the object, place it in thetoaster, and then shut the toaster. During test time, we use a novel object and randomizeboth the object’s initial pose and the toaster’s initial orientation. 
This is the most difficulttask, since it requires the robot to execute a multi-stage manipulation strategy, while alsogeneralizing to new visual scenarios.We evaluated 2 target objects pairs and randomized the toaster orientation into 5 separateposes (so 10trials total). Note that none of these objects or toaster orientations were seenduring test time. As before, all the test conditions were shared across all policies.14Single Dataset Models (1M Images) BaselinesTask ImageNet Ego4D Kinetics 100 DoH RoboNet R3M [1]RealBlock Stacking 60%10% 52%10% 60%10% 76%8:7% 56%10% 4%4%Pouring 25%11% 13%8:8% 22%11% 25%12% 6%6% 19%10%Toasting 10%10% 10%10% 10%10% 30%10% 0%0% 0%0%Average (Real) 32%6:9% 25%6:6% 31%6:9% 44%7:1% 21%6:5% 8%3:8%Table 4: This table analyzes if Table 1’s conclusions apply to different pre-training schemes, or ifthey are limited to MAE [10]. Specifically, we apply a contrastive visual pre-training algorithm(SimCLR [13]) to 1M images from each of the target datasets. We also add an additional baseline– R3M [1] – that was trained using temporal contrastive learning on Ego4D clips. We evalautethese representations on our 3 real world tasks, and report results as success rates for each task w/standard error (i.e., Success %Std. Err. %). This experiment reveals that the trends do generalizeto different pre-training schemes (e.g., vision datasets still stronger than Ego4D), and that the MAErepresentations are stronger on average.D Replication with SimCLROur results from Sec. 4.1 raise questions about several key assumptions in the field. For example,we find that visuo-motor representations learned on the classic ImageNet [2] dataset are strongerthan those learned on Ego4D [19] (in-the-wild data) and RoboNet [37] (random robot interactions).But are these trends fundamental to the data, or are they just a quirk of the specific pre-trainingalgorithm/network?To test this, we repeat the real world evaluation from Table 1 using the SimCLR [63] pre-trainingalgorithm and ResNet-18 architecture [64]. As a refresher, SimCLR is a contrastive learning algo-rithm that optimizes a network Rto “pull together” different views of the same image (i.e., tworandomly augmented versions of the same image: R(zi);R(zi)) and “push apart” different im-ages from each other ( R(zi);R(zj)). This is accomplished with the following loss function, wheresim(x;y) =xTy=(jxjjyj):L=logexp(sim(R(zi);R(zi))=)Pi6=jexp(sim(R(zi);R(zj))=)This SimCLR pre-training scheme is applied to each of the 1M images from our target datasets,using the same hyperparameters from the original paper [13].We compare the newly trained representations alongside an additional ResNet-18 baseline, R3M [1],which was also trained using contrastive learning applied to Ego4D. The results for real world tasksare presented in Table 4. Note that the trends we found in the ViT + MAE evaluations are replicatedin these ResNet + SimCLR experiments: the vision datasets – ImageNet/Kinetics/DoH – createstronger visuo-motor features w/ SimCLR compared to Ego4D/RoboNet! We also find thatdespite additional tuning, which was not given to anyother model (including trying the bigger R3MResNet-34/50 architectures), the R3M baseline struggles heavily on our tasks (especially stacking).Finally, we note that the average performance of MAEs in Table 1 is stronger than the SimCLRperformance ( 36% v.s.31%), which further justifies our choice of setup. 
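For concreteness, the contrastive objective used in this appendix (whose formula above lost its minus sign and temperature τ in extraction) can be sketched as a standard InfoNCE / NT-Xent-style loss over two augmented views. This simplified version treats only the other view's off-diagonal entries as negatives; the temperature value and batch handling are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss for two augmented views.

    z1, z2: [N, D] embeddings of the same N images under two random
    augmentations. Row i of z1 and row i of z2 form the positive pair;
    every other row of z2 is treated as a negative for row i of z1.
    """
    z1 = F.normalize(z1, dim=1)            # cosine similarity via dot product
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # [N, N] similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # cross-entropy pulls the diagonal (positives) up, off-diagonals down
    return F.cross_entropy(logits, targets)
```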
It is unclear whether this gap stems from the pre-training scheme or from the architecture choice.

E ImageNet Diversity Ablations

One potential hypothesis that would explain our results is that a dataset's diversity is critical for effective visuo-motor pre-training. This explanation is intuitive, since information compression is the basis of most self-supervised pre-training algorithms – e.g., MAEs [10] reconstruct a whole image from an encoding computed from patches of that image. Thus, a cleaner and more diverse data distribution (like the curated ImageNet dataset) makes the pre-text compression task "harder," which in turn could result in a stronger, more robust visuo-motor representation.

While this hypothesis is attractive, the main results in our paper cannot evaluate its veracity. Thus, we add an additional experiment to shed some light on this theory. Specifically, we take two 500K subsets from ImageNet [2] with varying levels of diversity. The first, IN-500K-500C, consists of 500 classes with 1000 images each (500K frames total). The second, IN-500K-1000C, uses all 1000 ImageNet classes with 500 images sampled from each (again 500K frames total). Note that the two subsets are the same size, but the second is more diverse (2x more classes). Thus, if diversity is critical, we should expect the second subset to perform better even though it is the same size.

Task       IN-500K-500C   IN-500K-1000C
Stacking   70%            70%
Pouring    16%            32%
Toasting   25%            32%
Average    37%            46%

Figure 6: This table compares two representations trained on the same number of frames from ImageNet, but with different diversity levels (500 classes vs. 1000). We find that the more diverse image set results in a marginally stronger representation.

We evaluate these two models on our real world tasks and present the success rates in Figure 6. Note how the more diverse representation (IN-500K-1000C) performs better on the Pouring and Toasting tasks (with equal performance on Stacking), resulting in marginally better overall performance (46% vs. 37%). In other words, keeping all else equal, a more diverse pre-training set results in a 7% performance boost. While this result is not fully definitive, it is an encouraging sign in favor of the diversity hypothesis. However, further work is needed to test this hypothesis in more settings.
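A minimal sketch of how two equally sized subsets with different class diversity could be drawn, assuming an index that maps each ImageNet class to its image paths; the function and variable names are placeholders, not the authors' tooling.

```python
import random

def make_subset(images_by_class, n_classes, per_class, seed=0):
    """Sample `per_class` images from each of `n_classes` random classes.

    `images_by_class`: dict mapping class id -> list of image paths
    (assumed to hold at least `per_class` paths per class).
    Returns a flat list of n_classes * per_class image paths.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(images_by_class), n_classes)
    subset = []
    for c in classes:
        subset.extend(rng.sample(images_by_class[c], per_class))
    return subset

# IN-500K-500C:  500 classes x 1000 images; IN-500K-1000C: 1000 classes x 500 images
# in_500k_500c  = make_subset(imagenet_index, n_classes=500,  per_class=1000)
# in_500k_1000c = make_subset(imagenet_index, n_classes=1000, per_class=500)
```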
48qUHKUEdBf | STOW: Discrete-Frame S egmentation and T rackingof Unseen O bjects for W arehouse Picking RobotsYi LiUniversity of WashingtonMuru ZhangUniversity of WashingtonMarkus GrotzUniversity of WashingtonKaichun MoNVIDIADieter FoxUniversity of Washington and NVIDIAAbstract: Segmentation and tracking of unseen object instances in discreteframes pose a significant challenge in dynamic industrial robotic contexts, suchas distribution warehouses. Here, robots must handle object rearrangement, in-cluding shifting, removal, and partial occlusion by new items, and track theseitems after substantial temporal gaps. The task is further complicated when robotsencounter objects not learned in their training sets, which requires the ability tosegment and track previously unseen items. Considering that continuous observa-tion is often inaccessible in such settings, our task involves working with a discreteset of frames separated by indefinite periods during which substantial changes tothe scene may occur. This task also translates to domestic robotic applications,such as rearrangement of objects on a table. To address these demanding chal-lenges, we introduce new synthetic and real-world datasets that replicate theseindustrial and household scenarios. We also propose a novel paradigm for jointsegmentation and tracking in discrete frames along with a transformer modulethat facilitates efficient inter-frame communication. The experiments we conductshow that our approach significantly outperforms recent methods. For additionalresults and videos, please visit website. Code and dataset will be released.Keywords: Unseen Object Instance Segmentation, Unsupervised Multi ObjectTracking, Zero-shot, Discrete Frames1 IntroductionFigure 1: A densely packed shelf environ-ment. The shelf holds objects from a widearray of categories. During the stowing pro-cess, a human operator may obscure the cam-era’s view and rearrange the objects withinthe bin. The robot’s task is to pick a specificobject as directed by the given order index ofthe object’s placement in the bin.Object segmentation and tracking, a key percep-tion task for robotic picking, is particularly impor-tant in warehouse environments, where millions ofcommodity items are organized daily on warehouseshelves for storage and categorization, as shown inFigure 1. Future intelligent robots must acquirestrong perception capabilities to help human workersstow and fetch items from these shelves. These ca-pabilities include detecting objects with diverse ge-ometries in cluttered scenes and tracking them whileother items are being added or picked up. Despiteconsiderable research progress in this area, it re-mains notoriously difficult to detect and track un-known objects in highly cluttered environments.Researchers commonly address this task using asegment-then-track paradigm [1, 2], which executesthe two procedures sequentially. During segmentation, each frame is handled as an independent7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.image on which advanced unseen object instance segmentation methods are applied [3, 4, 5, 6]. Thesubsequent tracking step leverages varied techniques [7, 8, 9, 10] to group masks of the same objectacross frames. However, this approach has inherent limitations. Segmentation methods struggle toresolve ambiguities because they cannot utilize information from other frames to enhance segmenta-tion within each frame. 
Tracking lacks alternatives when segmentation fails, since its success heavilyrelies on consistent object appearance or location across consecutive frames. These drawbacks limitthe paradigm’s effectiveness when there are crowded scenes and discrete frames.Our task resembles video instance segmentation (VIS), which involves segmenting and trackingobjects in videos, with the leaderboard predominantly occupied by simultaneous segmenting andtracking methods [11, 12, 13]. This similarity suggests the potential to adopt their methods torealize an end-to-end solution in our task. However, their methods, mainly designed for videoswith continuous frames, display subpar performance when faced with significant object movementsbetween frames, a prominent challenge in our task.We therefore introduce STOW, Discrete-Frame Segmentation and Tracking of Unseen Objects forWarehouse Picking Robots, a new framework for addressing challenges in our context. STOW con-sists of a new paradigm to jointly perform segmentation and tracking and a novel module, calledthe multi-frame attention layer, that facilitates efficient inter-frame communication. It succeeds insimultaneously achieving high segmentation accuracy, high tracking accuracy, and high robust-ness to the sim-to-real gap. Remarkably, even when trained exclusively on synthetic images, ourmethod significantly surpasses baseline on real data and live robot experiments.In summary, our main contributions are: (1) Task formulation for unseen object instance segmen-tation and tracking in discrete frames as well as realistic synthetic data generation andreal datasetdata collection and manual labeling for bins in the shelf and tabletop environments, facilitating re-search in this domain (2) A new paradigm to perform joint segmentation and tracking in discreteframes, along with a new module, i.e., multi-frame attention , that efficiently communicates infor-mation across frames (3) Experiments conducted on real data and on a working robot to verify ournetwork’s superior performance2 Related WorksUnseen object instance segmentation. In computer vision, traditional object instance segmentationrequires prior knowledge about the objects. In contrast, our work targets potentially unseen objectswithout such knowledge. Previous efforts, such as UCN [4], UOIS [3], and MF [5], addressedunseen object discovery in single-frame images using varied strategies, e.g., RGB-D feature embed-dings and metric learning loss. A recent model, SAM [6], also demonstrates robust segmentationacross a wide variety of objects by training on a large amount of data. While these works focus onsegmenting and tracking in single-frame images, our research extends it to multiple frames.Video object segmentation. Like the video object segmentation task [14, 15, 9], we track unseenobjects in the test set. However, the VOS task assumes accessibility to an object’s mask in oneframe, which is not applicable in our task. For video object instance segmentation, previous worksadhere to the tracking by detection paradigm, e.g., [16] and similarly [17, 18, 19], and often addressthe problem of multi-object tracking (MOT), i.e., the estimation of bounding boxes and identities ofobjects in consecutive RGB image streams. MOT tasks usually focus on tasks in traffic scenes andobjects like people [20, 21] and vehicles.Video Instance Segmentation. Our task diverges from existing VIS (Video Instance Segmentation)datasets [22, 23] in two main aspects. 
First, while VIS employs a closed-set category approach fordetection/segmentation, our open-set problem adds complexity by recognizing instances regardlessof class, as opposed to relying on learned patterns. Second, unlike VIS’s focus on continuous,limited-changes video sequences, our dataset emphasizes tracking amid drastic changes betweenframes, making it ill-suited for benchmarking with VIS datasets.Unsupervised multi-object tracking. The unsupervised video instance segmentation task intro-duced in DA VIS 2019 [24] bears similarity to Video Instance Segmentation (VIS), with a focus on2open-set category objects akin to our task. However, our approach diverges in two key aspects:(1) despite targeting unseen objects, DA VIS 2019 predominantly includes humans, vehicles, andanimals, contrasting with the warehouse objects our study emphasizes, and (2) akin to VIS, it neces-sitates objects to exhibit continuous movement, thereby avoiding an ill-defined task.Image co-segmentation. Our task is similar to object co-segmentation [25], which extracts recur-ring objects from an image pair or a set of images. While the goal of co-segmentation is to identifyshared objects in a scene, we focus on instance segmentation and do not allow similar objects of thesame class. Distinguishing between an object instance and an object class is a crucial requirementfor industrial warehouses.Temporal Attention. In temporal attention, our multi-frame method aligns with but uniquely standsout from prior works. Context R-CNN integrates per-RoI features from various frames to enhancecurrent detections. Unlike its static approach and two-phase method, we embed temporal attentionwithin each block, updating all frame object queries post each temporal attention. Meanwhile,[26] proposed module called Alignment-guided Attention (ATA) which apply temporal attention onpatches with similar features through bipartite matching. Unlike ATA’s 1D fixed-size patch focus,our technique employs all object queries across all images, capturing varied masks and accessingbroader information.3 Problem FormulationFigure 2: Task Illustration: The left col-umn presents the images inputted intoour network, while the right columnshowcases the expected segmentationand tracking outcomes. Identical colorsindicate the same object.In this work, we introduce the novel task of segmentingand tracking unseen objects given a series of input dis-crete image frames. Though challenging, this task hasbroad applications in the field of robotics and is partic-ularly useful in warehouse environments, such as thatshown in Figure 1. Scenes involving warehouse shelvescan be exceedingly packed and cluttered, consisting of avast assortment of items, including some that have notbeen previously encountered. Moreover, there may betemporal gaps between successive snapshots of the scene,during which human workers may place new objects,robots may retrieve existing items, and some objects mayundergo changes in pose.Formally, we formulate the problem as follows. The inputto the task is a sequence of images I={I1, I2,···, IT|It∈RH×W×CI}, where HandWare the height andwidth of the images, respectively, and CIrepresents thenumber of channels, i.e., 3 for RGB images and 4 for RGB-D inputs. The task involves detecting,segmenting, and tracking KIobject instances that appear in these input images, where KImaynot be known beforehand. 
The output of the task is a set of binary object instance masks, Mt={M1t, M2t,···, MKIt|Mit∈ {0,1}H×W, i= 1,2,···, KI}, corresponding to each input imageIt∈ I. Figure 2 shows the problem setting under consideration.4 MethodOur system (Figure 3) uses query-based transformer architectures [27, 13] for object detection andsegmentation, which we describe in Sec. 4.1. To enable the tracking of object instances across dis-crete image frames, we introduce the learning of additional object embeddings for tracking (Sec. 4.3)as well as a multi-frame attention layer to distinguish between identical or distinct object instances(Sec. 4.2). Sec. 4.4 discusses training specifics and loss functions.4.1 Backbone ArchitectureFollowing [27, 13], the backbone of our network consists of three components: (1) a ResNet-basedimage encoder, (2) a transformer-based object query decoder, and (3) several prediction heads taskedwith determining object properties, such as the likelihood of object existence and object masks.3x L Transformer DecoderBackboneCross Attn.Self Attn.FFNMulti-frame Attn.ObjectQueriesQTTTPred. HeadPredictorTAssociatorMask EmbeddingConfidence ScoreObject EmbeddingTInput FramesH,W,CIDense Feature mapH/S,W/S,C F *InitialQueriesCqMask PredictionK, VCe1CrPixel EmbeddingH/4,W/4,Cemaskpred. 0maskpred. 0maskpred. 0H/4,W/4dot productObjectTokensCqPixel EmbeddingH/4,W/4,CeTNNNTransformerFigure 3: As outlined by a dashed rectangle, our transformer decoder ingests dense feature mapsconverted from input frames and produces object tokens for each image. These tokens predict con-fidence scores, mask embeddings for mask prediction, and object embeddings for association. Wealso introduce a novel ”multi-frame attention” layer, which attends to object queries from all frames.4.2 Multi-Frame AttentionResNet-based image encoder. We use ResNet-50 [28] to transform every input frame It∈RH×W×CI(t∈1,2, . . . , T ) into a dense low-resolution feature map Ft∈RHS×WS×CF. Here, CFdenotes the channel dimension of the output dense feature map, while S= 32 is the down-samplingratio used in this work.Transformer-based object query decoder. We employ a DETR-like transformer decoder [29] thattakes the produced dense feature map Ftas input and learns to decode a set of Nqobject tokens{q1t,q2t,···,qNqt} ∈RCqas the outputs. Each object token contains the latent information neces-sary for tasks such as classification estimation, mask prediction, and tracking in discrete frames. Ourtransformer decoder consists of L= 10 transformer blocks; each block contains one cross-attentionlayer, one self-attention layer, one feed-forward layer, and one novel multi-frame attention layer thatcorrelates object features across different image frames (Sec. 4.2).Prediction head for object masks. To get a per-pixel segmentation mask Mit∈ {0,1}H×Wforeach object token qit, we first use a two-layer multilayer perceptron (MLP), which maps an inputobject token qitto a mask embedding eit∈RCe; we then employ a multi-scale deformable attentiontransformer module (MSDeformAttn) [30] to convert the dense feature map Ftto a pixel embeddingmapPt∈RH4×W4×Ce. Here, Cedenotes the channel dimensions used for the mask and pixelembeddings. We calculate the dot product between the object embedding eitand the pixel embeddingPtin order to obtain the mask prediction ˆMitat a reduced resolution ofH4×W4. 
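A minimal PyTorch sketch of this mask head, covering both the dot product just described and the bilinear upsampling described next; treating the dot-product output as logits to be thresholded at 0.5 is our assumption rather than something stated in the text.

```python
import torch
import torch.nn.functional as F

def predict_masks(mask_emb, pixel_emb, out_hw):
    """Dot-product mask head.

    mask_emb:  [Nq, Ce]        per-object mask embeddings e_t^i
    pixel_emb: [H/4, W/4, Ce]  pixel-embedding map P_t
    out_hw:    (H, W)          full input image resolution
    Returns binary masks of shape [Nq, H, W].
    """
    # dot product over the embedding channel -> low-resolution mask logits
    logits = torch.einsum("qc,hwc->qhw", mask_emb, pixel_emb)
    # bilinear upsampling back to the original image resolution
    logits = F.interpolate(logits[None], size=out_hw,
                           mode="bilinear", align_corners=False)[0]
    return logits.sigmoid() > 0.5
```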
Subsequently, abilinear upsampling operator is applied to map ˆMitback to the original image resolution for the finalmask prediction Mit∈ {0,1}H×W.Prediction head for object existence scores. Taking the object token qitas input, we leverage asimple linear layer to estimate an object existence likelihood score sit∈[0,1]. In the context ofunseen object instance segmentation, we are not concerned with precise target object categories, nordo we have access to this knowledge. This characteristic simplifies the task into a binary classifica-tion, the objective of which is to estimate the confidence score of whether each segment correspondsappropriately to an object.4.3 Object Embedding for TrackingIn addition to the predicted object existence score and its segmentation mask in each frame, weadd a new prediction head for object tracking embedding to enable the association of object tokensbelonging to the same object from different input frames. Specifically, we employ a two-layer MLPto learn a mapping from the input object query qitto another object embedding used for trackingrit∈RCr. Here, Crrepresents the number of channels of the object embedding vector.During the inference phase, we implement an associator to group object tokens with similar objectembeddings. Specifically, we sequentially traverse each frame in the sequence. For each frame,4we retain only those object tokens qiwhose confidence scores surpass a predefined threshold δscore.For each input sequence, we maintain a trajectory bank, T, that encompasses the trajectories of allobserved objects. Each trajectory in the trajectory bank Ti∈ T consists of object tokens qithat areconsidered to belong to the object i. Subsequently, we compute the similarity score between theseselected object tokens and the previous trajectory using the following equation:Sim(qi,Tj) = max( R(qi)·R(qjk)),forqjk∈ Tj. (1)In this equation, Ris the function that transforms the object token qiinto the object embedding ri.After this step, we use the Hungarian algorithm [31] to search for an optimal bipartite matchingˆρfrom all possible bipartite matchings Pthat can maximize the overall similarity between objecttokens qand the trajectory bank T:ˆρ= arg maxρ∈PNqXiSim(qi,Tρ(i)). (2)We initialize the trajectory bank, T, with an adequate number of false alarm tokens, TFA, eachof which exhibits a constant similarity δmatch to all object tokens. The hyper-parameter δmatch canalso be interpreted as a false alarm threshold in matching. Any predicted object tokens assigned totokens from TFAare assumed not to match any existing trajectory; thus, a new trajectory is openedfor them.Our tracking method also enables the possibility of handling multiple identical objects in the samescene. Specifically, object tokens corresponding to identical objects are likely to be recognized dueto the expectation that object embeddings are near to them.The previous modules we introduced work independently for each frame without any cross-frameinformation exchange. To facilitate efficient communication between frames, we introduce a newcomponent, the multi-frame attention layer , into the transformer decoder.The multi-frame attention layer is an extension of the self-attention layer, which operates on objectqueries from a single frame; it attends to object queries from all accessible frames. To illustrate, wedenote the intermediate object queries after each feed-forward network as Xtl, where landtindicatethe index of transformer blocks and frames, respectively. 
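Before the attention layer is formalized below, the association step of Eq. (1)–(2) can be sketched as follows: confident object tokens are matched to existing trajectories with the Hungarian algorithm, and padding the similarity matrix with constant "false alarm" columns at δ_match lets unmatched tokens open new trajectories. This is a schematic reading of the text, not the released implementation; the δ_match value is a placeholder.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(obj_embs, trajectories, delta_match=0.5):
    """Match per-frame object embeddings R(q_i) to a trajectory bank.

    obj_embs:     [N, Cr] numpy array of embeddings for confident tokens
    trajectories: list of [M_j, Cr] arrays, embeddings already in trajectory j
    Returns a list of trajectory indices assigned to the N tokens.
    """
    n, k = len(obj_embs), len(trajectories)
    # Sim(q_i, T_j) = max over stored tokens of the dot product (Eq. 1);
    # the extra n columns act as false-alarm tokens with constant similarity.
    sim = np.full((n, k + n), delta_match)
    for j, traj in enumerate(trajectories):
        sim[:, j] = (obj_embs @ np.asarray(traj).T).max(axis=1)
    rows, cols = linear_sum_assignment(-sim)      # maximize total similarity (Eq. 2)
    assignments = []
    for i, j in zip(rows, cols):
        if j < k:                                 # matched an existing trajectory
            trajectories[j] = np.vstack([trajectories[j], obj_embs[i:i + 1]])
            assignments.append(j)
        else:                                     # false-alarm column -> new trajectory
            trajectories.append(obj_embs[i:i + 1])
            assignments.append(len(trajectories) - 1)
    return assignments
```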
A standard self-attention layer (with aresidual path) computes the following (we omit the normalization term√dkhere for simplicity):SelfAttn (Xtl) =softmax (fQ(Xtl)·fK(Xtl)T)fV(Xtl) +Xtl. (3)In contrast, our multi-frame attention layer computes:MultiFrameAttn (Xtl) =softmax (fQ(Xtl)·fK(Xl)T)fV(Xl) +Xtl. (4)Here, Xlrepresents the set of object queries from all frames Xl={Xtl,fort= 1,2, . . . , T }. Thefunctions fQ,fK, and fVare linear transformations that convert XlorXtlinto a C-dim space.The multi-frame attention layer is positioned at the end of each transformer decoder block, whichis repeated Ltimes in our network. Therefore, output object queries can incorporate the densefeature map of the current frame to update their prediction after communicating with object queriesfrom other frames. Furthermore, the multi-frame attention layer is computationally efficient since itattends only to object queries from all frames, typically around 100 queries per frame vs the 1200queries per frame used in Mask2Former-video [12] for images with size 640×480. Such efficiencylets it process long-term input sequences and large-scale images.4.4 Training and LossesOur model adopts the same loss function as utilized in Mask2Former [27] for classification andmask prediction. This includes the softmax cross-entropy loss, denoted as Lclass, for classification,along with binary cross-entropy loss, denoted as Lce, and the Dice loss, denoted as Ldice, for maskprediction. To address tracking loss, we employ the contrastive loss Lcontra used in DCN [32] alongwith the softmax loss (also referred to as the n-pair loss or InfoNCE loss) from CLIP [33]. For anobject token qoiˆt, object tokens paired with the same object oiin different frames qoit, t̸=ˆtare treated5as positive samples and pushed closer, while tokens assigned to different objects or backgrounds aretreated as negative pairs and pushed away. See the supplementary material for more details.Consistent with the approach in DETR [29] and Mask2Former [27], we leverage the Hungarianalgorithm [31] to establish a bipartite matching between the predicted object tokens and groundtruth that minimizes the overall loss. Notably, we exclude the tracking loss from the Hungarianalgorithm’s computation given that Lsoftmax is influenced by the object tokens contributing to thisloss. The final loss is Ltotal=λclassLclass+λceLce+λdiceLdice+λcontraLcontra+λsoftmaxLsoftmax .5 ExperimentsWe assess our methodology in two typical environments—bin in a shelf and tabletop—both of whichare representative settings in a multitude of warehouse and domestic applications. For each setting,we generate corresponding synthetic data for the training phase and collect and annotate real-worlddata for evaluation. We also integrate our approach into a bin-picking robotic system for practicalexperimentation.We train our models separately on distinct synthetic datasets for the shelf and table environments.Each run exclusively uses one dataset and is evaluated against the corresponding real-world test set.Inference takes around 0.4 seconds for a 15-frame sequence on an RTX 2080Ti. Additional trainingdetails are in the supplementary material.5.1 Dataset and EvaluationSynthetic data. We construct mesh models for shelf and table bins using textures from the CC0dataset [34], and object meshes from the Google Scanned dataset [35], consisting of over 1000models. Excluding 70 with isolated parts, we utilize 900 objects for the training set and 100 forvalidation. 
Objects are randomly rotated and positioned to avoid collision, with no heavy occlusionin the shelf environment. In total, we generate around 10,000 sequences for the shelf, each with atleast 2 packed frames, and 2,000 sequences for the tabletop, each containing 15 frames.Real-world data. For real-world scenarios, we collect and manually label 44 sequences with 220images for the shelf scenario and 20 sequences with 280 images for the tabletop scenario. Ourdataset includes over 150 diverse objects. These range from relatively simple objects, such as boxesand bottles, to more complex ones, like transparent water bottles enclosed in plastic bags. In eachsequence, we progressively add objects until either the bin is full or around 10 objects are placed onthe table. Object rearrangement could occur between any two frames, leading to significant changesin object location and appearance.Evaluation metrics. We adopt the evaluation method of the VIS challenge [22], a modified versionof the MS-COCO metric [36]. In video instance segmentation, each object is represented by a seriesof masks, and the Intersection over Union (IoU) is calculated at the level of these mask sequences.To construct the Precision-Recall (PR) curve, the confidence threshold is systematically varied, witheach threshold yielding a distinct data point on the curve. The area under the PR curve provides theAverage Precision (AP).In the context of this study, if not further specified, AP@0.5 denotes the average precision at an IoUthreshold of 0.5. Similarly, AP@all represents the mean AP calculated over multiple IoU thresholds,specifically from 50% to 95% in 5% increments.5.2 Results and AnalysisBaseline methods. We benchmark our approach against three state-of-the-art Video Instance Seg-mentation (VIS) methods: MinVIS [13], Mask2Former-video [12], and VITA [11]. To ensure anequitable comparison, all methods utilize ResNet-50 as the backbone, and the hyperparameters (suchas batch size, maximum iterations, and the number of sampled frames) are standardized to matchours. Hence, all methods are trained on an identical number of images.Qualitative results. As depicted in Table 1, our method notably outperforms existing VIS tech-niques, yielding an approximate 10% improvement in the shelf environment and a 20% increasein the table environment, even without using the multi-frame attention layer. Interestingly, MinVIS6MethodShelf TabletopAP@all AP@0.5 AP@all AP@0.5MinVIS 6.3 21.2 0.7 0.0Mask2Former Video 35.0 66.1 27.7 56.7VITA 42.7 70.1 26.6 55.0STOW (Ours) 55.6 81.3 49.7 75.4Table 1: Comparison between our method and leading video instance segmentation methods. Net-works are trained on synthetic data and tested on real, unseen data. All use ResNet-50[28] and trainon an identical number of images.Figure 4: Visualized results for the shelf environment. Masks with the same color and index areassociated and predicted as the same object by the network.exhibits subpar performance in our task. We speculate that this is due to its reliance on the proximitybetween object tokens as a measure of similarity; this method presupposes that the changes betweenframes are minimal and that objects maintain similar locations and appearances. However, theseconditions do not consistently hold in our task, potentially explaining the subpar performance.Quantative results. We also show visualization results on a subset of real test data in Figure 4and Figure 5. Our approach adeptly handles frame changes amid various types of noise. 
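As context for the AP numbers reported here, the sequence-level IoU underlying the VIS-style metric can be sketched as follows: intersections and unions are accumulated over all frames of a track before dividing, so losing an object in any frame lowers the score. This is a simplified sketch, not the official evaluation code.

```python
import numpy as np

def sequence_iou(pred_track, gt_track):
    """IoU between two mask sequences (lists of HxW boolean arrays).

    Frames where an object is absent are represented by all-False masks,
    so missed or spurious detections reduce the score.
    """
    inter, union = 0, 0
    for p, g in zip(pred_track, gt_track):
        inter += np.logical_and(p, g).sum()
        union += np.logical_or(p, g).sum()
    return inter / union if union > 0 else 0.0
```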
Despite significant movement and rotation between frames, the network successfully segments and tracks a broad array of object categories. It efficiently segments and continues tracking any new objects that are introduced. Figure 5 also demonstrates the method's robustness against backgrounds: it avoids predicting them as objects even though walls are not included in the training set. The supplementary material contains additional results that contrast our approach with other methods.

Figure 5: Visualized results for the table environment. Masks with the same color and index are associated and predicted as the same object by the network.

5.3 Ablation Study

Frame attention. We evaluate the performance of our method with and without the cross-frame attention module on both the shelf and table scenes using ResNet-50 and Swin-T backbones. The cross-frame attention module yields consistent performance improvements across all configurations, as shown in Table 2.

multi-frame        Shelf                 Tabletop
                   AP@all   AP@0.5       AP@all   AP@0.5
-                  51.8     78.7         44.4     68.5
✓                  55.6     81.3         49.7     75.4

Table 2: With and without the multi-frame attention layer. The left column denotes whether we incorporate multi-frame attention in this experiment. All other hyper-parameters remain the same. Use of the frame attention layer boosts both shelf and tabletop environment performance by ∼5%.

Sim2Real gap. We evaluate our method on the synthetic validation set, which contains objects not included in the training set, and compare the results to those obtained on the real test set. Results, shown in Table 3, reveal that the other methods achieve results relatively close to ours on the synthetic set, but their performance drops dramatically when evaluated on the real test set. This implies that they have difficulty bridging the sim2real gap.

Method               Synthetic            Real
                     AP@all   AP@0.5      AP@all   AP@0.5
MinVIS [13]          0.3      2.6         0.7      0.0
M2F-V [12]           71.6     83.7        27.7     56.7
VITA [11]            69.4     81.9        26.6     55.0
Ours                 74.1     89.3        49.7     75.4

Table 3: Ablation study on solving the sim2real gap. After training on the synthetic tabletop training set, we separately evaluate each method on a synthetic tabletop validation set and a real tabletop test set; note that objects in the synthetic validation set are not included in the synthetic training set.

5.4 Real Robot Applications

We integrated our visual perception technique into an autonomous shelf-picking system [37]. The system's multi-component software architecture is managed by a state machine. The system setup uses a UR16e industrial robot situated in front of an industrial warehouse shelf filled with objects. Within the robotics community, it is standard to address perception problems by combining unseen instance segmentation methods with other established techniques, as seen in [1], [38], [39], [40]. In addition to VITA [11] from Table 1, we also combine UCN [4], a staple in unseen instance segmentation, with SIFT [7], a renowned keypoint extraction method, reflecting the conventional solution for this challenge. The center of the mask is used as the grasping point for the suction cup. Our evaluation protocol employs a fixed set of diverse items, stowed at specific locations and orientations within the bins, to ensure reproducibility and comparability of results, since performance can fluctuate with different item configurations and inherent system stochasticity. We test each method over 50 trials across different levels of difficulty, involving over 100 objects. For UCN [4]+SIFT [7], the perception success rate stands at 56%.
With our STOWmethod, this rate increase to 76%.6 LimitationsOur method, while effective, has limitationsin handling highly cluttered environments andcomplex objects. False positives and negativesoccur in object detection, especially in intricatesettings. The segmentation process can resultin over- or under-segmentation due to complexobject boundaries and textures. In object track-ing, we sporadically encounter mistracking incidents and occasional failures to distinguish betweenmultiple objects. Refer to the supplementary material for detailed analyses and examples of theselimitations.7 ConclusionIn this paper, we introduce the task of segmenting and tracking unseen objects in discrete frameswhich is widely used in robotics tasks but under investigation. We formulated the problem andcollected both synthetic and real datasets. We also propose a novel paradigm for joint segmentationand tracking, incorporating multi-frame attention for better inter-frame communication. Even whentrained solely on synthetic data, our method adeptly handles clustering and large movements inreal-world sequences. Our innovative approach excels in segmenting and tracking within both shelfand tabletop settings, surpassing state-of-the-art techniques with a 10%-20% improvement in AP inreal-world scenarios and more than 20% success rate in robot experiments.8AcknowledgmentsThis research is funded by the UW + Amazon Science Hub as part of the project titled, “RoboticManipulation in Densely Packed Containers.” We would like to thank Dr. Michael Wolf from Ama-zon for valuable discussions. We further would like to thank our students Sanjar Normuradov andSoofiyan AtarReferences[1] A. Goyal, A. Mousavian, C. Paxton, Y .-W. Chao, B. Okorn, J. Deng, and D. Fox. Ifor: Iter-ative flow minimization for robotic object rearrangement. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 14787–14797, 2022.[2] Y . Lu, N. Khargonkar, Z. Xu, C. Averill, K. Palanisamy, K. Hang, Y . Guo, N. Ruozzi, andY . Xiang. Self-supervised unseen object instance segmentation via long-term robot interaction.arXiv preprint arXiv:2302.03793 , 2023.[3] C. Xie, Y . Xiang, A. Mousavian, and D. Fox. Unseen object instance segmentation for roboticenvironments. IEEE Transactions on Robotics , 37(5):1343–1359, 2021.[4] Y . Xiang, C. Xie, A. Mousavian, and D. Fox. Learning rgb-d feature embeddings for unseenobject instance segmentation. In Conference on Robot Learning , pages 461–470. PMLR, 2021.[5] Y . Lu, Y . Chen, N. Ruozzi, and Y . Xiang. Mean shift mask transformer for unseen objectinstance segmentation. arXiv preprint arXiv:2211.11679 , 2022.[6] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead,A. C. Berg, W.-Y . Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643 , 2023.[7] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International journalof computer vision , 60:91–110, 2004.[8] Z. Teed and J. Deng. Raft: Recurrent all-pairs field transforms for optical flow. In ComputerVision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceed-ings, Part II 16 , pages 402–419. Springer, 2020.[9] H. K. Cheng and A. G. Schwing. Xmem: Long-term video object segmentation with anatkinson-shiffrin memory model. In Computer Vision–ECCV 2022: 17th European Con-ference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVIII , pages 640–658.Springer, 2022.[10] J. Yang, M. Gao, Z. Li, S. Gao, F. Wang, and F. Zheng. 
Track anything: Segment anythingmeets videos, 2023.[11] M. Heo, S. Hwang, S. W. Oh, J.-Y . Lee, and S. J. Kim. Vita: Video instance segmentation viaobject token association. arXiv preprint arXiv:2206.04403 , 2022.[12] B. Cheng, A. Choudhuri, I. Misra, A. Kirillov, R. Girdhar, and A. G. Schwing. Mask2formerfor video instance segmentation. arXiv preprint arXiv:2112.10764 , 2021.[13] D.-A. Huang, Z. Yu, and A. Anandkumar. Minvis: A minimal video instance segmentationframework without video-based training. arXiv preprint arXiv:2208.02245 , 2022.[14] N. Xu, L. Yang, Y . Fan, D. Yue, Y . Liang, J. Yang, and T. Huang. Youtube-vos: A large-scalevideo object segmentation benchmark. arXiv preprint arXiv:1809.03327 , 2018.[15] J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbel ́aez, A. Sorkine-Hornung, and L. Van Gool. The2017 davis challenge on video object segmentation. arXiv preprint arXiv:1704.00675 , 2017.9[16] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft. Simple online and realtime tracking. In2016 IEEE international conference on image processing (ICIP) , pages 3464–3468, 2016.[17] N. Wojke, A. Bewley, and D. Paulus. Simple online and realtime tracking with a deep as-sociation metric. In 2017 IEEE international conference on image processing (ICIP) , pages3645–3649. IEEE, 2017.[18] J. Luiten, I. E. Zulfikar, and B. Leibe. Unovost: Unsupervised offline video object segmen-tation and tracking. In Proceedings of the IEEE/CVF winter conference on applications ofcomputer vision , pages 2000–2009, 2020.[19] S. Garg and V . Goel. Mask selection and propagation for unsupervised video object segmenta-tion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision ,pages 1680–1690, 2021.[20] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification:A benchmark. In Proceedings of the IEEE international conference on computer vision , pages1116–1124, 2015.[21] A. Wu, W.-S. Zheng, H.-X. Yu, S. Gong, and J. Lai. Rgb-infrared cross-modality person re-identification. In Proceedings of the IEEE international conference on computer vision , pages5380–5389, 2017.[22] L. Yang, Y . Fan, and N. Xu. Video instance segmentation. In Proceedings of the IEEE/CVFInternational Conference on Computer Vision , pages 5188–5197, 2019.[23] J. Qi, Y . Gao, Y . Hu, X. Wang, X. Liu, X. Bai, S. Belongie, A. Yuille, P. H. Torr, and S. Bai.Occluded video instance segmentation: A benchmark. International Journal of ComputerVision , 130(8):2022–2039, 2022.[24] S. Caelles, J. Pont-Tuset, F. Perazzi, A. Montes, K.-K. Maninis, and L. Van Gool. The 2019davis challenge on vos: Unsupervised multi-object segmentation. arXiv:1905.00737 , 2019.[25] C. Rother, T. Minka, A. Blake, and V . Kolmogorov. Cosegmentation of image pairs by his-togram matching-incorporating a global constraint into mrfs. In 2006 IEEE Computer SocietyConference on Computer Vision and Pattern Recognition (CVPR’06) , volume 1, pages 993–1000. IEEE, 2006.[26] Y . Zhao, Z. Li, X. Guo, and Y . Lu. Alignment-guided temporal attention for video actionrecognition. Advances in Neural Information Processing Systems , 35:13627–13639, 2022.[27] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar. Masked-attention mask trans-former for universal image segmentation. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 1290–1299, 2022.[28] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 
In Pro-ceedings of the IEEE conference on computer vision and pattern recognition , pages 770–778,2016.[29] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-end ob-ject detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference,Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16 , pages 213–229. Springer, 2020.[30] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai. Deformable detr: Deformable transformersfor end-to-end object detection. arXiv preprint arXiv:2010.04159 , 2020.[31] H. W. Kuhn. The hungarian method for the assignment problem. Naval research logisticsquarterly , 2(1-2):83–97, 1955.10[32] P. R. Florence. Dense visual learning for robot manipulation . PhD thesis, MassachusettsInstitute of Technology, 2020.[33] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In International conference on machine learning , pages 8748–8763. PMLR, 2021.[34] ambientCG. Creative Commons Zero (CC0). https://ambientcg.com , 2023.[35] L. Downs, A. Francis, N. Koenig, B. Kinman, R. Hickman, K. Reymann, T. B. McHugh,and V . Vanhoucke. Google scanned objects: A high-quality dataset of 3d scanned householditems. In 2022 International Conference on Robotics and Automation (ICRA) , pages 2553–2560. IEEE, 2022.[36] T.-Y . Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll ́ar, and C. L. Zitnick.Microsoft coco: Common objects in context. In Computer Vision–ECCV 2014: 13th EuropeanConference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13 , pages 740–755. Springer, 2014.[37] M. Grotz, S. Atar, Y . Li, P. Torrado, B. Yang, N. Walker, M. Murray, M. Cakmak, andJ. Smith. Towards robustly picking unseen objects from densely packed shelves. In Work-shop on Perception and Manipulation Challenges for Warehouse Automation , 2023. URLhttp://armbench.s3.amazonaws.com/rss23.html .[38] A. Gouda, A. Ghanem, and C. Reining. Dopose-6d dataset for object segmentation and 6dpose estimation. 2022 21st IEEE International Conference on Machine Learning and Applica-tions (ICMLA) , pages 477–483, 2022. URL https://api.semanticscholar.org/CorpusID:254044017 .[39] J. Chen, M. Sun, T. Bao, R. Zhao, L. Wu, and Z. He. 3d model-based zero-shot pose estimationpipeline. ArXiv , abs/2305.17934, 2023. URL https://api.semanticscholar.org/CorpusID:258960481 .[40] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-graspnet: Efficient 6-dofgrasp generation in cluttered scenes. In 2021 IEEE International Conference on Robotics andAutomation (ICRA) , pages 13438–13444. IEEE, 2021.[41] P. R. Florence, L. Manuelli, and R. Tedrake. Dense object nets: Learning dense visual objectdescriptors by and for robotic manipulation. arXiv preprint arXiv:1806.08756 , 2018.A Dataset DetailA.1 Synthetic DataWe built a synthetic dataset using high-quality household models from the GoogleScanneddataset[35] with two typical settings: a) Shelf and b) Tabletop.Shelf environment. In shelf environments or other bin-based object arrangements, the objects areakin to books and are constrained to a shortest-dimension-faces-outward orientation. This schemeensures that each object is guaranteed to have at least one visible face, but it also leads to significantocclusion among objects. 
The camera is positioned at the front of the bin to capture images of thescene, subject to random perturbations in the location that inject noise into the data.Given that each bin contains a maximum of 3 to 5 objects, segmentation and tracking tasks becometrivial if the scene contains fewer than 3 objects. To address this issue, image frames are generatedonly when the bin is nearly full. We leverage approximately 900 objects sourced from the GoogleScanned dataset, resulting in a training set of approximately 9000 image pairs. We use the remaining11Figure 6: Some objects used during the evaluation. Objects vary greatly in shape and physicalproperties, with some being partially transparent or wrapped in a bag.100 objects to generate approximately 1000 image pairs for the test set. Each image pair may exhibitthe introduction of a new object in addition to existing objects undergoing a flipping operation orrelocation with a certain probability.Tabletop environments. Generating datasets of objects placed on a table requires different set-tings given the absence of walls and typically larger surface area than in a bin-based object ar-rangements. As a result, we adopt an alternative strategy for dataset generation. Specifically, eachsequence consists of 15 images, with the first 10 images incrementally introducing new objects whileshuffling existing objects between frames. No new objects are added in the final 5 frames, though theshuffling of existing objects persists. Due to the random placement of objects on the table, instancesof full occlusion may occur in certain frames and subsequently reappear in subsequent frames.To construct our training and testing datasets, we utilize 900 objects sourced from the GoogleScanned dataset, producing 2000 sequences for the training set, with the remaining 100 objectsused to generate 500 sequences for the test set.A.2 Real DataAs we did for the synthetic evaluation, we split the evaluation into shelf and tabletop environments,the most common real-world scenarios encountered. To evaluate our method on challenging real-world scenarios, we need a large variety of objects; Figure 6 depicts some of the objects used duringthe tablefop evaluation. For shelf environments, we use an Azure Kinect RGB-D sensor, and fortabletop ones we use an Intel Realsense D455 camera. Camera distance ranges from 1 to 1.5 meters.Each time an object is placed on the table or in a new bin, a new image is captured. Objects canbe rearranged to maximize space utilization as they are placed in the scene. After all the objectsare placed in the scene, we also displace the objects for a more refined evaluation. Camera imagesare manually labeled using the interactive segmentation of the object tracking framework XMem[9]. We collected and annotated more than 280 images with more than 150 different objects for thetabletop scenario and 220 images for the shelf scenario.B Training and Inference DetailsB.1 Training DetailsWe set the maximum number of iterations to 16k using an initial learning rate of 1e-5, which wasthen dropped by 0.1 after 14k iterations. The number of classes is set to 1 since we are aimingto handle unseen objects. For the shelf dataset, we trained our network with a batch size of 32 andleveraged 2 frames from each sequence; for the table dataset, we set the batch size to 8 and randomlyselected 4 frames from each sequence. To enhance the diversity of our dataset, we applied randomcolor jittering and rotation to the input before feeding it to the network. 
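As an illustrative sketch (the exact magnitudes below are placeholders rather than our training values), such an augmentation can be expressed with torchvision as:

import torchvision.transforms as T

# Color jitter + random rotation applied to each RGB frame before it enters the network.
# Magnitudes are illustrative; the ground-truth masks must be rotated with the same parameters.
augment = T.Compose([
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),
    T.RandomRotation(degrees=15),
])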
The training process was executed on a single NVIDIA A-40 GPU and took approximately 13 hours.

During the training phase, we excluded the initial predicted object embedding, which was directly generated from the query feature. Additionally, when handling negative queries, we adopted a more selective approach by considering only queries whose IoU with any ground truth was lower than 0.6, rather than regarding all unmatched queries as negatives. This was motivated by the lack of clarity regarding which patches truly represent objects in unseen-object settings (in contrast to closed-set settings).

B.2 Associator

We show below an example of code demonstrating how to associate object tokens from a new frame with the trajectory bank built in previous frames. In our implementation, we set σ_score = 0.6 and σ_match = 0.2 (similarities range in [−1, 1]); they correspond to the delta_score and delta_track arguments below, and the Hungarian matching is performed with SciPy's linear_sum_assignment.

import torch
from scipy.optimize import linear_sum_assignment

def associate_one_frame(traj_bank, object_tokens_cur_frame, delta_score, delta_track):
    # Keep only confident detections (delta_score corresponds to sigma_score).
    object_tokens = [x for x in object_tokens_cur_frame if x['score'] > delta_score]
    num_trackers = len(traj_bank)
    Nq = len(object_tokens)
    if Nq == 0:
        return traj_bank
    # Rows [0, num_trackers) hold trajectory-to-token similarities; the extra Nq rows are
    # filled with delta_track (sigma_match) so that poorly matched tokens start new trajectories.
    similarity = torch.ones(num_trackers + Nq, Nq) * delta_track
    # Object embeddings of the current frame's tokens, shape (Nq, C), unit-normalized.
    obj_embed = torch.stack([x['obj_embed'] for x in object_tokens])
    # Cosine similarity between each existing trajectory and the current frame's tokens.
    for traj_idx, traj in enumerate(traj_bank):
        traj_obj_embed = torch.stack([x['obj_embed'] for x in traj])  # (len(traj), C)
        sim = traj_obj_embed @ obj_embed.T                            # (len(traj), Nq)
        similarity[traj_idx] = sim.max(dim=0)[0]
    # Hungarian matching maximizes the total similarity (minimizes its negation).
    traj_indices, token_indices = linear_sum_assignment(-similarity.numpy())
    # Update the trajectory bank.
    for traj_idx, token_idx in zip(traj_indices, token_indices):
        if traj_idx >= num_trackers:
            # Not matched to any existing trajectory: start a new one.
            traj_bank.append([object_tokens[token_idx]])
        else:
            traj_bank[traj_idx].append(object_tokens[token_idx])
    return traj_bank

B.3 Loss

We keep the loss functions that Mask2Former uses for classification and mask prediction, i.e., binary cross-entropy and Dice loss for mask prediction and softmax cross-entropy loss for classification. For the object embedding head, we use two additional losses: a contrastive loss and a softmax loss (also known as the n-pair or InfoNCE loss).

Contrastive Loss. We use a contrastive loss modified from DCN [41] with the hard-negative scaling from [32]:

L_matches(Q) = (1 / N_matches) Σ_matches D(q_{t1}^{o_i}, q_{t2}^{o_i})^2,   (5)

L_non-matches(Q) = (1 / N_hard-neg) Σ_non-matches max(0, M − D(q_{t1}^{o_i}, q_{t2}^{o_j})), i ≠ j,   (6)

L(Q) = L_matches(Q) + L_non-matches(Q),   (7)

where

N_hard-neg = Σ_non-matches 1( M − D(q_{t1}^{o_i}, q_{t2}^{o_j}) > 0 ).   (8)

Here, Q denotes all object tokens from the sampled images, and q_t^{o_i} denotes the object token assigned to object o_i in frame t. M is a margin parameter used to ensure that non-matched pairs are at least a distance M apart. The distance function D is the cosine distance, as in UCN [4], defined as:

D(q_i, q_j) = (1/2) (1 − r_i · r_j).   (9)

Here, r_i = f(q_i) / |f(q_i)| is the object embedding of object token q_i, computed by first forwarding the token through a linear layer f and then normalizing it to a unit vector. To expedite training, we selectively incorporate only a subset of negative queries into the contrastive loss, thereby enhancing its efficiency.

An illustration of the contrastive loss is shown in Figure 7. Assuming that three frames are sampled from a sequence during training, the contrastive loss is computed between all frames.
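Before turning to the pairing pattern illustrated in Figure 7, the snippet below gives a minimal PyTorch sketch of Eqs. (5)–(9); it assumes the embeddings are already unit-normalized and gathered into matched and non-matched pairs, and the margin value and tensor names are illustrative placeholders (the pair mining and negative-query subsampling described above are omitted).

import torch

def cosine_distance(r_a, r_b):
    # Eq. (9): D = 0.5 * (1 - r_a . r_b), with both embeddings unit-normalized.
    return 0.5 * (1.0 - (r_a * r_b).sum(dim=-1))

def contrastive_tracking_loss(r_match_a, r_match_b, r_non_a, r_non_b, margin=0.5):
    # Eq. (5): pull embeddings of the same object in different frames together.
    l_matches = cosine_distance(r_match_a, r_match_b).pow(2).mean()
    # Eqs. (6) and (8): hinge on non-matched pairs, averaged over the hard negatives only.
    hinge = (margin - cosine_distance(r_non_a, r_non_b)).clamp(min=0.0)
    num_hard_neg = (hinge > 0).sum().clamp(min=1)
    l_non_matches = hinge.sum() / num_hard_neg
    # Eq. (7): total contrastive loss.
    return l_matches + l_non_matches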
In the figure, matched pairs are denoted by dark gray and contribute to the loss according to Equation 5, while non-matched pairs are denoted by light gray and contribute according to Equation 6.

Softmax Loss. We also use a modified version of the n-pair/InfoNCE loss from CLIP [33]:

L_softmax(t) = − Σ_{k ∈ Q+} Σ_{i ∈ O(t)} [ exp(r_k · r_{i,t} · e^τ) / Σ_{j ∈ Q_t} exp(r_k · r_{j,t} · e^τ) ],   (10)

L_softmax = (1/T) Σ_{t ∈ {1,...,T}} L_softmax(t),   (11)

where Q+ denotes all positive queries, O(t) denotes all objects in frame t, and Q_t denotes all queries in frame t. This is also shown in Fig. 7(b), where the label of each row corresponds to the index of the query assigned to the same object. If there are n identical objects in the same frame, the softmax loss is extended by copying all queries in this frame n times, each time keeping only one query for that object. This converts queries containing the same object into multiple query sets, each consisting of only the target object. Thus, the final tracking loss can be represented as

L_track = λ_contra L_contra + λ_softmax L_softmax.   (12)

Figure 7: Tracking loss. In this example, three frames are sampled from a sequence, denoted with different border colors. Object tokens that match the objects in the images are represented by a blue square, an orange triangle, and a red circle. The hexagon denotes background object tokens that do not match any objects. (a) The contrastive loss is computed between all frames, where matched pairs (dark gray) apply the loss of Equation 5, non-matched pairs (white) apply the loss of Equation 6, and ignored pairs (light gray) do not contribute to the loss. (b) The n-pair/InfoNCE loss is computed over all positive queries and the queries from each frame; it is equivalent to a softmax cross-entropy whose label is the index of the query assigned to the same object.

C Detailed Analysis

C.1 Sim-to-real Gap

Results are shown in Table 4. The experiment is conducted in the tabletop environment with the same setting as in the main manuscript.

Method                     syn., video AP     syn., image AP     real, video AP     real, image AP
                           AP@all  AP@0.5     AP@all  AP@0.5     AP@all  AP@0.5     AP@all  AP@0.5
UCN [4]                    -       -          67.3    82.9       -       -          52.4    86.6
MinVIS [13]                0.3     2.6        82.4    91.8       0.7     0.0        54.5    72.7
Mask2Former Video [12]     71.6    83.7       72.8    82.6       27.7    56.7       38.9    57.3
VITA [11]                  69.4    81.9       70.6    80.2       26.6    55.0       41.4    63.0
Ours                       74.1    89.3       87.6    95.3       49.7    75.4       80.1    97.6

Table 4: Evaluation of SOTA VIS methods on the unseen object instance segmentation task. "syn." indicates evaluation on a synthetic tabletop dataset; "real" denotes evaluation on a real-world tabletop dataset. The evaluation metrics include "video" and "image" AP for assessing performance in different contexts (as described in subsection B.1). We see that MinVIS exhibits superior performance in detection, while Mask2Former Video and VITA excel in matching. Remarkably, our proposed method harnesses the strengths of both approaches, surpassing all evaluated methods in overall performance.

We observe the following:

(1) The Image AP results on both the synthetic validation set and the real test set demonstrate that MinVIS outperforms Mask2Former Video and VITA. This indicates that the approach of Mask2Former Video and VITA, which utilizes a single object token to predict object masks across an entire sequence, is less effective than using distinct object tokens for each frame. Consequently, it is difficult for such object tokens to efficiently manage discrete frames with considerable movement and appearance variations.

(2) A significant decline is apparent in the Image AP and Video AP results on the real test set for MinVIS.
This suggests a suboptimal tracking performance when applying object tokens directly.(3) Notably, our method experiences a less dramatic drop in performance, as evidenced by the VideoAP results on both the synthetic validation set and the real test set. This suggests that our method ismore adept at managing the simulation-to-reality gap.D Failure CasesThe efficacy of our method is sometimes compromised in scenarios characterized by crowded scenesor significant alterations in object appearance, as illustrated in Figure 10. We categorize these short-comings into two principal groups: segmentation failures and tracking failures.Segmentation failures arise from issues such as:•Under-segmentation : This occurs when objects have similar colors or lack clear borders,leading to a blending of distinct entities.•Over-segmentation : In this case, a single object is erroneously identified as multiple entitiesdue to recognition failures.15Figure 8: Results from different methods on the tabletop dataset. Methods ordered from top tobottom: MinVIS, Mask2Former-Video, VITA, and Ours (STOW).•Detection failure : Here, an object is entirely missed, leading to its absence in the segmentedoutput.Tracking failures, on the other hand, include:•Mismatch : This involves incorrect associations between objects across frames or confusionarising from similar-looking distractors.•Mistrack : In these instances, the algorithm fails to consistently identify the same object,resulting in tracking inconsistencies.In both categories of failure, the complexity of scene compositions and variations in object appear-ances are pivotal factors that undermine the performance of our tracking method. We are devotedto exploring advanced strategies to mitigate these limitations, aiming for enhanced robustness indiverse and dynamic environments.16Figure 9: Results from different methods on the bin dataset. Methods ordered from top to bottom:MinVIS, Mask2Former-Video, VITA, and Ours (STOW).Figure 10: Failure Cases Illustrated. In the Tabletop settings (top 2 rows), we observe under-segmentation (top left), over-segmentation (top right), mismatch (bottom left), and mistrack (bot-tom right). In the Shelf settings (bottom two rows), the failures include a combination of under-segmentation and failure to detect new objects (top left), failure to track and mismatch (top right),failure to detect (bottom left), and failure to segment (bottom right).17 |
IeKC9khX5jD | Affordance-Driven Next-Best-View Planning forRobotic GraspingXuechao Zhang1,2,*, Dong Wang2,B, Sun Han1,2, Weichuang Li2,Bin Zhao2, Zhigang Wang2, Xiaoming Duan1, Chongrong Fang1, Xuelong Li2, Jianping He1,B1Shanghai Jiao Tong University,2Shanghai Artificial Intelligence LaboratoryAbstract: Grasping occluded objects in cluttered environments is an essentialcomponent in complex robotic manipulation tasks. In this paper, we introducean AffordanCE-driven Next-Best-View planning policy (ACE-NBV) that tries tofind a feasible grasp for target object via continuously observing scenes fromnew viewpoints. This policy is motivated by the observation that the grasp af-fordances of an occluded object can be better-measured under the view when theview-direction are the same as the grasp view. Specifically, our method leveragesthe paradigm of novel view imagery to predict the grasps affordances under pre-viously unobserved view, and select next observation view based on the highestimagined grasp quality of the target object. The experimental results in simula-tion and on a real robot demonstrate the effectiveness of the proposed affordance-driven next-best-view planning policy. Project page: https://sszxc.net/ace-nbv/.Keywords: Grasp Synthesis, Neural SDF, Next-Best-View Planning1 IntroductionWhen we aim to pick up an occluded object from an unstructured environment, observations froma single perspective often fail to provide sufficient affordances information, and we spontaneouslymove our heads to obtain new perspectives of the occluded object. The driving force behind theactions that lead us to seek the next observation relies on our imagination and spatial reasoningabilities. We know in which direction we can better interact with objects, and we will choose toobserve in this direction the next time. However, current intelligent robotic systems are not able toperform these tasks efficiently, and a unified framework for addressing this challenge is lacking. Inthis work, we aim to investigate the feasibility of endowing robots with this capability.As shown in Fig. 1, we focus on the task of grasping a specific object in cluttered scenes by a roboticarm with a parallel-jaw gripper. There are relatively mature approaches for predicting grasp posesfor unknown objects in cluttered scenes, and most of them first observe the entire scene from one ormore fixed viewpoints [1, 2] and predict grasps of all objects at once. However, these methods mayfail to predict a feasible grasp for a specific object due to heavy occlusions between objects. In orderto design a more stable grasp affordances prediction pipeline, some previous works [3, 4] introducedactive perception modules to observe the scene from several new selected viewpoints before execut-ing the grasp. They all select the new observation view directions based on the information gainof object geometry reconstruction. However, the improvement of geometry reconstruction does notalways indicate a better grasp quality.In this paper, as shown in Fig. 1, we build on the intuition that the grasp affordances can be better-measured using the observation when the observation view direction is the same as the grasp di-rection. 
Based on this insight, we leverage the novel view imagery ability of the implicit neural* Work done while the author was an intern at Shanghai Artificial Intelligence Laboratory.BCorresponding authors: Dong Wang (wangdong@pjlab.org.cn), Jianping He (jphe@sjtu.edu.cn)7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.12Occluded ViewImagined ViewFigure 1: The task of the robotic arm is to grasp the red target object, but the view of its in-handcamera is hindered by a cluttered scene, making it difficult to provide high-quality and collision-freegrasping pose predictions. In this study, we draw insight that the grasp affordance of occluded targetobject can be well-measured when the view direction is the same as the grasp direction, and proposea framework that plans the next observation based on the increment of grasp affordances to find afeasible grasp on the target object.representation to predict the grasp affordances of imagined novel grasps, and set the next best ob-servation view to the imagined grasp view that yields the highest gain in the grasp quality ratherthan object geometry reconstruction. Specifically, we first propose a view-aware grasp affordancesprediction module to effectively exploit the target object geometry information and occlusion re-lationships between objects for better grasp synthesis. Then, we adopt similar training paradigmas NeRF [5] to enable our model to imagine the scene representation from previously unobservedviewpoints. With this scene imagery ability, we predict the grasp affordances of target object undermany imagined views with proposed view-aware grasp affordances prediction module. Last, a next-best-view planning policy is designed to continuously observe the scene from selected new viewsuntil a feasible grasp on target object is found. In summary, the contributions of this work are asfollows:• We propose a view-aware grasp affordances prediction module for better grasp synthesison an occluded target object in cluttered environments.• We design a next-best-view planning framework that leverages the implicit neural repre-sentation to jointly predict imagined grasp affordances under unseen views and select thenext observation view based on the grasp affordances prediction.• We demonstrate significant improvements of our model over the state-of-the-art for thegrasp task in cluttered scenes in simulation and on a real robot.2 Related Works2.1 Grasp DetectionGrasping objects is one of the fundamental abilities for robotic manipulators in manipulation tasks.In order to grasp diverse unknown objects in any environment, a robotic system must effectivelyutilize the geometric information gathered from its sensors to calculate the feasible grasping poses.Recent advances in deep learning methods have led to rapid developments in robot object grasp-ing [6, 7, 8, 9, 10, 11, 12]. A significant portion of these methods do not require object localizationand object pose estimation, but instead perform grasp affordances prediction using end-to-end ap-proaches [1, 13]. In particular, Dex-Net [13, 14] adopts a two-step generate-and-evaluate approachfor top-down antipodal grasping, and VGN [1] introduced a one-step approach for predicting 6-DoFgrasping configurations in cluttered environments. GIGA [2] exploits the synergistic relationshipsbetween the grasp affordances prediction and 3D reconstruction of scenes in cluttered environmentsfor grasp detection. 
These works all take fixed single or multiple images as input [1, 2, 13, 15], andthe robustness of these methods is largely influenced by the observation camera viewpoints [16].When dealing with complex environments with strong occlusions, many works [3, 4, 17] try tograsp objects by dynamically moving the observation sensors to obtain additional scenes and objectgeometry information.22.2 Next-Best-View PlanningNext-Best-View (NBV) planning, which aims to recursively plan the next observation position forsensors, is one of the most challenging problems in active vision for robotics [18]. Compared tothe passive observation paradigm, active perception with next-best-view planning enables a moreflexible way of obtaining environment information. It has been applied in various fields, such asobject reconstruction [19, 20, 21], object recognition [22, 23], and grasp detection [3, 4, 17]. NBVplanning is typically divided into two categories: synthesis methods and search methods. Synthesismethods directly calculate the next observation position based on current observations and taskconstraints [24], with some methods [25] working within the paradigm of reinforcement learning.On the other hand, search methods first generate a certain number of candidate viewpoints and thenselect viewpoints based on human-designed criteria. Most search-based approaches use the gain of3D geometry reconstruction as the metric to select next viewpoints [3, 4, 19]. In particular, Arrudaet al. [3] propose a next-best-view planning policy that maximizes object surface reconstructionquality between the object and a given grasp. Breyer et al. [4] design a closed-loop next-best-viewplanner based volumetric reconstruction of the target object. Recent work based on neural radiancefields has also proposed some uncertainty-driven methods [26, 27, 28, 29, 30]. However, for grasptasks, evaluating viewpoints from grasp affordances is a more direct approach [17]. In this article,we mainly explore how to use grasp affordances information for NBV planning.3 Problem FormulationNoYesPlan Next ViewUpdate PerceptionCompute GraspExecute GraspStopping criteria satisfied?Figure 2: Overview of the next-best-view plan-ning policy for grasping.We consider the same active grasp problem asin [4]: picking up an occluded target object incluttered scenes using a robotic arm with aneye-in-hand depth camera. As shown in Fig. 2,the target object is partly visible within the ini-tial camera view field and a 3D bounding boxis given to locate the target object. Our goal isto design a policy that moves the robotic arm tofind a feasible grasp for the target object.An overview of the whole system is shown inFig. 2. Given a cluttered scene on a tabletop andan occluded target object Twith correspondingbounding box Tbbox, we aim to predict a feasi-ble 6-DoF grasp Gfor the target object T. Specifically, at each time t, we obtain the observationDtand integrate it into a Truncated Signed Distance Function (TSDF) Mt. Then we predict sev-eral possible grasps G1,G2, . . . ,GNfor the target object based on current Mt. Next, we use astopping criterion to determine whether a feasible grasp on the target object has been found. If thestopping criterion is satisfied, we select the grasp of G∗with the highest predicted quality to exe-cute. Otherwise, our proposed model computes a Next-Best-View Ot+1and moves the robotic armto this viewpoint to get a new observation Dt+1which will be integrated into Mt+1. Then, a set ofnew grasps are predicted using Mt+1. 
This observe-predict-plan closed-loop policy is continuouslyrunning until the stopping criterion is met.4 MethodWe now present the AffordanCE-driven Next-Best-View planning policy (ACE-NBV), a learningframework that leverages the paradigm of novel view imagery to predict the grasp affordances forthe unseen views and achieve the closed-loop next-best-view planning according to predicted graspaffordances. As shown in Fig. 3, our model is composed of two modules: 1) a view-aware graspaffordances prediction module, and 2) an affordance imagery of unseen novel views module. Wediscuss these two modules in Sec. 4.1 and Sec. 4.2, respectively, followed by the proposed next-best-view planning policy in Sec. 4.3 and training details in Sec. 4.4.3Grasp QualityGripper WidthGrasp Rotation3D Feature VolumeCSDFdirectionvGwGqGrGvDepth RendererDepth ImageCrayCgeoTSDFMGrasp3D ConvEncoderxzyFigure 3: Architecture of the proposed ACE-NBV . The input is a TSDF voxel field Mobtained fromthe depth image. The upper branch predicts the grasp affordances for the target object and the lowerbranch synthesizes the depth image of different views, including the previously unseen views. Bothbranches share the same tri-plane feature volume C.4.1 View-Aware Grasp Affordances PredictionIn this work, we define the grasp affordances in the form of grasp quality Gq, grasp centerGp= (x, y, z ), grasp orientation GR∈SO(3), and opening width Gwof the parallel-jaw grip-per. Following the grasp pose representation in [31], we decouple the grasp orientation GRas thegrasp view Gvand in-plane rotation Gr. Because of the heavy occlusions in the cluttered scenes,we draw insight that the grasp affordances can be better estimated using the observation whose theview-direction Ovis the same as the grasp direction Gv. Motivated by this, we propose a novelview-aware grasp affordances prediction module to predict the in-plane rotation Gr, grasp qualityGqand gripper width Gwgiven a specific grasp center Gpand grasp direction Gv.Specifically, given a depth image Dt∈RH×Wcaptured by the depth camera on a robotic arm, wefirst integrate it into a TSDF Mt∈R40×40×40, which represents a cubic workspace of size Landcan be incrementally updated with the new observed depth images. As shown in Fig. 3, at each timestept, our model takes the current TSDF voxel Mtas input and processes it with a 3D CNN networkto obtain a tri-plane feature volume C∈Rh×w×3as in [32], which is shared for view-aware graspaffordances prediction and novel view depth synthesis.For grasp affordances prediction of the occluded target object, we first uniformly sample Npointsin 3D as grasp centers Gp1,Gp2, ...,GpNin the given 3D bounding box, and set the current obser-vation view Ovto grasp view Gv. To predict grasp affordance for a specific grasp center Gp, wecast a ray r= (Gp,Gv)from orthographic cameras [33] origins oalong the direction Gvpassingthrough the grasp center Gp. In particular, as shown in Fig. 3, these rays are cast into the tri-planefeature volume Candnpoints S={s1,s2, ...,sn}are uniformly sampled along each ray. Then, wequery the local features Csiof these 3D points from the shared tri-plane feature volume, and theselocal features are integrated together as a ray feature Crayvia max pooling, i.e.,Cray=maxpool (Cs1,Cs2, ...,Csn). 
(1)This ray feature captures the occlusion relationships between objects along the ray direction, whichis essential for grasp affordances prediction in cluttered scenes.In addition, local geometry information around the grasp center is another key factor for grasp af-fordances prediction on a target object. Therefore, as shown in Fig. 3, we draw a small 3D boundingbox along the view-direction Ovwith fixed length and width around grasp center Gp. Then weobtain the tri-plane features of the eight vertices of this cuboid Cvert 1,Cvert 2, ...,Cvert 8and con-catenate these features with the feature of grasp center CGp. This concatenated feature is denotedas local geometry feature Cgeo, i.e.,Cgeo=concat (Cvert 1,Cvert 2, ...,Cvert 8,CGp). (2)4Based on the above ray feature and local geometry feature, we implement the grasp affordancesprediction module as a small fully-connected neural network fGthat takes Gv,Cray,Cgeoas inputand outputs the in-plane rotation Gr, grasp quality Gqand gripper width Gw,Gr,Gq,Gw←fG(Gv,Cray,Cgeo). (3)In (3), Gvis a 3-dimensional unit vector that represents the view direction, Gw∈[0, wmax]wherewmaxis the maximum gripper width, and grasp quality Gq∈[0,1].4.2 Affordance Imagery with Implicit Neural RepresentationInspired by the impressive performance of neural radiance fields in the new view synthesis, we adoptthe same paradigm to enable our model to imagine the scene geometry from previously unobservedviewpoints. With this scene imagery ability, our model can predict reasonable grasp affordancesunder unseen viewpoints, and the imagined grasp affordances are used for the next-best-view selec-tion. Specifically, as in Fig. 3, we share the same tri-plane feature volume Cfor novel view depthsynthesis and grasp affordances prediction, and the network is trained with two tasks simultaneously.First, we build a geometry decoder upon the shared tri-plane feature volume Cfor novel view depthsynthesis. We implement this geometry decoder as an MLP network that takes local feature Cxyzofa 3D point p= (x, y, z )as input and predict its signed distance function (SDF) value. Then, for agiven view direction Dv, we sample a series of 3D points along the ray and synthesize depth imagesDusing their corresponding SDF values, following the approach described in NeuS [34], i.e.,D←FS(C,Dv), (4)where FSdenotes the whole network branch for novel view depth synthesis. Note that here we onlyutilize depth images as supervision, which differs from the approach described in the original NeuSpaper. The optimization with (4) makes the shared tri-plane feature volume able to reason scenegeometry under unseen viewpoints, which aids us in grasping affordances imagery.For grasp affordances imagery, we utilize the method described in (3) to predict grasps at points inthe bounding box Tbboxfrom given direction Gv. The model takes current feature volume Casinput and imagine a grasp affordance map for a novel view. Let FGrepresent the whole networkbranch for grasp affordances prediction, the affordances imagery pipeline is formulated as:G←FG(Tbbox,C,Gv). (5)4.3 Next-Best-View Planning for GraspingWe design a closed-loop next-best-view planning policy πto determine the next observation viewwhich is most beneficial for grasping the target object when no feasible grasp is found. Let Gvbea view from a set of potential next observation views Gv⊂SE(3). 
The goal of our next-best-viewplanning policy is to find the next observation view Ov,t+1with the highest predicted grasp qualityG∗qfrom a set of imagined grasp affordances for the target object, i.e.,Ov,t+1←arg maxGv∈GvG∗q(Tbbox,Ct,Gv). (6)We adopt a methodology similar to that presented in [4] to generate potential next grasping viewsGv, and predict the imagined grasp affordances under these potential views with the above affor-dances imagery module. In addition, we use two simple stopping criteria to decide whether to stopor to continue to find the next observation view. First, the policy is terminated if the highest graspquality G∗qof currently predicted grasps is above a given threshold qmax. Second, we impose amaximum number of next-best-view planning steps Tmax. We summarize the overall next-best-viewplanning policy for grasping as an algorithm in the appendix.4.4 TrainingThe network is trained end-to-end using ground-truth grasps obtained through simulated trials.To achieve generalizable grasp affordances imagery, we generate three types of input-output data5pairs: front-observe-front-grasp ,front-observe-side-grasp , and multi-observe-front-grasp . Thefront-observe-front-grasp means that our model takes one front view depth image as input andpredicts grasp affordances and depth image under the same front view. Similarly, the other twotypes of data pairs represent different input and prediction task pairs under different views. Notethat multi-observe means the input TSDF is fused from several depth images from different views,aiming to construct data that closely resembles the input distribution during the reasoning processof the closed-loop grasping. By incorporating such data pairs into the dataset, we enable the modelto predict the affordance of objects in unobserved directions, thus allowing for a more accurateevaluation of candidate observation directions.The training loss consists of two components: the grasp affordances prediction loss LAand thenovel view depth synthesis loss LS. For the grasp affordances prediction loss, we adopt a similartraining objective as VGN [1]:LA(G,ˆG) =Lq(Gq,ˆGq) +Lr(Gr,ˆGr) +Lw(Gw,ˆGw)). (7)In (7), ˆGrepresents the ground-truth grasp, and Grepresents the predicted grasp. The ground-truthgrasp quality is denoted by ˆGq, which takes on a value of 0 (representing failure) or 1 (representingsuccess). The binary cross-entropy loss between the predicted and ground-truth grasp quality isrepresented by Lq. The cosine similarity between the predicted rotation ˆGrand the ground-truthrotation Gris denoted as Lr, while Lwrepresents the l2-distance between the predicted gripperwidth ˆGwand the ground-truth gripper width Gw. The supervision of the grasp rotation and gripperwidth is only applied when the grasp is successful (i.e., ˆGq= 1).The geometry loss is calculated using the standard l1loss between the synthesized depth image andthe actual depth image, and is denoted by LS. The final loss Lis obtained by adding the affordancesloss and the geometry loss together, i.e., L=LA+LS.5 ExperimentsWe evaluate the performance of our algorithm by grasping an occluded target object in simula-tion and real-world environments. We use a 7-DoF Panda robotic arm from Franka Emika, witha RealSense D435 attached to the end effector. Our algorithm was implemented in Python, usingPyTorch for neural network inference and ROS as the hardware interface. 
We use Open3D [35]for TSDF fusion, and all experiments use TRAC-IK [36] for IK computations, and MoveIt [37] formotion planning and are run on a same computer.In our experiments, we evaluate the performance of our method and existing methods with thefollowing metrics. Success Rate (SR) : the proportion of successful grasps. Failure Rate (FR) : theproportion of failed grasps. Abort Rate (AR) : the proportion of cases where no valid grasp wasfound even after the maximum number of views was reached. #Views : the average number of viewsplanned by the algorithm for each round. Time (only in real world experiments): the total timeconsumed, including observation, planning, and execution.We compare the performance of our algorithm with the following baselines: 1) initial-view : mostwork in visual grasp detection considers a single viewpoint for grasp detection. In this baselinethe robot detects a grasp using only the initial view. 2) top-view : the robot detects grasps from asingle top-down image captured on the top of the workspace center, which is a typical setting fortabletop robotic manipulation. 3) fixed-traj. : the robot captures 4 images by moving along a circulartrajectory centered on the target object, looking down at 30°, with uniform intervals. The imagesare subsequently used for TSDF fusion and grasp detection. 4) GIGA : a simple next-best-viewpolicy based on GIGA [2], which uses the predicted best grasp direction as the next view direction.5)Breyer’s : the state-of-the-art closed-loop next-best-view policy from [4], using the geometry-based information gain approach to plan next-best-view for target object grasping. All baselines usethe same controller described above to generate the robot motion.65.1 Simulated ExperimentsAs shown in Fig. 4, in simulation environments, we generate simulation scenes in PyBullet [38] us-ing the “packed” approach described in [1] and the object with the smallest amount of visible pixelsin initial view is selected as the grasping target. Table 1 shows the results of the 400 experimenttrials with each method in simulation environments.We observe that the success rate of the initial-view method is the lowest since the strong occlusionof the target object in the initial view leads to a difficult grasp affordance prediction. Top-view re-sults in a high success rate because the target can always be grasped from the top in the generatedscenes, making it a simple and effective strategy to find feasible grasps. The success rate of thefixed-traj. algorithm is higher than the initial-view as it collects more scene information from 4predefined viewpoints. On the other hand, the state-of-the-art closed-loop next-best-view planningmethod Breyer’s achieves superior grasping performance and it requires only a few new observa-tions. Finally, compared to Breyer’s , our method achieves a comparable success rate with fewer newobservations and obtains a significant improvement on the success rate when only one new obser-vation (2-Views SR) is allowed. This indicates that our model can find more informative views forgrasping the target object. Moreover, the qualitative results of next-best-view planning of our modelis shown in Fig. 
4, and the examples verify the effectiveness next-best-view planning ability of ourproposed method.To investigate the influence of different components within our model, we test two variants in Ta-ble 1: (i) ours w/o feature CgeoandCraywhere the features CgeoandCrayare replaced withfeature CGpof the grasp center, and (ii) ours w/o novel view synthesis branch that removes thenovel view depth synthesis in Sec. 4.2. We find that ours w/o features CgeoandCrayresults insignificantly worse grasp affordances prediction, and ours w/o novel view synthesis branch needsmore observation views to achieve a comparable performance. Moreover, the results of 2-View SRsuggests their importance in finding informative views.Table 1: Results from the simulation experimentsMethod SR FR AR #Views 2-Views SRinitial-view 71% 7% 22% 1.00 N/Atop-view 79% 6% 15% 1.00 N/Afixed-traj. 77% 6% 17% 4.00 N/ABreyer’s [4] 81% 8% 11% 4.31 77%Our w/o feature CgeoandCray 74% 8% 18% 2.54 72%Ours w/o novel view synthesis branch 80% 10% 10% 3.86 75%Ours 83% 7% 10% 2.97 80%Figure 4: The above images illustrate the next-best-view planning in two simulation scenes. Red andblue pixels in the depth images represent randomly sampled grasp candidates, with red indicatingsuccessful grasps and blue indicating unsuccessful grasps as predicted by the model.75.2 Real Robot ExperimentsWe test our model with a 7-DoF Panda robotic arm in real-world cluttered scenes shown in Fig. 1.We initialize the robotic arm to a position where the target object is partially visible in the initialview. Note that each scene is tested 5times with small perturbations in the initial robotic armposition and object locations.The results from 5 grasping trials are reported in Table 2. Intuitively, the difficulty varies amongdifferent scenarios. Some objects are heavily occluded, requiring the robot to find suitable directionsfor observation. Additionally, some objects have unique shapes, limiting stable grasping pose tospecific directions. In the relatively easier scenes 4 and 5, our method outperforms the initial-viewandfixed-traj. baselines, while achieving a comparable grasping performance as the state-of-the-artGIGA andBreyer’s method. Furthermore, our algorithm has advantages in much more difficultscenes 1 and 2, where it finds a feasible grasp for the occluded target object with fewer additionalobservations. For more results, please refer to the supplementary appendix and videos.Table 2: Real-world experiments setup and results. The first column shows the arrangement of the scene, andthe second column displays the view from the initial position of the robotic arm. The target object has beencircled with a red dashed line. Our method is capable of achieving comparable grasp success rates (SR) usingfewer views (#Views).Setup Initial Method SR FR AR #Views Time/stop-down 0/5 3/5 2/5 1.0 18.3fixed-traj. 3/5 2/5 0/5 4.0 29.4GIGA [2] 3/5 1/5 1/5 5.4 32.1Breyer’s [4] 4/5 0/5 1/5 4.8 24.7Ours 3/5 2/5 0/5 3.4 23.1top-down 1/5 2/5 2/5 1.0 18.0fixed-traj. 3/5 1/5 1/5 4.0 30.5GIGA [2] 4/5 1/5 0/5 3.6 31.8Breyer’s [4] 3/5 0/5 2/5 5.2 25.9Ours 4/5 1/5 0/5 3.0 22.2top-down 2/5 2/5 1/5 1.0 16.5fixed-traj. 4/5 1/5 0/5 4.0 29.8GIGA [2] 4/5 0/5 1/5 3.8 30.2Breyer’s [4] 3/5 2/5 0/5 4.8 23.0Ours 5/5 0/5 0/5 3.2 24.7top-down 0/5 0/5 5/5 1.0 19.1fixed-traj. 2/5 3/5 0/5 4.0 30.5GIGA [2] 3/5 1/5 1/5 5.0 28.6Breyer’s [4] 2/5 1/5 2/5 4.4 23.2Ours 3/5 2/5 0/5 2.8 23.7top-down 3/5 2/5 0/5 1.0 15.3fixed-traj. 
4/5 1/5 0/5 4.0 29.7GIGA [2] 4/5 1/5 0/5 1.8 23.5Breyer’s [4] 4/5 1/5 0/5 3.0 21.7Ours 5/5 0/5 0/5 2.6 20.86 Conclusion and LimitationsIn this paper, we introduce a next-best-view planning framework that leverages the imagined graspaffordances to plan the robotic arm’s new observation views for grasping a target object in occludedenvironments. This framework is motivated by the idea that the grasp affordances can be well-predicted when the observation direction is aligned with the grasping direction. Through both simu-lated and real-world experiments, we demonstrate the effectiveness and robustness of our approachcompared to previous works.Limitations: Our next-best-view planning framework involves neural network inference and re-quires the sampling of multiple views, leading to a high computational cost. In addition, the roboticarm motion planning is not considered in our method, and some unsatisfactory grasp executionsexist in real robot experiments. In the future, we plan to integrate motion planning into our methodto perform more complex tasks in more challenging environments.8AcknowledgmentsThis work is supported by the Shanghai Artificial Intelligence Laboratory, National Key R&D Pro-gram of China (2022ZD0160100) and the National Natural Science Foundation of China (62106183and 62376222).References[1] M. Breyer, J. J. Chung, L. Ott, R. Siegwart, and J. Nieto. V olumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter. In Conference onRobot Learning, pages 1602–1611.PMLR, Oct. 2021.[2] Z. Jiang, Y . Zhu, M. Svetlik, K. Fang, and Y . Zhu. Synergies Between Affordance and Ge-ometry: 6-DoF Grasp Detection via Implicit Representations. Robotics: Science andSystems(RSS), 2021.[3] E. Arruda, J. Wyatt, and M. Kopicki. Active Vision for Dexterous Grasping of Novel Objects.InIEEE/RSJ International Conference onIntelligent Robots andSystems (IROS), pages 2881–2888. IEEE, Oct. 2016.[4] M. Breyer, L. Ott, R. Siegwart, and J. J. Chung. Closed-Loop Next-Best-View Planning forTarget-Driven Grasping. In IEEE/RSJ International Conference onIntelligent Robots andSystems (IROS), pages 1411–1416, Oct. 2022.[5] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. NeRF:Representing Scenes as Neural Radiance Fields for View Synthesis. Communications oftheACM, 65(1):99–106, 2021.[6] R. Newbury, M. Gu, L. Chumbley, A. Mousavian, C. Eppner, J. Leitner, J. Bohg, A. Morales,T. Asfour, D. Kragic, and others. Deep Learning Approaches to Grasp Synthesis: A Review.arXiv e-prints, pages arXiv–2207, 2022.[7] G. Du, K. Wang, S. Lian, and K. Zhao. Vision-based Robotic Grasping from Object Localiza-tion, Object Pose Estimation to Grasp Estimation for Parallel Grippers: A Review. ArtificialIntelligence Review, 54(3):1677–1734, 2021.[8] A. Mousavian, C. Eppner, and D. Fox. 6-DoF GraspNet: Variational Grasp Generation forObject Manipulation. In Proceedings oftheIEEE/CVF International Conference onComputerVision, pages 2901–2910, 2019.[9] K. Fang, Y . Zhu, A. Garg, A. Kurenkov, V . Mehta, L. Fei-Fei, and S. Savarese. Learning Task-Oriented Grasping for Tool Manipulation from Simulated Self-Supervision. TheInternationalJournal ofRobotics Research, 39(2-3):202–216, 2020.[10] Q. Dai, Y . Zhu, Y . Geng, C. Ruan, J. Zhang, and H. Wang. GraspNeRF: Multiview-based6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF. In2023 IEEE International Conference onRobotics andAutomation (ICRA), pages 1757–1763.IEEE, 2023.[11] S. Jauhri, I. Lunawat, and G. 
Chalvatzaki. Learning Any-View 6DoF Robotic Grasping inCluttered Scenes via Neural Surface Rendering. arXiv preprint arXiv:2306.07392, 2023.[12] H. Huang, D. Wang, X. Zhu, R. Walters, and R. Platt. Edge Grasp Network: A Graph-BasedSE(3)-invariant Approach to Grasp Detection. In 2023 IEEE International Conference onRobotics andAutomation (ICRA), pages 3882–3888, 2023.[13] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic GraspMetrics. Robotics: Science andSystems (RSS), 2017.9[14] J. Mahler, M. Matl, V . Satish, M. Danielczuk, B. DeRose, S. McKinley, and K. Goldberg.Learning Ambidextrous Robot Grasping Policies. Science Robotics, 4(26):eaau4984, 2019.[15] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-GraspNet: Efficient 6-DoFGrasp Generation in Cluttered Scenes. In IEEE International Conference onRobotics andAutomation (ICRA), pages 13438–13444. IEEE, 2021.[16] M. Gualtieri, A. Ten Pas, K. Saenko, and R. Platt. High Precision Grasp Pose Detectionin Dense Clutter. In IEEE/RSJ International Conference onIntelligent Robots andSystems(IROS), pages 598–605. IEEE, 2016.[17] D. Morrison, P. Corke, and J. Leitner. Multi-View Picking: Next-best-view Reaching for Im-proved Grasping in Clutter. In International Conference onRobotics andAutomation (ICRA),pages 8762–8768, 2019.[18] R. Zeng, Y . Wen, W. Zhao, and Y .-J. Liu. View Planning in Robot Active Vision: A Surveyof Systems, Algorithms, and Applications. Computational Visual Media, 6(3):225–245, Sept.2020. ISSN 2096-0433, 2096-0662.[19] J. Delmerico, S. Isler, R. Sabzevari, and D. Scaramuzza. A Comparison of V olumetric Informa-tion Gain Metrics for Active 3D Object Reconstruction. Autonomous Robots, 42(2):197–208,2018.[20] S. Kriegel, C. Rink, T. Bodenm ̈uller, and M. Suppa. Efficient Next-best-scan Planning forAutonomous 3D Surface Reconstruction of Unknown Objects. Journal ofReal-Time ImageProcessing, 10:611–631, 2015.[21] W. R. Scott, G. Roth, and J.-F. Rivest. View Planning for Automated Three-dimensional ObjectReconstruction and Inspection. ACM Computing Surveys (CSUR), 35(1):64–96, 2003.[22] S. A. Hutchinson, R. L. Cromwell, and A. C. Kak. Planning Sensing Strategies in a Robot WorkCell with Multi-sensor Capabilities. In Proceedings. 1988 IEEE International Conference onRobotics andAutomation, pages 1068–1075. IEEE, 1988.[23] E. Johns, S. Leutenegger, and A. J. Davison. Pairwise Decomposition of Image Sequences forActive Multi-View Recognition. In Proceedings oftheIEEE Conference onComputer VisionandPattern Recognition, pages 3813–3822, 2016.[24] M. Mendoza, J. I. Vasquez-Gomez, H. Taud, L. E. Sucar, and C. Reta. Supervised Learning ofthe Next-Best-View for 3D Object Reconstruction. Pattern Recognition Letters, 133:224–231,May 2020. ISSN 01678655.[25] S. Song, A. Zeng, J. Lee, and T. Funkhouser. Grasping in the Wild:Learning 6DoF Closed-Loop Grasping from Low-Cost Demonstrations. IEEE Robotics andAutomation Letters, 5(3):4978–4985, 2020.[26] S. Lee, L. Chen, J. Wang, A. Liniger, S. Kumar, and F. Yu. Uncertainty Guided Policy for Ac-tive Robotic 3D Reconstruction using Neural Radiance Fields. IEEE Robotics andAutomationLetters, 7(4):12070–12077, 2022.[27] E. J. Smith, M. Drozdzal, D. Nowrouzezahrai, D. Meger, and A. Romero-Soriano. Uncertainty-Driven Active Vision for Implicit Scene Reconstruction. arXiv preprint arXiv:2210.00978,2022.[28] L. Jin, X. Chen, J. R ̈uckin, and M. Popovi ́c. 
NeU-NBV: Next Best View Planning UsingUncertainty Estimation in Image-Based Neural Rendering. arXiv preprint arXiv:2303.01284,2023.[29] K. Lin and B. Yi. Active View Planning for Radiance Fields. In Robotics Science andSystems,2022.10[30] X. Pan, Z. Lai, S. Song, and G. Huang. ActiveNeRF: Learning where to See with UncertaintyEstimation. In ECCV 2022: 17th European Conference, Oct. 2022, Proceedings, PartXXXIII,pages 230–246. Springer, 2022.[31] H.-S. Fang, C. Wang, M. Gou, and C. Lu. GraspNet-1Billion: A Large-Scale Benchmark forGeneral Object Grasping. In 2020 IEEE/CVF Conference onComputer Vision andPatternRecognition (CVPR), pages 11441–11450. IEEE, June 2020.[32] S. Peng, M. Niemeyer, L. Mescheder, M. Pollefeys, and A. Geiger. Convolutional Occu-pancy Networks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow,UK, August 23–28, 2020, Proceedings, PartIII16, pages 523–540. Springer, 2020.[33] Y .-C. Lin, P. Florence, A. Zeng, J. T. Barron, Y . Du, W.-C. Ma, A. Simeonov, A. R. Garcia, andP. Isola. MIRA: Mental Imagery for Robotic Affordances. In Conference onRobot Learning,pages 1916–1927. PMLR, 2023.[34] P. Wang, L. Liu, Y . Liu, C. Theobalt, T. Komura, and W. Wang. NeuS: Learning NeuralImplicit Surfaces by V olume Rendering for Multi-view Reconstruction. In Advances inNeuralInformation Processing Systems, 2021.[35] Q.-Y . Zhou, J. Park, and V . Koltun. Open3D: A Modern Library for 3D Data Processing. arXivpreprint arXiv:1801.09847, 2018.[36] P. Beeson and B. Ames. TRAC-IK: An Open-source Library for Improved Solving of GenericInverse Kinematics. In IEEE-RAS 15th International Conference onHumanoid Robots(Humanoids), pages 928–935. IEEE, 2015.[37] S. Chitta, I. Sucan, and S. Cousins. Moveit![ros topics]. IEEE Robotics &AutomationMagazine, 19(1):18–19, 2012.[38] E. Coumans and Y . Bai. Pybullet, a Python Module for Physics Simulation for Games,Robotics and Machine Learning. 2016.11AppendixA Pseudo Code of the Proposed ACE-NBV PolicyWe summarize the overall NBV planning policy for grasping a target object in the following algo-rithm 1. In the experiment, we set Tmaxto 8 and qmaxto 0.95.Algorithm 1 Grasp Affordance Prediction and Next-Best-View PlanningInput: A cluttered scene, an occluded target object given its 3D bounding box TbboxOutput: A feasible grasp Gof the target objectfort≤TmaxdoMt←Dt ▷Intergrate depth image into TSDFCt←3D CNN (Mt) ▷Encode featureifG∗q(Tbbox,Ct,Ov,t)≤qmaxthenOv,t+1←arg max Gv∈GvG∗q(Tbbox,Ct,Gv)▷Evaluate candidate next viewsMove camera to Ov,t+1 ▷Go to the next-best-viewelseExecute grasp G∗(Tbbox,Ct,Ov,t)Breakend ifend forB Network Architecture and Implementation DetailsWe adopt the same encoder as in GIGA that takes TSDF Mt∈R40×40×40as input and outputs afeature embedding for each voxel with a 3D CNN layer. Then, the tri-plane feature grids is con-structed by projecting each input voxel on a canonical feature plane via orthographic projection.Then, three feature planes are processed with a 2D U-Net that consists of a series of down-samplingand up-sampling 2D convolution layers with skip connections. 
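For intuition, below is a rough sketch of how a local feature could be queried from such tri-plane feature maps by bilinear interpolation (the query procedure itself is described in the next paragraph). The function name, tensor shapes, and the use of torch.nn.functional.grid_sample are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def query_triplane_feature(planes, p, bounds):
    """Bilinearly query a per-point feature from three feature planes.

    planes: dict with 'xy', 'xz', 'yz' feature maps, each of shape (1, C, H, W)
    p:      (N, 3) query points in the workspace frame
    bounds: (min_xyz, max_xyz) tensors defining the cubic workspace
    Returns an (N, 3*C) concatenated feature, analogous to
    C_p = concat(C_px, C_py, C_pz) described in the text.
    """
    lo, hi = bounds
    # Normalize query points to [-1, 1] as required by grid_sample.
    p_norm = 2.0 * (p - lo) / (hi - lo) - 1.0

    feats = []
    for key, dims in (("xy", (0, 1)), ("xz", (0, 2)), ("yz", (1, 2))):
        grid = p_norm[:, dims].view(1, -1, 1, 2)            # (1, N, 1, 2)
        sampled = F.grid_sample(planes[key], grid,
                                mode="bilinear", align_corners=True)
        feats.append(sampled.squeeze(-1).squeeze(0).t())    # (N, C)
    return torch.cat(feats, dim=-1)                         # (N, 3*C)
```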
The output is formulated as theshared tri-plane feature volume C∈R3×40×40×32, where 32 is the dimension of the feature embed-ding.Based on the shared tri-plane feature volume, the local feature Cpof a 3D point p= (x, y, z )isobtained by projecting it to each feature plane and querying three features Cpx,Cpy,Cpzat theprojected locations using bilinear interpolation, and the local feature Cpis the concatenated featureof these queried features, i.e., Cp=concat (Cpx,Cpy,Cpz). We implement our grasp affordanceprediction network with a five layer fully-connected network with residual connections. The inputdimension of this MLP network is 3 + 96 + 9 ×96 = 963 which is composed of view directionunit vector v∈R3, the ray feature Cray∈R96, and the local geometry feature Cgeo∈R9×96. Theoutput dimension for grasp affordance prediction is 1 + 1 + 1 = 3 which includes the grasp qualityGq, in-plane rotation Gr, and gripper width Gw.As for the novel view depth synthesis, we employ a MLP network that takes the 3D point featureCp∈R96as input and output the SDF value of this point, and adpot the same rendering techniquewith NeuS to synthesize depth images ( η= 12, γ= 5). We sample 128 rays in a depth image ineach batch, each ray consisting of 64 uniformly sampled points and extra 4 ×32 points followingthe importance sampling rule. We set the near and far range close to the ground truth depth atthe beginning of training, and then gradually relax the range to the maximum range of the implicitfeature volume.For experiments in simulation and real word, the size of cubic workspace L= 30 cm. The size of thecubic for Cvertis0.25, which is 7.5cm in the real world. The points S={s1,s2, ...,sn}forCrayisuniformly sampled with a step of 0.1. The sizes of the three datasets front-observe-front-grasp, front-observe-side-grasp and multi-observe-front-grasp are 1M, 1M and 2M grasps, respectively. Eachscene contains 240 grasps and η= 12 ground-truth depth images with the resolution of 480 ×640.After data cleaning and balancing, there are about 40% data left. We separate the datasets randomlyinto 90% training and 10% validation. We train the models with the Adam optimizer and a learningrate of 2×10−4and batch sizes of 128. All experiments are run on a computer equipped with anIntel Core i9-13900K and a GeForce RTX 4090.12C Extra Experiments for IntuitionWe conducted extra experiments in simulation to justify our intuition that the grasp affordancescan be better-measured using the observation when the observation view direction is the same asthe grasp view. In each randomly selected case, the network receives a depth image from differentdirections and is then required to predict a grasping pose of the target object in the same direction.The results are quite evident: the prediction is much better when those two directions are the same.(a) case 1(b) case 2(c) case 3Figure 5: The network receives a TSDF fused from a depth image captured either from the front(lower left) or the side (lower right) as input. It is required to predict grasp in the frontal direction,which aligns with the direction of the depth image used for visualization in the upper row. The colorred indicates high-quality grasps, while blue represents low-quality ones.13D Qualitative Results of Real Robot ExperimentsWe present qualitative results in Fig. 6 and 7 and recommend readers watch the supplementary videofor more comprehensive real robot experimental results. Note that our model can select reasonablenext-best-view to observe the occluded target object. 
We show a representative failure case in Fig. 7,where small errors in grasp affordance prediction leads to an unsuccessful grasp. This small pre-diction inaccuracy occurs in most failure experiments. Therefore, in the future, we plan to exploit abetter grasp affordance prediction module to improve the success rate of our method.(a) Initial View (b) Selected Next-Best-View(c) Execute Grasp (d) Grasp SuccessFigure 6: Success Case. The robot planned one new view to observe the target box and successfullygrasped it.(a) Initial View (b) Selected Next-Best-View(c) Execute Grasp (d) Grasp FailFigure 7: Failure Case. The robot failed to predict accurate grasp affordances of the target objectafter obtaining a new observation. As a result, the grasping failed.14 |
9cTEQWMo1BF | LabelFormer: Object Trajectory Refinement forOffboard Perception from LiDAR Point CloudsAnqi Joyce Yang1;2Sergio Casas1;2Nikita Dvornik1Sean Segal1;2Yuwen Xiong2Jordan Sir Kwang Hu1Carter Fang1Raquel Urtasun1;2Waabi1University of Toronto2fjyang, sergio, ndvornik, ssegal, jkhu, cfang, urtasun g@waabi.aiAbstract: A major bottleneck to scaling-up training of self-driving perceptionsystems are the human annotations required for supervision. A promising alter-native is to leverage “auto-labelling” offboard perception models that are trainedto automatically generate annotations from raw LiDAR point clouds at a fractionof the cost. Auto-labels are most commonly generated via a two-stage approach– first objects are detected and tracked over time, and then each object trajec-tory is passed to a learned refinement model to improve accuracy. Since existingrefinement models are overly complex and lack advanced temporal reasoning ca-pabilities, in this work we propose LabelFormer , a simple, efficient, and effectivetrajectory-level refinement approach. Our approach first encodes each frame’sobservations separately, then exploits self-attention to reason about the trajectorywith full temporal context, and finally decodes the refined object size and per-frame poses. Evaluation on both urban and highway datasets demonstrates thatLabelFormer outperforms existing works by a large margin. Finally, we show thattraining on a dataset augmented with auto-labels generated by our method leadsto improved downstream detection performance compared to existing methods.Please visit the project website for details https://waabi.ai/labelformer/ .Keywords: Auto-label, Offboard Perception, Trajectory Refinement, Transformer1 IntroductionModern self-driving systems often rely on large-scale manually annotated datasets to train objectdetectors to perceive the traffic participants in the scene. Recently, there has been a growing interestin auto-labelling approaches that can automatically generate labels from sensor data. If the comput-ing cost of auto-labelling is lower than the cost of human annotation and the produced labels are ofsimilar quality, then auto-labelling can be used to generate much larger datasets at a fraction of thecost. These auto-labelled datasets can in turn be used to train more accurate perception models.Following [1, 2], we use LiDAR as input as it is the primary sensor deployed on many self-drivingplatforms [3, 4]. In addition, we focus on the supervised setting where a set of ground-truth labels areavailable to train the auto-labeller. This problem setting is also referred to as offboard perception [2],which, unlike onboard perception, has access to future observations and does not have real-timeconstraints. Inspired by the human annotation workflow, the most common paradigm [1, 2] tacklesthe offboard perception problem in two stages, as shown in Fig. 1. First, objects and their coarsebounding box trajectories are obtained using a “detect-then-track” framework, and then each objecttrack is refined independently. The main goal of the first stage is to track as many objects in thescene as possible ( i.e.to achieve high recall), while the second stage focuses on track refinement toproduce bounding boxes of higher quality. In this paper, we focus on the second stage, which werefer to as trajectory refinement . 
This task is challenging as it requires handling object occlusions,the sparsity of observations as range increases, and the diverse size and motion profiles of objects.In order to handle these challenges, it is key to design a model that is able to effectively and effi-ciently exploit the temporal context of the entire object trajectory. However, existing methods [1, 2]Work done at Waabi.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.⋯⋯⋯Point cloud sequence Bounding boxes with track IDs (colors represent track IDs) Initial object-based tracks with point observations in global frame Refined object-based tracks ⋯Frame 1 ⋯Frame T First Stage: Coarse Initialization Second Stage: Trajectory Refinement ⋯Object 1 Object N ⋯Detect +Track Refine Tracks Figure 1: Two-stage auto-labelling paradigm . The first stage uses a detect-then-track paradigmto obtain coarse object trajectories. The second stage refines each trajectory independently.fall short as they are designed to process trajectories from dynamic objects in a sub-optimal slidingwindow fashion, where a neural network is applied independently at each time step over a limitedtemporal context to extract features. This is inefficient as features from the same frame are extractedmultiple times for different overlapping windows. As a consequence, the architectures exploit verysmall temporal context to remain within the computational budget. Furthermore, prior works uti-lized complicated pipelines with multiple separate networks ( e.g., to handle static and dynamicobjects differently), which are hard to implement, debug, and maintain.In this paper, we take a different approach and propose LabelFormer , a simple, efficient, and ef-fective trajectory refinement method. It leverages the full temporal context and results in moreaccurate bounding boxes. Moreover, our approach is more computationally efficient than the ex-isting window-based methods, giving auto-labelling a clear advantage over human annotation. Toachieve this goal, we design a transformer-based architecture [5], where we first encode the initialbounding box parameters and the LiDAR observations at each time step independently, and thenutilize self-attention blocks to exploit dependencies across time. Since our method refines the entiretrajectory in a single-shot fashion, it only needs to be applied once for each object track during in-ference without redundant computation. Additionally, our architecture naturally handles both staticand dynamic objects, and it is much simpler than other approaches.Our thorough experimental evaluation on both urban and highway datasets show that our approachnot only achieves better performance than window-based methods but also is much faster. Moreover,we show that LabelFormer can be used to auto-label a larger dataset for training downstream objectdetectors, which results in more accurate detections comparing to training on human data alone orleveraging other auto-labellers.2 Related WorksLiDAR-based Auto-Labelling: These approaches have emerged from the need to automate theexpensive human labelling process. Multiple approaches [6, 7, 8, 9, 10, 11, 12, 13] have attemptedto generate auto-labels with little or no supervision, but they do not achieve satisfactory performanceat high precision. 
More related to our method, pioneering works Auto4D [1] and 3DAL [2] devel-oped a two-stage paradigm which first uses an object detector followed by a multi-object tracker togenerate coarse object trajectories, and then refines each object trajectory separately using a super-vised model. For the second-stage trajectory refinement, which is the problem we focus on in thiswork, Auto4D [1] consists of two separate models. It first trains a size branch that refines only thesize with full temporal context, and then freezes the refined size and trains a motion path networkto refine each frame’s pose with a small local window. 3DAL [2] first trains a motion classifier, andthen employs two separate networks for stationary and dynamic objects. The stationary networkuses observations from the full trajectory, while the dynamic network again operates in a windowedfashion that consumes point cloud observations from a very small local window. As a result, bothworks have limited temporal context, incur heavier computational costs from overlapping windows,and involve complicated workflows with at least two models that cannot be trained jointly. In con-trast, our work designs a single network to jointly optimize the entire bounding box trajectory atonce. Finally, concurrent works [14, 15] also ingest the full object trajectory for refinement, but [14]2aggregates all point observations in the global frame and does not conduct explicit cross-frame tem-poral reasoning, and [15] trains three separate models while we only need one.Sequence processing with Transformers: The transformer architecture [5] is the state-of-the-artmodel for sequence processing. It incorporates a self-attention mechanism that models communica-tion between the sequence elements and enables sequence-level reasoning. Transformers have beensuccessfully applied in various domains, including language modeling [5, 16, 17], image and videoprocessing [18, 19], object tracking [20], and 3D understanding [21, 22]. They are also effective formulti-modal problems like vision-language joint modeling [23] and autonomous driving [24]. Theapplication of transformers is particularly advantageous for modeling sequences with long-rangedependencies, which is precisely the case for our trajectory-level refinement task. To handle longsequences, it is common to preprocess the input tokens with a powerful short-range encoder andthen utilize a transformer to perform global reasoning on the encoder’s outputs [25, 26, 27]. Forinstance, the work of [25] addresses long video modeling by processing short video snippets withobject detection, tracking, and action recognition models before passing their outputs to a trans-former for holistic video analysis. Drawing inspiration from these works, our LabelFormer takes theper-frame bounding box initializations obtained with a detector and a tracker, and refines them byjointly considering the bounding box trajectories and point clouds of the entire trajectory sequence.3 LabelFormer: Transformer-based Trajectory-level RefinementThe goal of trajectory refinement is to produce an accurate bounding box trajectory given a noisyinitialization that is typically obtained using a detect-then-track paradigm. 
In this paper we proposea novel transformer-based architecture that takes the raw sensor observations and the full objecttrajectory as input, and conducts temporal reasoning simultaneously for all frames in the trajectory.This architecture naturally handles both static and dynamic objects and jointly refines the boundingbox size and the motion path, resulting in a much simpler, efficient and effective approach.3.1 Problem SettingThe input to a LiDAR-based auto-labeller is a sequence of T“frames” of point clouds, ob-tained from a single LiDAR scan ( i.e., a360sweep). Since Bird’s-Eye View (BEV) is the de-facto representation for downstream tasks in self-driving, such as motion and occupancy forecast-ing [28, 29, 30, 31, 32] and motion planning [33, 34, 35, 36], LabelFormer operates in BEV .In the first stage of the auto-labelling pipeline, we use a detection model followed by a multi-object tracker, which provide us with Nperceived objects with initial bounding box trajecto-ries(B1;:::;BN). Each object trajectory B(we omit the object index for brevity) is definedby a sequence of Mbounding boxes B= (b1;:::;bM), where each BEV bounding boxbi= (xi;yi;li;wi;i)is parameterized by center position (xi;yi), bounding box length and width(li;wi), and heading angle i. All bounding box poses (xi;yi;i)are in a trajectory coordinateframe that is centered at the middle of the trajectory, i.e.,(xm;ym;m) = (0;0;0)withm=M==2as the middle index. Note that the sequence length Mmay vary across objects, and that the boundingbox dimensions (li;wi)for the same object might be different across frames because object detec-tors output bounding boxes for each frame separately. In addition, each bounding box biis detectedfrom a scene-level point cloud Pi2Rni4that consists of nipoints, and each point is representedas its 3D position in the same trajectory frame along with its timestamp.Given each object’s coarse bounding box trajectory B= (b1;:::;bM)and the scene-level pointclouds (P1;:::;PM), the goal of trajectory refinement is then to output a precise trajectory ^B=(^b1;:::; ^bM)with a bounding box size (^l;^w)that is shared across the entire trajectory.3.2 Model ArchitectureTo refine the entire actor trajectory with full temporal context, LabelFormer first uses a sharedencoder to process each frame’s observations separately, and then leverages self-attention to reasonacross frames. Finally, a decoder is employed to obtain the refined pose at each frame as well as aconsistent bounding box size for the full duration of the trajectory. Fig. 2 illustrates the architecture.3Δ(l, w) (x, y, l, w, θ) point cloud box feature point feature CNN +MLP(x, y, l, w, θ) point cloud box feature point feature CNN +MLP⋯Cross-Frame Attention Motion Path and Size Decoder ⋯MLP(x, y, l, w, θ) point cloud box feature point feature Per-frame Encoder CNN +MLP⋯⋯⋯⋯ ⋯⋯MeanPool + MLP Frame 1 Frame i Frame M ⋯⋯MLPMLPΔ(x, y, θ) Δ(x, y, θ) Δ(x, y, θ) Δ(l, w) Figure 2: LabelFormer Architecture which first encodes box and point observations for eachframe separately, then applies a stack of self-attention layers among per-frame features, and finallydecode a size residual along with per-frame pose residuals.3.2.1 Per-Frame EncoderGiven the initial object BEV bounding box trajectory B= (b1;:::;bM)and point clouds(P1;:::;PM)in the trajectory frame described in the problem setting, we first extract object pointsinside each bounding box biby filtering the respective scene-level point cloud Piin BEV . 
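As a concrete illustration of this filtering step, the snippet below sketches how object points could be cropped from a scene-level point cloud given a BEV box (x, y, l, w, θ). The function name and NumPy-based implementation are our own, and the enlargement factor corresponds to the margin discussed next; it is a sketch, not the authors' code.

```python
import numpy as np

def crop_points_in_bev_box(points, box, enlarge=1.1):
    """Keep LiDAR points whose BEV (x, y) projection falls inside a rotated box.

    points:  (n, 4) array of (x, y, z, t) in the trajectory frame
    box:     (x_c, y_c, length, width, theta) BEV box in the same frame
    enlarge: scale factor on the box dimensions (e.g. 1.1 for a 10% margin)
    """
    x_c, y_c, length, width, theta = box
    # Translate to the box center, then rotate into the box's axis-aligned frame.
    dx = points[:, 0] - x_c
    dy = points[:, 1] - y_c
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    local_x = cos_t * dx + sin_t * dy
    local_y = -sin_t * dx + cos_t * dy
    # Axis-aligned containment test against the (enlarged) box extents.
    mask = (np.abs(local_x) <= 0.5 * length * enlarge) & \
           (np.abs(local_y) <= 0.5 * width * enlarge)
    return points[mask]
```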
That is, we only keep the points in P_i whose BEV projection lies inside the 2D BEV bounding box b_i. Since the bounding box initialization b_i is noisy, similar to previous works [1, 2], we enlarge the filtering region by 10% such that the point cloud contains the full object with high probability. As a result, each frame i has two input observations: the initial BEV bounding box b_i ∈ R^5 and the set of m_i 3D object points O_i ∈ R^{m_i×4}. We next encode each frame's bounding box and point observations separately before fusing them. For brevity, we refer to "bounding box" as "box" from now on.
Bounding Box Encoding: Since the detector might produce noisy heading directions, we first pre-process the heading directions with a simple heuristic that flips inconsistent headings by 180° based on majority voting. We next use a simple multi-layer perceptron (MLP) that maps the box parameters b_i with updated heading directions to high-dimensional features a_i = MLP(b_i) ∈ R^D.
Point Cloud Encoding: Previous works [1, 2] aggregate multi-frame points in the global frame for feature extraction. However, in practice, humans estimate the relative transformation between two point clouds by aligning them in the object frame. Motivated by this fact, we use the object pose initialization b_i to transform O_i from the trajectory frame to the object frame. We next learn a representation of the object-frame points with a PointPillars [37]-style encoder. Specifically, we first voxelize the object-frame points into an N_x × N_y × N_z grid, apply a PointNet [38] to all points in each 3D voxel to extract a per-voxel feature, and fuse all features along the height dimension to generate a BEV feature map ∈ R^{N_x×N_y×D_v}. We next feed the BEV voxel feature map into a 2D convolutional network (CNN) composed of a multi-scale ResNet [39] backbone followed by FPN [40] to obtain a 4× downsampled feature map F_i ∈ R^{N'_x×N'_y×D'}. Since the receptive field of the CNN is designed to cover the entire object space, to retrieve the point feature p_i ∈ R^{D'} we simply index the feature map F_i at its spatial center.
Feature Fusion: Finally, we fuse the box feature a_i and the point feature p_i to derive the final frame-wise feature f_i = a_i + MLP(p_i) ∈ R^D. We apply the same encoder with shared weights on each frame individually and obtain a set of frame-wise feature tokens {f_i}_{1≤i≤M}.
3.2.2 Trajectory-level Understanding via Cross-Frame Attention
Intuitively, both box parameters and point observations are useful for trajectory refinement. However, traditional path smoothers and point cloud registration methods based on ICP [41, 42] fail to fuse the information from both sources. In addition, ICP reasons on the point level and fails
In particular, the attention module contains Lattention blocks, where thejthattention block consumes the previous block’s output feature sequence (g(j1)1;:::;g(j1)M )and generates an updated feature sequence (g(j)1;:::;g(j)M), with the first attention block inputg(0)i=fi. Each attention block contains a self-attention layer followed by a feed-forward MLP.For each input feature vector g(j1)i2RD, the pre-norm self-attention mechanism first appliesLayerNorm [43] (LN) followed by three separate linear projections to derive query, key and valuevectors q(j)i;k(j)i;v(j)i2RDrespectively. We stack the keys and values from all Mframes toform matrices K(j);V(j)2RMD. Then, for each query frame i, we compute the attention scoresbetween frame iand all frames by comparing query vector q(j)iand each key in K(j):a(j)i=softmax q(j)iK(j)Tpdk+wi!2RM; (1)wheredkis a scaling factor. Note that in Eq. 1 we adopt AliBi [44], which is a form of relative po-sitional encoding that leverages positional difference between query and key frames. AliBi directlyadds weighted biases wi2RM(withwij=mjijj,mis a fixed constant) to the dot productattention score map and is shown to generalize better to longer sequences at test time.With the attention scores, we can then obtain an aggregated feature with h(j)i=a(j)iV(j)2RD:We then apply the subsequent MLP layer to derive output features g(j)iof thejthattention block:h0(j)i=LN(g(j1)i) +h(j)ig(j)i=MLP(LN(h0(j)i))2RD:(2)In practice, we use multi-head self-attention to increase expressivity, which partitions the D-dimensional input feature vector into Hgroups, employs a separate self-attention head for eachfeature group, and concatenates the output features from each attention head as the final feature.Please refer to [5] for more details on multi-head attention. After Lchained self-attention blocks,the attention module outputs a sequence of updated feature vectors at each frame (g(L)1;:::;g(L)M),which is used to decode the final bounding box trajectory.3.2.3 Motion Path and Size DecoderGiven the feature sequence (g(L)1;:::;g(L)M), we decode the final motion path trajectory and objectsize which is consistent for the entire trajectory. To decode the refined bounding box pose at eachframe, we simply feed the feature g(L)iinto an MLP to obtain pose residual (xi;yi;i)andsum them with the initialization bito obtain the final refined pose parameters (^xi;^yi;^i) = (xi+xi;yi+ yi;i+ i). To decode the refined object size, we leverage context from all framesvia mean pooling, and use an MLP to obtain a size residual (l;w) =MLP(mean (fg(L)ig)). Wecompute the final refined object size as (l;w) = ( mean (flig) + l;mean (fwig) + w). The finalrefined bounding box at each frame iwill be ^bi= (^xi;^yi;^l;^w;^i).3.3 TrainingWe train the entire model ( i.e., encoder, attention module, decoder) end-to-end by minimizing acombination of a regression loss that directly compares the refined box parameters ^biwith ground-truth box parameters b?i, and an IoU-based loss that compares the axis-aligned bounding boxes:L(f^big;fb?ig) =Lreg(f^big;fb?ig) +LIoU(f^big;fb?ig): (3)5Please see supp. for more details. 
We apply two forms of data augmentation during training: (1) werandomly sample a subsequence of the input actor trajectory, and (2) we independently perturb eachinitial bounding box by applying a translational offset uniformly sampled from [0:25;0:25]m forxandyeach, a rotational offset uniformly sampled from [10;10]degrees, and an offset uniformlysampled from [max(0:2;li2);min(0:2;li2)]and[max(0:1;wi2);min(0:1;wi2)]for the dimen-sions. Note that the offsets are sampled per frame and applied to each bounding box separately.4 ExperimentsIn this section, we evaluate the effectiveness of our approach on two real-world datasets. First, wedescribe the experimental setting and metrics used for evaluation. Next, we show that our methodoutperforms previous works on the trajectory refinement task for multiple initializations. Further-more, we demonstrate that the improved refinement translates to downstream detection performancewhen training with auto-labels, showcasing that our approach can be used to train better objectdetectors. Finally, we conduct thorough ablation studies to analyze the effect of our design choices.Datasets: We use two datasets to evaluate our method in both urban and highway domains, whichinclude object trajectories with diverse motion profiles. For the urban setting, we use the Argoverse2 Sensor dataset [3] (A V2) that was collected in six distinct US cities. A V2 contains 850 15-secondlong snippets and around 65.7k vehicle trajectories. The LiDAR data is fairly sparse as it is capturedby two 32-beam LiDARs that spin at 10Hz in the same direction but are 180apart in orientation,and we aggregate both LiDAR scans in the same sweep interval to form an input frame. We use theofficial train and validation splits with 700 and 150 snippets each. For the highway setting, we use anin-house Highway dataset , which contains 188 20-second long snippets collected from US highwayswith roughly 5.8k vehicle trajectories. The LiDAR data in this dataset is denser as it comes from a128-beam LiDAR sensor that spins at 10Hz. We split the dataset into 150 training snippets and 38validation snippets. For our experiments in both datasets, we focus on auto-labelling vehicles in thescene, with a detection region of interest of [-125, 200] meters longitudinally and [-50, 50] meterslaterally with respect to the ego vehicle’s traveling direction.Metrics: Following [1], for each object k, we compute the track-level IoU Sk=1MkPMki=1IoU(B?ki;^Bki), whereMkis the trajectory length and B?ki;^Bki2R5are the respec-tive ground-truth and refined BEV bounding box at each frame i. To aggregate across all Nob-ject trajectories, we report the mean IoU as1NPNk=1Sk. To understand coverage, we addition-ally report average recall at various IoU thresholds, i.e., RC@=1NPNk=11(Sk), where1(x) =1ifx0otherwiseis the indicator function. In our results we choose IoU thresholds= 0:5;0:6;0:7;0:8.Implementation details: For the box encoder, we use a single linear layer to map the 5 boxparameters to an output dimension, D= 256 . For the attention module, we use L= 6 attentionblocks, with H= 4 attention heads. Both the size and pose decoders are a single linear layer.For both datasets, we train our model with the AdamW optimizer [45], with learning rate 5e-5 andweight decay 1e-5. We apply linear learning rate warmup in the first two epochs followed by cosinelearning rate scheduling that eventually decays the initial learning rate (after warmup) by 10. 
Weadditionally clip the gradient norms at 5.0 as we empirically find this helps with learning. We trainour model for 100 epochs on the Highway dataset and 40 epochs on the A V2 dataset with a batchsize of 4. Please see supp. for more details on the point encoder and attention network.Baselines: We compare our proposed LabelFormer with state-of-the-art auto-labelling methodsAuto4D [1] and 3DAL [2]. Since neither papers released their code, we reimplemented both methodsbased on their implementation details and thoroughly tuned the hyperparameters. We follow theoriginal implementations to set the Auto4D’s temporal window size to be 10. For 3DAL dynamicbranch, we use the original bounding box temporal window size of 101 and raised the point windowsize from 5 to 11 to increase model performance. Note that since the Highway dataset contains very6Dataset First-Stage Detector Second-Stage Refinement Mean IoU RC @ 0.5 RC @ 0.6 RC @ 0.7 RC @ 0.8A V2PointPillars [37]- 62.60 76.39 65.72 48.66 25.04Auto4D [1] 65.49 79.47 70.35 56.04 32.403DAL [2] 64.58 77.25 67.92 53.93 32.53Ours (LabelFormer) 68.28 81.22 73.22 60.68 40.78V oxelNeXt [46]- 65.70 78.92 68.90 54.37 32.44Auto4D [1] 67.92 81.73 73.14 59.42 37.023DAL [2] 67.12 80.36 71.42 57.83 36.01Ours (LabelFormer) 70.18 83.26 75.06 63.04 43.76HighwayPointPillars [37]- 60.20 69.83 59.62 46.11 26.58Auto4D [1] 65.63 77.01 68.23 56.28 37.083DAL [2] 66.03 79.44 70.33 56.62 35.18Ours (LabelFormer) 70.59 83.09 75.77 65.11 46.16V oxelNeXt [46]- 65.90 76.76 68.32 56.27 36.77Auto4D [1] 69.27 79.36 72.81 63.37 45.393DAL [2] 68.17 80.61 73.07 60.61 38.85Ours (LabelFormer) 72.38 83.68 77.45 68.33 50.06Table 1: Comparison with state-of-the-artfew static objects, we train the 3DAL dynamic branch with all objects in the scene and only applythe dynamic network during inference. For the A V2 dataset we apply the original 3DAL methodwith both stationary and dynamic branches.First-stage initialization: We obtain the first-stage coarse initializations by running a detectionand tracking model. For fair comparison between the refinement approaches, we train and evaluateall refinement models on the same set of true-positive first-stage object trajectories, i.e., those thatcan be associated with a ground-truth object trajectory (more details in supp.). To ensure our con-clusions generalize, for each dataset, we evaluate the refinement approaches on initializations fromtwo detection models. Specifically, we experiment with a multi-frame version of PointPillars [37]and a recent state-of-the-art detector V oxelNeXt [46] as the first-stage detector. Following [1, 2], weimplement a simple rule-based multi-object tracker, please see supp. for more details.103104Inference time (ms) per traj6869707172Highway Mean IoU (%)OursAuto4D3DAL102103104Inference time (ms) per traj67686970AV2 Mean IoU (%)OursAuto4D3DALFigure 3: Refinement quality vs. runtimeRefinement results against SOTA: Table 1shows that in terms of refinement accuracy, ourmethod consistently outperforms the initializa-tions and state-of-the-art auto-label refinementmethods by a large margin across both detectorinitializations on both datasets. While existingmethods already have significant gains over theinitializations, our method is able to achieve onaverage 92% more mean IoU gains. In addition, our method is able to achieve significantly higheraccuracy on both static and dynamic objects with a single network, as shown in supp. 
Furthermore,we measure the average refinement run time on trajectories from the V oxelNeXt initialization, usinga single RTX2080 GPU. Results are shown in Fig. 3 alongside mean IoU. We find that LabelFormeris2:7faster than the window-based dynamic 3DAL on the Highway dataset. On A V2, which has52% of static objects, our method is still slightly faster than 3DAL which uses a non-window-basedstatic branch. Auto4D uses all available points while 3DAL samples at most 1024 points per frame,resulting in Auto4D having much longer run time but higher performance at large IoU thresholds.Auto-Label Mean AP AP@0.5 AP@0.7 AP@0.8N/A 82.98 91.62 79.17 55.97Init 83.63 92.67 79.51 55.30Auto4D 83.42 92.71 79.32 55.073DAL 83.64 92.66 79.76 55.79Ours 84.81 92.91 80.91 59.00Table 2: [Highway] Downstream taskImproving object detection with auto-labeled data:We additionally study the effect of the refined auto-labels in the downstream object detection task. Specif-ically, we use V oxelNeXt [46] as the first-stage de-tector and train various auto-labellers on the mainHighway training set (consisted of 150 human-labelledsnippets), and use them to label an additional 500 Highway snippets. We then train a downstreamobject detector with a combined dataset of 150 human-labelled snippets and 500 auto-labelled snip-pets. Table 2 shows the average precision (AP) results. Overall, training with the bigger datasetaugmented with auto-labels is better than with the human-labelled dataset alone, and our refinedauto-labels give the biggest boost with a 3% gain at 0.8 IoU.7Box Enc. Point Enc. Perturb Window #Att Mean IoU RC@0.5 RC@0.6 RC@0.7 RC@0.8M1 X X All 6 69.20 80.68 73.92 63.28 43.76M2 X X All 6 70.17 81.73 74.74 64.20 45.61M3 X X All 6 70.97 82.37 75.37 65.32 47.65M4 X X X All 1 71.59 83.62 76.35 66.62 48.87M5 X X X All 3 72.11 83.93 77.50 67.66 49.90M6 X X X 5 6 71.18 82.33 75.44 65.81 48.80M7 X X X 10 6 71.72 83.26 76.57 66.68 49.69M8 X X X 20 6 71.92 83.14 76.62 67.08 49.91M9 X X X All 6 72.38 83.68 77.45 68.33 50.06Table 3: [Highway] Ablation study using the V oxelNeXt initializations. Perturb refers to thebounding box perturbation augmentation, Window specifies the window size when applicable, and“#Att” is the number of self-attention blocks.Initialization LabelFormerFigure 4: Qualitative results: Initialization vs. refinement. Auto-labels in orange, GT in magenta.Effect of box and point features: M1andM2in Table 3 each only encode box features and pointfeatures respectively. Comparing to M9which uses both features, we show that both the box andpoint features in the per-frame encoder stage contribute to the overall success of the model.Effect of per-frame perturbation: M3!M9in Table 3 shows that the per-frame bounding boxperturbation augmentation we use helps with the final performance.Effect of number of self-attention blocks: M4,M5andM9in Table 3 show that the refinementaccuracy grows with more attention blocks.Effect of temporal context length: Finally, we train and run our method in a window-basedfashion to restrict the temporal context given to each method. M6,M7,M8andM9in Table 3 showthat the refinement accuracy steadily increases with more temporal context given to the model.Qualitative results: Fig. 4 shows an example for which the observations are very sparse at the be-ginning of the trajectory and get denser afterwards. LabelFormer is able to exploit the full temporalcontext and improve the bounding box trajectory, especially at the most challenging frames. 
Moreexamples and comparisons with previously proposed refinement methods can be found in the supp.5 LimitationsThe two-stage auto-labelling paradigm has an inherent limitation: the second stage only refines thecontinuous bounding box localization errors, but does not correct discrete detection errors (falsepositives, false negatives) and tracking errors (id switches, fragmented tracklets). Such discreteerrors will propagate to the final output. Moreover, the refinement performance on the fragmentedtracklets may be sub-optimal due to the missing temporal context (otherwise present in the un-fragmented tracklets). Therefore, a future direction is to explore alternative paradigms that canrecover from discrete errors too. Finally, a failure mode of our proposed refinement model is that itcan degrade the quality of the auto-labels with respect to initialization when the input trajectories areshort and have sparse observations. Such cases are challenging even to humans, yet future work cantry to address this by estimating auto-label uncertainty and leveraging it in downstream applications.6 ConclusionIn this work, we study the trajectory refinement problem in a two-stage LiDAR-based offboardperception paradigm. Our proposed method, LabelFormer , is a single transformer-based model thatleverages full temporal context of the trajectory, fusing information from the initial path as well asthe LiDAR observations. Compared to prior works, our method is simple, achieves much higheraccuracy and runs faster in both urban and highway domains. With the ability to auto-label a largerdataset effectively and efficiently, LabelFormer helps boost downstream perception performance,and unleashes the possibility for better autonomy systems.8AcknowledgmentsWe thank the anonymous reviewers for the helpful comments and suggestions. We would also liketo thank Bin Yang and Ioan Andrei B ˆarsan for insightful discussions and guidance at the early stageof the project. Finally, we would like to thank the Waabi team for their valuable support.References[1] B. Yang, M. Bai, M. Liang, W. Zeng, and R. Urtasun. Auto4d: Learning to label 4d objectsfrom sequential point clouds. CoRR , abs/2101.06586, 2021. URL https://arxiv.org/abs/2101.06586 .[2] C. R. Qi, Y . Zhou, M. Najibi, P. Sun, K. V o, B. Deng, and D. Anguelov. Offboard 3d object de-tection from point cloud sequences. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition (CVPR) , pages 6134–6144, June 2021.[3] B. Wilson, W. Qi, T. Agarwal, J. Lambert, J. Singh, S. Khandelwal, B. Pan, R. Kumar, A. Hart-nett, J. K. Pontes, et al. Argoverse 2: Next generation datasets for self-driving perception andforecasting. In Thirty-fifth Conference on Neural Information Processing Systems Datasetsand Benchmarks Track (Round 2) , 2021.[4] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V . Patnaik, P. Tsui, J. Guo, Y . Zhou, Y . Chai,B. Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages2446–2454, 2020.[5] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u.Kaiser, and I. Polosukhin. Attention is all you need. In I. Guyon, U. V . Luxburg,S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Ad-vances in Neural Information Processing Systems , volume 30. Curran Associates, Inc.,2017. 
URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf .[6] S. Zakharov, W. Kehl, A. Bhargava, and A. Gaidon. Autolabeling 3d objects with differentiablerendering of sdf shape priors. In IEEE Computer Vision and Pattern Recognition (CVPR) , June2020.[7] Z. Pang, Z. Li, and N. Wang. Model-free vehicle tracking and state estimation in point cloudsequences. IROS , 2021.[8] J. Ye, Y . Chen, N. Wang, and X. Wang. Online adaptation for implicit object tracking andshape reconstruction in the wild. IEEE Robotics and Automation Letters , 2022.[9] Z. Qin, J. Wang, and Y . Lu. Weakly supervised 3d object detection from point clouds. In Pro-ceedings of the 28th ACM International Conference on Multimedia , MM ’20, page 4144–4152,New York, NY , USA, 2020. Association for Computing Machinery. ISBN 9781450379885.doi:10.1145/3394171.3413805. URL https://doi.org/10.1145/3394171.3413805 .[10] P. Pfreundschuh, H. F. Hendrikx, V . Reijgwart, R. Dub ́e, R. Siegwart, and A. Cramariuc. Dy-namic object aware lidar slam based on automatic generation of training data. In 2021 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 11641–11647, 2021. doi:10.1109/ICRA48506.2021.9560730.[11] X. Chen, B. Mersch, L. Nunes, R. Marcuzzi, I. Vizzo, J. Behley, and C. Stachniss. Automaticlabeling to generate training data for online lidar-based moving object segmentation. IEEERobotics and Automation Letters , 7(3):6107–6114, 2022. doi:10.1109/LRA.2022.3166544.[12] Y . You, K. Luo, C. P. Phoo, W.-L. Chao, W. Sun, B. Hariharan, M. Campbell, and K. Q.Weinberger. Learning to detect mobile objects from lidar scans without labels. In CVPR ,2022.9[13] L. Zhang, A. J. Yang, Y . Xiong, S. Casas, B. Yang, M. Ren, and R. Urtasun. Towards unsuper-vised object detection from lidar point clouds. In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition (CVPR) , pages 9317–9328, June 2023.[14] L. Fan, Y . Yang, Y . Mao, F. Wang, Y . Chen, N. Wang, and Z. Zhang. Once detected, never lost:Surpassing human performance in offline lidar based 3d object detection. In ICCV , 2023.[15] T. Ma, X. Yang, H. Zhou, X. Li, B. Shi, J. Liu, Y . Yang, Z. Liu, L. He, Y . Qiao, Y . Li, and H. Li.Detzero: Rethinking offboard 3d object detection with long-term sequential point clouds. InProceedings of International Conference on Computer Vision (ICCV) , 2023.[16] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectionaltransformers for language understanding. arXiv preprint arXiv:1810.04805 , 2018.[17] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan,P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advancesin neural information processing systems , 33:1877–1901, 2020.[18] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. De-hghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transform-ers for image recognition at scale. arXiv preprint arXiv:2010.11929 , 2020.[19] A. Arnab, M. Dehghani, G. Heigold, C. Sun, M. Lu ˇci ́c, and C. Schmid. Vivit: A video visiontransformer. In Proceedings of the IEEE/CVF international conference on computer vision ,pages 6836–6846, 2021.[20] T. Meinhardt, A. Kirillov, L. Leal-Taixe, and C. Feichtenhofer. Trackformer: Multi-objecttracking with transformers. In Proceedings of the IEEE/CVF conference on computer visionand pattern recognition , pages 8844–8854, 2022.[21] H. Zhao, L. 
Jiang, J. Jia, P. H. Torr, and V . Koltun. Point transformer. In Proceedings of theIEEE/CVF international conference on computer vision , pages 16259–16268, 2021.[22] D. Wang, X. Cui, X. Chen, Z. Zou, T. Shi, S. Salcudean, Z. J. Wang, and R. Ward. Multi-view3d reconstruction with transformers. In Proceedings of the IEEE/CVF International Confer-ence on Computer Vision , pages 5722–5731, 2021.[23] J. Lu, D. Batra, D. Parikh, and S. Lee. Vilbert: Pretraining task-agnostic visiolinguistic repre-sentations for vision-and-language tasks. Advances in neural information processing systems ,32, 2019.[24] A. Prakash, K. Chitta, and A. Geiger. Multi-modal fusion transformer for end-to-end au-tonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat-tern Recognition , pages 7077–7087, 2021.[25] C.-Y . Wu and P. Krahenbuhl. Towards long-form video understanding. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 1884–1894, 2021.[26] B. Ni, H. Peng, M. Chen, S. Zhang, G. Meng, J. Fu, S. Xiang, and H. Ling. Expandinglanguage-image pretrained models for general video recognition. In Computer Vision–ECCV2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IV ,pages 1–18. Springer, 2022.[27] N. Dvornik, I. Hadji, R. Zhang, K. G. Derpanis, R. P. Wildes, and A. D. Jepson. Stepformer:Self-supervised step discovery and localization in instructional videos. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 18952–18961,2023.10[28] H. Zhao, J. Gao, T. Lan, C. Sun, B. Sapp, B. Varadarajan, Y . Shen, Y . Shen, Y . Chai, C. Schmid,et al. Tnt: Target-driven trajectory prediction. In Conference on Robot Learning , pages 895–904. PMLR, 2021.[29] A. Hu, Z. Murez, N. Mohan, S. Dudas, J. Hawke, V . Badrinarayanan, R. Cipolla, andA. Kendall. Fiery: future instance prediction in bird’s-eye view from surround monocularcameras. In Proceedings of the IEEE/CVF International Conference on Computer Vision ,pages 15273–15282, 2021.[30] A. Cui, S. Casas, K. Wong, S. Suo, and R. Urtasun. Gorela: Go relative for viewpoint-invariantmotion forecasting. arXiv preprint arXiv:2211.02545 , 2022.[31] R. Mahjourian, J. Kim, Y . Chai, M. Tan, B. Sapp, and D. Anguelov. Occupancy flow fieldsfor motion forecasting in autonomous driving. IEEE Robotics and Automation Letters , 7(2):5639–5646, 2022.[32] B. Agro, Q. Sykora, S. Casas, and R. Urtasun. Implicit occupancy flow fields for perceptionand prediction in self-driving. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 1379–1388, 2023.[33] H. Fan, F. Zhu, C. Liu, L. Zhang, L. Zhuang, D. Li, W. Zhu, J. Hu, H. Li, and Q. Kong. Baiduapollo em motion planner. arXiv preprint arXiv:1807.08048 , 2018.[34] A. Sadat, M. Ren, A. Pokrovsky, Y .-C. Lin, E. Yumer, and R. Urtasun. Jointly learnablebehavior and trajectory planning for self-driving vehicles. In 2019 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 3949–3956. IEEE, 2019.[35] S. Casas, A. Sadat, and R. Urtasun. Mp3: A unified model to map, perceive, predict and plan.InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 14403–14412, 2021.[36] K. Renz, K. Chitta, O.-B. Mercea, A. Koepke, Z. Akata, and A. Geiger. Plant: Explain-able planning transformers via object-level representations. arXiv preprint arXiv:2210.14222 ,2022.[37] A. H. Lang, S. V ora, H. Caesar, L. Zhou, J. Yang, and O. 
Beijbom. Pointpillars: Fast en-coders for object detection from point clouds. In Proceedings of the IEEE/CVF conference oncomputer vision and pattern recognition , pages 12697–12705, 2019.[38] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d clas-sification and segmentation. Proc. Computer Vision and Pattern Recognition (CVPR), IEEE ,2017.[39] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Pro-ceedings of the IEEE conference on computer vision and pattern recognition , pages 770–778,2016.[40] T.-Y . Lin, P. Doll ́ar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramidnetworks for object detection. In Proceedings of the IEEE conference on computer vision andpattern recognition , pages 2117–2125, 2017.[41] P. Besl and N. D. McKay. A method for registration of 3-d shapes. IEEE Transactions onPattern Analysis and Machine Intelligence , 14(2):239–256, 1992. doi:10.1109/34.121791.[42] J. Park, Q.-Y . Zhou, and V . Koltun. Colored point cloud registration revisited. In 2017 IEEEInternational Conference on Computer Vision (ICCV) , pages 143–152, 2017. doi:10.1109/ICCV .2017.25.[43] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. 2016.11[44] O. Press, N. Smith, and M. Lewis. Train short, test long: Attention with linear biases enablesinput length extrapolation. In International Conference on Learning Representations , 2022.URL https://openreview.net/forum?id=R8sQPpGCv0 .[45] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In International Con-ference on Learning Representations , 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7 .[46] Y . Chen, J. Liu, X. Zhang, X. Qi, and J. Jia. V oxelnext: Fully sparse voxelnet for 3d objectdetection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , 2023.[47] X. Weng, J. Wang, D. Held, and K. Kitani. 3D Multi-Object Tracking: A Baseline and NewEvaluation Metrics. IROS , 2020.12Supplementary MaterialsA Implementation and Experiment DetailsA.1 Model DetailsPoint Encoder: The point encoder is consisted of a voxelizer followed by a CNN-based backboneand a Feature Pyramid Network (FPN) [40].Specifically, the voxelizer employs voxel resolution of 10cm in all of X,YandZdirections, with aregion of interest of [-12, 12] meters along X, [-4, 4] meters along Y, and [-0.2, 3.0] meters alongZ, to construct a NxNyNzvoxel grid with Nx= 240 ,Ny= 80 andNz= 320 . For all pointsin each 3D voxel grid, we first represent each point as (x;y;z;t)where (x;y;z)isthe positional offset with respect to the voxel centroid and t=ttrefis the difference betweenthe per-point time and the LiDAR sweep end time of the middle frame in the object trajectory. Wefeed this four-vector representation of each point into a two-layer MLP with 16 output channelseach, and LayerNorm [43] and ReLU applied right after the first layer. 
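A rough sketch of this per-point featurization, together with the per-voxel sum pooling described in the next paragraph, is given below; the module name and the scatter-based pooling are our own choices rather than the released implementation.

```python
import torch
import torch.nn as nn

class PerVoxelPointEncoder(nn.Module):
    """Encode (dx, dy, dz, dt) point offsets and sum-pool them per voxel."""

    def __init__(self, hidden=16):
        super().__init__()
        # Two-layer MLP with LayerNorm + ReLU after the first layer, as in the text.
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.out_norm = nn.LayerNorm(hidden)

    def forward(self, point_offsets, voxel_ids, num_voxels):
        # point_offsets: (P, 4) per-point (dx, dy, dz, dt) w.r.t. the voxel centroid
        # voxel_ids:     (P,) flat voxel index of each point
        # num_voxels:    total number of voxels in the grid
        feats = self.mlp(point_offsets)                       # (P, hidden)
        pooled = feats.new_zeros(num_voxels, feats.shape[-1])
        pooled.index_add_(0, voxel_ids, feats)                # sum-pool per voxel
        return self.out_norm(pooled)                          # (num_voxels, hidden)
```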
Then, for each voxel, we pool all point features inside by summing them and applying a LayerNorm after to derive voxel grid features in R^{N_x × N_y × N_z × 16}, which can be viewed as N_x × N_y "feature pillars" along the Z axis. We additionally encode a z-axis positional embedding via a learnable variable block in R^{N_z × 16}. We concatenate the non-empty voxels in each feature pillar with the positional embedding block to obtain an augmented feature in R^{N'_z × 32} (N'_z ≤ N_z is the number of non-empty voxels in the pillar), pass it through a two-layer MLP with 16 and 32 output channels (with LayerNorm and ReLU in between), apply LayerNorm after the second layer, and sum all the features along each pillar to obtain a BEV feature map in R^{N_x × N_y × 32}.

The backbone takes the BEV feature map as input and first applies three stem layers with 120, 96, and 96 output channels. Each stem layer consists of a 3x3 convolutional layer, followed by GroupNorm (GN) and ReLU. The first stem layer has a stride of 2 while the next two have a stride of 1. Then, the output passes through three downsampling stages. The three downsampling stages contain 6, 6, and 4 ResNet [39] blocks with 288, 384, and 576 output channels, respectively. Each ResNet block applies a sequence of 1x1 conv, GN, ReLU, 3x3 conv (with an optional stride parameter), GN, ReLU, 1x1 conv, GN, ReLU to obtain a residual, and sums it with the input. Each downsampling stage downsamples the input by a factor of 2 within the first ResNet block, where the first ResNet block has stride 2 in the middle 3x3 conv, and the residual output is added to the input downsampled with a 1x1 conv block with stride 2 followed by GN. The remaining ResNet blocks in each downsampling stage all have a stride of 1.

The FPN takes the outputs from all three stages in the backbone, which are 4x, 8x, and 16x downsampled from the original resolution (the stem layers downsample the input by 2x, and each downsampling stage in the backbone further downsamples by 2x). The FPN module fuses the two lowest-resolution feature maps first by applying a 1x1 conv block + GN to the 16x low-resolution map, upsampling it by 2x with bilinear interpolation, and adding it to the 8x downsampled feature map. We then perform a similar operation to fuse the 4x downsampled feature map with the newly fused 8x downsampled feature map, and apply a final 3x3 conv to output a feature map at 4x downsampled original resolution with channel dimension 256 as the feature map of per-frame points.

Attention Block: We next provide more details on the feed-forward MLP in each attention block. The feed-forward MLP consists of a linear layer with input dimension 256 and output dimension 512, followed by ReLU, Dropout with 10%, a second linear layer with input dimension 512 and output dimension 256, and another Dropout with 10%. We add the output of the MLP to the input of the MLP and return the sum.

Training Loss Details: In this section, we provide detailed definitions for our loss functions. The regression loss is defined as:

L_reg({b̂_i}, {b*_i}) = Σ_{i=1}^{M} [ smooth_l1(x̂_i, x*_i) + smooth_l1(ŷ_i, y*_i) ] + smooth_l1(l̂, l*) + smooth_l1(ŵ, w*) + (λ/M) Σ_{i=1}^{M} [ smooth_l1(sin 2θ̂_i, sin 2θ*_i) + smooth_l1(cos 2θ̂_i, cos 2θ*_i) ],   (4)

with the hyperparameter λ = 0.1 in practice, and the IoU loss is given by:

L_IoU({b̂_i}, {b*_i}) = (1/M) Σ_{i=1}^{M} IoU( BBox(x̂_i, ŷ_i, l̂, ŵ), BBox(x*_i, y*_i, l*, w*) ),   (5)

to compare the axis-aligned refined and ground-truth bounding boxes in each frame.

A.2 Detector and Tracker

To obtain the first-stage coarse initialization, we follow the standard "detect-then-track" approach, where a detection model is trained to output per-frame detections and we leverage a tracker to obtain consistent tracklets over time. Next, we give more details about the detector and tracker we use.

Detector: To boost detection performance, we adapt the single-frame public implementations of both PointPillars [37] and VoxelNeXt [46] to a multi-frame version that additionally takes 4 past history frames and 4 future frames as input. The validation mean AP of the single-frame vs. multi-frame PointPillars models is 68.78% vs. 71.02% on the Highway dataset, and 55.98% vs. 60.58% on AV2. The validation mean AP of single-frame vs. multi-frame VoxelNeXt is 81.87%/84.25% on Highway and 60.06%/66.25% on AV2.

Tracker: Following [1, 2], we use a simple online tracker, which is largely inspired by [47], and we provide the implementation details of our tracker, in particular how association is performed across frames.

For each new frame at time step t with detections B_t = {b^l_t}, where each b^l_t = (x^l_t, y^l_t, l^l_t, w^l_t, θ^l_t) ∈ R^5 is an individual 2D BEV bounding box, we first filter with Non-Maximum Suppression with an IoU threshold of 0.1, and then filter out bounding boxes with low confidence scores. We then compute a cost matrix with the existing tracklets S_t = {s^j_t} as follows. For each tracklet j, we first predict its bbox position (x^j_t, y^j_t) at time t: if the tracklet has at least two past frames, we set (x^j_t, y^j_t) = 2(x^j_{t-1}, y^j_{t-1}) − (x^j_{t-2}, y^j_{t-2}) via naive extrapolation (assuming constant velocity between two adjacent frames); otherwise we simply set (x^j_t, y^j_t) = (x^j_{t-1}, y^j_{t-1}). Then, for each pair of detected bbox b^l_t and predicted tracklet bbox b^j_t, we compute the Euclidean distance between the bbox centroids as ℓ_{j,l} = sqrt( (x^j_t − x^l_t)² + (y^j_t − y^l_t)² ). For each existing tracklet, we simply employ a greedy strategy to find the nearest detection l* = argmin_l ℓ_{j,l}, and if the closest distance ℓ_{j,l*} is greater than a threshold of 5.0 m, then the tracklet has no match.
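Below is a minimal sketch of this greedy association step (constant-velocity extrapolation, Euclidean centroid distances, nearest-detection matching with the 5.0 m cutoff); the function name and data layout are illustrative, and how conflicts between tracklets claiming the same detection are resolved is not specified above, so it is not handled here.

import numpy as np

def greedy_associate(tracklet_centroids, detections, max_dist=5.0):
    """Greedy tracklet-to-detection association sketch.

    tracklet_centroids: list over tracklets, each a list of past BEV centroids (x, y).
    detections: (L, 2) array of detected BEV box centroids in the current frame.
    Returns {tracklet index: detection index} for tracklets that found a match."""
    matches = {}
    dets = np.asarray(detections, dtype=float)
    if dets.size == 0:
        return matches
    for j, past in enumerate(tracklet_centroids):
        if len(past) >= 2:
            # constant-velocity extrapolation: 2 * p_{t-1} - p_{t-2}
            pred = 2.0 * np.asarray(past[-1]) - np.asarray(past[-2])
        else:
            pred = np.asarray(past[-1])
        dists = np.linalg.norm(dets - pred, axis=1)   # Euclidean centroid distances
        l_star = int(np.argmin(dists))                # nearest detection (greedy)
        if dists[l_star] <= max_dist:
            matches[j] = l_star                       # otherwise: this tracklet has no match
    return matches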
We use greedy matching instead of a more sophisticated matching strategy such as Hungarian matching because it is more robust to noisy and spurious detections.

If a tracklet is matched to a new detection, we add the detection to the tracklet and update the tracklet score as c^j_t = (w · c^j_{t-1} + c^l_t) / (w + 1.0), where c^j_{t-1} is the old tracklet score, c^l_t = 1.0 is the detection confidence score we set for every new detection, and w = Σ_{i=1}^{n^j_{t-1}} 0.9^i, where n^j_{t-1} is the number of tracking steps in the tracklet.

If a tracklet is not matched, we grow the tracklet by naively extrapolating the position and angle, and set the new confidence score as c^j_t = 0.9 · c^j_{t-1}.

If a new detection is not matched to any tracklet, we start a new tracklet and initialize the confidence score c^j_t with the detection's confidence.

Motion State / First-Stage Detector / Second-Stage Refinement / Mean IoU / RC @ 0.5 / RC @ 0.6 / RC @ 0.7 / RC @ 0.8
Stationary PointPillars - 64.46 79.34 69.95 53.58 28.66
Stationary PointPillars Auto4D 67.51 82.56 74.66 61.26 36.91
Stationary PointPillars 3DAL 68.00 81.35 75.04 63.26 41.93
Stationary PointPillars Ours (LabelFormer) 70.67 84.08 77.31 66.03 46.81
Stationary VoxelNeXt - 68.91 83.53 75.15 61.53 38.81
Stationary VoxelNeXt Auto4D 70.87 86.11 79.06 66.28 42.86
Stationary VoxelNeXt 3DAL 70.63 84.55 77.74 66.50 45.13
Stationary VoxelNeXt Ours (LabelFormer) 73.21 86.77 80.09 69.54 51.08
Dynamic PointPillars - 60.53 73.12 61.04 43.23 21.03
Dynamic PointPillars Auto4D 63.26 76.05 65.56 50.27 27.41
Dynamic PointPillars 3DAL 60.80 72.70 60.03 45.36 22.12
Dynamic PointPillars Ours (LabelFormer) 65.64 78.06 68.69 54.74 34.10
Dynamic VoxelNeXt - 62.25 73.96 62.17 46.66 25.57
Dynamic VoxelNeXt Auto4D 64.73 77.01 66.77 52.02 30.72
Dynamic VoxelNeXt 3DAL 63.33 75.85 64.60 48.55 26.18
Dynamic VoxelNeXt Ours (LabelFormer) 66.93 79.49 69.64 56.04 35.88
Table 4: [AV2] Performance break-down for ground-truth stationary vs. dynamic objects

Bbox Enc. Point Enc. Perturb Window #Att Mean IoU RC@0.5 RC@0.6 RC@0.7 RC@0.8
M1 X X All 3 64.20 69.83 59.62 46.11 26.58
M2 X X All 3 69.56 82.36 75.84 64.72 44.50
M3 X X All 3 68.91 80.84 73.05 61.65 42.89
M4 X X X 5 3 69.07 81.19 74.17 63.47 45.05
M5 X X X 10 3 69.33 81.83 74.80 64.12 45.42
M6 X X X All 3 70.59 83.09 75.77 65.11 46.16
M7 X X X All 6 70.93 83.50 76.51 66.34 47.85
Table 5: [Highway] Ablation study using the PointPillars initializations. Perturb refers to the bounding box perturbation augmentation, Window specifies the window size when applicable, and #Att is the number of self-attention blocks.

We terminate all tracklets with a tracking confidence score less than 0.1, and apply NMS at the end over all existing tracklets in the current frame with an IoU threshold of 0.1. We repeat this process for the next frame at time t + 1 until the end of the sequence.

A.3 Association with GT Trajectories

For each initial object trajectory detected and tracked in the first stage, we use a simple heuristic to associate it with a ground-truth object trajectory as follows: for each frame in which the detected trajectory is present, we identify the ground-truth bounding box that has the maximum IoU with the detected bounding box in that frame. If this ground-truth box has an IoU of less than 10%, then we fail to find a matching ground-truth box for this frame. As a result, we obtain M' ground-truth object IDs for a detected trajectory of length M, with 0 ≤ M' ≤ M, as we might not be able to find a ground-truth ID for every frame. If M' is 0, then we have failed to find an associated ground-truth object: we consider the detected object as a false positive and discard it in trajectory refinement training and evaluation. Otherwise, we take the most common ground-truth actor id out of the M' objects and assign it as the associated ground-truth object trajectory for training and evaluation.

B Additional Experiments

Static vs.
Dynamic Objects The A V2 validation set contains around 52% stationary objects (weclassify an actor as static if the max displacement in the ground-truth displacement in all X,YandZdirection is within 1.0m). Table 4 additionally shows the dynamic vs. stationary object break-downon the A V2 dataset. Our method is able to achieve significantly higher refinement accuracy on bothstatic and dynamic objects with a single network.Ablation with PointPillars Init: We additionally performed the same set of ablation studies asTable 3 in the main paper on the Highway dataset with the PointPillars-based initializations. Table 5shows the results, which give the same conclusions as the V oxelNeXt-based initializations in themain paper.15Architecture Mean IoU RC @ 0.5 RC @ 0.6 RC @ 0.7 RC @ 0.8Init 65.90 76.76 68.32 56.27 36.77MLP (1-layer) 70.14 81.56 74.20 63.95 46.23MLP (3-layer) 70.50 82.31 74.82 65.04 46.80MLP (6-layer) 70.50 82.31 74.82 65.04 46.80LabelFormer(6-block) [44] 72.38 83.68 77.45 68.33 50.06Table 6: [Highway] Ablation of MLP vs. Transformer using the V oxelNeXt initializationsPositional Encoding Mean IoU RC @ 0.5 RC @ 0.6 RC @ 0.7 RC @ 0.8Absolute [5] 71.71 83.63 76.97 67.11 48.67AliBi [44] 72.38 83.68 77.45 68.33 50.06Table 7: [Highway] Ablation of positional encoding using the V oxelNeXt initializationsMLP vs. Transformer To understand whether attention/transformer-like architecture helps, weablate the effect of the cross-frame attention module by replacing it with an MLP. Specifically, toaggregate features across frames, we mean pool the per-frame features from all frames, apply anMLP to the pooled features, and then add the aggregated feature back to each frame-level feature.The updated per-frame features are then passed to the decoder module of LabelFormer. For theMLP, each layer consists of a linear layer, followed by LayerNorm and ReLU. We conducted thisexperiment with 1, 3, 6 layers respectively. The results in Table 6 show that the cross-frame attentionmodule outperforms the MLP architecture by a large margin, demonstrating the benefits of attentionwith a 40.9% higher relative gain in mean IoU.Positional Encoding We additionally ablate our choice of positional encoding with the V oxelNeXtinitialization on the Highway dataset. Table 7 shows that the relative positional encoding AliBi [44]gives overall better performance than using the vanilla absolute positional encodings [5].C Qualitative ResultsIn this section we show qualitative results for trajectory refinement, comparing LabelFormer withthe coarse initialization, 3DAL [2] and Auto4D [1].We illustrate initial and refined auto-labels for the Highway dataset with V oxelNeXt initializa-tions. Fig. 5 showcases trajectories of two objects on the top that have sparse observations (andhence worse initializations) at the beginning, and denser observations towards the end, and ours La-belFormer is able to give better refinement for the worse initializations because it is able to leveragemore temporal context more effectively than previous works. Fig. 5, 6, 7, 8 additionally showcasethat our method works better qualitatively on trajectories with both sparse and dense observationsand with various speeds. 
For more visualizations on Argoverse, please refer to the supplementary video.

Figure 5: [Highway] Qualitative results showcasing different object trajectories (first-stage init, refined by 3DAL, Auto4D and Ours LabelFormer) in each object's trajectory coordinate frame. The ground-truth bounding box is in magenta, and the auto-label is in orange. To avoid cluttering, we visualize every other three bounding box in the first 50 frames of the trajectory. Per-example mean IoU (Init / 3DAL / Auto4D / LabelFormer): 73.73 / 75.37 / 80.40 / 85.77; 83.05 / 77.28 / 88.55 / 88.34; 80.43 / 81.55 / 89.82 / 88.54; 78.53 / 80.71 / 82.59 / 86.89.

Figure 6: [Highway] More qualitative results. Ground-truth in magenta, auto-labels in orange. Per-example mean IoU (Init / 3DAL / Auto4D / LabelFormer): 77.10 / 80.47 / 86.89 / 87.86; 80.43 / 86.14 / 85.01 / 89.62; 62.84 / 67.26 / 68.23 / 81.50; 79.05 / 80.96 / 86.52 / 90.34; 72.12 / 81.15 / 84.38 / 85.98.

Figure 7: [Highway] More qualitative results. Ground-truth in magenta, auto-labels in orange. Per-example mean IoU (Init / 3DAL / Auto4D / LabelFormer): 65.72 / 77.98 / 80.67 / 84.58; 62.84 / 67.26 / 68.23 / 81.50; 62.55 / 69.17 / 69.73 / 76.55; 63.06 / 74.98 / 74.44 / 85.72.

Figure 8: [Highway] More qualitative results. Ground-truth in magenta, auto-labels in orange. Per-example mean IoU (Init / 3DAL / Auto4D / LabelFormer): 63.74 / 66.65 / 80.50 / 82.37; 80.19 / 81.81 / 82.59 / 86.46; 79.63 / 81.94 / 82.04 / 87.48.
X0cmlTh1Vl | Waypoint-Based Imitation Learning for RoboticManipulationLucy Xiaoyang Shi∗Archit Sharma∗Tony Z. Zhao Chelsea FinnDepartment of Computer ScienceStanford University{lucyshi,architsh,tonyzhao,cbfinn }@stanford.eduAbstract: While imitation learning methods have seen a resurgent interest forrobotic manipulation, the well-known problem of compounding errors continuesto afflict behavioral cloning (BC). Waypoints can help address this problem byreducing the horizon of the learning problem for BC, and thus, the errors com-pounded over time. However, waypoint labeling is underspecified, and requiresadditional human supervision. Can we generate waypoints automatically withoutany additional human supervision? Our key insight is that if a trajectory segmentcan be approximated by linear motion, the endpoints can be used as waypoints. Wepropose Automatic Waypoint Extraction (AWE ) for imitation learning, a prepro-cessing module to decompose a demonstration into a minimal set of waypointswhich when interpolated linearly can approximate the trajectory up to a specifiederror threshold. AWE can be combined with any BC algorithm, and we find thatAWE can increase the success rate of state-of-the-art algorithms by up to 25% insimulation and by 4-28% on real-world bimanual manipulation tasks, reducing thedecision making horizon by up to a factor of 10. Videos and code are available athttps://lucys0.github.io/awe/.Keywords: imitation learning, waypoints, long-horizon1 IntroductionFigure 1: Our approach reduces the horizon of imitationlearning by extracting waypoints from demonstrations.The simple supervised learning approachof behavioral cloning (BC) has enabled acompelling set of robotic results, from self-driving vehicles [ 1] to manipulation [ 2,3,4,5,6]. However, due to the lack of cor-rective feedback, errors grow quadraticallyin the length of the episode for behaviorcloning (BC) algorithms [ 7,8], colloquiallyknown as the compounding errors prob-lem. Waypoints [ 9,10,4,11] are a rele-vant proposition in this context: breakingthe demonstration into a subset of statesthat can reconstruct the trajectory reducesthe effective length of the decision-makingproblem, addressing the compounding er-rors problem while still allowing the use of simple methods such as BC. Our primary objective is toselect a set of waypoints to reduce the effective horizon of the demonstration, and not necessarily findkey bottleneck states. However, labeling waypoints is both an underspecified problem and requiresadditional human supervision.∗Equal contribution.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Our objective is to develop a method for selecting waypoints for a given demonstration without anyadditional human supervision. For a robot arm, if a trajectory segment can be approximated linearly,a low-level controller can reliably imitate the segment without explicitly learning the intermediatestates. Thus, we can represent that segment just by the endpoints. Extending this argument toarbitrary trajectories, we can use a subsequence of states as waypoints to represent the trajectory ifthe trajectory can be approximated well by linearly interpolating between the selected waypoints.The BC prediction problem then transforms from predicting the next action to the next waypoint.How do we select the waypoint sequence that approximates a given a trajectory? 
Given a budgetfor the reconstruction error, where the error is defined as the maximum proprioceptive distancebetween the actual and reconstructed trajectory, we want to select the shortest subsequence ofstates for which the reconstruction error is within budget. This can be posed as a standard dynamicprogramming problem, by iteratively choosing an intermediate state as the waypoint and recursivelyselecting waypoints for the two resulting trajectory segments. The recursion terminates whenever thereconstruction error for the linearly interpolated trajectory between the endpoints is already withinthe budget. Importantly, finding the sequence of waypoints relies only on the robot’s proprioceptiveinformation, which is already collected during teleoperation. AWE makes no additional assumptionabout the extrinsic environment (state estimation, point clouds, etc.) and requires no additional labelinformation from humans.Overall, our work proposes Automatic Waypoint Extraction (AWE ), a preprocessing module thatbreaks an expert demonstration into sequence of waypoints. AWE requires minimal additionalinformation, and thus, can easily be plugged into current BC pipelines. We combine AWE with twostate-of-the-art imitation learning methods, diffusion policy [ 5] and action-chunking with transformers(ACT) [ 6], and study its performance when learning from human-teleoperated demonstrations. Ontwo existing simulated imitation learning benchmarks and multiple real bi-manual manipulation tasks,we find that AWE consistently improves performance, with up to 25% increase in success rate insimulation and 4-28% increase in success rate on real-world tasks.2 Related WorkImitation learning is a long-studied approach to training robotic control policies from demonstra-tions [ 1,12,13]. A central challenge is when compounding errors cause the policy to drift awayfrom states seen in the demonstration data [ 8], leading to poor performance with small demonstrationdatasets. Prior approaches have aimed to improve imitation learning performance by developing newpolicy architectures [ 14,11,15,16], using policies with expressive distribution classes [ 17,18,5],introducing modified action spaces [ 19,20,21,22,4], constructing modified training objectives [ 2,6],utilizing particular visual representations [ 23,24], incorporating data augmentation [ 25,26], or col-lecting online data [ 8,27,28]. We instead aim to tackle the challenge of compounding errors byextracting waypoints that shorten the horizon. Our approach is orthogonal to and complementary tomany of these prior developments; indeed, our experiments show that AWE can be combined withtwo recent, representative methods [5, 6] to improve their performance.Prior works also attempted to reduce the policy horizon. Some use hand-defined high-level primi-tives [ 19,22,29,30,31,32,33,34], but they lack flexibility and require extra engineering. Belkhaleet al. [35] proposes a hybrid action space that incorporates both sparse waypoints and dense ac-tions. While innovative, it requires humans to label waypoints either during data collection orpost-processing, which may limit its scalability. Other recent works extract waypoints using variousheuristics [ 9,10,4,11], such as selecting waypoints at timesteps when the robot is at zero-velocity oractuating the gripper [ 4]. We are inspired by the success of these methods: they provide dramatic per-formance and data-efficiency improvements in some settings. 
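For reference, a minimal sketch of this style of heuristic (label a timestep as a waypoint when the end-effector speed is near zero or the binary gripper state flips); the threshold and array layout are illustrative assumptions and not taken from any of the cited implementations.

import numpy as np

def heuristic_waypoints(ee_pos, gripper_open, speed_eps=1e-3):
    """Zero-velocity / gripper-change waypoint heuristic (sketch).

    ee_pos: (T, 3) end-effector positions; gripper_open: (T,) binary gripper state.
    Returns sorted timestep indices labeled as waypoints."""
    ee_pos = np.asarray(ee_pos, dtype=float)
    gripper_open = np.asarray(gripper_open)
    speed = np.linalg.norm(np.diff(ee_pos, axis=0), axis=1)            # per-step displacement
    idx = {0, len(ee_pos) - 1}                                          # always keep the endpoints
    idx |= set((np.where(speed < speed_eps)[0] + 1).tolist())           # near-zero velocity
    idx |= set((np.where(np.diff(gripper_open) != 0)[0] + 1).tolist())  # gripper state flips
    return sorted(idx)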
However, we find that these heuristics do not apply in general, leading to low success on imitation learning benchmarks and fine manipulation tasks (Sec 5.4). In contrast to these methods, we circumvent the need for human waypoint labelling or heuristics. Our approach instead automatically extracts waypoints that minimize the trajectory reconstruction cost, which yields improvements on two existing simulated manipulation benchmarks and multiple real robot tasks. We discuss more related works in Appendix D.2.

3 Preliminaries

Problem Setup. We assume an expert-collected dataset of demonstrations D = {τ_0, τ_1, . . . , τ_n}, where each trajectory τ_i = {(o_j, x_j)}_{j=1}^{|τ_i|} is a sequence of paired raw visual observations o and proprioceptive information x. The proprioceptive information can either be the end-effector pose or the joint pose, and includes the gripper width. In this work, we use pose control for the action space, i.e., the proprioceptives x are the action outputs as well. Next, we briefly review two recent successful methods for BC, which we will use in our experiments.

Diffusion Policy. Diffusion policy [5] models the conditional action distribution as a denoising diffusion probabilistic model (DDPM) [36], allowing for better representation of the multi-modality in human-collected demonstrations. Specifically, diffusion policy uses a DDPM to model the action sequence p(A_t | o_t, x_t), where A_t = {a_t, . . . , a_{t+C}} represents a chunk of the next C actions. The final action is the output of the following denoising process [37]:

A_t^{k-1} = α A_t^k − γ ε_θ(o_t, x_t, A_t^k, k) + N(0, σ² I),   (1)

where A_t^k is the denoised action sequence at step k. Denoising starts from A_t^K sampled from Gaussian noise and is repeated until k = 1. In Eq. 1, (α, γ, σ) are the parameters of the denoising process and ε_θ is the score function trained using the MSE loss ℓ(θ) = (ε^k − ε_θ(o_t, x_t, A_t^k + ε^k, k))². The noise at step k of the diffusion process, ε^k, is sampled from a Gaussian of appropriate variance [36].

Action Chunking with Transformers. Action chunking with transformers (ACT) [6] models the policy distribution p(A_t | o_t, x_t) as a conditional VAE [38, 39], using a transformer-based encoder and decoder. The decoder output is a chunk of actions of size C. Chunking is particularly important for high-frequency fine-grained manipulation tasks, with chunk sizes C being as high as 100 [6].

4 Automatic Waypoint Extraction for Imitation Learning

The goal of this section is to develop our method for Automatic Waypoint Extraction (AWE). First, we define an objective that assesses the quality of the reconstructed trajectory for a given sequence of waypoints. Next, we show how a simple dynamic programming algorithm can be used to select the minimal number of waypoints that have a reconstruction error below a specified threshold. Finally, we discuss how to preprocess a demonstration dataset using AWE before plugging it into the BC algorithm, along with some practical considerations when training and evaluating a waypoint-based policy.

Reconstruction Loss. For an expert demonstration τ, define the sequence of proprioceptives as τ_p = {x_j}_{j=0}^{|τ|−1} and let W denote a sequence of waypoints such that W = {w_0, . . . , w_L}, where w_i denotes the proprioceptive information in the waypoint. We reconstruct an approximate trajectory τ̂ by interpolating between the waypoints, i.e., τ̂ = f(W) for an interpolation function f.
While we restrict to linear interpolation in this work, the framework can be extended to incorporate splines. To measure how well a sequence of waypoints approximates the true trajectory, we measure how much the interpolated trajectory deviates from the true trajectory. We define the reconstruction loss as the maximum distance of any state in the original trajectory from the reconstructed trajectory, that is,

L(τ̂, τ) = max_{x ∈ τ_p} min_{x̂ ∈ τ̂} l(x, x̂),   (2)

where l(·, ·) is some distance function (for example, the Euclidean l2 distance). The term min_{x̂ ∈ τ̂} l(·, x̂) denotes the shortest distance of a proprioceptive state to the interpolated trajectory τ̂. How do we aggregate projection errors for proprioceptives in the true trajectory? While there are several options, for example, the mean projection error over τ_p, we define the reconstruction loss as the maximum projection error over all proprioceptives in τ_p. The success of a trajectory often relies on reaching key states, and the mean error can be low while having a high projection error for those key states. While a low reconstruction loss with the maximum projection error also does not guarantee downstream success, it encourages minimizing outlier projection errors that are potentially critical for a successful execution. The reconstruction loss L is visualized in Figure 2.

Figure 2: Visualizing the loss L.

Waypoint Selection via Dynamic Programming. Given the reconstruction loss, how do we use it to optimize waypoints? We consider the following optimization problem:

min_W |W|   s.t.   L(f(W), τ) ≤ η,   (3)

i.e., minimize the number of selected waypoints such that the reconstruction loss is below the budget η. As presented, waypoints can be arbitrary points in the proprioceptive space, but we restrict waypoint selection to the states visited in the expert trajectory τ. The problem then simplifies to finding the shortest subsequence of τ_p such that the reconstruction loss is less than η, which can be solved efficiently with dynamic programming (DP). For a trajectory segment, either linearly interpolating between the endpoints already reconstructs the segment sufficiently well (i.e., reconstruction loss less than η), in which case the endpoints are returned as waypoints; or, for every intermediate state between the endpoints, we (1) break the trajectory into two segments at that intermediate state and (2) recursively find the shortest subsequence for each segment. Finally, we choose the intermediate state resulting in the shortest subsequence when the waypoints from its two trajectory segments are merged, and return the merged waypoints. The pseudocode for selecting waypoints with DP is in Algorithm 1.

Preprocessing Demonstrations. For an expert trajectory τ = {(o_0, x_0), . . . , (o_T, x_T)}, denote the selected waypoints as W = {(w_0, t_0), . . . , (w_L, t_L)}, where w_i denotes the waypoint and t_i denotes its time index in τ. The training problem for a BC algorithm changes from predicting the next proprioceptive state to predicting the next waypoint. However, if done naïvely, the training dataset of next waypoints will be much smaller. But we can use all observations in τ between two consecutive waypoints by labeling them with the closest waypoint after the observation. This follows from the intuition that following the waypoints implies the robot tries to reach w_{k+1} from w_k, and therefore should target w_{k+1} from intermediate states between them as well. The final dataset can be written as D^τ_waypoint = {(o_t, x_t, w_{next_wp(t)})}_{t=0}^{T−1}, where next_wp(t) = argmin_{j ∈ {0,1,...,L}} j such that t_j > t, i.e., the first waypoint whose time index exceeds t.
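The following is a minimal, self-contained sketch of the procedure described above (the single-segment reconstruction check of Eq. 2, the recursion behind Eq. 3 that is later given as Algorithm 1, and the relabeling of observations with the next waypoint); it assumes Euclidean distance and linear interpolation, and the function names, memoization keys, and index bookkeeping are illustrative choices rather than the authors' code.

import numpy as np

def segment_loss(traj, start, end):
    # Eq. 2 restricted to one segment: max over states traj[start..end] of the distance to
    # the straight line joining traj[start] and traj[end] (Euclidean l, linear interpolation f).
    a, b = traj[start], traj[end]
    seg = b - a
    denom = float(np.dot(seg, seg)) + 1e-12
    worst = 0.0
    for x in traj[start:end + 1]:
        t = np.clip(np.dot(x - a, seg) / denom, 0.0, 1.0)   # projection onto the segment
        worst = max(worst, float(np.linalg.norm(x - (a + t * seg))))
    return worst

def select_waypoints(traj, lo, hi, eta, memo=None):
    # Shortest subsequence of state indices in [lo, hi] whose linear interpolation keeps the
    # reconstruction loss within the budget eta (Eq. 3); the recursion mirrors Algorithm 1 and
    # is memoized on (lo, hi) so repeated sub-segments are solved only once.
    memo = {} if memo is None else memo
    if (lo, hi) in memo:
        return memo[(lo, hi)]
    if segment_loss(traj, lo, hi) <= eta:          # the two endpoints alone are good enough
        memo[(lo, hi)] = [lo, hi]
        return memo[(lo, hi)]
    best = None
    for w in range(lo + 1, hi):                    # try every intermediate state as a waypoint
        left = select_waypoints(traj, lo, w, eta, memo)
        right = select_waypoints(traj, w, hi, eta, memo)
        cand = left[:-1] + right                   # w ends `left` and starts `right`; dedupe it
        if best is None or len(cand) < len(best):
            best = cand
    memo[(lo, hi)] = best
    return best

def relabel(demo, waypoint_idx):
    # Pair each (o_t, x_t), t = 0..T-1, with the proprioceptive state of the next waypoint,
    # i.e. the first selected index strictly greater than t, as in the dataset defined above.
    data, k = [], 0
    for t, (o, x) in enumerate(demo[:-1]):
        while waypoint_idx[k] <= t:
            k += 1
        data.append((o, x, demo[waypoint_idx[k]][1]))
    return data

# Example usage (illustrative): xs = np.stack([x for _, x in demo]);
# idx = select_waypoints(xs, 0, len(xs) - 1, eta=0.005); pairs = relabel(demo, idx)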
Theprocess of selecting waypoints and constructing the augmented dataset is repeated for every expertdemonstration τ∈ D, and the resulting datasets are merged to get the final training dataset.Overview and Practical Considerations . We have proposed AWE , a simple method that canpreprocess a demonstration into sequence of waypoints without any additional supervision. Thetraining dataset can be relabeled with the next waypoint instead of next propriopceptive state, andplugged into a BC pipeline. The choice of policy distribution class used with AWE is important;waypoints introduce increased multi-modality into the conditional action distribution as differentdemonstrations may be processed into different waypoints. Using more expressive policy classescapable of representing multi-modal action distributions is critical, as introducing waypoints canmake the performance worse for less expressive policy classes (Figure 6).Why does AWE return meaningful waypoints? An intuitive notion of waypoints relies on registeringimportant events happening in the extrinsic environment (grasping a cup, opening a door, etc.) whileAWE uses just the proprioceptive information to select waypoints. AWE relies on the idea that theexpert demonstrations will naturally deviate from linear motion during such key events. For simplerparts of the task, such as free-space reaching, demonstrations are more likely to be approximated bylinear motion, resulting in fewer waypoints. Moreover, decreasing the budget ηallows for selectingmore waypoints in general, and thus, better reconstruction as visualized in Figure 4.An important consideration at test-time is to allow more time for position-control to reach waypoints,as waypoints are farther apart compared to proprioceptive positions in the original expert demon-strations. The exact instantiation for the low-level controller depends on the whether the robot isoperating in the joint space or end-effector space, which we discuss in Appendix C.1.5 ExperimentsOur experiments seek to answer the following questions: (1) How well does AWE combine withrepresentative behavioral cloning methods? (2) Can it be used to tackle standard imitation learning4Table 1: Success rate (%) for simulated bimanual tasks. We report results on both training with scripteddata and training with human data, with 3 seeds and 50 policy evaluations each. Baseline results are obtainedfrom Zhao et al. [6]. Overall, AWE +ACT significantly outperforms previous methods.Cube Transfer Bimanual Insertionscripted data human data scripted data human dataBC-ConvMLP 1 0 1 0BeT [14] 27 1 3 0RT-1 [15] 2 0 1 0VINN [24] 3 0 1 0ACT [6] 86 50 32 20AWE +ACT (Ours) 99 71 57 30benchmarks with real human demonstrations? (3) Can it be effective on a real-robot? (4) Howdoes the parameterization of the policy affect the performance? (5) How do the selected waypointsand downstream performance change as we vary the hyperparameters? To answer these questions,we compare the performance of recent state-of-the-art BC methods with and without AWE on 8tasks and 10 datasets. First, we evaluate AWE on a set of simulation environments, specifically twobimanual manipulation tasks from Zhao et al. [6]and three manipulation tasks from the RoboMimicbenchmark [ 17]. We evaluate AWE on a set of three bimanual manipulation tasks on the real robot:coffee making ,wiping the table andscrewdriver handover . 
Hyperparameter and implementationdetails can be found in Appendix B and C respectively.5.1 Bimanual Simulation SuiteThe bimanual simulation suite contains two fine-grained long-horizon manipulation tasks in MuJoCo[40]. The observation space includes a 480×640image and the current joint positions for bothrobots. The 14-dimensional action space corresponds to the target joint positions. Demonstrationsare400to500steps at a control frequency of 50Hz. In the Cube Transfer task, the right robot armneeds to pick up the cube from a random position on the table, and then hand it to the left arm mid-air.ForBimanual Insertion , both the peg and the socket are placed randomly on the table. The armsneed to first pick them up respectively, then insert the peg into the socket mid-air. Both tasks requiredelicate coordination between the two arms and closed-loop visual feedback: error in grasping candirectly lead to failure of handover or insertion.Two datasets are available for each task: one collected with a scripted policy, and one collectedby human demonstrators, both with 50 demonstrations. As shown in Table 1, AWE outperformscompetitive BC baselines on all tasks and datasets in the bimanual simulation suite, where someof the baselines completely fail due to task difficulty. AWE can increase the success rate of ACT,the state-of-the-art method on this benchmark 17% on an average, and up to 25% on the scriptedbimanual insertion. The effective length of the training demonstrations reduce by a factor of 7×to10×, even allowing for improvements on human data which is fairly multi-modal to begin with.Notably, the performance improves by 50% when imitating human demonstrations on the bimanualinsertion task. Overall, this suggests that the benefit from reducing the effective training horizonexceeds any potential downside from the increased multi-modality introduced by AWE.5.2 RoboMimic SuiteNext, we evaluate on three simulated tasks from the RoboMimic [17] manipulation suite: Liftwherethe robot arm has to pick up a cube from the table, Can where the robots are required to pick up asoda can from a large bin and place it into a smaller target bin, and Square where robots are taskedto pick up a square nut and place it on a rod. It is challenging due to the high precision needed topick up the handle and insert it into a tightly-fitted rod. Episodes start with randomly initializedobject configurations. All environments return RGB observations and the action space is the 6DoFend-effector pose, with an additional degree for the gripper.We combine AWE with the state-of-the-art method on this benchmark, Diffusion Policy [ 5]. Sincediffusion policy achieves a near-perfect success rates on Lift,Can, and Square when training on a5Table 2: Success rate (%) for behavior cloning benchmark, RoboMimic (Visual Policy). AWE + Diffusionis more data-efficient than previous methods. We evaluate the policy every 100 epochs across the training,and report the average of the max performance across 3 training seeds and 30 different environment initialconditions (90 in total). Results on LSTM-GMM and IBC are obtained from Chi et al. [5]for comparison tomore traditional methods. 
The performance scaling is visualized in Figure 8.Task # Demos AWE + Diffusion (Ours) Diffusion LSTM-GMM IBCLift30 100.0±0.0 100.0±0.0 - -50 100.0±0.0 100.0±0.0 - -100 100.0±0.0 100.0±0.0 - -200 100.0±0.0 100.0±0.0 96 73Can30 69.0±1.4 61.0±5.9 - -50 85.7±1.9 82.3±3.3 - -100 95.3±1.7 93.3±0.0 - -200 96.7±0.9 97.3±2.5 88 1Square30 62.3±3.3 44.3±6.1 - -50 67.0±2.9 57.3±4.2 - -100 91.7±3.9 82.0±7.0 - -200 94.7±3.9 95.0±4.1 59 0dataset with 200proficient-human demonstrations, we focus our evaluation on how the performancescales with the number of demonstrations, both with and without AWE . The results in Table 2 suggestthatAWE consistently improves the performance of diffusion policy as the number of demonstrationsis scaled from 30to200, while both of them outperform LSTM-GMM and implicit BC [ 41] with halfthe demonstration data, or even less. The improvements are larger when the number of demonstrationsis smaller or the task is longer-horizon, for example, an 18% increase in the success rate when using30demonstrations on the Square task.5.3 Real-World Bimanual TasksFor real-robot evaluations, we use ALOHA [ 6], a low-cost open-source bimanual hardware setup. Thesetup consists of two leader arms and two follower arms, where the joint positions are synchronizedbetween the leaders and followers during teleoperation. The observation space consists of RGBimages from 4cameras: two are mounted on the wrist of the follower robots, allowing for close-upviews of objects for fine-manipulation, and the other two are mounted on the front and at the toprespectively. The demonstration data consists of 4 camera streams and the joint positions for eachrobot at 50Hz. We refer readers to the original paper for more hardware details.We experiment with three long-horizon tasks, each requiring precise coordination between the twoarms, illustrated in Figure 3. For Screwdriver Handover , the right arm needs to pick up thescrewdriver that is randomly initialized in a 15cm×20cm rectangular region (#1) and hand it to theleft arm mid-air (#2), followed by the left arm dropping it into the cup (#3). For Wiping the Table , aroll of paper towels is randomly placed in a 15cm×10cm region. The opening of the roll alwaysfaces the right side, with naturally occurring variations in length and spacing. The left arm presses onthe roll to prevent it from moving (#1), while the right arm tears off one segment of the paper towel(#2) and places it on a fixed location to absorb the spilled liquid (#3). For Coffee Making , a smallcoffee pod is randomized in a 15cm×10cm region. The left arm needs to pick it up (#1), followed bythe right arm opening the coffee machine (#2). The left arm then carefully inserts the coffee pod intothe slot (#3), with the right arm closing the lid (#4). Next, the right arm grasps a transparent cup withupto2cm randomization in the position and places it under the coffee outlet (#5).The three tasks emphasize precision and coordination, and involve deformable or transparent objectsthat can be hard to perceive or simulate. For example, placing the coffee pod into the machinerequires high precision. It is easy for the gripper or coffee pod to collide with the machine due tothe small clearance. The screwdriver handover task emphasizes the coordination between two arms.Grasping the paper towel requires accurate perception of the deformable material, which also haslow-contrast against itself. 
The gripper needs to move accurately so as to only grasp the opening butnot collide with the roll and push it away.6#1 #2 #3 init. #4 #5init.init.#1 #2 #3#1 #2 #3Screwdriver Handover Wiping the Table Coffee Making Figure 3: Real-World Bimanual Tasks. We consider three challenging real-world bi-manual tasks: (top)picking up a screw driver, handing it over to the other arm, and placing it in a cup, (middle) tearing off a segmentof paper towel and putting it on a spill, and (bottom) putting a coffee pod into a coffee machine, closing thecoffee machine, and placing a cup underneath the dispenser. Initial object positions are within the red rectangle.Table 3: Success rate (%) for real world tasks. AWE improves the success of ACT on all three tasks rangingfrom 4% to 28%. On the longest horizon coffee making task, AWE improves success by 28%.Screwdriver Handover Wiping the Table Coffee MakingACT 84 92 36AWE + ACT (Ours) 92 96 64As shown in Table 3, AWE achieves substantial success on each task. It consistently improves overACT by 8%, 4%, and 28% on Screwdriver handover, Wiping table, and Coffee Making, respectively.We observe that the most common failure case for ACT is inaccurate action prediction, which resultsfrom compounding errors on these long-horizon tasks. For example, the robot may make a wrongprediction at the beginning and grasp the coffee pod at an inconvenient position. The subsequentpredictions become increasingly incorrect, and thus the robot fails to insert the coffee pod into themachine. On the other hand, AWE can more accurately grasp the coffee pod due to a smaller decisionhorizon, resulting in more successful insertions into the coffee machine. Leveraging the low-levelcontroller to execute linear motions instead of relying on accurate policy predictions can reduce theerrors compounded over time. By accurately detecting waypoints for a successful handover, fortearing, and for inserting, AWE decreases the policy horizon and consistently improves performance.5.4 AnalysisWaypoint selection for different error budgets. We visualize a ground truth trajectory of end-effector (EE) positions and the EE trajectory reconstructed using AWE for the Can task in Figure 4.As the the error budget ηis reduced, the reconstructed trajectory tracks the original trajectory better.Importantly, as the budget is decreased, waypoints are added to harder segments of the task, asthey are less linear while the number of waypoints for simpler, linear paths stays similar. Smallererror thresholds lead to gradual increases in the number of selected waypoints. We also measureperformance with varying error thresholds (the only hyperparameter), for AWE +DiffusionPolicy onthe Can task with 50 demonstrations. Figure 5 shows that when the ηis too high (too few waypoints)or too low (too many waypoints), the agent does not take full advantage of AWE.On the importance of modeling multi-modality for AWE .The usage of waypoints can increasethe multimodality of the target conditional action distribution. We compare the performance ofAWE when trained with mean-squared error (MSE) loss (i.e, a unimodal Gaussian with identitycovariance) and a more expressive Gaussian mixture model (GMM) with 5modes. As shown inFigure 6, GMM policies can benefit from AWE , as they can represent multimodal action distributions.However, vanilla BC with a MSE loss degrades in performance. 
BC has a mode-covering behavior,and insufficient representative power of unimodal Gaussian can cause the performance to degrade.7Figure 4: As the error budget ηdecreases, our methodselects fewer waypoints if linear interpolation aptlyapproximates the segment. Best viewed on our website.7e-3 5e-3 1e-3Error Threshold7880828486Success Rate (%)AWEw/o AWEFigure 5: Success rate vs. error budget threshold η.Performance drops slightly if the budget is too tightand more significantly if the budget is too permissive.GMM AWE+GMM Uni.Gaussian AWE+Uni.Gaussian5060708090100Success Rate (%)Figure 6: AWE requires expressive policy classes.While expressive policies that can represent multimodaldistributions benefit from AWE (GMMs on left), theperformance can degrade for policy classes that are notsufficiently expressive ( right ).Figure 7: Comparing the replay success rate of AWEto common heuristics for waypoint selection. Withsimilar numbers of waypoints, following waypointsfrom AWE leads to more consistent task completionthan following the waypoints from heuristics.Comparison to heuristics. Prior works [ 9,10,4,11] have been successful by extracting waypointsusing simple heuristics. Are simple heuristics enough for extracting important waypoints? Weexperiment with two heuristics. The first one is similar to Shridhar et al. [4], labeling timesteps aswaypoints when the end-effector velocity is close to zero, or when the binary gripper state changes.The second heuristic selects waypoints with fixed intervals. For AWE and both heuristics, we extractwaypoints for all 200 trajectories in the RoboMimic Liftdataset, and measure the success rate whenreplaying the demonstration, i.e., following the extracted waypoints starting from the demonstrationtrajectory’s initial state. We adjust the selection threshold or interval to generate similar numbers ofwaypoints across methods for comparable results. Results in Figure 7 show that these two heuristicsdo not lead to satisfactory success rates even when simply replaying the trajectories.6 ConclusionWe presented a method for extracting waypoints from demonstrations of robotic manipulationtasks, therefore reducing the horizon of imitation learning problems. We found that AWE canbe combined with state-of-the-art imitation learning methods such as diffusion policy and ACTto improve performance, especially in data limited settings. AWE also consistently improvedperformance on three real-world dexterous manipulation tasks. Finally our analysis indicated theimportance of the AWE optimization compared to naive or heuristic waypoint selection methods, aswell as the effect of the error budget and policy distribution class on performance.Limitations. AWE leverages proprioceptive information to reparameterize demonstration trajectoriesin end-effector or joint space, an approach that may not be applicable to torque-controlled robot arms,tasks requiring forceful manipulation, or other robotics problems such as purely visual navigationor legged locomotion. Our evaluation only considers quasi-static tasks, and AWE currently doesnot account for velocities, which might be important for dynamic tasks. Furthermore, for tasksthat require extreme precision at certain times, we expect that AWE would require a tight errorbudget, diluting the benefit of using waypoints. 
This limitation might be resolved by identifyingwhen such precision is needed, either automatically or by incorporating some human supervision,and subsequently modifying the AWE optimization objective.8AcknowledgmentsThis work was supported by Schmidt Futures and ONR grants N00014-20-1-2675 and N00014-21-1-2685. We would like to thank Suneel Belkhale and Chen Wang for helpful discussions, and allmembers of the IRIS lab for constructive feedback.References[1]D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances in neuralinformation processing systems , 1, 1988.[2] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel. Deep imitationlearning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 5628–5635. IEEE, 2018.[3]A. Zeng, S. Song, J. Lee, A. Rodriguez, and T. Funkhouser. Tossingbot: Learning to throwarbitrary objects with residual physics. IEEE Transactions on Robotics , 36(4):1307–1319, 2020.[4]M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation. In Conference on Robot Learning , pages 785–799. PMLR, 2023.[5]C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy:Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137 , 2023.[6]T. Z. Zhao, V . Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulationwith low-cost hardware. arXiv preprint arXiv:2304.13705 , 2023.[7]S. Ross and D. Bagnell. Efficient reductions for imitation learning. In Proceedings of thethirteenth international conference on artificial intelligence and statistics , pages 661–668.JMLR Workshop and Conference Proceedings, 2010.[8]S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predictionto no-regret online learning. In Proceedings of the fourteenth international conference on artifi-cial intelligence and statistics , pages 627–635. JMLR Workshop and Conference Proceedings,2011.[9]K. Hsiao and T. Lozano-Perez. Imitation learning of whole-body grasps. In 2006 IEEE/RSJinternational conference on intelligent robots and systems , pages 5657–5662. IEEE, 2006.[10] B. Akgun, M. Cakmak, K. Jiang, and A. L. Thomaz. Keyframe-based learning from demonstra-tion: Method and evaluation. International Journal of Social Robotics , 4:343–355, 2012.[11] S. James and A. J. Davison. Q-attention: Enabling efficient learning for vision-based roboticmanipulation. IEEE Robotics and Automation Letters , 7(2):1612–1619, 2022.[12] S. Schaal. Is imitation learning the route to humanoid robots? Trends in cognitive sciences , 3(6):233–242, 1999.[13] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning fromdemonstration. Robotics and autonomous systems , 57(5):469–483, 2009.[14] N. M. Shafiullah, Z. Cui, A. A. Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodes with one stone. Advances in neural information processing systems , 35:22955–22968,2022.[15] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXivpreprint arXiv:2212.06817 , 2022.[16] Y . Zhu, A. Joshi, P. Stone, and Y . Zhu. Viola: Imitation learning for vision-based manipulationwith object proposal priors. arXiv preprint arXiv:2210.11339 , 2022.9[17] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. 
Fei-Fei, S. Savarese,Y . Zhu, and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrations forrobot manipulation. In arXiv preprint arXiv:2108.03298 , 2021.[18] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee,I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Conference on Robot Learning ,pages 158–168. PMLR, 2022.[19] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, et al. Transporter networks: Rearranging the visual world for roboticmanipulation. In Conference on Robot Learning , pages 726–747. PMLR, 2021.[20] C. Wang, R. Wang, A. Mandlekar, L. Fei-Fei, S. Savarese, and D. Xu. Generalization throughhand-eye coordination: An action space for learning spatially-invariant visuomotor control.In2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages8913–8920. IEEE, 2021.[21] E. Johns. Coarse-to-fine imitation learning: Robot manipulation from a single demonstration.In2021 IEEE international conference on robotics and automation (ICRA) , pages 4613–4619.IEEE, 2021.[22] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipula-tion. In Conference on Robot Learning , pages 894–906. PMLR, 2022.[23] P. Florence, L. Manuelli, and R. Tedrake. Self-supervised correspondence in visuomotor policylearning. IEEE Robotics and Automation Letters , 5(2):492–499, 2019.[24] J. Pari, N. M. Shafiullah, S. P. Arunachalam, and L. Pinto. The surprising effectiveness ofrepresentation learning for visual imitation. arXiv preprint arXiv:2112.01511 , 2021.[25] M. Jia, D. Wang, G. Su, D. Klee, X. Zhu, R. Walters, and R. Platt. Seil: Simulation-augmentedequivariant imitation learning. arXiv preprint arXiv:2211.00194 , 2022.[26] A. Zhou, M. J. Kim, L. Wang, P. Florence, and C. Finn. Nerf in the palm of your hand:Corrective augmentation for robotics via novel-view synthesis. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 17907–17917, 2023.[27] M. Laskey, S. Staszak, W. Y .-S. Hsieh, J. Mahler, F. T. Pokorny, A. D. Dragan, and K. Goldberg.Shiv: Reducing supervisor burden in dagger using support vectors for efficient learning fromdemonstrations in high dimensional state spaces. In 2016 IEEE International Conference onRobotics and Automation (ICRA) , pages 462–469. IEEE, 2016.[28] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning ,pages 991–1002. PMLR, 2022.[29] D. Morrison, P. Corke, and J. Leitner. Closing the loop for robotic grasping: A real-time,generative grasp synthesis approach. Robotics: Science And Systems , 2018. doi:10.15607/RSS.2018.XIV .021.[30] A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser. Learning synergiesbetween pushing and grasping with self-supervised deep reinforcement learning. Ieee/rjsInternational Conference On Intelligent Robots And Systems , 2018. doi:10.1109/IROS.2018.8593986.[31] J. Wu, X. Sun, A. Zeng, S. Song, J. Lee, S. Rusinkiewicz, and T. Funkhouser. Spatial actionmaps for mobile manipulation. Robotics: Science And Systems , 2020. doi:10.15607/RSS.2020.XVI.035.10[32] K. Zakka, A. Zeng, J. Lee, and S. Song. Form2fit: Learning shape priors for generalizableassembly from disassembly. Ieee International Conference On Robotics And Automation , 2019.doi:10.1109/ICRA40945.2020.9196733.[33] A. 
Zeng, S. Song, J. Lee, A. Rodriguez, and T. Funkhouser. Tossingbot: Learning to throwarbitrary objects with residual physics. Ieee Transactions On Robotics , 2019. doi:10.1109/TRO.2020.2988642.[34] A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu,E. Romo, N. Fazeli, F. Alet, N. C. Dafle, R. Holladay, I. Morona, P. Q. Nair, D. Green, I. Taylor,W. Liu, T. Funkhouser, and A. Rodriguez. Robotic pick-and-place of novel objects in clutter withmulti-affordance grasping and cross-domain image matching. arXiv preprint arXiv: 1710.01330 ,2017.[35] S. Belkhale, Y . Cui, and D. Sadigh. Hydra: Hybrid robot actions for imitation learning. arXivpreprint arXiv: 2306.17237 , 2023.[36] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. Advances in NeuralInformation Processing Systems , 33:6840–6851, 2020.[37] M. Welling and Y . W. Teh. Bayesian learning via stochastic gradient langevin dynamics.InProceedings of the 28th international conference on machine learning (ICML-11) , pages681–688, 2011.[38] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 ,2013.[39] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditionalgenerative models. Advances in neural information processing systems , 28, 2015.[40] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In 2012IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026–5033, 2012.doi:10.1109/IROS.2012.6386109.[41] P. R. Florence, C. Lynch, A. Zeng, O. Ramirez, A. Wahid, L. Downs, A. S. Wong, J. Lee,I. Mordatch, and J. Tompson. Implicit behavioral cloning. Conference On Robot Learning ,2021.[42] S. Tonneau, A. Del Prete, J. Pettr ́e, C. Park, D. Manocha, and N. Mansard. An efficient acycliccontact planner for multiped robots. IEEE Transactions on Robotics , 34(3):586–601, 2018.doi:10.1109/TRO.2018.2819658.[43] B. Ichter, E. Schmerling, T.-W. E. Lee, and A. Faust. Learned critical probabilistic roadmapsfor robotic motion planning. arXiv preprint arXiv: 1910.03701 , 2019.[44] Lai. Rapidly-exploring random forest: Adaptively exploits local structure with generalisedmulti-trees motion planning. IEEE Robotics and Automation Letters , 2021.11A AWE PseudocodeWe provide the complete pseudocode for AWE in Algorithm 1.Algorithm 1 Automatic Waypoint Extraction (AWE)input :D; // expert demonstrationsinput :L, f, η ;// waypoint selection via dynamic programmingdefgetwaypoints (τ, η,M):ifτ̸∈ M then// check if the endpoints are valid waypointsifL(f({τ.start , τ.end}), τ)≤ηthenM[τ] ={τ.start , τ.end};// try all intermediate states as waypoints, and return the smallest setelse// initialize length of current shortest subsequencem← ∞ ;// loop over all intermediate states as waypointsforw∈τ.middoWbefore←getwaypoints (τ.before (w), η);Wafter←getwaypoints (τ.after (w), η);// dedupe w, as it is in both of themW ← (Wbefore\{w})∪ W after;if|W|< m thenm← |W| ;M[τ]← W ;return M[τ];// construct dataset for next waypoint predictiondefpreprocess traj(W, τ):Daug← {} ;for(ot, xt)∈τdo// select the nearest future waypoint in Ww← W .next waypoint (t);Daug← D aug∪ {(ot, xt, w)};return Daug;Dnew← {} ;forτ∈ DdoM ← {} ; // memoize waypoints for efficient dynamic programmingDnew← D new∪preprocess traj(getwaypoints (τ, η,M), τ)output :DnewB HyperparametersB.1 Error Budget ThresholdThe only hyperparameter we need for waypoint selection is η, the error threshold (Table 4). 
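One simple way to choose this threshold in practice is to sweep a few candidate budgets and keep the loosest one whose average waypoints-to-trajectory-length ratio reaches roughly the 1:8 guideline given later in Sec. B.4; the sketch below assumes a user-supplied ratio_fn that runs waypoint selection and reports that ratio, and all names and candidate values are illustrative.

def pick_error_budget(ratio_fn, candidates=(0.02, 0.01, 0.008, 0.005, 0.002), target=1.0 / 8.0):
    """Return the loosest candidate budget eta whose average waypoints-to-trajectory-length
    ratio reaches the target; fall back to the tightest candidate otherwise.
    ratio_fn(eta) should run waypoint selection over the demonstrations and return that ratio."""
    for eta in sorted(candidates, reverse=True):   # loosest budget (fewest waypoints) first
        if ratio_fn(eta) >= target:
            return eta
    return min(candidates)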
Thethreshold ηis the same for all data sizes {30, 50, 100, 200 }across all tasks on RoboMimic, i.e.η= 0.005. We also use a consistent ηfor both scripted data and human data on both tasks in theBimanual Manipulation suite, i.e. η= 0.01. Two out of three real-world tasks also use the sameη; however, on the Coffee Making task, we opt for a lower ηto select more waypoints due to thehigh-precision nature of the task.12Table 4: Hyperparameter for waypoint selection.Task Error thresholod ( η)Lift 0.005Can 0.005Square 0.005Cube Transfer 0.01Bimanual Insertion 0.01Screwdriver Handover 0.01Wiping Table 0.01Coffee Making 0.008B.2 ACT in Bimanual Simulation SuiteWe use the same hyperparameters as the ACT paper [ 6], shown in Table 5, except reducing the chunksize from 100to50. Intuitively, as the length of trajectories reduces after running AWE , the chunksize can also be reduced to represent the same wall-clock time.Hyperparameter ACT AWE + ACTlearning rate 1e-5 1e-5batch size 8 8# encoder layers 4 4# decoder layers 7 7feedforward dimension 3200 3200hidden dimension 512 512# heads 8 8chunk size 100 50beta 10 10dropout 0.1 0.1Table 5: Hyperparameters of AWE +ACT and ACT. The only difference is the reduction in chunk size.B.3 Diffusion Policy in RoboMimicWe use the exact same set of training hyperparameters as Diffusion Policy [ 5] (Table 6). The onlyadditional hyperparameter we added is the “control multiplier” (bottom row), which allows thelow-level controller to take more steps to reach the target position at the inference time. This can beuseful when predicted waypoints are far apart.B.4 A Guide to Hyperparameter SelectionWe suggest selecting an error threshold for new tasks based on a ratio of the number of waypoints tothe average length of the trajectories. Our recommendation is to aim for a ratio of approximately 1:8,which can be automatically calculated using the waypoint generation script in our codebase. The idealratio may vary depending on the specific task and control frequency. Based on our empirical findings,a ratio between 1:5 and 1:15 tends to effectively reduce the policy horizon while still maintaining anaccurate approximation of the trajectories.For tasks involving real robots using ALOHA hardware [ 6], we advise turning on temporal ensembling(Sec C.3) to ensure smoother actions. Nonetheless, if the policy appears overly hesitant, two potentialremedies are: (a) disabling temporal ensembling, and (b) increasing DT to emulate a blockingcontroller, where DT refers to the time interval between each update in a simulation or a control loop.13Hyperparameter Lift Can SquareCtrl Pos Pos PosTo 2 2 2Ta 8 8 8Tp 10 10 10#D-params 9 9 9#V-params 22 22 22# Layers 8 8 8Emb Dim 256 256 256Attn Dropout 0.3 0.3 0.3Lr 1e-4 1e-4 1e-4WDecay 1e-3 1e-3 1e-3D-Iters Train 100 100 100D-Iters Eval 100 100 100Control Multiplier 10 1 10Table 6: Hyperparameters for diffusion policy. 
Ctrl: position or velocity control, To: observation horizon, Ta: action horizon, Tp: action prediction horizon, # D-Params: diffusion network number of parameters in millions, # V-Params: vision encoder number of parameters in millions, Emb Dim: transformer token embedding dimension, Attn Dropout: transformer attention dropout probability, Lr: learning rate, WDecay: weight decay (for transformer only), D-Iters Train: number of training diffusion iterations, D-Iters Eval: number of inference diffusion iterations, Control Multiplier: multiplier for the low-level control steps.

C Implementation and Experiment Details

C.1 Controller

We use an Operational Space Controller (OSC) in RoboMimic, which allows position and orientation control of the robot's end-effector. It takes in the desired absolute position and orientation of the end-effector, and computes the necessary torques and velocities.

We use the default joint position controller in the Bimanual Manipulation suite. On real-world tasks, we made no change to the controller except for the Coffee Making task, where we increased the step time from 0.02 to 0.1. This allows the controller to operate closer to a blocking controller, and to execute low-level actions longer until reaching the desired joint position.

C.2 Loss Function

To determine the distance between potential waypoints and the ground truth trajectory, we project the ground truth state onto the linearly interpolated waypoint trajectory and compute the L2 distance for the xyz position. For orientation, we convert the axis angles to quaternions and slerp two ground truth quaternions to determine the projection. Then we sum the position and orientation distances as the state loss. For the trajectory loss, we take a max over all states.

C.3 Temporal Ensemble

For all the ACT experiments, we adopt a temporal ensemble technique as in the original paper [6]. Temporal ensembling is an approach to improve the smoothness of action chunking in robotic tasks. It queries the policy at each timestep, creating overlapping chunks and multiple predicted actions for each timestep. These predictions are then combined via a weighted average using an exponential weighting scheme, w_i = exp(−m × i), that helps in smoothly incorporating new observations. This technique enhances the precision and smoothness of motion without any additional training cost, but requires extra computation during inference. We refer readers to Zhao et al. [6] for more details.

C.4 Computation Cost

Computing waypoints is inexpensive, especially compared to the training budget. The wall clock time for labeling one trajectory in Lift is 0.8 seconds on average.
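A minimal sketch of the exponential temporal-ensembling average described in Sec. C.3; treating row 0 as the oldest of the overlapping predictions for the current timestep (so that w_0 is the largest weight) is an assumption about the indexing convention, and m is a user-chosen constant.

import numpy as np

def temporal_ensemble(actions_for_t, m=0.01):
    """Combine all actions predicted for the current timestep by overlapping chunks.

    actions_for_t: (K, action_dim) array; row 0 is the oldest prediction whose chunk still
    covers the current timestep, and the last row is the newest. Weights w_i = exp(-m * i)
    are normalized before averaging."""
    actions = np.asarray(actions_for_t, dtype=float)
    w = np.exp(-m * np.arange(len(actions)))
    return (w[:, None] * actions).sum(axis=0) / w.sum()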
However, how do these methods influence the performance downstream? To address this, we compare the success rate of a policy learned using waypoints selected by AWE against those from subsampled trajectories. We experiment on two RoboMimic tasks, Can and Square, both using 100 demonstrations. We compare against two subsampling ratios, specifically 5 and 7: a ratio of 7 produces a number of waypoints comparable to that of AWE in RoboMimic.

Task AWE (Ours) Subsampled by 7 Subsampled by 5
Can (100 demo) 95.3 77.3 72.7
Square (100 demo) 91.7 77.3 86.4
Table 7: Comparison of success rates for policies learned using AWE and subsampling methods on Can and Square.

As shown in Table 7, we find that: 1) AWE consistently surpasses the subsampling approach; 2) the most effective subsampling ratio is task-dependent. For instance, in the Can task, subsampling by 7 exceeds subsampling by 5. Conversely, the Square task sees better results with a ratio of 5. This variance suggests that AWE can discern and select waypoints that are more instrumental for downstream learning.

D.2 Keypose-based motion planning
Our work is conceptually related to keypose-based motion planning, which has seen notable contributions in recent years. Tonneau et al. [42] utilized keyposes for multi-robot planning, emphasizing bottleneck states. Ichter et al. [43] employed probabilistic roadmaps to identify critical configurations, while the RRF* method of Lai [44] adaptively targets bottleneck regions. These methods require complete knowledge of the environment to plan trajectories. In contrast, our method derives waypoints from the robot's proprioceptive data. We assume we only have raw RGB images and no low-level information about the external environment.
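For concreteness, the sketch below illustrates the waypoint acceptance criterion implied by Sections B.1 and C.2: each ground-truth state is projected onto the linearly interpolated waypoint trajectory, the position error is an L2 distance, the orientation error is obtained by slerping between quaternions, the state loss sums the two, the trajectory loss takes the max over states, and a candidate waypoint set is accepted if this loss is below the error threshold η from Table 4. This is our own minimal illustration rather than the released AWE implementation; in particular, the per-state segment assignment (minimizing over segments) and the use of SciPy's Slerp are simplifying assumptions.

```python
# Minimal sketch of the trajectory reconstruction error from Section C.2,
# assuming states = [(position (3,), quaternion xyzw (4,)), ...] and
# waypoints in the same format. Not the authors' implementation.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def state_loss(p, q, wp_a, wp_b):
    """Distance from one ground-truth state to the segment between two waypoints."""
    (pa, qa), (pb, qb) = wp_a, wp_b
    seg = pb - pa
    t = 0.0 if np.allclose(seg, 0.0) else float(np.clip(np.dot(p - pa, seg) / np.dot(seg, seg), 0.0, 1.0))
    pos_err = np.linalg.norm(p - (pa + t * seg))                         # L2 for xyz position
    q_interp = Slerp([0.0, 1.0], Rotation.from_quat([qa, qb]))([t])[0]   # slerped orientation at t
    rot_err = (q_interp.inv() * Rotation.from_quat(q)).magnitude()       # angular distance to the state
    return pos_err + rot_err                                             # state loss = position + orientation

def trajectory_loss(states, waypoints):
    """Max over states of the distance to the closest interpolated waypoint segment."""
    return max(
        min(state_loss(p, q, waypoints[k], waypoints[k + 1]) for k in range(len(waypoints) - 1))
        for p, q in states
    )

def within_threshold(states, waypoints, eta=0.005):   # eta as in Table 4 (RoboMimic tasks)
    return trajectory_loss(states, waypoints) <= eta
```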
N3VbFUpwaa | Generalization of Heterogeneous Multi-Robot Policiesvia Awareness and Communication of CapabilitiesyPierce Howell1, Max Rudolph2, Reza Torbati1, Kevin Fu1, Harish Ravichandar11Georgia Institute of Technology,2University of Texas at AustinEmail: pierce.howell@gatech.eduAbstract: Recent advances in multi-agent reinforcement learning (MARL) areenabling impressive coordination in heterogeneous multi-robot teams. However,existing approaches often overlook the challenge of generalizing learned policiesto teams of new compositions, sizes, and robots. While such generalization mightnot be important in teams of virtual agents that can retrain policies on-demand, itis pivotal in multi-robot systems that are deployed in the real-world and must read-ily adapt to inevitable changes. As such, multi-robot policies must remain robustto team changes – an ability we call adaptive teaming . In this work, we investigateifawareness and communication of robot capabilities can provide such general-ization by conducting detailed experiments involving an established multi-robottest bed. We demonstrate that shared decentralized policies, that enable robots tobe both aware of and communicate their capabilities, can achieve adaptive team-ing by implicitly capturing the fundamental relationship between collective capa-bilities and effective coordination. Videos of trained policies can be viewed athttps://sites.google.com/view/cap-comm .Keywords: Heterogeneity, Multi-Robot Teaming, Generalization1 IntroductionHeterogeneous robot teams have the potential to address complex real-world challenges that arisein a wide range of domains, such as precision agriculture, defense, warehouse automation, supplychain optimization, and environmental monitoring. However, a key hurdle in realizing such potentialis the challenge of ensuring effective communication, coordination, and control.Existing approaches to address the challenges of multi-robot systems can be crudely categorized intotwo groups. First, classical approaches use well-understood controllers with simple local interactionrules, giving rise to complex global emergent behavior [1]. Indeed, such controllers have proven ex-traordinarily useful in diverse domains. However, designing them requires both significant technicalexpertise and considerable domain knowledge. Second, recent learning-based approaches alleviatethe need for expertise and domain knowledge by leveraging advances in learning frameworks andcomputational resources. Learning has been successful in many domains, such as video games [2],autonomous driving [3], disaster response [4], and manufacturing [5].However, learning approaches are not without their fair share of limitations. First, the majority of ex-isting methods focus on homogeneous teams and, as such, cannot handle heterogeneous multi-robotteams. Second, and more importantly, even existing methods designed for heterogeneous teams areoften solely concerned with the challenge of learning to coordinate a given team, entirely ignoringthe challenge of generalizing the learned behavior to new teams. Given the potentially prohibitivecost of retraining coordination policies after deployment in real-world settings, it is imperative thatmulti-robot policies generalize learned behaviors to inevitable changes to the team.In this work, we focus on the challenge of generalizing multi-robot policies to team changes. 
Inparticular, we focus on generalization of trained policies to teams of new compositions, sizes, androbots that are not encountered in training (see Fig. 1). We refer to such generalization as adaptiveteaming , wherein the learning policy can readily handle changes to the team without additionalEqual Contribution.yThis work was supported in part by the Army Research Lab under Grants W911NF-17-2-0181 and W911NF-20-2-00367th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: We investigate the role of capability awareness and communication in generalizing decentralizedheterogeneous multi-robot coordination policies to teams of new composition, size, and robots.training. To this end, we need policies that can reason about how a group of diverse robots cancollectively achieve a common goal, without assigning rigid specialized roles to individual robots.We investigate the role of robot capabilities in generalization to new teams. Our key insight is thatadaptive teaming requires the understanding of how a team’s diverse capabilities combine to dic-tate the behavior of individual robots. For instance, consider an autonomous heterogeneous teamresponding to multiple concurrent wildfires. Effective coordination in such situations requires rea-soning about the opportunities and constraints introduced by the robots’ individual and relative ca-pabilities, such as speed, water capacity, and battery range. In general, robots must learn how theirindividual capabilities relate to those of others to determine their role in achieving shared objectives.We develop a policy architecture that can explicitly reason about robot capabilities when select-ing actions. Our architecture has four key properties: i) capability awareness : our design enablesactions to be conditioned on continuous capabilities in addition to observations, ii) capability com-munication : we leverage graph networks to learn how robots must communicate their capabilitiesiii)robot-agnostic : we utilize parameter sharing and learn policies that are not tied to individualrobots, and iv) decentralized : our trained policies can be deployed in a decentralized manner. To-gether, these four properties provide the potential to generalize to new teams. One can view thisdesign as an extension of agent identification techniques [6] to the metric space of capabilities. Assuch, capabilities do not merely serve to distinguish between agents during training to enable be-havioral heterogeneity [7], but also to provide a more general means to encode how individual andrelative capabilities influence collective behavior.We evaluate the utility of capability awareness and communication in two heterogeneous multi-robottasks in sim and real. Our results reveal that both awareness and communication of capabilities canenable adaptive teaming, outperforming policies that lack either one or both of these features in termsof average returns and task-specific metrics. 
Further, capability-based policies achieve superior zero-shot generalization than existing agent identification-based techniques, while ensuring comparableperformance on the training set.2 Related WorkLearning for multi-robot teams : Recent advances in deep learning are providing promising ap-proaches that circumvent the challenges associated with classical control of multi-robot systems.Multi-agent reinforcement learning (MARL), in particular, has been shown to be capable of solvinga wide variety of tasks, including simple tasks in the multi-agent particle environments (MPE) [8],complex tasks under partial observability [9], coordinating an arbitrary number of agents in videogames [2], and effective predictive modeling of multi-agent systems [10]. These approaches aredriven by popular MARL algorithms like QMIX [11], MADDPG [8], and MAPPO [12] – nontrivialextensions of their single agent counterparts DQN [13], DDPG [14], and PPO [15], respectively. Weadopt a PPO-based learning framework given its proven benefits despite its simplicity [12]. Cen-tralized training, decentralized execution (CTDE) is a commonly used framework in which decen-tralized agents learn to take actions based on local observations while a centralized critic provides2feedback based on global information [16, 17]. We use the CTDE paradigm as it lends itself natu-rally to multi-robot teams since observation and communication are often restricted. However, itsimportant to note that our approach is agnostic to the specific learning algorithm.Learning for heterogeneous teams : Many MARL algorithms were originally designed for use inhomogeneous multi-agent teams. However, truly homogeneous multi-robot teams are rare exceptbecause of manufacturing differences, wear and tear, or task requirements. Most real-world multi-robot problems such as search & rescue, agriculture, and surveillance require a diverse set of ca-pabilities aggregated from heterogeneous robots [18–20]. While many MARL approaches considerheterogeneity, they either tend to focus on differences in behavior exhibited by physically identi-cal robots [21], or identical behavior exhibited by physically-different robots [22, 23]. A commonstrategy used to elicit heterogeneous behavior from shared models is referred to as agent identifi-cation or behavioral typing, in which the agents’ observations are appended with an agent-specificindex [24, 25]. While these methods have been shown to be highly effective, recent investigationshave revealed issues with scalability [26], and robustness to observation noise [7]. While capability-awareness is similar in spirit to existing identification-based techniques, it does not require assigningindices to individual robots and can thus generalize to teams with new robots. Further, most existingmethods do not simultaneously handle teams with physical and behavioral differences. Accordingto a recent heterogeneous multi-robot taxonomy [7], our work falls under the category consistingof physically-different robots that differ in behavior, but share the same objective. Two recent ap-proaches belong to this same category [4, 7]. However, one is limited to a discrete set of robottypes [4] and the other learned decentralized robot-specific policies that cannot handle the additionof new robots and might not generalize to new compositions [7].Generalization : For applications to real-world multi-robot systems, it is essential to consider thegeneralization capabilities of learned control policies. 
In our formulation, there are two axes of generalization in heterogeneous multi-robot teams: combinatorial generalization (new team sizes and new compositions of the same robots) and individual capability generalization (new robots). Prior works reliant on feedforward or recurrent networks tend to be limited to teams of static size [8, 27]. Combinatorial generalization for homogeneous teams can be achieved with graph network-based policies [27, 28]. However, existing methods tend to struggle with generalization in the presence of heterogeneity [29]. While methods that employ agent identification [24, 25] might be able to achieve combinatorial generalization by reusing the IDs from training, it is unclear how they can handle new robots. In stark contrast, capability-based policies are robot-agnostic and can take the capabilities of the new robot as an input feature to determine its actions.

3 Capability Awareness and Communication for Adaptive Teaming
In this section, we first model heterogeneous teams and then introduce policy architectures that enable capability awareness and communication, along with the associated training pipeline.

3.1 Modeling Heterogeneous Multi-Robot Teams
We model teams of N heterogeneous robots as a graph G = (V, E), where each node v_i ∈ V is a robot, and each edge e_ij = (v_i, v_j) ∈ E is a communication link. We use z_i to denote the observations of the ith robot, which include its capabilities and its sensor readings of the environment. We assume that the robots' heterogeneity can be captured by their capabilities. We represent the capabilities of the ith robot by a real-valued vector c_i ∈ C ⊆ R_+^C, where C is the C-dimensional space of all capabilities of the robots. An example of a multi-dimensional capability is a vector with elements representing payload, speed, and sensing radius. When robot i does not possess the kth capability, the kth element of c_i is set to zero.

3.2 Problem Description
We are interested in learning a decentralized control policy that can i) effectively coordinate a team of heterogeneous robots to achieve the task objectives, and ii) generalize readily to teams of both novel compositions and novel robots that are not encountered during training. Our problem can be viewed as a multi-agent reinforcement learning problem that can be formalized as a decentralized partially-observable Markov Decision Process (Dec-POMDP) [30]. We expand on the Dec-POMDP formulation to incorporate the capabilities of heterogeneous robots and arrive at the tuple ⟨D, S, {A_i}, {Z_i}, C, T, R, O⟩, where D is the set of N robots, S is the set of global states, {A_i} is the set of action spaces across all robots, {Z_i} is the set of joint observations across all robots, C is the multi-dimensional space of capabilities, R is the global reward function, and T and O are the joint state transition and observation models, respectively. Our objective is to learn decentralized action policies that control each robot to maximize the expected return E[Σ_{t=0}^{T_h} r_t] over the task horizon T_h. The decentralized policy of the ith robot, π_i(a_i | õ_i), defines the probability that robot i takes action a_i given its effective observation õ_i. The effective observation õ_i of the ith robot is a function of both its individual observation and that of others in its neighborhood.

3.3 Policy Architecture
To enable capability awareness and communication in multi-robot coordination policies, we designed a policy architecture that leverages graph convolutional networks (GCNs), since a plethora of recent approaches attest to their ability to learn effective communication protocols and enable decentralized decision-making in multi-agent teams [27]. Further, operations are local to nodes and can therefore generalize to graphs of any topology (i.e., permutation invariance [31]). We illustrate our architecture in Fig. 1 and explain its components below. Specific details, including the exact architecture and hyperparameters we used, can be found in Appendix E.

Capability awareness: We argue that the heterogeneity of robots can have a significant impact on how the team must coordinate to achieve a task. Specifically, individual and collective capabilities can affect the roles the robots play within the team. For instance, consider a heterogeneous mobile robot team responding to wildfire incidents at multiple locations. To effectively respond, a robot within the team must account for its speed and water capacity. Therefore, it is necessary that robots are aware of their capabilities. To enable such awareness, we append each robot's capability vector c_i to its observations before passing them along as node features to the graph network. This information helps each robot condition its actions not just on observations, but also on its capabilities.

Capability communication: In addition to awareness, communicating capability information can enable a team to reason about how its collective capabilities impact task performance. Revisiting our wildfire example, robots in the response team can effectively coordinate their efforts by implicitly and dynamically taking on roles based on their relative speed and water capacity. But such complex decision making is only possible if robots communicate with each other about their capabilities. Each node in our GCN-based policy receives capability information along with the corresponding robot's local observations, so the learned communication protocol can help the team communicate and effectively build representations of their collective capabilities.

Note that our policy is robot-agnostic and learns the implicit and interconnected relationships between the observations, capabilities, and actions of all robots in the team. Further, as we demonstrate in our experiments, capability awareness and communication enable generalization to teams with new robot compositions, sizes, and even entirely new robots, as long as their capabilities belong to the same space of capabilities C.

3.4 Training Procedure
We utilize parameter sharing to train a single action policy that is shared by all of the robots. Parameter sharing is known to improve learning efficiency by limiting the number of parameters that must be learned. More importantly, parameter sharing is required for our problem so policies can transfer to new robots without the need for training new policies or assigning robots to already-trained policies. Additionally, we believe that parameter sharing serves a secondary role in learning generalizable strategies for efficient generalization.
Sharing parameters enables the policy to learngeneralized coordination strategies that, when conditioned on robot capabilities and local observa-tions, can be adapted to specific robots and contexts.We employ a centralized training, decentralized execution (CTDE) paradigm to train the actionpolicy. We apply an actor-critic model, and train using proximal policy optimization (PPO). Theactor-critic model is composed of a decentralized actor network (i.e., shared action policy) thatmaps robots observations to control actions, and a centralized critic network [12], which estimatesthe value of the team’s current state based on centralized information about the environment androbots aggregated from individual observations. Finally, we trained our policies on multiple teamsuntil they converged, with the teams changing every 10 episodes to stabilize training.44 Experimental DesignWe conducted detailed experiments to evaluate how capability awareness and communication impactgeneralization to: i) new team sizes and compositions, and ii) new robots with unseen capabilities.Environments : We designed two heterogeneous multi-robot tasks for experimentation:•Heterogeneous Material Transport (HMT) : A team of robots with different material car-rying capacities for lumber and concrete (denoted by ci2R2for theith robot) must transportmaterials from lumber and concrete depots to a construction site to fulfill a pre-specified quotawhile minimizing over-provision. We implemented this environment as a Multi-Particle Envi-ronment (MPE) [32] and leverage the infrastructure of EPyMARL [33].•Heterogeneous Sensor Network (HSN) : A robot team must form a single fully-connectedsensor network while maximizing the collective coverage area. The ith robot’s capability ci2Rcorresponds to its sensing radius. We implemented this environment using the MARBLER [34]framework which enables hardware experimentation in the Robotarium [35], a well-establishedmulti-robot test bed.Policy architectures : In order to systematically examine the impact of capability awareness andcommunication, we consider the following policy architectures:•ID(MLP) : Robot ID-based MLP•ID(GNN) : Robot ID-based GNN•CA(MLP) : Capability-aware MLP•CA(GNN) : Capability-aware GNN without communication of capabilities,•CA+CC(GNN) : Both capability awareness and communication.The ID-based baselines stand in for SOTA approaches that employ behavioral typing to handle het-erogeneous teams [24, 25], and, as such, question the need for capabilities. The MLP based base-lines help us investigate the need for communication. Finally, the CA(GNN) enables communicationof observations but does not does not communicate capability information.Metrics : For both environments, we compare the above policies using Average Return : the averagejoint reward received over the task horizon (higher is better). Additionally, we use environment-specific metrics. In HMT, we terminate the episodes when the quotas for both materials are met.Therefore, we consider Average Steps taken to meet the quota (lower is better). For HSN, weconsider Pairwise Overlap : sum of pairwise overlapping area of robots’ coverage areas (lower isbetter).Training : For each environment, we used five teams with four robots each during training. Weselected the training teams to ensure diverse compositions and degree of heterogeneity. For HSN,we sampled robots’ sensing radius from the uniform distribution U(0:2;0:6). 
For HMT, we sampledrobots’ lumber and concrete carrying capacities from the uniform distribution U(0;1:0). We alsoassigned each robot a one-hot ID to train ID-based policies. We trained each policy with 3 randomseeds. We resampled robot teams every 10 episodes to stabilize training.5 ResultsBelow, we report i) performance on the training team, ii) zero-shot generalization to new teams, andiii) zero-shot generalization to new robots with unseen values of capabilities for each environment.5.1 Heterogeneous Material TransportWe first focus on the Heterogeneous Material Transport ( HMT) environment.Performance on training set : To ensure considering capabilities does not negatively impact train-ing, we first evaluate trained policies on the training set in terms of average return and average steps(see Fig. 2). We find that all policies resulted in comparable average returns (Fig. 2 (a)). However,the average number of steps per episode better captures performance in HMT, since episodes termi-nate early when the quota is filled. Compared to agent-ID policies, capability-based policies tookfewer steps per episode (Fig. 2 (b)) to achieve comparable rewards, suggesting that reasoning aboutcapabilities can improve task efficiency performance.5(a)HMT: Average return (b)HMT: Average steps (c)HSN: Average return (d)HSN: Overlap areaFigure 2: When evaluated on teams seen during training , capability-aware policies performed comparably toID-based policies in terms of both average return (higher is better) and task-specific metrics (lower is better).(a) 3 Robots (b) 4 Robots (c) 5 RobotsFigure 3: When generalizing to new team compositions and sizes inHMT, capability-based policies consistentlyoutperformed ID-based policies in terms of average steps taken to meet the quota (lower is better).Zero-shot generalization to new team compositions and sizes : We next evaluated how trainedpolicies generalize to team compositions and sizes not encountered during training. Fig. 3 showsplots quantifying performance on new compositions with team sizes of 3, 4, and 5 robots. To ensurea fair comparison, we evaluated all policies on the same set of 100 teams by randomly samplingnovel combinations of robots from the training set. We evaluated each policy on each test teamacross 10 episodes per seed. Given that this evaluation involved no new individual robots, we reusedeach robot’s ID from the training set to facilitate the evaluation of ID-based policies.We find that all capability-aware methods outperformed ID-based methods in return and task-specific metrics. This is likely due to capability-aware methods’ ability to capture the relationshipbetween robots’ carrying capacities and the material quota. In contrast, ID-based methods mustlearn to implicitly reason about how much material each robot can carry. Interestingly, CA(MLP) re-sulted in fewer steps per episode (lowest mean and variance) across all team sizes, and outperformedboth the other capability-based and communication-enabled baselines: CA(GNN) andCA+CC(GNN) .This suggests that mere awareness of capabilities is sufficient to perform well in the HMT environ-ment. Indeed, communication is not as essential in this task as robots can directly observe relevantinformation (e.g. material demands) and implicitly coordinate as long as they are aware of their owncapabilities. 
It might be possible to further improve performance by learning to better communi-cate, but that would be significantly more challenging since the task can be mostly solved withoutcommunication.(a) 3 robots (b) 4 robots (c) 5 robotsFigure 4: When generalizing to new robots with unseen values for capabilities inHMT, policies that areonly aware of capabilities ( CA(MLP) andCA(GNN) ) outperformed policies that also communicated capabilities(CA+CC(GNN)) in terms of average number of steps taken to transport the required material (lower is better).6Zero-shot generalization to new robots : In Fig. 4, we show the policies’ ability to generalize toteams composed of new robots. Since the robots’ capabilities in this evaluation are different fromthose of the robots in the training set, we could not evaluate agent-ID methods since there is notrivial way to assign IDs to the new robots. These results clearly demonstrate that reasoning aboutcapabilities can enable generalization to teams with entirely new robots. Further, we again see thatcapability awareness without communication is sufficient to generalize in the HMTenvironment.Additional results : We provide additional results for HMT by reporting more task-specific metricsand evaluations on significantly larger teams in Appendix A. The results on task-specific metricsfurther support the claim that capability-aware methods generalize better than ID-based methods.We also find that these benefits extend to teams consisting of 8,10, and15robots. Taken together, theabove results suggest that reasoning about capabilities (rather than assigned IDs) improves adaptiveteaming, likely due to the ability to map capabilities to implicit roles.5.2 Heterogeneous Sensor NetworkBelow, we discuss results on the Heterogeneous Sensor Network ( HSN) environment.Performance on training set : In Fig. 2 (c) and (d), we report the performance of trained policiesinHSN on teams in the training set in terms of average return and pairwise overlap. All policiesexcept CA(MLP) performed comparably and were able to effectively learn to maximize expectedreturns, achieve a fully connected sensor network, and minimize the pairwise overlap in coveragearea. Further, CA+CC(GNN) andID(GNN) perform similarly but marginally better than the otherbaselines. CA(MLP) ’s suboptimal performance indicates that capability awareness in isolation with-out any communication hurts performance in the HSN task. Indeed, while it is possible to achievegood performance in HMTwithout communicating, HSNrequires robots to effectively communicateand reason about their neighbors’ sensing radii in order to form effective networks.Taken together, these results suggest that capability awareness and communication can lead to effec-tive training in heterogeneous teams. ID-based methods are able to perform at a similar level. Thisis to be expected given that we conducted these evaluations on the training set and IDs are sufficientto implicitly assign roles and coordinate heterogeneous robots within known teams.Zero-shot generalization to new team compositions and sizes : In Fig. 5, we report the perfor-mance of the training policies in HSNwhen evaluated on teams of different compositions and sizes.We found that CA+CC(GNN) achieved the best average returns (highest mean and lowest variance)across all team sizes. Both ID-based methods ( ID(MLP) andID(GNN) ) resulted in lower returnscompared to all three capability-awareness baselines. Note that this is in stark contrast to the resultsfor the training set in Fig. 
2, demonstrating that IDs alone might help train heterogeneous teamsbut tend to generalize poorly to new team compositions and sizes. This is likely because ID-basedpolicies fail to reason about robot heterogeneity, and instead overfit the relationships between robotIDs and behavior in the training set. Further, CA+CC(GNN) in particular consistently outperformedall other policies across metrics and variations, suggesting that both capability awareness and com-munication are necessary to enable generalization in HSN.Zero-shot generalization to new robots : We evaluated the trained policies’ ability to generalize toteams of different sizes which are composed of entirely new robots whose sensing radii are differentfrom those encountered in training. Similar to HMT, we cannot evaluate ID-based policies on teams(a) 3 Robots (b) 4 Robots (c) 5 RobotsFigure 5: When generalizing to new team compositions and sizes inHSN, capability-based policies consistentlyoutperformed ID-based baselines in terms of average return (higher is better). Further, combining awarenessand communication of capabilities resulted in the best generalization performance.7(a) 3 Robots (b) 4 Robots (c) 5 RobotsFigure 6: When generalizing to teams comprised of new robots inHSN, combining awareness and communi-cation of capabilities ( CA+CC(GNN) ) achieves higher average returns than baselines that are merely aware ofcapabilities, irrespective of whether they communicate observations ( CA(GNN) ) or not ( CA(MLP) ).with new robots since there is no obvious way to assign IDs to the new robots. In Fig. 6, we reportthe performance of all three capability-based policies in terms of average return. Both GNN-basedpolicies ( CA(GNN) andCA+CC(GNN) ) considerably outperform the CA(MLP) policy, underscoringthe importance of communication in generalization to teams with new robots. However, we also seethat communication of observations alone is insufficient, as evidenced by the fact that CA+CC(GNN)(which communicates both observations and capabilities) consistently outperforms CA(GNN) (whichonly communicates observations).Real-robot demonstrations : We also deployed the trained policies of CA+CC(GNN) ,CA(MLP) , andCA(GNN) on the physical Robotarium (see Section B.1 for further details and snapshots). Overall,we find that the benefits reported above extend to physical robot teams. We find that CA+CC(GNN)andCA(GNN) policies generalize to physical robots and successfully build a sensor network whileminimizing sensing overlap for teams of 3 and 4 robots. The CA(MLP) policy resulted in signif-icantly worse performance, where robots’ executed paths provoked significant engagement of theRobotarium’s barrier certificates due to potential collisions.Additional results : We provide additional results for HSN by reporting more task-specific metricsin Appendix B. Much like the results for HMT, the results on task-specific metrics further supportthe claim that capability-aware methods show superior adaptive teaming ability compared with ID-based methods. The additional results also support our claim in this section that communication ofcapabilities is essential for success on this task.6 LimitationsWhile our framework could reason about many different capabilities simultaneously, our experi-ments only involved variations in 1-D and 2-D capabilities. We also only consider generalizationto new values for capabilities; we do not consider generalization to new types of capabilities. 
Ad-ditionally, our work only considers the representation of robot’s capabilities that we can quantify.Handling implicit capabilities and communication thereof may benefit from additional meta-learningmechanisms, uncovering a more general relationship between robots’ learned behaviors and capa-bilities. Further, we do not consider high-level planning and task-allocation and rely solely on thelearning framework to perform implicit assignments to sub-tasks within the macro task. Futurework can investigate appropriate abstractions and interfaces for considering both learning-basedlow-level policies and efficient algorithms for higher-level coordination. Lastly, we only consid-ered fully-connected communication graphs in our evaluations for simplicity. While graph networksare known to effectively share local observations for global state estimations in partially-connectedteams [27, 36], it is unclear if such ability will translate to the communication of capabilities.7 ConclusionWe investigated the utility of awareness and communication of robot capabilities in the general-ization of heterogeneous multi-robot policies to new teams. We developed a graph network-baseddecentralized policy architecture based on parameter sharing that enables robots to reason about andcommunicate their observations and capabilities to achieve adaptive teaming. Our detailed experi-ments involving two heterogeneous multi-robot tasks unambiguously illustrate the importance andthe need for reasoning about capabilities as opposed to agent IDs.8References[1] J. Cort ́es and M. Egerstedt. Coordinated control of multi-robot systems: A survey. SICEJournal of Control, Measurement, and System Integration , 2017.[2] B. Ellis, S. Moalla, M. Samvelyan, M. Sun, A. Mahajan, J. N. Foerster, and S. Whiteson.Smacv2: An improved benchmark for cooperative multi-agent reinforcement learning, 2022.[3] S. Shalev-Shwartz, S. Shammah, and A. Shashua. Safe, multi-agent, reinforcement learningfor autonomous driving, 2016.[4] E. Seraj, Z. Wang, R. Paleja, D. Martin, M. Sklar, A. Patel, and M. Gombolay. Learningefficient diverse communication for cooperative heterogeneous teaming. In Proceedings of the21st International Conference on Autonomous Agents and Multiagent Systems , 2022.[5] Z. Wang, C. Liu, and M. Gombolay. Heterogeneous graph attention networks for scalablemulti-robot scheduling with temporospatial constraints. Autonomous Robots , 2022.[6] C. Li, T. Wang, C. Wu, Q. Zhao, J. Yang, and C. Zhang. Celebrating Diversity in SharedMulti-Agent Reinforcement Learning, 2021.[7] M. Bettini, A. Shankar, and A. Prorok. Heterogeneous multi-robot reinforcement learning,2023.[8] R. Lowe, Y . Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch. Multi-agent actor-critic formixed cooperative-competitive environments. Neural Information Processing Systems (NIPS) ,2017.[9] S. Omidshafiei, J. Pazis, C. Amato, J. P. How, and J. Vian. Deep decentralized multi-taskmulti-agent reinforcement learning under partial observability. In International Conference onMachine Learning , 2017.[10] Y . Hoshen. Vain: Attentional multi-agent predictive modeling. In Advances in Neural Infor-mation Processing Systems , 2017.[11] T. Rashid, M. Samvelyan, C. S. de Witt, G. Farquhar, J. Foerster, and S. Whiteson. Qmix:Monotonic value function factorisation for deep multi-agent reinforcement learning, 2018.[12] C. Yu, A. Velu, E. Vinitsky, Y . Wang, A. M. Bayen, and Y . Wu. The surprising effectivenessof MAPPO in cooperative, multi-agent games. CoRR , 2021.[13] V . Mnih, K. 
Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller.Playing atari with deep reinforcement learning, 2013.[14] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y . Tassa, D. Silver, and D. Wierstra.Continuous control with deep reinforcement learning, 2019.[15] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms, 2017.[16] V . Egorov and A. Shpilman. Scalable multi-agent model-based reinforcement learning, 2022.[17] J. Foerster, G. Farquhar, T. Afouras, N. Nardelli, and S. Whiteson. Counterfactual multi-agentpolicy gradients, 2017.[18] T. Dang, M. Tranzatto, S. Khattak, F. Mascarich, K. Alexis, and M. Hutter. Graph-based sub-terranean exploration path planning using aerial and legged robots. Journal of Field Robotics ,2020.[19] T. Dang, M. Tranzatto, S. Khattak, F. Mascarich, K. Alexis, and M. Hutter. Graph-based sub-terranean exploration path planning using aerial and legged robots. Journal of Field Robotics ,2020.9[20] G. Gil, D. E. Casagrande, L. P. Cort ́es, and R. Verschae. Why the low adoption of robotics in thefarms? challenges for the establishment of commercial agricultural robots. Smart AgriculturalTechnology , 2023.[21] C. Li, T. Wang, C. Wu, Q. Zhao, J. Yang, and C. Zhang. Celebrating diversity in shared multi-agent reinforcement learning. Advances in Neural Information Processing Systems , 2021.[22] J. K. Terry, N. Grammel, S. Son, and B. Black. Parameter sharing for heterogeneous agents inmulti-agent reinforcement learning, 2022.[23] C. Wakilpoor, P. J. Martin, C. Rebhuhn, and A. Vu. Heterogeneous multi-agent reinforcementlearning for unknown environment mapping. arXiv preprint arXiv:2010.02663 , 2020.[24] J. Foerster, I. A. Assael, N. De Freitas, and S. Whiteson. Learning to communicate with deepmulti-agent reinforcement learning. Advances in neural information processing systems , 2016.[25] J. K. Gupta, M. Egorov, and M. Kochenderfer. Cooperative multi-agent control using deepreinforcement learning. In Autonomous Agents and Multiagent Systems: AAMAS 2017 Work-shops, Best Papers, S ̃ao Paulo, Brazil, May 8-12, 2017, Revised Selected Papers 16 . Springer,2017.[26] F. Christianos, G. Papoudakis, M. A. Rahman, and S. V . Albrecht. Scaling multi-agent rein-forcement learning with selective parameter sharing. In International Conference on MachineLearning . PMLR, 2021.[27] A. Agarwal, S. Kumar, and K. Sycara. Learning transferable cooperative behavior in multi-agent teams, 2019.[28] Q. Li, W. Lin, Z. Liu, and A. Prorok. Message-aware graph attention networks for large-scalemulti-robot path planning, 2021.[29] A. Mahajan, M. Samvelyan, T. Gupta, B. Ellis, M. Sun, T. Rockt ̈aschel, and S. Whiteson.Generalization in cooperative multi-agent systems, 2022.[30] F. A. Oliehoek and C. Amato. A Concise Introduction to Decentralized POMDPs . Springer-Briefs in Intelligent Systems, 2016.[31] A. Khan, E. Tolstaya, A. Ribeiro, and V . Kumar. Graph policy gradients for large scale robotcontrol, 2019.[32] I. Mordatch and P. Abbeel. Emergence of grounded compositional language in multi-agentpopulations. CoRR , 2017. URL http://arxiv.org/abs/1703.04908 .[33] G. Papoudakis, F. Christianos, L. Sch ̈afer, and S. V . Albrecht. Benchmarking multi-agentdeep reinforcement learning algorithms in cooperative tasks. In Proceedings of the NeuralInformation Processing Systems Track on Datasets and Benchmarks (NeurIPS) , 2021.[34] R. Torbati, S. Lohiya, S. Singh, M. S. Nigam, and H. Ravichandar. 
Marbler: An open platformfor standardized evaluation of multi-robot reinforcement learning algorithms, 2023.[35] S. Wilson, P. Glotfelter, L. Wang, S. Mayya, G. Notomista, M. Mote, and M. Egerstedt.The robotarium: Globally impactful opportunities, challenges, and lessons learned in remote-access, distributed control of multirobot systems. IEEE Control Systems Magazine , 2020.[36] Q. Li, F. Gama, A. Ribeiro, and A. Prorok. Graph neural networks for decentralized multi-robot path planning. CoRR , 2019.[37] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal Policy OptimizationAlgorithms. 2017.10A Heterogeneous Material Transport (HMT) Additional ResultsThis section provides additional results for the HMTenvironment. Specifically, we provide additionaltask-specific metrics for the generalization experiments, and new generalization results for robotteams of significantly larger sizes (i.e. team sizes of 8,10, and 15robots).The task-specific metrics defined below evaluate the rate at which each policy contributes to fulfillingthe total quota, and the individual quotas for lumber and concrete:• % of episodes by which the total quota was filled.• % of lumber quota remaining.• % of concrete quota remaining.For both generalization to new teams (see Fig. 7) and new robots (see Fig. 8), capability-awaremethods filled the total quota in fewer episode steps compared to the ID-based methods, while gen-erally better preventing over-provisioning of both lumber and concrete. This result further supportsour claim that capability awareness improves generalization performance. Observing Fig. 9, we findthat these benefits of capability-based policies extend to considerably larger team sizes.(a) 3 robots (b) 4 robots (c) 5 robotsFigure 7: Policies with capability awareness outperform agent ID methods at meeting the material quota with aminimal number of steps when generalizing to new team compositions. Capability-awareness methods withoutcommunication of capabilities (i.e. CA(MLP) &CA(GNN) ) outperform methods with capability communicationfor this task.B Heterogeneous Sensor Network (HSN) Additional ResultsThis section provides additional results on i) training performance and ii) task-specific metrics.The task-specific metrics for the HSNenvironment are the following:• Pairwise overlap: The sum of pairwise overlap in coverage area among robots (lower is better).11(a) 3 robots (b) 4 robots (c) 5 robotsFigure 8: Policies without communication of capability-awareness (i.e. CA(MLP) andCA(GNN) ) outperformedthe policy with communication of capabilities ( CA+CC(GNN) on task-specific metrics when generalizing to newrobots with capabilities not seen during training.• % of fully connected teams (by episode step): Percentage of teams that managed to form a sensornetwork that connects all of the robots (higher is better).In Fig. 12, we report the training performance (i.e. training teams only) for each policy. The train-ing curve suggest that capability-aware and ID-based methods perform comparably during learning.Notably, the communication models ID(GNN) andCA+CC(GNN) converge faster and achieve higheroverall returns than other methods. This result suggests that communication between robots signifi-cantly assists in learning collaborative behavior.Capability-aware policies again demonstrate superior performance when generalizing to new teams(see Fig. 10) and new robots (see Fig. 
11) on task-specific metrics, highlighting the importance ofcapability awareness for generalization to robot teams with new robots, team sizes, and team com-positions. Notably, CA+CC(GNN) results in significantly lower pairwise overlap for robot teams ofsize 3 and 4 robots, and marginally lower pairwise overlap for robot teams of size 5, compared toID-based methods. This suggests the communication-enabled policy effectively learns to commu-nicate capabilities and that such communication of capabilities is essential for generalization in thistask.B.1 Images of Robotarium ExperimentsIn this section, we present visual representations derived from actual robot demonstrations of thetrained capability-aware communication policy. Videos of the robot demonstrations can be foundat:https://sites.google.com/view/cap-comm .12S(a) 8 robots (b) 10 robots (c) 15 robotsFigure 9: Experiments evaluating the generalization of policies to significantly larger teams (size 8, 10, and 15)compared to the training team size (size 4). Policies with capability awareness outperform agent ID methodsat meeting the total material quota with a minimal number of steps when generalizing to new, large team com-positions. Capability-awareness methods without communication of capabilities (i.e. CA(MLP) &CA(GNN) )outperform methods with capability communication for this task.13(a) 3 Robots (b) 4 Robots (c) 5 RobotsFigure 10: Capability-based policy architectures consistently outperform ID-based baselines both in terms ofaverage return and task performance metrics when generalizing to new team compositions and sizes. Further,combining awareness and communication of capabilities results in the best generalization performance.(a) 3 Robots (b) 4 Robots (c) 5 RobotsFigure 11: Policy architecture that combines awareness and communication ( CA+CC(GNN) ) of capabilities out-performs both other baselines ( CA(GNN) in terms of % fully connected and CA(MLP) in terms of average returnand pairwise overlap) when generalizing to teams comprised of new robots.14Figure 12: For the HSN environment, capability-aware policies perform comparably to ID-based policies interms of training efficiency (first) and in terms of task-specific metrics when evaluating the trained policy onthe training set.(a) Beginning of episode (3 robots). (b) End of episode (3 robots).(c) Beginning of episode (4 robots). (d) End of episode (4 robots).Figure 13: Demonstrations of CA+CC(GNN) policy deployed to real robot teams in the Robotarium testbed fortheHSNtask. See https://sites.google.com/view/cap-comm for videos of deployment to the Robotar-ium.15C Environment SpecificationsC.1 Heterogeneous Material TransportThis section describes additional details about the heterogeneous material transport ( HMT) environ-ment.Figure 14: In the Heterogeneous Material Transport ( HMT) environment, each agent’s color is a mixture of blueand red, which represents its bias towards its carrying capacity for either lumber (red) or concrete (blue). Theobjective of the team is to fill the lumber and concrete quota at the construction site without delivering excess.The lumber and concrete quota limit for the HMTenvironment are randomly initialized to an integervalue between ( 0:5nagents) and ( 2:0nagents).Robots have five available actions: they can move left, right, up, down, or stop. At the beginningof each episode, all of the robots begin at a random position in the construction site zone (see Fig.14). 
The observation space for a robot is the combination of the robot's state and the environment's state: specifically, it is composed of the robot's position, velocity, the amount of lumber and concrete it is carrying, its distance to each depot, the total lumber quota, the total concrete quota, the total amount of lumber delivered, and the total amount of concrete delivered. Robots' observations do not contain state information about other robots. Finally, we append the robot's unique ID for the ID baseline methods, and the robot's maximum lumber and concrete carrying capacity for the CA methods, to the robots' observations.

The total reward in HMT is computed by summing the individual rewards of each robot. Robots are rewarded when they make progress in meeting the lumber and concrete quotas and are penalized when they exceed the quota. If a robot enters the lumber depot or concrete depot, and the robot is empty (i.e., not loaded with any lumber or concrete), and the quota has yet to be filled, then the robot receives a pickup reward of 0.25. If the robot is loaded with material, then the robot is rewarded or penalized when it drops off the material at the construction site. Specifically, when a robot delivers a material and the quota for that material has yet to be filled, the robot receives a positive dropoff reward of 0.75. However, if the robot delivers a material and goes over the quota, the robot is penalized with a negative surplus penalty proportional to the amount of surplus: 0.10 × surplus material. Finally, robots receive a small time penalty of 0.005 for each episode step in which the total quota is not filled; this encourages the robots to finish the task as quickly as possible.

C.2 Heterogeneous Sensor Network
This section describes additional details about the heterogeneous sensor network (HSN) environment.

(a) Our agents running in simulation using the MARBLER framework. (b) Our agents running in the physical Robotarium.

Robots have five available actions: they can move left, right, up, down, or stop. After selecting an action, the robots move in their selected direction for slightly less than a second before selecting a new action. The robots start at random locations at least 30 cm apart from each other, move at 21 cm/second, and utilize barrier certificates [35] that take effect at 17 cm away to ensure they do not collide when running in the physical Robotarium.

The reward from the heterogeneous sensor network environment is a shared reward. We describe the reward below:

D(i, j) = ||p(i) − p(j)|| − (c_i + c_j)
r(i, j) = −0.9 |D(i, j)| + 0.05, if D(i, j) < 0; −1.1 |D(i, j)| − 0.05, otherwise
R = Σ_{i<j} r(i, j)

where i and j are robots, p(i) is the position of robot i, c_i is the (capability) sensing radius of robot i, and R is the cumulative team reward shared by all the robots. The above reward is designed to reward the team when robots connect their sensing regions while minimizing overlap so as to maximize the total sensing area.

D Training and Evaluation Specifications
This section describes the design of the training teams and the sampling of evaluation teams for both environments.
To learn generalized coordination behavior, the training teams were required to be diverse in terms of composition and to capture the underlying distribution of robot capabilities.

D.1 Heterogeneous Material Transport
Training Team Number Robot Capabilities (concrete capacity, lumber capacity)
1 (0.9, 0.1); (0.7, 0.3); (1.0, 0.0); (0.0, 1.0)
2 (0.9, 0.1); (0.7, 0.3); (0.0, 1.0); (0.2, 0.8)
3 (0.8, 0.2); (0.3, 0.7); (0.4, 0.6); (0.7, 0.3)
4 (1.0, 0.0); (0.0, 1.0); (0.1, 0.9); (0.3, 0.7)
5 (0.6, 0.4); (0.3, 0.7); (0.7, 0.3); (0.0, 1.0)
Table 1: Training teams used for the HMT task (five teams of four robots each).

D.2 Heterogeneous Sensor Network
To design these training teams, we first binned robot capabilities into small, medium, and large sensing radii with bin ranges [0.2 m, 0.33 m], [0.33 m, 0.46 m], and [0.46 m, 0.60 m], respectively. We then generated all possible combinations with replacement of small, medium, and large robots for teams of four, for a total of 15 teams. Each robot assigned to one of the bins small, medium, and large had its capability (i.e., sensing radius) uniformly sampled within the bin range. From these 15 teams, we hand-selected 5 sufficiently diverse teams to be the training teams. The resulting training teams are given in Table 2.

Training Team Number Robot Sensing Radii (m)
1 (0.2191); (0.2946); (0.2608); (0.3668)
2 (0.2746); (0.2746); (0.5824); (0.5756)
3 (0.3178); (0.3467); (0.5317); (0.6073)
4 (0.2007); (0.5722); (0.5153); (0.4622)
5 (0.4487); (0.5526); (0.5826); (0.58343)
Table 2: Training teams used for the HSN task (five teams of four robots each).

The evaluation robot teams were sampled differently for the different experimental evaluations performed. In the training evaluation experiment, the teams were the same as the training teams in Table 2. Teams for the generalization experiment to new team compositions, but not new robots, were sampled randomly from the 20 robots from the training teams (with replacement). Each robot from the pool of 20 robots was sampled with equal probability. In contrast, teams for the generalization experiment to new robots were generated by randomly sampling new robots, where each robot's sensing radius was sampled independently from a uniform distribution U(0.2 m, 0.6 m). For the two generalization experiments, 100 total teams were sampled. Each algorithm was evaluated on the same set of sampled teams by fixing the pseudo-random number generator's seed.

We first focus on the training curves and subsequent evaluations conducted on the training set, without considering generalization. The goal of this experiment is to ensure that introducing capabilities does not negatively impact training. The learning curves in terms of average return are shown in Fig. 12. All models achieved convergence within 20 million environment steps, with ID(GNN) and CA+CC(GNN) exhibiting both the fastest convergence and the highest returns. These results suggest that communication of individual robot features, whether based on IDs or capabilities, improves learning efficiency and performance for heterogeneous coordination.

E Policy Details
E.1 Graph Neural Networks
We employ a graph convolutional network (GCN) architecture for the decentralized policy π_i, which enables robots to communicate for coordination according to the robot communication graph G. A GCN is composed of L layers of graph convolutions, each followed by a non-linearity.
In this work, we consider a single graph convolution layer; applied to node i, it is given by

h_i^(l) = σ( Σ_{j ∈ N(i) ∪ {i}} φ_θ(h_j^(l−1)) )

where h_j^(l−1) ∈ R^F is the node feature of node j, N(i) = {j | (v_i, v_j) ∈ E} is the set of all nodes j connected to i, φ_θ is a node feature transformation function with parameters θ, σ is a non-linearity (e.g., ReLU), and h_i^(l) ∈ R^G is the output node feature.

E.2 Policy Architectures
Each of the graph neural networks in the GNN-based policy architectures evaluated is composed of an input encoder network, a message passing network, and an action output network. The encoder network is a 2-layer MLP with hidden dimensions of size 64. The message passing network is a single graph convolution layer composed of 2-layer MLPs with ReLU non-linear activations. The action output network is additionally a 2-layer MLP with hidden dimensions of size 64. The learning rate is 0.005.

ID(MLP) / CA(MLP): The MLP architectures consist of a 4-layer multi-layer perceptron with 64 hidden units at each layer and ReLU non-linearities.

CA(GNN) / CA+CC(GNN) / ID(GNN): Each of the graph neural networks consists of an input "encoder" network, a message passing network, and an action output network. The encoder network and the action output network are multi-layer perceptrons with hidden layers of size 64, ReLU non-linear activations, and one and two hidden layers, respectively. The message passing network is a graph convolution layer wherein the linear transformation of node features (i.e., observations) is done by a 2-layer MLP with ReLU non-linear activations and 64-dimensional hidden units, followed by a summation of the transformed neighboring node features. The output node features are concatenated with the output feature from the encoder network, and this concatenated feature is the input to the action output network. The CA(GNN) network does not communicate the robot's capabilities through the graph convolution layer. Rather, the capabilities are appended to the output of the encoder network and to the output node features of the graph convolution layer, just before the action network. Thus, the action network is the only part of this model that is conditioned on robot capabilities.

E.3 Policy Training Hyperparameters
We detail the hyperparameters used to train each of the policies using proximal policy optimization (PPO) [37] in Table 3.

Hyperparameter Value
Action Selection (Training) soft action selection
Action Selection (Testing) hard action selection
Critic Network Update Interval 200 steps
Learning Rate 0.0005
Entropy Coefficient 0.01
Epochs 4
Clip 0.2
Q Function Steps 5
Buffer Length 64
Number of training steps 40 × 10^6 (HMT), 20 × 10^6 (HSN)
Table 3: Hyperparameters used to train each of the policies with PPO.
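As a concrete rendering of the graph convolution layer in Appendix E.1 and the architecture notes in Appendix E.2, the sketch below builds capability-aware node features by concatenating each robot's local observation with its capability vector c_i, applies a 2-layer MLP φ_θ, and sums the transformed features over N(i) ∪ {i} before a ReLU. This is a simplified illustration rather than the released code; the dense adjacency handling, feature sizes, and the fully connected example team are assumptions.

```python
# Simplified sketch of the capability-aware graph convolution layer from
# Appendix E.1/E.2 (not the released implementation); phi is a 2-layer MLP
# and messages are summed over the neighbourhood plus the node itself.
import torch
import torch.nn as nn

class CapabilityGraphConv(nn.Module):
    def __init__(self, obs_dim, cap_dim, hidden_dim=64):
        super().__init__()
        self.phi = nn.Sequential(                      # phi_theta: node feature transform
            nn.Linear(obs_dim + cap_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, obs, caps, adj):
        # obs: (N, obs_dim) local observations, caps: (N, cap_dim) capabilities c_i,
        # adj: (N, N) binary communication graph with zero diagonal.
        h = self.phi(torch.cat([obs, caps], dim=-1))   # transform every node feature
        agg = (adj + torch.eye(adj.shape[0])) @ h      # sum over N(i) ∪ {i}
        return torch.relu(agg)                         # sigma: ReLU non-linearity

# Example: a fully connected team of 4 robots with 1-D sensing-radius capabilities (HSN).
obs = torch.randn(4, 10)
caps = 0.2 + 0.4 * torch.rand(4, 1)                    # radii in [0.2, 0.6]
adj = torch.ones(4, 4) - torch.eye(4)
embeddings = CapabilityGraphConv(obs_dim=10, cap_dim=1)(obs, caps, adj)   # (4, 64)
```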
Nii0_rRJwN | CALAMARI: Contact-Aware and Languageconditioned spatial Action MApping for contact-RIchmanipulationYoungsun Wi1Mark Van der Merwe1Andy Zeng2Pete Florence2Nima Fazeli11Robotics Department, University of Michigan2Google Deepmind{yswi, nfz }@umich.edu {andyzeng, peteflorence }@google.comhttps://www.mmintlab.com/calamariAbstract: Making contact with purpose is a central part of robot manipulationand remains essential for many household tasks – from sweeping dust into a dust-pan, to wiping tables; from erasing whiteboards, to applying paint. In this work,we investigate learning language-conditioned, vision-based manipulation policieswherein the action representation is in fact, contact itself – predicting contact for-mations at which tools grasped by the robot should meet an observable surface.Our approach, Contact-Aware and Language conditioned spatial Action MAp-ping for contact-RIch manipulation (CALAMARI), exhibits several advantagesincluding (i) benefiting from existing visual-language models for pretrained spa-tial features, grounding instructions to behaviors, and for sim2real transfer; and(ii) factorizing perception and control over a natural boundary (i.e., contact) intotwo modules that synergize with each other, whereby action predictions can bealigned per pixel with image observations, and low-level controllers can optimizemotion trajectories that maintain contact while avoiding penetration. Experimentsshow that CALAMARI outperforms existing state-of-the-art model architecturesfor a broad range of contact-rich tasks, and pushes new ground on embodiment-agnostic generalization to unseen objects with varying elasticity, geometry, andcolors in both simulated and real-world settings.Keywords: Contact-rich Manipulation, Visual-language guided policies1 IntroductionContact-rich manipulation is ubiquitous in our day-to-day lives, encompassing a broad range of tasksincluding sweeping dust into a dustpan, wiping tables, erasing a whiteboard, and applying paint witha brush. A key challenge in performing these tasks lies in controlling the interactions between toolsand their environments. For instance, when sweeping, it is crucial to ensure continuous contactbetween the bristles and the surface while directing the collected dust towards the dustpan.Language-conditioned representations and policies are a promising approach to addressing the chal-lenges of contact-rich manipulation, particularly for domestic applications. For one, language isa powerful tool for creating abstractions that enable generalization for a wide variety of tasks andenvironments. Secondly, language will be among the most common methods to command robotssuch as when performing tasks in the home. Recent work has demonstrated how large pretrainedvisual-language models (VLMs), such as CLIP [1] and PaLM-E [2], enable zero-shot transfer ofvisual-semantic reasoning based on language prompts and well-structured visual and language em-bedding spaces [3, 4, 5, 6, 7, 8, 9, 10]. 
However, previous efforts have predominantly focused onrearrangement-based tasks and have not adequately addressed the reasoning involved in contact-richmanipulations, thereby limiting their applicability to tasks like wiping, sweeping, or scooping.In this paper, we introduce a novel language-conditioned and contact-aware spatial action map rep-resentation that predicts planar contact affordances – contact formations at which tools grasped by7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: CALAMARI is a contact-aware and language-conditioned spatial-action mapping for contact-richmanipulation. We show that (A) wiping, sweeping, and pushing tasks, trained solely on simulations with asingle task and prompt, can be (B) directly transferred to the real world and applied to new environments withunseen tools, robot setups, table elevations, and prompts.the robot should meet an observable surface in order to perform a tabletop task. Our novel multi-modal spatial action maps are specifically for contact-rich manipulation where each pixel representsa binary indication of extrinsic contact between an object and the environment, and the entire mapis implicitly linked to the tool pose and robot configuration. Notably, our extrinsic contact policiesremains agnostic to intricacies of specific tools and physical robot platforms, unlocking possibilitiesfor generalization to unseen objects with distinct elasticity, geometry, and colors, both in sim andthe real world.The key contributions of this paper are: 1) multi-modal extrinsic contact policy with novel spatial-action maps that outperforms SOTA language-guided manipulation methods for contact-rich tasks;2) an MPPI controller algorithm compatible with the predicted contact goal and contact constraint;and 3) generalization to unseen objects with various elasticity, geometry, and colors in both simula-tion and the real world without requiring fine-tuning of the extrinsic contact policy.2 Related WorksLanguage Grounding for Manipulation: The recent advancements in large language models(LLMs) [11, 12, 13] and visual-language models [1, 2] have enabled language-grounded manip-ulation. Numerous approaches [4, 9, 3, 14, 8, 15, 6, 16], have emerged and demonstrated remark-able multi-task performance, particularly for pick-and-place tasks. These end-to-end methods mapdirectly from RGB or voxel observations to robot configurations. However, one drawback of end-to-end approaches is their reliance on extensive real-world data collection, which can span sev-eral weeks or months [15, 6]. To address this real-world data efficiency challenge, [9] achievedsignificant improvements by predicting only key frames and discretizing input and action spaces.Nevertheless, like end-to-end models, [9] faces the challenge of effectively handling novel tool ma-nipulations, as the tool variations must be captured in the training demonstrations. We show thatour approach CALAMARI can efficiently handle contact-rich tasks while reducing the burden onreal-world data collection via zero-shot sim2real transfer.Planning with Extrinsic Contact: Controlling extrinsic contacts for planning and manipulationhas been an active area of research, with several notable contributions in recent years[17, 18, 19, 20,21, 22, 23, 24]. Kim et al. [17] developed a method for simulataneously estimating and controllingextrinsic contacts of rigid objects using tactile signitures, while [18] focused on planning with ex-trinsic point contact for planar manipulation scenarios. 
Van der Merwe et al. [21, 22] demonstratedextrinsic contact detection for deforming tool manipulation, specifically scraping with spatulas, in-corporating predefined contact goals and learned dynamics. Wi et al. [23, 24] presented a technique2Figure 2: Overview. As shown in the left panel, our method utilizes the history of RGB and languageinstructions as inputs and predicts the contact patch goal as a binary mask from the input image frame. Thethree yellow blocks (e.g., ‘CLIP’) represent the pretrained models, which are not updated during training.for predicting dense contact patches for compliant tools using learned tool dynamics. CALAMARIprovides a framework for policies with contact goals, allowing for the seamless integration of thecontact dynamics models [21, 22, 23] into a broader context of planning and manipulation.3 Methodology3.1 Problem StatementOur objective is to learn a function Fthat predicts the next desired contact patch goal, denotedasCgoalt, at time t. This function takes as input a sequence of RGB key frames and languageinstructions to compute contact goals: F((It−w+1,l0), . . . , (It,l0)) =Cgoalt, where Itis an RGBimage, l0is a language instruction, and wdictates the observation time window considered formaking predictions. The output is the contact patch goal Cgoalt∈Rw×h, a 2D binary mask in It’scamera frame and is particularly well-suited to for contact-rich planar tasks [5, 25]. The contactpatchCgoalt can be de-projected to a point cloud by overlaying the predicted mask onto a depthmap that excludes objects/tools ( Dnominal ). The point cloud conversion enables the utilization of amodel predictive controller to achieve the desired contact formation. In this work, data is providedin the form of demonstrations consisting of variable-length Tkey frame trajectories, denoted byτ= ((I0,C0,l0), . . . , (IT,CT,l0)). Here, Ct∈Rw×his the contact patch between the object(e.g., grasped tool) and environment represented by a binary mask from the camera perspective. Thecontact patch from the demonstrations serves as the ground truth contact goal for the function F.3.2 Behavior Cloning Tool-Environment InteractionsVision-Language Pre-Processing: CALAMARI has two key vision-language pre-processing stepsto convert raw observations into CLIP features. Firstly, we generate word-wise heatmaps that high-light the spatial locations in the RGB images corresponding to specific words in the language in-struction. The Heatmaps, denoted as Ht=f(I,l), are grayscale images with dimensions w×h,obtained through the image-language relevancy extraction introduced in [26]. Using heatmaps in-stead of raw RGB input is particularly advantageous for sim2real and generalization to novel objectsand environments, as long as similar heatmap distributions are present. This is because heatmapsprovide abstract representation that is less sensitive to variations in visual appearance caused byfactors like object colors and lighting conditions. The heatmap is subjected to further abstractionthrough a pretrained heatmap encoder [27], resulting in word-wise heatmap features. These featuresare organized based on the word sequence and serve as the query input for the Visual-language trans-former, which will be discussed in detail in the subsequent section. Additional information aboutthe selection of heatmap encoders can be found in Appendix-A.4.1.Similarly, we convert the language prompts into embeddings using CLIP’s language encoder. 
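As a side note on the de-projection step described in the problem statement above (overlaying the predicted contact patch goal on the nominal depth map D_nominal to obtain a goal point cloud), a minimal sketch with a pinhole camera model is given below. The intrinsics, array conventions, and function name are assumptions for illustration only.

```python
# Minimal sketch (assumed pinhole intrinsics) of de-projecting a predicted contact
# mask C_goal into a point cloud using the nominal depth map D_nominal.
import numpy as np

def deproject_contact_goal(goal_mask, d_nominal, fx, fy, cx, cy):
    """goal_mask: (H, W) binary mask; d_nominal: (H, W) depth in meters with tools/objects removed.
    Returns (N, 3) points in the camera frame for the masked pixels."""
    vs, us = np.nonzero(goal_mask)      # pixel rows (v) and columns (u) inside the goal patch
    z = d_nominal[vs, us]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```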
Inthis case, we combine both sentence and word embeddings to capture the contextual information3Figure 3: We generate the key contact goal via CALAMARI (magenta) and reach the contact goal via MPPI,which is linked with corresponding low-level actions. We visualize the tool’s contact trajectory in blue until itreaches the contact goal. Once we have reached it, we generate a new contact goal until the task terminates.conveyed by the language prompt while focusing on individual words. These embeddings are alsoarranged in accordance with the word order and serve as inputs for the subsequent transformernetworks in the following section.Architecture: The CALAMARI architecture (Fig. 2) consists of of two types of transformers,visual-language (v-l) transformer and temporal transformer. Drawing inspiration from the LA V Astructure [14], our v-l transformer is responsible for encoding inputs into multi-modal features andour temporal transformer fuses latent observation over time to generate spatial-actions. In contrastto [14], the language query is comprised of a set of sentence and word embeddings, denoted asQ∈R(l+1)×dft. Here, lrepresents the sentence length and dftis the feature dimension. Thekeys and values, denoted as KandV∈Rl×dftrespectively, are word-wise heatmap embeddings.The temporal transformer takes into account the history of latent observations from the v-l trans-former, considering the wmost recent time stamps. Using the history of observation is importantin contact-rich manipulation as it often involves occlusion caused by tools and the robot itself. Thetemporal transformer utilizes self-attention multi-modal features stacked with time, represented asQ=K=V∈Rw×dft. The outputs of the temporal transformer are decoded using a grayscaleimage decoder, similar to the decoder architecture of the UNet model proposed by [28], withoutincorporating skipping layers.Training Loss: The loss function is a standard supervised behavioral cloning loss similar to theprior works [6, 29, 30]. Specifically, we use L2 regression between predicted contact patch Cgoaltand the ground truth contact patch Cgttas the following: ∥Cgoalt−Cgtt∥.3.3 MPPI controller and Contact GoalsWe use the Model Predictive Path Integral (MPPI) [31] controller to plan a sequence of robot actionswith corresponding contact patches to reach the desired contact goal Cgoalt. The input to MPPIconsists of the current pose of the end-effector and initial guess for the action trajectories. We definean action as the displacement in Cartesian SE (3)pose of the end effector and denote an actiontrajectory as a= (a0,a1, . . . ,aw−1). Next, we define the controller cost:wXi=1dist(Cgoalt∗Dnominal ,Pt+i) +λ(1−IoU(Cgoalt,Ct+i))where Pt+iis the predicted contact pointcloud from applying the end-effector delta change in poseaito the object. We estimate Pt+iby transforming objects with known geometry to the world frameand identifying intersections with environment (Appendix A.5.4). Ct+iis a 2D projection of Pt+ito the camera frame. The first term of our cost function minimizes the mean Euclidean distancebetween the center of Pt+iand the contact goal center. To do so, we uses Cgoalt∗Dnominal toget the contact goal pointcloud by overlaying contact goal mask with the nominal depth map. Thesecond term promotes matching the shape of future contact to the goal using Intersection over Union(IoU). 
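To make the controller cost concrete, here is a minimal NumPy sketch of evaluating the two-term objective for one candidate action trajectory. It assumes precomputed predicted contact point clouds and mask projections per step, and uses the center-aligned IoU that is detailed in the following paragraph and in Appendix A.5.6. Function and variable names are illustrative, not taken from the authors' code.

```python
# Minimal sketch (assumptions noted above): center distance + lambda * (1 - IoU),
# accumulated over the MPPI horizon, in the spirit of the cost in Sec. 3.3.
import numpy as np

def centered_iou(goal_mask, pred_mask):
    """IoU after shifting each binary mask so its pixel centroid sits at the image center."""
    def recenter(mask):
        ys, xs = np.nonzero(mask)
        out = np.zeros_like(mask)
        if len(ys) == 0:
            return out
        ys = ys - int(round(ys.mean())) + mask.shape[0] // 2
        xs = xs - int(round(xs.mean())) + mask.shape[1] // 2
        keep = (ys >= 0) & (ys < mask.shape[0]) & (xs >= 0) & (xs < mask.shape[1])
        out[ys[keep], xs[keep]] = 1
        return out
    g, p = recenter(goal_mask), recenter(pred_mask)
    union = np.logical_or(g, p).sum()
    return np.logical_and(g, p).sum() / union if union > 0 else 0.0

def trajectory_cost(goal_mask, goal_points, pred_points, pred_masks, lam=1.0):
    """goal_points: (M, 3) cloud from overlaying the goal mask on D_nominal;
    pred_points[i]: (K, 3) predicted contact points at step t+i;
    pred_masks[i]: (H, W) binary projection of pred_points[i] into the camera frame."""
    cost = 0.0
    for pts, mask in zip(pred_points, pred_masks):
        center_dist = np.linalg.norm(pts.mean(axis=0) - goal_points.mean(axis=0))
        cost += center_dist + lam * (1.0 - centered_iou(goal_mask, mask))
    return cost
```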
We align the center of the prediction and the goal of contacts by subtracting the mean to focuson matching the contact shape (Appendix A.5.6).Our MPPI has two contact constraints, implemented via penalty costs. The first constraint is tomaintain contact via ∥Ct∥>0. This penalizes any actions that makes no contact. The secondconstraint is maxz(Dnominal −Pt+i)> ε p. This lower bounds the distance in zaxis from theenvironment to the transformed objects with epsilon εp>0. The entire control algorithm we use isdescribed in Appendix Alg. 1.4Figure 4: We visualize the dataset by displaying the language prompt alongside the RGB and contact patchsequences extracted from key contact frames in the demonstrations. The wiping task typically includes 7 to 12key contact frames, sweeping involves 4 key contact frames, and the press button tasks have 1 contact patch.4 Experiments and ResultDatasets: Both our model and baseline were trained using 100 demonstrations per task on Cop-peliaSim [32, 33]. Each task involved manipulating a single object and a single language prompt, asdescribed in Fig. 4. In Sec. 4.1, we investigate the generalization performance using various unseenobjects and prompts. This paper focuses on three different types of contact-rich tasks: multi-steppatch contact ‘ wipe desk’, three-step patch contact ‘ sweep todustpan ’, and single-step point con-tact ‘ press button ’. For the wiping task, we improvised our own closed-loop demonstration wherethe sponge moves towards the center of dust clusters computed from DBScan [34]. The other twotasks use open-loop demonstrations predefined in [33]. Each demonstration consists of a languageprompt and a sequence of RGB observations along with the corresponding contact patches obtainedfrom the key contact frames (Fig. 4 ). The key contact frames were identified when the robot reachedthe waypoints defined in the demonstrations. However, frames where the contact patch was the sameas the previous waypoints were removed to eliminate redundancy. Ground truth contact patches arecomputed via CoppeliaSim’s contact detection algorithm.4.1 Simulation ResultsThe task performance of our model across the tasks is presented in Table 1, where the scores areaveraged over 25 test episodes. Here, we utilized a 4 DoF action space (x, y, z, yaw )to focus onplanar manipulation, and we have included a full 6DoF manipulation result in Appendix A.5.3. Weconducted three different objects/tools for evaluations: one being the training object and the othertwo being held out objects (Fig. 5). Our objective in directly transferring to these held out ob-jects is twofold: 1) To demonstrate the robustness of our contact goal policy in effectively adaptingto shifts in heatmap distribution from variations in object’s structural/visual features. 2) To show-case the flexibility of our MPPI controller in accommodating previously unseen contact formationsfrom unseen object geometries. Note that we employed new task-specific language prompts for thepush button task’s heldout objects to accommodate color change.Evaluation Metrics: The evaluation metrics are the task success rates ranging from 0% to 100%.We evaluate wipe desk andsweep todustpan tasks with the percentage of dust removal. For thepush button tasks, binary metrics were used, where 0% indicated failure and 100% representedsuccess without partial credits.wipe desk:We evaluated the wiping performance after 20 contact goal generations, which is thepoint at which task performance plateaus (see Appendix Fig. 14). 
This task involves clearing onehundred dust particles with size 0.6x0.6x0.1cm that are randomly spawned within the boundingbox of size 15cmx15cm. We show in Tab. 1 that CALAMARI achieves 98% of dust removal withthe training object. To assess the performance with heldout tools, we conducted two tests. Test1examined the ability of our MPPI controller to manipulate objects with 55% decreased contact patcharea. Test2 evaluated the robustness of CALAMARI against out-of-distribution broom geometrywith a long handle. As shown in Tab. 1, our method performance only decreased by 2% for test1when handling smaller contact patches. Additionally, our goal generation exhibited robustness toheatmaps variations as test2 results are comparable to training object performance.sweep todustpan :We evaluated sweeping task with 3 contact goal generations as in the demon-stration. sweep todustpan task involves 5 dust particles with the size of 1x1x1cm . Details of thetask environments are in Appendix A.2.1. Using training object, we achieved a 93% success rate in5Methodwipe desk sweep todustpan push buttontrain test1 test2 train test1 test2 train test1 test2Ours 98% 96% 90% 93% 73% 84% 92% 60% 60%PerAct 97% 97% 9% 86% 0% 0% 63% 0% 24%CLIPORT 92% 89% 45% 88% 0% 13% 84% 8% 20%Table 1: RLBench success rate of each tasks in test cases using to train objects denoted as ‘train’ and heldoutobjects denoted as ‘test1’ and ‘test2’Figure 5: Train and test objects in simulation of four different tasks. The heldout objects exhibit variationsnot only in color, size, and shape but also in structural attributes, such as the handle locations. The third rowindicates object dimensions in centimeter either as width×depth×height or as radius.test cases for sweeping to a dustpan. We then transferred the pretrained model to two test objects:one with a longer handle, similar to the original, and another with a handle on the side. We noticedthat larger visual discrepancies between the objects resulted in greater performance drops as test 2shows worse performance than test1.push button :We altered the prompt “ push the {}button ” from the word “red” to “green” and“blue”. This change resulted in varying CLIP heatmap intensities and word embedding inputs ,orthe queries of our visual-language transformer. Nevertheless, our test results revealed that even withtraining based on a single word, our model achieved contact goal accuracy of less than 2.5cm in60% of the two heldout cases. These findings emphasize the robustness and effectiveness of ourapproach in accurately pushing buttons of different colors.Baseline-PerAct: In this section, we compare our methods with PerAct, a state-of-the-art language-conditioned manipulation study that also utilizes CLIP features. Details of implementation are de-scribed in Appendix A.3.1. Our method outperforms PerAct mostly across the three tasks (Tab. 1),both in training and testing with heldout objects. For the wiping task, PerACT shows significantlyworse performance for test2 when compared to test1 and the training object. There are two mainfactors that explain the performance drop. First, PerAct is sensitive to larger changes in the transfor-mations between the grasped position and the contact surface. We can observe this trend consistentlyacross the sweeping tasks for the test objects as well. Second, PerAct is sensitive to variation in vi-sual cues of the objects, as further supported by the sweep task. 
Our model is more robust as thesechanges have less impact on our contact planning than on the robot configuration. The results ofpush button task show that, while PerAct fails to detect buttons not in the training prompts, CALA-MARI leverages VLM’s generalization ability for interacting with unseen prompts of differentcolored buttons.Baseline-CLIPORT: CLIPORT is particularly suited for tabletop manipulation with 2D affordanceprediction [5]. The CLIPORT baseline shares a number of important similarities to CALAMARI, in-cluding using an image action space and known tool geometry and pose. Details of implementationare described in Appendix A.3.2. Tab. 1 demonstrates that CALAMARI consistently outperformsthe CLIPORT baseline. Wiping and sweeping results show that CLIPORT also struggles to gener-alize to unseen objects and tools with significant visual and geometrical variations. This is becauseCLIPORT encodes raw RGB-D without further abstraction unlike CALAMARI. Moreover, CLI-PORT also faces difficulty in generalizing to unseen prompts with color variations for the pushingtask, similar to PerAct with significant drop in performance for these tasks.4.2 Sim2RealIn this section, we directly transferred the pretrained model from simulation to the real world withoutfine-tuning. The quantitative analysis (Tab. 2) was conducted using 10 runs for each object withconsistent resetting across all tasks. We utilized 2 Franka-emika robots and an Intel Realsense D435camera for our real-world setup. The distance between the robot and camera between the real-world6Figure 6: We demonstrate the ability of our model to generate goals for non-rigid tools. We repeat thesweeping task with a compliant tool, using a learned dynamics model to servo the contact of the tool with thetabletop. The predicted contact goal is visualized in magenta while the contact feature predictions from thedynamics model are overlaid in blue.Figure 7: We visualized CALAMARI’s contact patch goals in magenta. The first row represents the predictionswhen using the test1 object, while the second row corresponds to the test2 object. Our model can navigate backand forth until all the dots are erased using a closed loop policy. In real-world scenarios where the pressureexerted by a sponge on the board is not evenly distributed, this ability becomes particularly significant as thesponge may fail to erase certain dots even when it passes over them.and simulation environments was [0.18m, 0.02m, -0.21m] in the x, y, and z directions. We found thatthe spatial-action map allows us to accurately predict contact patches in the camera frame regardlessof the differences in camera positioning between simulation and real-world set-ups.wipe desk:For the reset, we draw one hundred dots within a bounding box measuring 17cmx17cmwith an black marker. We generated 20 contact goals per run and counted the number of erasedparticles for evaluation. Fig. 7 shows our contact patch goals, where we visualize the first and lastthree frames when contacts goals are generated. We also conducted experiment to erase differentdistributions of dots with the test1 object in Appendix Fig. 15.sweep todustpan :We performed resets by arranging the dustpan in 10 different configurationswithin a bounding box measuring 9cmx14cm. Each configuration included 10 1cm3cubes placedin front of the dustpan. For evaluation, we counted the number of cubes successfully swept into thedustpan. Fig. 
8 visualizes our contact goals, which directed the broom to align with the dustpan andsweep towards it. Interestingly, our real-world results show a dust sweeping rates of 91-92% fromtest1 and test2 brooms, which surpassed performance in simulation with unseen tools.Figure 8: The real-world sweeping results were obtained using two brooms with small width margins (1-3 cm)with the target dustpan. Left panel shows three contact goal generations, visualized in magenta. The resultsin the last column demonstrate that our policy exhibits excellent zero-shot transfer to the real-world, even withunseen tool geometries. Furthermore, we demonstrate that our method is agnostic to the arrangement of therobot, as the same policy was applied to both the right and left arm.7Figure 9: Our model, trained in CoppeliaSim on the prompt “press the red button,” can be directly transferredto the real world without fine-tuning (first column). Our spatial action space is not limited to the tabletop(second column) and is robust to visual distribution (second and third column). It also handles unseen promptslike “press the green/blue button” (last two columns). The contact goal is visualized in magenta in the first row,while the second row shows the actual execution resultswipe desktest1 91%test2 98%sweep todustpantest1 91%test2 92%comp. 85%push buttonred 90%green 60%blue 40%Table 2: Direct transfer to real-world using testobjects described in Fig. 7, 8, 9. ‘comp’ utilizeslearned compliant tool dynamics as in Sec. 4.3.push button :We positioned three buttons (red, green,blue) in 10 predefined line and triangular arrangementson the desk. For evaluation, we assessed whether theend-effector successfully made contact with the targetbutton. The real-world outcomes aligned with the sim-ulation results for the red and green buttons (trainingand test1), but pressing the blue button exhibited a per-formance discrepancy of 20%. Fig. 9 demonstrates theadaptability of our pretrained model to diverse, unseensetups involving variations in elevation and the numberof buttons in the scene.4.3 Compliant Tool ManipulationIn this section, we demonstrate our model performance on a real world, compliant tool manipu-lation task. By decoupling goal generation and dynamics, our method can generate valid contactgoals for tasks using compliant tools, so long as dynamics of the tool are available. We execute thesweep todustpan task with a compliant brush (compare tool deformation in Fig. 6 to Fig. 8). Notethat the goal generation is learned on rigid tools in CoppeliaSim and transferred without finetuningto a deformable tool in the real-world. We replace rigid dynamics with a learned contact featuredynamics to estimate Pdynt+i[21]. Full details of the contact feature dynamics can be found in Ap-pendix A.2.5. Quantitative results are shown in Tab. 2 and a qualitative sweep is shown in Fig. 6.Our method can effectively predict contact goals for a deforming tool, yielding 82% performance.5 LimitationsOur approach offers versatility in manipulating objects on various planar manipulation scenarios,including elevated and potentially inclined planes. However, our ability to predict contact is limitedto a 2D binary contact patch, therefore, it is challenging to directly apply our method for moreintricate contact-rich manipulation scenarios like screwing bulbs or peg insertions. Moreover, weassume the region of interest (e.g., areas to wipe, sweep, or push) is already within the camera’s fieldof view. 
Lastly, our approach lacks support for discontinuous contact. As a future work to enableCALAMARI to effectively address more complex contact-rich scenarios, we suggest an extension to3D contact mask prediction, potentially leveraging state-of-the-art surface reconstruction techniques(e.g., [26] ). Additionally, we suggest integrating binary contact prediction with a contact mask toinform the controller to switch between free space motion and in-contact mode.8AcknowledgmentsThis work was supported by the National Science Foundation (NSF) grant NRI-2220876. Anyopinions, findings, and conclusions or recommendations expressed in this material are those of theauthors and do not necessarily reflect the views of the National Science Foundation.References[1] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervi-sion. In International Conference on Machine Learning , pages 8748–8763. PMLR, 2021.[2] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson,Q. Vuong, T. Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprintarXiv:2303.03378 , 2023.[3] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in roboticaffordances. arXiv preprint arXiv:2204.01691 , 2022.[4] A. Zeng, M. Attarian, B. Ichter, K. Choromanski, A. Wong, S. Welker, F. Tombari, A. Purohit,M. Ryoo, V . Sindhwani, J. Lee, V . Vanhoucke, and P. Florence. Socratic models: Compos-ing zero-shot multimodal reasoning with language, 2022. URL https://arxiv.org/abs/2204.00598 .[5] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipu-lation. In Conference on Robot Learning , pages 894–906. PMLR, 2022.[6] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learn-ing, pages 991–1002. PMLR, 2022.[7] W. Liu, C. Paxton, T. Hermans, and D. Fox. Structformer: Learning spatial structure forlanguage-guided semantic rearrangement of novel objects. In 2022 International Conferenceon Robotics and Automation (ICRA) , pages 6322–6329, 2022. doi:10.1109/ICRA46639.2022.9811931.[8] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch,Y . Chebotar, et al. Inner monologue: Embodied reasoning through planning with languagemodels. arXiv preprint arXiv:2207.05608 , 2022.[9] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for roboticmanipulation, 2022. URL https://arxiv.org/abs/2209.05451 .[10] P. Sharma, B. Sundaralingam, V . Blukis, C. Paxton, T. Hermans, A. Torralba, J. Andreas, andD. Fox. Correcting robot plans with natural language feedback, 2022.[11] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan,P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advancesin neural information processing systems , 33:1877–1901, 2020.[12] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectionaltransformers for language understanding. arXiv preprint arXiv:1810.04805 , 2018.[13] I. Solaiman, M. Brundage, J. Clark, A. Askell, A. Herbert-V oss, J. Wu, A. Radford, G. Krueger,J. W. Kim, S. Kreps, et al. Release strategies and the social impacts of language models. 
arXivpreprint arXiv:1908.09203 , 2019.[14] C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P. Florence.Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407 , 2022.9[15] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez,Y . Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv preprintarXiv:2205.06175 , 2022.[16] A. Stone, T. Xiao, Y . Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich,F. Xia, C. Finn, et al. Open-world object manipulation using pre-trained vision-language mod-els.arXiv preprint arXiv:2303.00905 , 2023.[17] S. Kim, D. K. Jha, D. Romeres, P. Patre, and A. Rodriguez. Simultaneous tactile estimationand control of extrinsic contact. arXiv preprint arXiv:2303.03385 , 2023.[18] B. Aceituno and A. Rodriguez. A hierarchical framework for long horizon planning of object-contact trajectories. In 2022 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 189–196. IEEE, 2022.[19] A. Sipos and N. Fazeli. MultiSCOPE: Disambiguating In-Hand Object Poses with Propriocep-tion and Tactile Feedback. In Proceedings of Robotics: Science and Systems , Daegu, Republicof Korea, July 2023. doi:10.15607/RSS.2023.XIX.078.[20] M. Wilson and T. Hermans. Learning to manipulate object collections using grounded staterepresentations. In Conference on Robot Learning , pages 490–502. PMLR, 2020.[21] M. Van der Merwe, D. Berenson, and N. Fazeli. Learning the dynamics of compliant tool-environment interaction for visuo-tactile contact servoing. In Conference on Robot Learning ,pages 2052–2061. PMLR, 2023.[22] M. J. V . der Merwe, Y . Wi, D. Berenson, and N. Fazeli. Integrated Object Deformation andContact Patch Estimation from Visuo-Tactile Feedback. In Proceedings of Robotics: Scienceand Systems , Daegu, Republic of Korea, July 2023. doi:10.15607/RSS.2023.XIX.080.[23] Y . Wi, A. Zeng, P. Florence, and N. Fazeli. Virdo++: Real-world, visuo-tactile dynamics andperception of deformable objects. In K. Liu, D. Kulic, and J. Ichnowski, editors, Proceedingsof The 6th Conference on Robot Learning , volume 205 of Proceedings of Machine LearningResearch , pages 1806–1816. PMLR, 14–18 Dec 2023. URL https://proceedings.mlr.press/v205/wi23a.html .[24] Y . Wi, P. Florence, A. Zeng, and N. Fazeli. Virdo: Visio-tactile implicit representations ofdeformable objects. In 2022 International Conference on Robotics and Automation (ICRA) ,pages 3583–3590. IEEE, 2022.[25] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin,D. Duong, V . Sindhwani, et al. Transporter networks: Rearranging the visual world for roboticmanipulation. In Conference on Robot Learning , pages 726–747. PMLR, 2021.[26] H. Ha and S. Song. Semantic abstraction: Open-world 3d scene understanding from 2d vision-language models. In Conference on Robot Learning , 2022.[27] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Pro-ceedings of the IEEE conference on computer vision and pattern recognition , pages 770–778,2016.[28] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical imagesegmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings,Part III 18 , pages 234–241. Springer, 2015.[29] T. Zhang, Z. McCarthy, O. Jow, D. Lee, X. Chen, K. Goldberg, and P. Abbeel. 
Deep imita-tion learning for complex manipulation tasks from virtual reality teleoperation. In 2018 IEEEInternational Conference on Robotics and Automation (ICRA) , pages 5628–5635. IEEE, 2018.10[30] C. Lynch and P. Sermanet. Grounding language in play. arXiv preprint arXiv:2005.07648 , 40:105, 2020.[31] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou.Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE Interna-tional Conference on Robotics and Automation (ICRA) , pages 1714–1721. IEEE, 2017.[32] E. Rohmer, S. P. Singh, and M. Freese. V-rep: A versatile and scalable robot simulationframework. In 2013 IEEE/RSJ international conference on intelligent robots and systems ,pages 1321–1326. IEEE, 2013.[33] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. Rlbench: The robot learning benchmark &learning environment. IEEE Robotics and Automation Letters , 5(2):3019–3026, 2020.[34] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al. A density-based algorithm for discoveringclusters in large spatial databases with noise. In kdd, volume 96, pages 226–231, 1996.11Figure 10: Using contact goals as points instead of patches results in a loss of control over the contactorientation. As demonstrated in this figure, CALAMARI successfully reorients the broom to maximize dirtsweeping. The magenta color indicates contact goal in all figures.A AppendixA.1 Ablation StudiesA.1.1 Contact Patch Verses Point PredictionsPredicting contact patches is crucial for precise contact-rich manipulation. For instance, optimiz-ing contact patch orientation can greatly increase efficiency in sweeping up larger amount of dustsimultaneously. Our proposed model-predictive control approach tracks desired contact patches,in particular matching contact patch size and orientation. To demonstrate the importance of contactpatch control, we present our validation in the following ablation section and Fig.10 where the initialbroom orientation is set to π/4rad to the world-frame. Here, we extracted the center of the contactpatch goal and executed our MPPI algorithm without the Intersection over Union (IoU) cost. Ourresults show that using the point contact goal results in a 14 % success rate, which is significantlylower than CALAMARI’s performance with contact patch goal and the same orientation offset (93%).A.1.2 Temporal TransformerTo evaluate the impact of CALAMARI’s temporal transformer, we conducted a comparison betweenthe wiping task performance with and without the temporal transformer (Tab. 3. We chose thewiping task because it is characterized by the longest planning horizon among our tasks and exhibitsthe most significant scene occlusions due to the robot arm and tool. In the ‘without temporal’experiment, we directly used the latent state from the visual-language transformer as an input to thegrayscale contact goal decoder. We kept the hyperparameters fixed across trials. The training losscurve is shown in Fig. 11. 
We note that the task success rate using train object without temporaltransformer was 93%, which is 4% lower than that of our proposed architecture.A.2 Results DetailsA.2.1 CoppeliaSim Sweep Task12wipe desk train test1 test2CALAMARI 97 % 93% 78%without temporal 93 % 86% 74%Table 3: Performance analysis with and without temporal transformer.Figure 11: Training loss curve of CALAMARI with and without temporal transformer.Among our tasks, sweep todustpan involves two different objects in the scene: broom and dustpan.To assess the robustness of CALAMARI and the baselines towards unseen scenarios, both objectswere modified, as illustrated in Fig. 12. Each broom was created from scratch, designed after real-world brooms commonly found in retail stores. As for the dustpan, we employed the same mesh usedfor training, but altered its color to match that of the broom. Additionally, we introduced geometricvariance by scaling the dimensions along the x, y, and z directions. We note the dustpan dimensionfor training was 25cmx30cmx7cm, while for Test1, the dimension was 20cmx35cmx17cm, and forTest2, the dimension was 20cmx25cmx3cm.A.2.2 Real-world Setup DetailsFig. 13 shows our experimental setup using a single accessible vision sensor from the front view. Wenote that only one arm was used for manipulation. We found that RGB image noise could adverselyinfluence the scale of the heatmap signal, which in turn affect goal generation of our method onthesweep todustpan task. This is due to the heatmap’s sensitivity to high-frequency RGB noise,not present in simulation, resulting in a mild divergence in subsequent contact goal predictions fornearly identical inputs. As such, for the sweep todustpan task, and compliant variant, we repeatedeach trial twice and reported the better performing result.Figure 12: For the sweeping, we not only used unseen brooms, but also augmented the dustpan todifferent colors and dimensions.13Figure 13: We used 2 Franka-emika robots and an Intel Realsense D435 for real-world setup.Figure 14: Details on wiping performance with training object over the number of goal generated (first)and time (second). Second plot visualizes each 25 tests’ dust removal rate over time. Finally we show thedistribution of success rate after 20 goals.A.2.3 Real-world Wipe DeskFig. 15 shows generalization of our wiping task to unseen dot arrangements. This results in differentheatmap distributions. As in the main experiment, we run the wiping until it generates 20 contactgoals.Figure 15: We examined the wiping results using the test1 object on different dot arrangements, leading toan unseen heatmap distribution for the model, which was originally trained only on square-shaped dot clus-ters. From the top left to the bottom right, we conducted tests on circle, triangle, star, quatrefoil, infinite, andhourglass shapes. Our observations show the first four shapes were nearly entirely erased after the wiping. Incontrast, we observed a disparity in performance with the last two shapes, characterized by unfilled holes intheir centers, leaving behind 10% and 54% of the dots, respectively.14Figure 16: First row indicates generated contact goal indicated in magenta pixels. The second row indicatesrobot execution results.A.2.4 Real-world Push ButtonFig. 
16 shows more example results on button pushing task with unseen numbers of buttons in thescene, unseen table elevation, and unseen prompts.A.2.5 Compliant Sweep TaskOur method decouples contact goal generation from the low-level controller responsible for real-izing the contact goals. To control contact between a deforming tool and the tabletop, we use thecontact feature dynamics model from Van der Merwe et al. [21]. The model predicts contact geome-tries (represented as lines in 3D) given candidate actions, conditioned on point cloud and wrenchobservations. The contact point cloud Pt+iis obtained by sampling evenly between the end pointsof the predicted contact line.15Figure 17: (B) depicts the rigid object geometries utilized in our experiment, corresponding to the actualobjects shown in (A). While the object models do not encompass the intricate details of the object geometry,our research demonstrates that relying solely on the coarse dimensions of the geometry – such as the object’scollision model, commonly employed for collision avoidance during robot planning – proved sufficient forsuccessfully executing our task.We train the model by performing randomized actions and label contact lines using a heuristic onthe observed point clouds. Specifically, we threshold for points near the surface, then fit a line to theresulting points projected onto the tabletop. Point cloud observations are obtained by a PhotoneoPhoxi 3D scanner (L) and a Photoneo MotionCam-3D Color (M+) scanner. Wrench observationsare obtained by an ATI gamma force torque sensor, attached between the end effector and complianttool. The dynamics model is trained on 3236 sampled transitions.A.2.6 Grasped Tool GeometryFor the object geometries, we crafted their meshes via trimesh Python package utilizing rough objectdimensions as shown in Figure 17(B). Constructing meshes required less than 10 minutes to createeach for the objects we employed. While we haven’t specifically explored automating this processin our current work, it is plausible that SOTA 3D shape completion techniques from partial objectpoint cloud measurements can be used to generate meshes.A.2.7 Task Performance RobustificationWe achieved robustification of contact goal generations by applying robot workspace mask to theRGB images using a known transformation from the camera to the robot and by employing heatmapaugmentations. These heatmap augmentations include flipping and translation( both horizontal andvertical) ranging from 10 to 30 pixels. Furthermore, we introduced four different levels of Gaussiannoise to the RGB before generating the heatmap, enhancing the robustness of our method againsthigh-frequency noise in real-world RGB images.A.3 Baseline DetailsA.3.1 PerActPerAct employs voxel inputs from a multiple calibrated RGB-D camera setup, while our methodrelies solely on a single front camera view. In contrast, our MPPI controller requires additionalinformation about the model’s geometry and object pose, whereas the baseline does not requiresuch object-related details. For this experiment, PerAct was trained from scratch with a 512 latentdimension. To align the baseline with our experimental setup, we trained PerAct with a single-taskusing the same dataset. During testing, we reduced the action space to (x, y, z, yaw) by providing16Figure 18: Blue is a training loss curve when training image encoder from scratch and the red is the trainindloss curve when training with frozen pretrained ResNet18. Other than the encoder, we used the same trainingsettings. 
The dataset used for this experiment is 50 demonstrations of press button dataset.ground truth values for the other actions (pitch, roll, gripper state, and collision prediction) as wellas ground truth grasping.A.3.2 CLIPORTCLIPORT’s inputs consist of a single RGB-D and language instruction, whereas the outputs arethree affordances for picking, placing, and a discrete end-effector angle for placing. To elaborate,CLIPORT outputs (u,v) for picking and (u,v,yaw) for placing, where u and v denote pixel coordinatesin the tabletop view. These table-top pixel coordinates are then projected into the world frame (x,y,z)using a known transformation. Subsequently, we employ the known tool geometry and pose totransform the target point (x,y,z) into the robot end effector frame, ensuring that the bottom centerof the tool reaches the target point with the desired wrist rotation (yaw). We provide ground truth zcorresponding to the table height.A.4 Input Processing DetailsA.4.1 Heatmap EncoderWe implemented two gray-scale image encoder. The first was based on pytorch’s ResNet implemen-tation by changing the first encoder’s input dimension as 1 instead of 3. We used 4 residual blocksand 2 convolution layers for each residual blocks following the original pytorch implementation.We trained this encoder from scratch along with other modules of CALAMARI. The second wasthe pretrained ResNet18 [27] trained on ImageNet. The network is for RGB image, such that werepeated the grayscale heatmap inputs for three times and stacked in depth to match the desired theinput dimension. We note that the pretrained image encoder’s parameters are frozen without finingtuning. Our experiment shows that the second encoder gives faster convergence and lower trainingloss as shown in Fig. 18.A.5 Controller DetailsA.5.1 Control PipelineAlg. 1 explains our hierarchical controller in the context of other components of CALAMARI.When given the contact patch goal, we repeat a control process consisting of the MPPI (Alg. 1line 8) and the impedance controller (Alg. 1 line 9). The control loop continues until the graspedobjects achieves the contact goal, measured as cost using the same MPPI cost function (Alg. 1 line5). If the current pose of the object resulting from the impedance controller has a smaller cost thanthe threshold δ, a new contact goal is generated via generate contact goal using visual-languageobservation(Alg. 1 line 5). Task is finished when the task objective is achieved (Alg. 1 line 12); forexample, when all the dust is swept, when all the dots are erased, and when the button has beenpushed correctly. Tasks are also finished when the number of contact goals exceed the number ofgoal threshold (Alg. 1 line 14), where the contact goal threshold ( gthres ) is set to 20 for wiping, 3for sweeping, and 1 for pushing.Next, we provide additional details about our MPPI controller, building upon the description pro-vided in Sec. 3.3. The role of MPPI is to compute a sequence of robot actions that result in the17Algorithm 11:t←02:g←03:complete ←False4:while notcomplete do5: ifcost(st)≤δthenCgoal=generate contact goal(obst)6: g←g+ 17: end if8: at←mppi (st,Cgoal)9: env.step (at)10: t←t+ 111: iftask.get completed then12: complete ←True13: end if14: ifg > g thres then15: complete ←True16: end if17:end whiledesired contact goals given by our representation, Fig. 3. Here, we define an action as the change inCartesian SE (3)pose of the end effector and denote an action trajectory as a= (a0,a1, . . . 
, a_{w−1}). The input to MPPI consists of the current pose of the end-effector and an initial guess for the action trajectories. Given the cost function described in Section 3.3, the output of MPPI is the action sequence with the lowest cost. To predict contact locations for the action samples, we predict the contact patch of the tool under each sampled end-effector action. For rigid object contact estimation, we apply the end-effector Cartesian change in position to the grasped tool (assuming a fixed grasp) and compute the intersecting geometry with the environment, resulting in the contact patch. This is achieved by comparing the transformed tool mesh/pointcloud with Dnominal. For the compliant tool, we use the Extrinsic Contact Servoing approach [21], which directly yields the contact lines, utilizing wrench measurements and a partial point cloud given the action to be executed. Other models, including [23], may be substituted.

A.5.2 Controller Parameters

For the MPPI, the actions were sampled from N(0, σ) and clipped using the action bounds. Our MPPI framework is based on an external repository following the prior work [21], and we set the parameters for the MPPI as follows: action_high = [x, y, z, r, p, y] = [0.04, 0.04, 0.001, 0., 0., 0.3], action_low = [x, y, z, r, p, y] = [-0.04, -0.04, -0.001, 0., 0., -0.3], num_samples = 1000, nx = 6, lambda = 0.000001, horizon = 1, and

σ = diag(0.01, 0.01, 0.001, 0.0005, 0.0005, 0.01).

A.5.3 6-DoF Manipulation

In this section, we present the results of full 6-DoF manipulation by expanding the action bounds of r and p, as well as the noise sigma. Now, all rotation components share the same action bounds: action_high = [x, y, z, r, p, y] = [0.04, 0.04, 0.001, 0.3, 0.3, 0.3], and action_low = [x, y, z, r, p, y] = [-0.04, -0.04, -0.001, -0.3, -0.3, -0.3]. Other MPPI settings, such as the number of samples, lambda, horizon, and more, remain unchanged. Consequently, we update the action noise sigma as

σ = diag(0.01, 0.01, 0.001, σr, σp, 0.01).

Here, σr and σp are scalar values associated with the standard deviation of the roll and pitch actions, respectively. In Tab. 4, we evaluate different combinations of (σr, σp) in two of our more intricate control tasks, using the "train object" in CoppeliaSim. The final row, which uses (σr, σp) = (0.1, 0.1), represents a scenario where all rotation components share the same action sampling distribution.

(σr, σp)       wipe desk   sweep to dustpan
(0.01, 0.01)   95%         92%
(0.05, 0.05)   94%         87%
(0.1, 0.1)     91%         84%
Table 4: 6-DoF manipulation experiments with different roll and pitch action sampling parameters.

Tab. 4 shows that task performance may decrease as we increase the action noise. Keeping the number of samples constant while increasing action noise for MPPI results in sampling deficiencies, which lead to suboptimal control results. Finally, our robot's Cartesian impedance controller stiffness parameters were set to [3000.0, 3000.0, 3000.0, 300.0, 300.0, 300.0] for rigid tool manipulation and [2000.0, 2000.0, 1000.0, 100.0, 200.0, 200.0] for compliant tool manipulation.

A.5.4 Contact Patch Prediction

To obtain the intersections between the object's point cloud and the environment, we first extract the lowest 10% of the point cloud, which serves as the contact candidates. Using contact candidates enhances calculation efficiency.
If the z-coordinate of the contact candidate is lower than or equal tothat of the closest point in the environment in Manhattan distance on the x and y axes, we considerthe candidate pixel to be in contact.A.5.5 Environment GeometryWe define the environment as a point cloud or the depth map without tools and non-collidableobjects. In the CoppeliaSim simulation, we utilized setmodel renderable(False) for the tools andnon-collidable objects in the scene during task initialization. In the real world, we set aside the toolsfrom the camera’s angle at the task initialization and capture a depth map to obtain the environmentgeometry.A.5.6 Intersection over UnionBefore calculating the IoU, we align the centers of contact patch predictions with the contact goalmask to ensure that the IoU metric focuses purely on shape matching. This alignment is achieved bysubtracting the mean pixel values of each 2D mask. We found that using IoU metrics without thiscenter offset can lead to undesirable yaw movements of the object. This is because yaw motions canlead to higher IoU scores by creating larger overlaps between the tool’s actual contact and the contactgoal, especially when the tool can only partially reach the goal with the action. Consequently, IoUscores without this alignment do not accurately reflect the deviations from the desired contact’sshape and orientation.19 |
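A minimal NumPy sketch of the contact-candidate test described in A.5.4 follows: take the lowest 10% of the transformed tool point cloud (by z) as candidates, and mark a candidate as in contact if its z-coordinate is at or below that of the nearest environment point in Manhattan distance on the x and y axes. Array shapes and the brute-force nearest-neighbor search are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumptions noted above) of the contact patch prediction in A.5.4.
import numpy as np

def estimate_contact_points(tool_points, env_points, candidate_frac=0.1):
    """tool_points: (N, 3) tool point cloud after applying a sampled end-effector action;
    env_points: (M, 3) environment point cloud (D_nominal de-projected, tools excluded).
    Returns the subset of tool points estimated to be in contact."""
    # Lowest 10% of the tool point cloud (by z) are the contact candidates.
    n_cand = max(1, int(candidate_frac * len(tool_points)))
    candidates = tool_points[np.argsort(tool_points[:, 2])[:n_cand]]

    contacts = []
    for p in candidates:
        # Closest environment point in Manhattan distance on the x and y axes.
        d_xy = np.abs(env_points[:, 0] - p[0]) + np.abs(env_points[:, 1] - p[1])
        nearest = env_points[np.argmin(d_xy)]
        # In contact if the candidate is at or below the environment surface.
        if p[2] <= nearest[2]:
            contacts.append(p)
    return np.asarray(contacts)
```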
rpWi4SYGXj | Grounding Complex Natural Language Commands forTemporal Tasks in Unseen EnvironmentsJason Xinyu Liu∗1, Ziyi Yang∗1, Ifrah Idrees1, Sam Liang2, Benjamin Schornstein1,Stefanie Tellex1, Ankit Shah11Department of Computer Science, Brown University, United States2Department of Computer Science, Princeton University, United StatesAbstract: Grounding navigational commands to linear temporal logic (LTL)leverages its unambiguous semantics for reasoning about long-horizon tasks andverifying the satisfaction of temporal constraints. Existing approaches requiretraining data from the specific environment and landmarks that will be used innatural language to understand commands in those environments. We proposeLang2LTL, a modular system and a software package that leverages large languagemodels (LLMs) to ground temporal navigational commands to LTL specifications inenvironments without prior language data. We comprehensively evaluate Lang2LTLfor five well-defined generalization behaviors. Lang2LTL demonstrates the state-of-the-art ability of a single model to ground navigational commands to diversetemporal specifications in 21 city-scaled environments. Finally, we demonstrate aphysical robot using Lang2LTL can follow 52 semantically diverse navigationalcommands in two indoor environments.1Keywords: language grounding, robot navigation, formal methodsFigure 1: Lang2LTL can ground complex navigational commands in household and city-scaledenvironments without retraining.1 IntroductionNatural language enables humans to express complex temporal tasks, like “Go to the grocery store onMain Street at least twice, but only after the bank, and always avoid the First Street under construction.”Such commands contain goal specifications and temporal constraints. A robot executing this commandmust identify the bank as its first subgoal, followed by two separate visits to the grocery store, andvisiting First Street is prohibited throughout the task execution. Linear temporal logic [ 1] provides anunambiguous target representation for grounding a wide variety of temporal commands.Prior work of grounding natural language commands to LTL expressions for robotic tasks covered alimited set of LTL formulas with short lengths and required retraining for every new environment[2,3,4,5]. In contrast, some recent approaches proposed leveraging LLMs to directly generate robot∗Equal contribution1Code, datasets and videos are at https://lang2ltl.github.io/7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.policies [ 6,7,8,9]. The LLM output is itself a formal language that must be interpreted by an externaluser-defined program. Such approaches cannot solve many complex temporal tasks considered inthis work. Another advantage of using a formal language like LTL is that existing work on automatedplanning with LTL specifications provides strong soundness guarantees.In this paper, we propose Lang2LTL, a system capable of translating language commands to groundedLTL expressions that are compatible with a range of planning and reinforcement learning tools [ 10,11,12,13,14]. Lang2LTL is a modular system that separately tackles referring expression recognition,grounding these expressions to real-world landmarks, and translating a lifted version of the commandto obtain the grounded LTL specification. 
This modular design allows Lang2LTL to transfer to novelenvironments without retraining, provided with a semantic map.We formally define five generalization behaviors that a learned language grounding model mustexhibit and comprehensively evaluate Lang2LTL’s generalization performance on a novel datasetcontaining 2,125 semantically unique LTL formulas corresponding to 47 LTL formula templates.Lang2LTL showed the state-of-the-art capabilities of grounding navigational commands in 21 citiesusing semantic information from a map database in a zero-shot fashion. Finally, we demonstratedthat a physical robot equipped with Lang2LTL was able to follow 52 semantically diverse languagecommands in two different indoor environments provided with semantic maps.2 PreliminariesLarge Language Models: Autoregressive large language models (LLMs) are producing SoTAresults on a variety of language-based tasks due to their general-purpose language modeling capa-bilities. LLMs are large-scale transformers [ 15] pretrained to predict the next token given a contextwindow [ 16,17]. In this work, we used the GPT series models [ 18,19] and the T5-Base model [ 20].Temporal Task Specification: Linear temporal logic (LTL) [ 1] has been the formalism of choicefor expressing temporal tasks for a variety of applications, including planning and reinforcementlearning [ 10,21,22,11,23,14], specification elicitation [ 3,24,25,26,2], and assessment of robotactions [ 27]. The grammar of LTL extends propositional logic with a set of temporal operatorsdefined as follows:φ:=α| ¬φ|φ1∨φ2|Xφ|φ1Uφ2 (1)An LTL formula φis interpreted over a discrete-time trace of Boolean propositions, α∈APthatmaps an environment state to a Boolean value. φ, φ 1, φ2are any valid LTL formulas. The operators¬(not) and ∨(or) are identical to propositional logic operators. The temporal operator X(next)defines the property that Xφholds if φholds at the next time step. The binary operator U(until)specifies an ordering constraint between its two operands. The formula φ1Uφ2holds if φ1holds atleast until φ2first holds, which must happen at the current or a future time. In addition, we use thefollowing abbreviated operators, ∧(and),F(finally or eventually), and G(globally or always), thatare derived from the base operators. Fφspecifies that the formula φmust hold at least once in thefuture, while Gφspecifies that φmust always hold.We developed Lang2LTL based on a subset of specification patterns commonly occurring inrobotics [28]. Appendix Table 4 lists the specification patterns and their interpretations.Planning with Temporal Task Specification: Every LTL formula can be represented as a B ̈uchiautomaton [ 29,30] thus providing sufficient memory states to track the task progress. The agentpolicy can be computed by any MDP planning algorithm on the product MDP of the automaton andthe environment [ 10,21,22,11,12,13]. Lang2LTL is compatible with any planning or reinforcementlearning algorithm that accepts an LTL formula as task specification.3 Related WorkNatural Language Robotic Navigation: Early work of language-guided robot task execution focusedon using semantic parsers to ground language commands into abstract representations capable of2informing robot actions [ 31,32,33,34,35]. Recent work leverages large pretrained models todirectly generate task plans, either as code [ 7,6] or through text [ 36,9]. The LLM output is a formallanguage that must be interpreted by an external procedure. 
Thus the external interpreters need to beexpressive and competent for the success of these approaches. In contrast, planners for LTL offerasymptotic guarantees on the soundness of the resulting robot policies. Finally, LM-Nav [ 8] is amodular system that computes a navigational policy by using an LLM to parse landmarks from acommand, then a vision-language model in conjunction with a graph search algorithm to plan over apre-constructed semantic map. Note that LM-Nav can only ground commands of the Sequence Visitcategory as defined by Menghi et al. [28], while Lang2LTL can interpret 15 temporal tasks.Translating Language to Formal Specification: Prior approaches are tied to the domain they weretrained on and require retraining with a large amount of data to deploy in a novel domain. Gopalanet al. [3]relied on commands paired with LTL formulas in a synthetic domain called CleanUp World .Patel et al. [4]and Wang et al. [5]developed a weakly supervised approach that requires naturallanguage descriptions along with a satisfying trajectory.Leveraging LLM to facilitate translation into LTL is an emerging research direction [ 37,38,39,40].These approaches relied on string transformations to transform referring expressions in a commandinto propositions. In contrast, Lang2LTL explicitly grounds the referring expression to propositionalconcepts in the physical environment through a grounding module.The closest approach to ours, proposed by Berg et al. [2], uses CopyNet [ 41] architecture to generateLTL formula structures followed by explicitly replacing the appropriate propositions. We demonstratea significant performance improvement over Berg et al. [2]’s approach by leveraging LLMs as wellas training and evaluating on a semantically diverse dataset.4 Problem DefinitionWe frame the problem as users providing a natural language command uto a robotic system thatperforms navigational tasks in an environment M=⟨S,A, T⟩, where SandArepresent the statesand actions of the robot, and Tdescribes the transition dynamics of the environment. Our proposedlanguage grounding system, Lang2LTL, translates the language command uto its equivalent LTLformula φand grounds its propositions to real-world landmarks. We assume the robot has accessto its state information Sand a semantic database of propositions D={k: (z, f)}, structured askey-value pairs. The key kis a unique string identifier for each proposition, and zcontains semanticinformation about the landmark stored in a serialized format (e.g., JSON). For example, in a streetmap domain, the semantic information zincludes landmark names, street addresses, amenities, etc.f:S → { 0,1}is a Boolean valued function that evaluates the truth value of the proposition in agiven state s. Appendix B shows an example entry of this semantic database. Finally, we assume thatthe robot has access to an automated planner that accepts an LTL formula and a semantic map asinput and generates a plan over the semantic map as output. We used AP-MDP [12] for this paper.Consider the example of a drone following a given command within an urban environment depictedin Figure 2. The environment Mencodes the position and the dynamics of the drone. 
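To illustrate the structure of the semantic database D = {k : (z, f)} defined in the problem statement above, here is a hypothetical Python sketch of a single entry. The landmark key, the fields inside z, and the proximity-based grounding function f are illustrative assumptions; the paper's actual example entry is given in its Appendix B.

```python
# Hypothetical sketch of one semantic database entry D = {k: (z, f)}.
# Keys, fields, coordinates, and the proximity threshold are assumptions, not the authors' data.
import json

def make_proximity_predicate(landmark_xy, radius=5.0):
    """Boolean-valued f: maps a robot state to True when within `radius` of the landmark."""
    def f(state):
        x, y = state["x"], state["y"]
        lx, ly = landmark_xy
        return (x - lx) ** 2 + (y - ly) ** 2 <= radius ** 2
    return f

semantic_db = {
    "chase": (
        json.dumps({"name": "Chase Bank", "address": "123 Main Street",
                    "amenity": "bank"}),            # serialized semantic information z
        make_proximity_predicate((42.0, -7.5)),     # grounding function f
    ),
}

# f evaluates the truth value of the proposition "chase" in a given robot state.
z, f = semantic_db["chase"]
print(f({"x": 41.0, "y": -8.0}))  # True: within 5 m of the assumed landmark position
```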
The semantic database D = {k : (z, f)} includes the landmark identifiers, their semantic information, and a proximity function.

5 Lang2LTL: Natural Language Grounding

Lang2LTL is a modular system that leverages LLMs to solve the grounding problem by solving the following subproblems:

1. Referring Expression Recognition: We identify the set of substrings, {ri}, in the command u that refer to Boolean propositions. In this case, {ri} = {"the store on Main Street", "the bank"}.

2. Referring Expression Grounding: Each referring expression ri is mapped to one of the proposition identifier strings, k ∈ D, which yields a map {ri → k}. Each proposition is also bijectively mapped to a placeholder symbol β from a fixed vocabulary {"A", "B", . . .} by {k ↔ β}. In the example described above, the phrases "the store on Main Street" and "the bank" refer to the proposition identifiers walmart and chase, which are in turn mapped to the placeholder symbols "A" and "B".

3. Lifted Translation: After substituting the referring expressions {ri} by placeholder symbols β using the map {ri → β}, we obtain the lifted utterance "Go to A, but only after visiting B." The lifted translation module then translates this lifted utterance into a lifted LTL formula, φβ = F(A) ∧ (¬A U B).

Figure 2: Lang2LTL system overview: Green blocks are pretrained or finetuned LLM models. Yellow blocks are the input or output of the system.

Finally, we substitute the placeholder symbols "A" and "B" by the grounded propositions walmart and chase using the bijection {k ↔ β} to construct the output LTL formula depicted in Figure 2.

We hypothesized that the benefit of solving the translation problem in the lifted domain is two-fold. First, the output vocabulary size is significantly reduced. Second, the lifted formula data can be sourced from multiple domains, thus providing access to a larger training dataset. Once accurately translated in the lifted domain, the formulas can be grounded to unseen landmarks in novel domains, a task at which LLMs excel. We examine the efficacy of this modularization in Section 6.

5.1 Referring Expression Recognition (RER)

Referring expressions are noun phrases, pronouns, and proper names that refer to some individual objects [42]. In this work, we only consider the task of recognizing noun phrases and proper names. Referring expressions are entire substrings that refer to a single entity; therefore, they are a superset of named entities. For example, "the store on Main Street" is the referring expression, but it contains two named entities, "store" and "Main Street".

Referring expression recognition is generally challenging for all existing pretrained named entity recognition models, especially without adequate examples for finetuning. We demonstrate high performance on the RER task by adapting GPT-4, prompted with a task description and examples to enable in-context learning. Details of the prompting approach are provided in Appendix D.

5.2 Referring Expression Grounding (REG)

Due to the diversity of natural language, a user can refer to a landmark using many possible referring expressions. Grounding these expressions into the correct propositions is challenging. We propose using the embeddings computed by an LLM for measuring similarity. LLMs have been shown to map semantically similar texts to similar embedding values [43].

Let g_embed : r → R^n represent the function that computes an n-dimensional embedding of a text string using the parameters of the LLM.
Following Berg et al. [44], we match the referring expressions {ri} to the proposition tokens k by matching their respective embeddings using cosine similarity. The embedding of a proposition token, k, is computed by encoding the semantic information, z, of its corresponding landmark, e.g., street name and amenity. This process is represented formally as follows,

$$k^{*} = \operatorname*{argmax}_{\{k:(z,f)\} \in \mathcal{D}} \frac{g_{\mathrm{embed}}(r_i)^{\top} g_{\mathrm{embed}}(z)}{\lVert g_{\mathrm{embed}}(r_i) \rVert \, \lVert g_{\mathrm{embed}}(z) \rVert} \qquad (2)$$
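A small sketch of this matching step is shown below. The embed function stands in for the LLM embedding function g_embed; the toy character-count embedding only keeps the example self-contained and runnable, and with a real embedding model the query resolves to walmart as in the running example.

```python
# Sketch of the grounding step in Equation 2: each referring expression r_i is
# matched to the database key whose serialized semantic information z has the
# most similar embedding. `embed` is a placeholder for g_embed.
import json
import numpy as np

def embed(text: str) -> np.ndarray:      # placeholder for the LLM embedding model
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def ground(referring_expr: str, database: dict) -> str:
    """Return argmax_k of the cosine similarity between embeddings (Equation 2)."""
    scores = {k: cosine(embed(referring_expr), embed(json.dumps(z)))
              for k, (z, _f) in database.items()}
    return max(scores, key=scores.get)

database = {
    "walmart": ({"name": "Walmart", "addr:street": "Main Street", "shop": "supermarket"}, None),
    "chase":   ({"name": "Chase", "amenity": "bank", "addr:street": "Main Street"}, None),
}
print(ground("the store on Main Street", database))
```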
5.3 Lifted Translation

Our lifted translation module operates with a much smaller vocabulary than the number of landmarks within any given navigation domain. It can also be trained with navigational commands from a wider variety of data sources. In designing our lifted translation module, we followed Gopalan et al. [3] and Patel et al. [4] and represented the prediction target LTL formulas in the prefix format instead of the infix format. This allows us to unambiguously parse the formulas without requiring parenthesis matching and shortens the output.

The lifted translation module accepts a lifted utterance as an input and generates a lifted LTL formula with an output vocabulary of up to 10 operators and five lifted propositions. We evaluated the following model classes for lifted translation. The implementation details are provided in Appendix E.

Finetuned LLM: We finetuned LLMs using supervised learning following [16]. We tested two LLMs with supervised finetuning, namely, T5-Base (220M) [20] (using the Hugging Face Transformers library [45]) and the text-davinci-003 version of GPT-3 using the OpenAI API. The target was an exact token-wise match of the ground-truth LTL formula.

Prompt LLM: We evaluated prompting the pre-trained GPT-3 [18] and GPT-4 [19] models using the OpenAI API. We did not vary the prompts throughout a given test set.

Seq2Seq Transformers: We trained an encoder-decoder model based on the transformer architecture [15] to optimize the per-token cross entropy loss with respect to the ground-truth LTL formula, together with a token embedding layer to transform the sequence of input tokens into a sequence of high-dimensional vectors.

6 Evaluation of Language Grounding

We tested the performance of Lang2LTL against five definitions of generalizing temporal command interpretation, informed by formal methods, in Section 6.1. We evaluated the performance of each module described in Section 5 in isolation, in addition to a demonstration of the integrated system. Lang2LTL achieved state-of-the-art performance in grounding diverse temporal commands in 21 novel OpenStreetMap regions [46].

6.1 Generalization in Temporal Command Understanding

Utilizing a formal language, such as LTL, to encode temporal task specifications allows us to formally define five types of generalizing behaviors.

Robustness to Paraphrasing: Consider two utterances, u1 = "Go to chase" and u2 = "Visit chase", describing the same temporal formula F chase. If the system has only seen u1 at training time but correctly interprets u2 at test time, it is said to demonstrate robustness to paraphrasing. This is the most common test of generalization we observe in prior works on following language commands for robotics. A test-train split of the dataset is adequate for testing robustness to paraphrasing, and we refer to such test sets as utterance holdout. Most prior works [3, 2, 40, 37, 5, 4] demonstrate robustness to paraphrasing.

Robustness to Substitutions: Assume the system has been trained to interpret the commands corresponding to F chase and G ¬walmart at training time but has not been trained on a command corresponding to F walmart. If the system correctly interprets the command corresponding to F walmart, it is said to demonstrate robustness to substitution. To test for robustness to substitutions, any formula in the test set must not have a semantically equivalent formula in the training set. [3] demonstrated limited robustness to substitutions. The lifted translation approach followed by Berg et al. [2], NL2TL [40], and Lang2LTL demonstrates robustness to substitutions.

Robustness to Vocabulary Shift: Assume the system has been trained to interpret commands corresponding to F chase at training time but has never seen any command containing the proposition walmart. If the system correctly interprets F walmart at test time, the system is said to be robust to vocabulary shift. To test for robustness to vocabulary shift, we identify the set of unique propositions occurring in every formula in the test and training set. The training set and test set vocabularies should have an empty intersection, in addition to the non-equivalence of every test formula with respect to the training formulas. Methods that explicitly rely on lifted representations show robustness to vocabulary shifts by design [44, 8, 40]. Our full-system evaluations demonstrated robustness to novel vocabularies in Section 6.5.

Robustness to Unseen Formulas: Assume that all formulas in the training set are transformed by substituting the propositions in a pre-defined canonical order, e.g., both F chase and F walmart are transformed to F a. We refer to these transformed formulas as formula skeletons. To test for robustness to unseen formulas, the test set must not share any semantically equivalent formula skeleton with the training set. We used the built-in equivalency checker from the Spot LTL library [47].

Robustness to Unseen Template Instances: We define this as a special case of robustness to unseen formulas. If all the formulas in the test and training set are derived from a template-based generator using a library of pre-defined templates [28, 48], a model may exhibit generalization to different semantically distinct formula skeletons of the same template; such a model displays robustness to unseen templates. We refer to the test set whose skeletons vary only as instantiations of templates seen during training as a formula holdout test set. If the unseen formulas in the test set do not correspond to any patterns, we refer to it as a type holdout test set.

None of the prior works have evaluated their proposed models for robustness to unseen formulas. We evaluated the lifted translation module of Lang2LTL on both formula and type holdout test sets. Lang2LTL experienced a degradation of performance as expected, indicating that robustness to unseen formulas is still an open challenge.

Figure 3: Figure 3a depicts the average accuracies of six lifted translation models on the three holdout sets over five-fold cross-validation. Figure 3b depicts the average accuracies of the grounded translation task in the OSM domain.

Table 1: Proposition Grounding Evaluation
Component        Accuracy
RE Recognition   98.01 ± 2.08%
RE Grounding     98.20 ± 2.30%

Table 2: Cross-Domain Evaluation. * for zero-shot.
              OSM [44]           CleanUp [3]
Lang2LTL      49.40 ± 15.49%*    78.28 ± 1.73%*
CopyNet [44]  45.91 ± 12.70%     2.57%*
RNN-Attn [3]  NA*                95.51 ± 0.11%

6.2 Lifted Dataset

We first collected a new parallel corpus of 1,156 natural language commands in English and LTL formulas with propositional symbols to train and evaluate the lifted translation module. We included 15 LTL templates identified by Menghi et al.
[28] and generated 47 unique lifted LTL formulatemplates by varying the number of propositions from 1 to 5 when applicable.To improve the lifted translation module’s Robustness to Substitutions , we permuted the propositionsin utterances and corresponding formulas. The lifted dataset after permutation contains 2,125 uniqueLTL formulas and 49,655 English utterances describing them. For a detailed description of thedataset, please refer to Appendix H.6.3 Grounded DatasetGenerating Diverse REs: We used GPT-4 to paraphrase landmark names from an open-sourcemap database, OpenStreetMap (OSM ) [46], to more diverse forms of referring expressions (REs) byproviding the semantic information. The prompt used for generating diverse REs is in Appendix C.We then substituted the symbols in the lifted dataset (Section 6.2) by diverse REs on 100 randomlysampled lifted utterances for each of the 21 OSM cities. For a list of example commands, please seeAppendix Table 5.6.4 Component-wise EvaluationProposition Grounding: We evaluated the RER and REG modules on the grounded OSM datasetwith 2,100 utterances across 21 cities. The average accuracy and the standard error across the cities forthese modules are depicted in Table 1. The accuracy of RER decreased slightly, and REG performeduniformly well as we varied the complexity of commands and REs, respectively (Appendix Figure 6).Lifted Translation: We evaluated the six models presented in Section 5.3 for lifted translationthrough five-fold cross-validation on utterance ,formula , and type holdout tests. Figure 3a depictsthe average accuracy and the standard deviation across the folds. We note that the two finetunedLLM models demonstrate the best performance on utterance holdout. We also noted a degradationof performance on formula andtype holdout tests, with the latter being the most challenging acrossall models. Finetuned LLM models suffered the worst degradation of performance. Finally, theaddition of type-constrained decoding to T5 significantly improved its performance on the morechallenging formula andtype holdout . Due to the lower cost of inference, and the ability to implementtype-constrained decoding to prevent syntax errors, we chose the finetuned T5 model for liftedtranslation in our full-system evaluation.6.5 System EvaluationWe compared the translation accuracy of the full Lang2LTL system on the grounded datasets with theCopyNet-based translation model [ 2] and Prompt GPT-4. We retrained CopyNet with an identicaldata budget as the Lang2LTL lifted translation model. For Prompt GPT-4, we ensured that there wasat least one example from each formula skeleton in the dataset included in the prompt. Figure 3bdepicts Lang2LTL outperforming both the baselines by a significant margin. Note that due to thehigh cost of inference, Prompt GPT-4 was only evaluated on a smaller subset of the test set.6.6 Cross-Domain EvaluationWe further tested the zero-shot generalization capability of Lang2LTL on two different crowd-sourceddatasets from prior work; the Cleanup World [ 3] on an analog indoor environment; and the OSMdataset [ 2] collected via Amazon Mechanical Turk. Table 2 shows the translation accuracies ofLang2LTL without any fine-tuning on the target datasets. Note that Lang2LTL outperforms CopyNetresults reported by Berg et al. [2]. We further note that the CleanUp World dataset contains 6 unique7formula skeletons, out of which some were not a part of our lifted dataset. 
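The notion of a shared skeleton can be made concrete with a short sketch: propositions in a prefix-notation formula are renamed to a canonical vocabulary in order of first appearance, so that F(chase) and F(walmart) collapse to the same skeleton F(a). The operator list below is gathered from the prompts and Appendix A; treating it as exhaustive is an assumption of this sketch, not a claim about the released code.

```python
# Illustrative skeleton extraction for prefix-notation LTL formulas.
from typing import List

OPERATORS = {"!", "X", "F", "G", "&", "|", "U", "W", "M", "i", "e"}

def skeleton(prefix_tokens: List[str]) -> List[str]:
    """Canonically rename the propositions of a prefix-notation LTL formula."""
    mapping, out = {}, []
    for tok in prefix_tokens:
        if tok in OPERATORS:
            out.append(tok)
        else:  # a proposition such as walmart or chase
            mapping.setdefault(tok, chr(ord("a") + len(mapping)))
            out.append(mapping[tok])
    return out

print(skeleton("F chase".split()))                        # ['F', 'a']
print(skeleton("& F walmart U ! walmart chase".split()))  # ['&', 'F', 'a', 'U', '!', 'a', 'b']
```

Two candidate skeletons are then compared for semantic equivalence, as described in Section 6.1, using Spot's equivalence checker to decide whether a test formula counts as unseen.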
The degraded performance is expected when the model needs to generalize to unseen formulas.

7 Robot Demonstration

To demonstrate Lang2LTL's ability to directly execute language commands by interfacing with automated planning algorithms, we deployed a quadruped mobile robot, Spot [49], with the AP-MDP planner [12] in two novel indoor environments. Each environment had eight landmarks (e.g., bookshelves, desks, couches, elevators, and tables). We ensured that each environment had multiple objects of the same type but with different attributes. We used Spot's GraphNav framework to compute a semantic map of the environment. The AP-MDP [12] planner can directly plan over this semantic map, given an LTL task specification. Please refer to Appendix J for a complete description of the task environments.

As a proof-of-concept, we further finetuned the lifted translation module on 120,000 lifted utterances and formulas formed by sampling pairwise compositions from the lifted database and composing them using conjunctions and disjunctions.

We compared Lang2LTL to Code-as-Policies (CaP) [6], a prominent example of directly grounding language instructions to robot plans expressed as Python code. We provided Code-as-Policies with interface functions to input the semantic map and a helper function to automatically navigate between nodes while having the ability to avoid certain regions. Thus CaP had access to actions at a much higher level of abstraction than our system. Note that AP-MDP only interfaced with primitive actions defined as movement between any two neighboring nodes.

Lang2LTL was able to correctly ground 52 commands (40 satisfiable and 12 unsatisfiable commands). The failure cases were due to incorrect lifted translations. The formal guarantees of the AP-MDP planner ensured that robot execution was aborted when facing an unsatisfiable specification. By contrast, CaP was only able to generate acceptable executions for 23 out of 52 commands and did not explicitly recognize the unsatisfiable commands. CaP demonstrated more robustness to paraphrasing than our system, which failed on some compositional patterns not in the augmented lifted dataset.

8 Limitations

We observed that Lang2LTL fails at grounding language commands with certain utterance structures, which suggests that the lifted translation model overfits to some training utterances. A list of incorrect groundings is shown in Appendix Table 2 and Table 3. Finetuning models with larger capacities, e.g., T5-Large, may help. In this work, we only consider the task of recognizing noun phrases and proper names as referring expressions, not pronouns. We can tackle the coreference resolution problem by first prompting an LLM or using an off-the-shelf model to map pronouns to their corresponding noun phrases or proper names before the RER module. If there are multiple landmarks with the same semantic features present in the environment, e.g., two Starbucks, Lang2LTL cannot distinguish the two and selects one at random. To resolve this ambiguity, the robot needs to actively query the human user via dialog when necessary.

9 Conclusion

We propose Lang2LTL, a modular system that uses large language models to ground complex navigational commands for temporal tasks in novel environments of household and city scale without retraining. We also propose five generalization tests for language grounding systems. Lang2LTL achieves a grounding accuracy of 81.83% in 21 unseen cities and outperforms the previous SoTA and an end-to-end prompt GPT-4 baseline.
Any robotic system equipped with position-tracking capability and a semantic mapwith landmarks labeled with free-form text can utilize Lang2LTL to interpret natural language fromhuman users without additional training.8AcknowledgmentsThe authors would like to thank Charles Lovering for his feedback that helped improve the draft, PeilinYu and Mingxi Jia for helping edit the videos, and Alyssa Sun for developing the web demonstration.This work is supported by ONR under grant numbers N00014-22-1-2592 and N00014-21-1-2584,AFOSR under grant number FA9550-21-1-0214, NSF under grant number CNS-2038897, and withsupport from Echo Labs and Amazon Robotics.References[1]A. Pnueli. The temporal logic of programs. In 18th Annual Symposium on Foundations ofComputer Science (sfcs 1977) , pages 46–57. ieee, 1977.[2]M. Berg, D. Bayazit, R. Mathew, A. Rotter-Aboyoun, E. Pavlick, and S. Tellex. GroundingLanguage to Landmarks in Arbitrary Outdoor Environments. In IEEE International Conferenceon Robotics and Automation (ICRA) , 2020.[3]N. Gopalan, D. Arumugam, L. Wong, and S. Tellex. Sequence-to-Sequence Language Ground-ing of Non-Markovian Task Specifications. In Proceedings of Robotics: Science and Systems ,Pittsburgh, Pennsylvania, 2018. doi:10.15607/RSS.2018.XIV .067.[4]R. Patel, E. Pavlick, and S. Tellex. Grounding language to non-markovian tasks with nosupervision of task specifications. In Robotics: Science and Systems , 2020.[5]C. Wang, C. Ross, Y .-L. Kuo, B. Katz, and A. Barbu. Learning a natural-language to ltlexecutable semantic parser for grounded robotics. In Conference on Robot Learning , pages1706–1718. PMLR, 2021.[6]J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code aspolicies: Language model programs for embodied control. In arXiv preprint arXiv:2209.07753 ,2022.[7]I. Singh, V . Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, andA. Garg. Progprompt: Generating situated robot task plans using large language models. arXivpreprint arXiv:2209.11302 , 2022.[8]D. Shah, B. Osi ́nski, S. Levine, et al. Lm-nav: Robotic navigation with large pre-trained modelsof language, vision, and action. In Conference on Robot Learning , pages 492–504. PMLR,2022.[9]S. S. Raman, V . Cohen, E. Rosen, I. Idrees, D. Paulius, and S. Tellex. Planning with largelanguage models via corrective re-prompting. arXiv preprint arXiv:2211.09935 , 2022.[10] M. L. Littman, U. Topcu, J. Fu, C. Isbell, M. Wen, and J. MacGlashan. Environment-independenttask specifications via gltl. arXiv preprint arXiv:1704.04341 , 2017.[11] A. Camacho, R. T. Icarte, T. Q. Klassen, R. A. Valenzano, and S. A. McIlraith. Ltl andbeyond: Formal languages for reward function specification in reinforcement learning. In IJCAI ,volume 19, pages 6065–6073, 2019.[12] Y . Oh, R. Patel, T. Nguyen, B. Huang, E. Pavlick, and S. Tellex. Planning with State Abstractionsfor Non-Markovian Task Specifications. In Proceedings of Robotics: Science and Systems ,Freiburg, Germany, June 2019.[13] R. T. Icarte, T. Q. Klassen, R. Valenzano, and S. A. McIlraith. Reward machines: Exploitingreward function structure in reinforcement learning. Journal of Artificial Intelligence Research ,73:173–208, 2022.[14] J. X. Liu, A. Shah, E. Rosen, M. Jia, G. Konidaris, and S. Tellex. Skill transfer for temporally-extended task specifications. arXiv preprint arXiv:2206.05096 , 2022.9[15] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, andI. Polosukhin. Attention is all you need. 
Advances in neural information processing systems ,30, 2017.[16] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, et al. Improving language understandingby generative pre-training. 2018.[17] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models areunsupervised multitask learners. OpenAI blog , 1(8):9, 2019.[18] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan,P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-V oss, G. Krueger, T. Henighan, R. Child,A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray,B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Lan-guage models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, andH. Lin, editors, Advances in Neural Information Processing Systems , volume 33, pages 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf .[19] OpenAI. Gpt-4 technical report, 2023.[20] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y . Zhou, W. Li, and P. J.Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journalof Machine Learning Research , 21(140):1–67, 2020. URL http://jmlr.org/papers/v21/20-074.html .[21] R. T. Icarte, T. Klassen, R. Valenzano, and S. McIlraith. Using reward machines for high-leveltask specification and decomposition in reinforcement learning. In International Conference onMachine Learning , pages 2107–2116. PMLR, 2018.[22] G. De Giacomo, L. Iocchi, M. Favorito, and F. Patrizi. Foundations for restraining bolts: Rein-forcement learning with ltlf/ldlf restraining specifications. In Proceedings of the internationalconference on automated planning and scheduling , volume 29, pages 128–136, 2019.[23] A. Shah, S. Li, and J. Shah. Planning with uncertain specifications (puns). IEEE Robotics andAutomation Letters , 5(2):3414–3421, 2020.[24] A. Shah, P. Kamath, J. A. Shah, and S. Li. Bayesian inference of temporal task specificationsfrom demonstrations. Advances in Neural Information Processing Systems , 31, 2018.[25] M. Vazquez-Chanlatte, S. Jha, A. Tiwari, M. K. Ho, and S. Seshia. Learning task specificationsfrom demonstrations. In Advances in Neural Information Processing Systems 31 , pages 5368–5378. 2018.[26] J. Kim, C. Muise, A. Shah, S. Agarwal, and J. Shah. Bayesian inference of linear temporallogic specifications for contrastive explanations. In IJCAI , 2019.[27] A. Shah, S. Wadhwania, and J. Shah. Interactive robot training for non-markov tasks. arXivpreprint arXiv:2003.02232 , 2020.[28] C. Menghi, C. Tsigkanos, P. Pelliccione, C. Ghezzi, and T. Berger. Specification patterns forrobotic missions. IEEE Transactions on Software Engineering , 47(10):2208–2224, oct 2021.ISSN 1939-3520. doi:10.1109/TSE.2019.2945329.[29] M. Y . Vardi. An automata-theoretic approach to linear temporal logic. Logics for concurrency ,pages 238–266, 1996.[30] R. Gerth, D. Peled, M. Y . Vardi, and P. Wolper. Simple on-the-fly automatic verification oflinear temporal logic. In Protocol Specification, Testing and Verification XV: Proceedings ofthe Fifteenth IFIP WG6. 1 International Symposium on Protocol Specification, Testing andVerification, Warsaw, Poland, June 1995 , pages 3–18. Springer, 1996.10[31] M. Macmahon. Marco: A modular architecture for following route instructions. AAAI Workshop- Technical Report , 01 2005.[32] M. MacMahon, B. 
Stankiewicz, and B. Kuipers. Walk the talk: Connecting language, knowledge,and action in route instructions. In Proceedings of the 21st National Conference on ArtificialIntelligence - Volume 2 , AAAI’06, page 1475–1482. AAAI Press, 2006. ISBN 9781577352815.[33] D. L. Chen and R. J. Mooney. Learning to interpret natural language navigation instructions fromobservations. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence ,AAAI’11, page 859–865. AAAI Press, 2011.[34] J. Kim and R. Mooney. Unsupervised PCFG induction for grounded language learning withhighly ambiguous supervision. In Proceedings of the 2012 Joint Conference on EmpiricalMethods in Natural Language Processing and Computational Natural Language Learning ,pages 433–444, Jeju Island, Korea, July 2012. Association for Computational Linguistics. URLhttps://aclanthology.org/D12-1040 .[35] Y . Artzi and L. Zettlemoyer. Weakly supervised learning of semantic parsers for mappinginstructions to actions. Transactions of the Association for Computational Linguistics , 1:49–62,2013. doi:10.1162/tacl a00209. URL https://aclanthology.org/Q13-1005 .[36] M. Ahn, A. Brohan, N. Brown, Y . Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan,K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in roboticaffordances. arXiv preprint arXiv:2204.01691 , 2022.[37] J. Pan, G. Chou, and D. Berenson. Data-efficient learning of natural language to linear temporallogic translators for robot task specification. arXiv preprint arXiv:2303.08006 , 2023.[38] M. Cosler, C. Hahn, D. Mendoza, F. Schmitt, and C. Trippel. nl2spec: Interactively translatingunstructured natural language to temporal logics with large language models. arXiv preprintarXiv:2303.04864 , 2023.[39] F. Fuggitti and T. Chakraborti. NL2LTL – a python package for converting natural language (NL)instructions to linear temporal logic (LTL) formulas. In AAAI , 2023. System Demonstration.[40] Y . Chen, R. Gandhi, Y . Zhang, and C. Fan. Nl2tl: Transforming natural languages to temporallogics using large language models. arXiv preprint arXiv:2305.07766 , 2023.[41] J. Gu, Z. Lu, H. Li, and V . O. Li. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association forComputational Linguistics (Volume 1: Long Papers) , pages 1631–1640, Berlin, Germany,Aug. 2016. Association for Computational Linguistics. doi:10.18653/v1/P16-1154. URLhttps://aclanthology.org/P16-1154 .[42] J. Lyons. Semantics: Volume 2 , volume 2. Cambridge university press, 1977.[43] A. Neelakantan, T. Xu, R. Puri, A. Radford, J. M. Han, J. Tworek, Q. Yuan, N. Tezak, J. W.Kim, C. Hallacy, et al. Text and code embeddings by contrastive pre-training. arXiv preprintarXiv:2201.10005 , 2022.[44] M. Berg, D. Bayazit, R. Mathew, A. Rotter-Aboyoun, E. Pavlick, and S. Tellex. GroundingLanguage to Landmarks in Arbitrary Outdoor Environments. In IEEE International Conferenceon Robotics and Automation (ICRA) , 2020.[45] T. Wolf, L. Debut, V . Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf,M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y . Jernite, J. Plu, C. Xu, T. L.Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. Transformers: State-of-the-artnatural language processing. In Proceedings of the 2020 Conference on Empirical Methodsin Natural Language Processing: System Demonstrations , pages 38–45, Online, Oct. 2020.11Association for Computational Linguistics. 
URL https://www.aclweb.org/anthology/2020.emnlp-demos.6 .[46] OpenStreetMap contributors. Planet dump retrieved from https://planet.osm.org . https://www.openstreetmap.org , 2017.[47] A. Duret-Lutz, E. Renault, M. Colange, F. Renkin, A. G. Aisse, P. Schlehuber-Caissier,T. Medioni, A. Martin, J. Dubois, C. Gillard, and H. Lauko. From Spot 2.0 to Spot 2.10:What’s new? In Proceedings of the 34th International Conference on Computer Aided Verifica-tion (CAV’22) , volume 13372 of Lecture Notes in Computer Science , pages 174–187. Springer,Aug. 2022. doi:10.1007/978-3-031-13188-2 9.[48] M. B. Dwyer, G. S. Avrunin, and J. C. Corbett. Patterns in property specifications for finite-stateverification. In Proceedings of the 21st international conference on Software engineering , pages411–420, 1999.[49] Boston Dynamics. Spot ®- the agile mobile robot. https://www.bostondynamics.com/products/spot .12Appendix A Specification PatternsWe developed Lang2LTL to ground navigational commands to LTL formulas. We started with thecatalog of robotic mission-relevant LTL patterns for robotic missions by Menghi et al. [28]. Weadopted 15 templates that are relevant to robot navigation and modified some of their patterns tosemantically match our requirements. The complete list of pattern descriptions and the correspondingLTL templates is in Table 4. Note that we use some additional abbreviated temporal operators,specifically “Weak until” W, and the “Strong release” Min terms of standard operators, i.e.,aWb=aU(b∨Ga), and aMb=bU(a∧b).Appendix B Semantic Information from OpenStreetMap DatabaseWe show an example entry of the semantic dataset as follows,{" Jiaho supermarket ": {" addr : housenumber ": "692"," shop ": " supermarket "," opening_hours ": "Mo -Su 08:00 -20:00 "," phone ": " 6173389788 "," addr : postcode ": " 02111 "," addr : street ": " Washington Street "},... ,}Appendix C Implementation Details about Referring Expression GenerationWe prompt the GPT-4 model for paraphrasing landmarks with corresponding OSM databases. Foreach landmark, three referring expressions are generated. 
The prompt for generating referringexpressions by paraphrasing is as follows,13Use natural language to describe the landmark provided in a python dictionary form in a short phrase.Landmark dictionary:’Fortuna Cafe’: {’addr:housenumber’: ’711’, ’cuisine’: ’chinese’, ’amenity’: ’restaurant’, ’addr:city’:’Seattle’, ’addr:postcode’: ’98104’, ’source’: ’King County GIS;data.seattle.gov’, ’addr:street’:’South King Street’ }Natural language:Chinese cafe on South King StreetLandmark dictionary:’Seoul Tofu House & Korean BBQ’: {’addr:housenumber’: ’516’, ’cuisine’: ’korean’,’amenity’: ’restaurant’, ’addr:city’: ’Seattle’, ’addr:postcode’: ’98104’, ’source’: ’KingCounty GIS;data.seattle.gov’, ’addr:street’: ’6th Avenue South’ }Natural language:Seoul Tofu HouseLandmark dictionary:’AI Video’: {’shop’: ’electronics’ }Natural language:AI Video selling electronics...Landmark dictionary:’Dochi’: {’addr:housenumber’: ’604’, ’cuisine’: ’donut’, ’amenity’: ’cafe’ }Natural language:a cafe selling donut named DochiLandmark dictionary:’A V A Theater District’: {’addr:housenumber’: ’45’, ’building’: ’residential’, ’building:levels’: ’30’ }Natural language:A V A residential buildingLandmark dictionary:’HI Boston’: {’operator’: ’Hosteling International’, ’smoking’: ’no’, ’wheelchair’: ’yes’, ’tourism’:’hostel’ }Natural language:HI BostonLandmark dictionary:Appendix D Implementation Details about Referring Expression RecognitionThe prompt for referring expression recognition is as follows,14Your task is to repeat exact strings from the given utterance which possibly refer to certainpropositions.Utterance: move to red roomPropositions: red roomUtterance: visit Cutler Majestic TheaterPropositions: Cutler Majestic TheaterUtterance: robot c move to big red room and then move to green areaPropositions: big red room |green areaUtterance: you have to visit Panera Bread on Beacon Street, four or more than four timesPropositions: Panera Bread on Beacon StreetUtterance: go to Cutler Majestic Theater at Emerson College on Tremont Street, exactlythree timesPropositions: Cutler Majestic Theater at Emerson College on Tremont Street...Utterance: make sure you never visit St. James Church, a Christian place of worship onHarrison Avenue, Dunkin’ Donuts, Thai restaurant Montien, New Saigon Sandwich, or Stuart St @Tremont StPropositions: St. James Church, a Christian place of worship on Harrison Avenue |Dunkin’ Donuts |Thai restaurant Montien |New Saigon Sandwich |Stuart St @ Tremont StUtterance: move the robot through yellow region or small red room and then to large greenroomPropositions: yellow region |small red room |large green roomUtterance:The results of using this prompt to recognize referring expressions with spatial relations are shown inTable 6.Appendix E Implementation Details about Lifted TranslationE.1 Fintuned T5-BaseFor finetuning the T5-Base model, we set the batch size to 40, the learning rate to 10−4, and theweight decay to 10−2. We ran training for 10 epochs and picked the best-performing one for reportingresults.E.2 Finetuned GPT-3The per specification type accuracies and the accuracies for varying number of propositions in theformula while testing the finetuned GPT-3 model on the utterance holdout is depicted in Figure 3.The finetuned GPT-3 model achieves high accuracies across formula types and varying numbers ofpropositions. It shows the benefit of having a large high-quality dataset of natural language commandsrepresenting diverse LTL formulas. 
All previous works also used utterance holdout as their testingmethodology, but their training and test sets contain significantly fewer unique LTL formulas.15E.3 Prompt GPT-4The prompt for end-to-end GPT-4 is as follows,Your task is to translate English utterances into linear temporal logic (LTL) formulas.Utterance: visit bLTL: F bUtterance: eventually reach b and hLTL: & F b F hUtterance: go to h a and bLTL: & F h & F a F bUtterance: proceed to reach h at the next time instant only and only if you see bLTL: G e b X hUtterance: wait at b till you see hLTL: U b hUtterance: go to h in the very next time instant whenever you see bLTL: G i b X hUtterance:E.4 Prompt GPT-3The prompt for end-to-end GPT-3 is the same as the one we used for Prompt GPT-4.E.5 Seq2Seq TransformerWe constructed and trained a transformer model following [ 15]. More specifically, we built themodel’s encoder with three attention layers and decoder with three layers, and we used 512 as theembedding size and 8 as the number of attention heads. For training, we adapted batched trainingwith a batch size equal to 128, learning rate equal to 10−4, and dropout ratio equal to 0.1; the trainingprocess runs for 10 epochs, and we picked the best-performing checkpoint for baseline comparison.E.6 Type Constrained Decoding (TCD)Constrained decoding has been used in generating formal specifications for eliminating syntacticallyinvalid outputs. Due to the sampling nature of NN-based models, generated tokens from the outputlayer can result in syntactical errors that can be detected on the fly, and type-constrained decodingsolves it by forcing the model to only generate tokens following the correct grammar rule. Byeliminating syntax errors, it also improves the overall performance of the system.In practice, type-constrained decoding is implemented at each step of the decoding loop: firstchecking the validity of the output token, then appending the valid token or masking the invalid, andre-generating a new token according to the probability distribution after masking. In addition, wedesign an algorithm to simultaneously enforce the length limitation and syntactical rule by parsingpartial formulas into binary trees. Beyond a given maximum height of the tree, the model is forcedonly to generate propositions but not operators.16Appendix F Implementation Details about Code as PoliciesWe designed two prompts for reproducing Code as Policies: one for code generation and the other forparsing landmarks. The code generation prompt is expected to generate an executable Python scriptthat calls the goto loc() function for traversing through the environment and psrse loc() function toground referring expressions to landmarks, where the landmark resolution prompt is used. 
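The goto_loc() and parse_loc() helpers are only described in prose; the sketch below is our reconstruction of the interface the generated scripts assume, not the code used in the experiments. The toy semantic map graph, the fallback string matching inside parse_loc (which in the experiments is an LLM call using the landmark-resolution prompt reproduced below), and the networkx dependency are all assumptions.

```python
# Hypothetical stubs for the helper interface exposed to Code as Policies.
import networkx as nx

SEMANTIC_INFO = {"bookshelf": {"material": "wood", "color": "brown"},
                 "door": {"material": "steel", "color": "grey"}}
SEMANTIC_MAP = nx.Graph([("bookshelf", "doorway"), ("doorway", "door")])
robot_pose = "doorway"

def parse_loc(phrase: str) -> str:
    """Ground a referring expression to a landmark key of the semantic map."""
    for name, attrs in SEMANTIC_INFO.items():
        if name in phrase or any(value in phrase for value in attrs.values()):
            return name
    raise ValueError(f"could not ground: {phrase}")

def goto_loc(target: str, avoid_locs=()) -> None:
    """Navigate to `target` along the semantic map while skipping avoided nodes."""
    graph = SEMANTIC_MAP.copy()
    graph.remove_nodes_from(n for n in avoid_locs if n not in (target, robot_pose))
    path = nx.shortest_path(graph, source=robot_pose, target=target)
    print("executing waypoints:", path)

goto_loc(parse_loc("wooden brown bookshelf"), avoid_locs=["door"])
```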
The codegeneration prompt for graph search is as follows,# Python 2D robot navigation scriptimport randomfrom utils import goto loc, parse loc# make the robot go to wooden desk.target loc = parse loc(‘wooden desk’)goto loc(target loc)# go to brown desk and then white desk.target loc1 = parse loc(‘brown desk’)target loc2 = parse loc(‘white desk’)target locs = [target loc1, target loc2]for target loc in target locs:goto loc(target loc)# head to doorway, but visit white kitchen counter before that.target loc1 = parse loc(‘white kitchen counter’)target loc2 = parse loc(‘doorway’)target locs = [target loc1, target loc2]for target loc in target locs:goto loc(target loc)# avoid white table while going to grey door.target loc = parse loc(‘grey door’)avoid loc = parse loc(‘white table’)target locs = [target loc]avoid locs = [avoid loc]for loc in target locs:goto loc(loc, avoid locs=avoid locs)# either go to steel gate or doorwaytarget loc1 = parse loc(‘steel gate’)target loc2 = parse loc(‘doorway’)target locs = [target loc1, target loc2]target loc = random.choice(target locs)goto loc(target loc)...# go to doorway three timestarget loc = parse loc(‘doorway’)for in range(3):goto loc(target loc)random loc = target locwhile random loc == target loc:random loc = random.choice(locations)goto loc(random loc)17The landmark resolution prompt is as follows,# Python parsing phrases to locations scriptlocations = [‘bookshelf’, ‘desk A’, ‘table’, ‘desk B’, ‘doorway’, ‘kitchen counter’, ‘couch’,‘door’]semantic info ={“bookshelf”: {“material”: “wood”, “color”: “brown” },“desk A”: {“material”: “wood”, “color”: “brown” },“desk B”: {“material”: “metal”, ‘color’: “white” },“doorway”: {},“kitchen counter”: {“color”: “white” },“couch”: {“color”: “blue”, “brand”: “IKEA” },“door”: {“material”: “steel”, “color”: “grey” },“table”: {“color”: “white” },}# wooden brown bookshelfretval = ‘bookshelf’...locations = [‘bookshelf’, ‘desk A’, ‘table’, ‘desk B’, ‘doorway’, ‘kitchen counter’, ‘couch’,‘door’]semantic info ={“bookshelf”: {“material”: “wood”, “color”: “brown” },“desk A”: {“material”: “wood”, “color”: “brown” },“desk B”: {“material”: “metal”, ‘color’: “white” },“doorway”: {},“kitchen counter”: {“color”: “white” },“couch”: {“color”: “blue”, “brand”: “IKEA” },“door”: {“material”: “steel”, “color”: “grey” },“table”: {“color”: “white” },}# blue IKEA couchretval = ‘couch’Appendix G Implementation Details about Grounded TranslationG.1 CopyNetFor reproducing [ 44], we trained the CopyNet baseline with our grounded dataset preprocessed as itsrequired format. To make a fair comparison on generalization ability, the CopyNet model has onlyseen utterance-formula pairs from the Boston subset, and the evaluation is run on grounded datasetsof the rest 21 cities. For training CopyNet, we followed closely the instructions of the original paperand used the exact same LSTM model structure and pre-computed glove embedding for landmarkresolution. On the hyperparameters, we set the embedding size to 128, the hidden size to 256, thelearning rate to 10−3, and the batch size to 100.G.2 Prompt GPT-4The prompt for end-to-end GPT-4 is as follows. While we tried including a landmark list in theprompt, it was removed in the final version because we observed empirically that Prompt GPT-18Table 1: Dataset ComparisonLang2LTL Lifted CleanUp World NL2TL Wang et al. 
[5]Number of datapoints 49,655 3,382 39,367 6,556Unique formula skeletons 47 4 605 45#Propositions (min, max, mean) (1, 5, 3.79) (1, 3, 1.85) (1, 7, 2.86) (1, 4, 2.01)Formula Length (min, max, mean) (2, 67, 18.89) (2, 7, 4.77) (1, 13, 5.98) (3, 7, 4.48)4 achieved better performance without explicitly giving a list of landmarks during prompt engineering.Your task is to first find referred landmarks from a given list then use them as propositions to translateEnglish utterances to linear temporal logic (LTL) formulas.Utterance: visit Panera Bread sandwich fast food on Stuart StreetLTL: F panera breadUtterance: eventually reach Wang Theater, and The Kensington apartmentsLTL: & F wang theater F the kensington...Utterance: make sure that you have exactly three separate visits to Seybolt ParkLTL: M & seybolt park F & ! seybolt park F & seybolt park F & ! seybolt park F seybolt park|!seybolt park G |seybolt park G |! seybolt park G |seybolt park G |! seybolt park G |seybolt parkG ! seybolt parkUtterance:Appendix H Dataset DetailsH.1 Quantifying diversity of temporal commandsWe quantify the diversity of the temporal commands a system is tested on using the temporal formulaskeletons in the evaluation corpus of commands. We propose that each novel dataset should becharacterized along the following dimensions, and as an example, we provide the respective valuesfor the Lang2LTL dataset (lifted and grounded OSM dataset) described in Section 6.4 of the mainpaper.1. Number of semantically unique formulas: 472. Number of propositions per formula: minimum: 1, maximum: 5, average: 3.793. Length of formulas: minimum: 2, maximum: 67. average: 18.894. V ocabulary size (for grounded datasets): 17575. Linguistic diversity of utterances: self-BLEU score: 0.85Table 1 compares our proposed lifted dataset and other datasets proposed in prior work.Appendix I Detailed Result Analysis on Lifted TranslationWe further analyzed the results for each model and holdout type for the lifted translation problem. Inparticular, we computed the accuracies per each formula type and the number of unique propositionsrequired to construct the target formula. This analysis provides insights into the sensitivity of themodels to particular templates and formula lengths.The accuracies of each model and holdout type categorized by formula types are depicted in Figure1. We observe that for both the finetuned models (Finetuned T5 and Finetuned GPT-3), the model19achieves high accuracies across various formula types for Utterance Holdout . Note that the perfor-mance across types is more uniform for the Finetuned GPT-3 than the Finetuned T5-Base model.Next, we note that Prompt GPT-4 achieves better accuracies as compared to Prompt GPT-3 across allevaluations.We observe that the performance of the finetuned models is more unbalanced across different formulatypes for the Formula Holdout test case. In comparison, Prompt GPT models achieve non-zeroaccuracies across all formula types. Once again, Prompt GPT-4 outperforms Prompt GPT-3. Wenote that adding type-constrained decoding to Finetuned T5-Base during inference only marginallyimproved Utterance Holdout , but significantly improved Formula andType Holdout , which impliesFinetuned T5-Base model is more likely to produce syntactically incorrect output when the groundingformula instance or type have not seen during training.Finally, we note that only the prompt GPT models achieve meaningful accuracies in the Type Holdoutscenarios. 
However, even in Type Holdout , the accuracies are concentrated on formula types thatonly had short lengths or shared subformulas with types seen during training. We can conclude thatFormula andType Holdout remain challenging paradigms of generation and an open problem forautomated translation of language commands into formal specifications.Figure 1: The accuracies per grounding formula types of six lifted translation modelsNext, we repeated the above analysis but categorized accuracies by the number of unique propositionsthat appear within a formula. The results are depicted in Figure 2.Figure 2: The accuracies per number of unique propositions of six lifted translation modelsHere we note that in the Utterance Holdout test, the finetuned models demonstrated balanced perfor-mance across the dataset, whereas both prompt GPT models demonstrated degraded performancewhen the number of unique propositions in the formula was increasing. Subsequently, the degradedperformance on longer formulas was apparent even within the Formula andType holdout domains.In contrast, the three finetuned models performed better for longer formulas in Formula Holdout .We hypothesize that this is because the finetuned models were more able to generalize to differentformula lengths of the same template (in particular, the templates that required temporal ordering20constraints to be encoded) as compared to the prompt completion-based approaches. In addition,there are more samples in the training set for longer formulas due to permutations of propositions.As finetuning an LLM on the target task produced the best results for Utterance Holdout , we furtheranalyzed the cause of errors for the instances where the lifted translation was incorrect. We categorizethe errors as follows:1.Syntax Errors: The formula returned by the lifted translation module was not a valid LTLformula.2.Misclassifed formula type: The lifted translation module returns an identifiable but incor-rect formula type that did not correspond to the input command.3.Incorrect propositions: The returned formula was of the correct formula type but had theincorrect number of propositions.4.Incorrect permutation: The formula was of the correct template class and had the rightnumber of propositions, but the propositions were in the wrong location within the formula.5.Unknown template The returned formula was a valid LTL formula but did not belong toany known formula types.Figure 3 to Figure 5 depict the relative frequencies of the error cases as a pie chart for the threefinetuned models. 
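The taxonomy above can be made operational with a short sketch that sorts a predicted prefix formula into these categories given the ground truth. The arity table, the skeleton canonicalization, and the use of exact skeleton matching against known templates are simplifications of our own; the analysis in the paper additionally relies on Spot for semantic equivalence.

```python
# Illustrative error categorization for predicted prefix-notation LTL formulas.
from typing import List, Set, Tuple

ARITY = {"!": 1, "X": 1, "F": 1, "G": 1,
         "&": 2, "|": 2, "U": 2, "W": 2, "M": 2, "i": 2, "e": 2}

def is_valid_prefix(tokens: List[str]) -> bool:
    """A prefix formula is valid iff every token is consumed exactly once."""
    needed = 1
    for tok in tokens:
        if needed == 0:
            return False
        needed += ARITY.get(tok, 0) - 1
    return needed == 0

def skeleton(tokens: List[str]) -> Tuple[str, ...]:
    mapping, out = {}, []
    for tok in tokens:
        if tok in ARITY:
            out.append(tok)
        else:
            mapping.setdefault(tok, chr(ord("a") + len(mapping)))
            out.append(mapping[tok])
    return tuple(out)

def props(tokens: List[str]) -> List[str]:
    return [t for t in tokens if t not in ARITY]

def categorize(pred: List[str], gold: List[str], known: Set[Tuple[str, ...]]) -> str:
    if not is_valid_prefix(pred):
        return "syntax error"
    if skeleton(pred) == skeleton(gold):
        return "correct" if props(pred) == props(gold) else "incorrect permutation"
    if skeleton(pred) not in known:
        return "unknown template"
    if len(set(props(pred))) != len(set(props(gold))):
        return "incorrect propositions"
    return "misclassified formula type"

gold = "& F a U ! a b".split()
known = {skeleton(gold), skeleton("F a".split())}
print(categorize("& F b U ! b a".split(), gold, known))  # incorrect permutation
print(categorize("F a U".split(), gold, known))          # syntax error
```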
Note that returning unknown formula templates with the correct syntax was themost common cause of error in the lifted translation based on all finetuned models.Figure 3: Error frequencies ofFinetuned T5-Base with TCDFigure 4: Error frequencies ofFinetuned T5-BaseFigure 5: Error frequencies ofFinetuned GPT-3Since Finetuned GPT-3 achieves the best generalization across formula types, and type-constraineddecoding (TCD) during inference significantly improves the translation accuracies for unseen formulainstances and types, the combination of large language models and TCD is by far the best approachfor grounding language commands for temporal tasks.Appendix J Robot DemonstrationJ.1 Indoor Environment #1The semantic information of landmarks in the first household environment is as follows,{" bookshelf ": {" material ": " wood "," color ": " brown "}," desk A":{" material ": " wood "," color ": " brown "}," desk B": {" material ": " metal "," color ": " white "},21" doorway ": {}," kitchen counter ": {" color ": " white "}," couch ": {" color ": " blue "," brand ": " IKEA "}," door ": {" material ": " steel "," color ": " grey "}," table ": {" color ": " white "}}Natural language commands used to test our system Lang2LTL and Code as Polices [ 6] are shown inTable 2.J.2 Indoor Environment #2The semantic information of landmarks in the second household environment is as follows,{" hallway A": {" decoration ": " painting "}," hallway B": {" decoration ": " none "}," table A": {" location ": " kitchen "," material ": " metal "," color ": " blue "}," table B": {" location ": " atrium "," material ": " metal "," color ": " white "}," classroom ": {" door ": [" glass ", " grey "]}," elevator ": {" color ": " purple "}," staircase ": {}," front desk ": {}," office ": {" door ": [" wood ", " yellow "]},}Natural language commands used to test our system Lang2LTL and Code as Polices [ 6] are shown inTable 3.22(a) RER Accuracy vs. Command Complexity (b) REG Accuracy vs. RE ComplexityFigure 6: Figure 6a shows the accuracies of the referring expression recognition (RER) module asthe complexity of commands (measured by the number of referring expressions in the command)increases. Figure 6b shows the accuracy of the referring expression grounding (REG) module as thecomplexity of REs (measured by string length) increases.23Table 2: Commands for Robot Demonstration in Indoor Environment #1Navigational Command Lang2LTL Result Code as Policies Result1. go to brown bookshelf, metal desk, wooden desk,kitchen counter, and the blue couch in any order success success2. move to grey door, then bookrack, then brown desk,then counter, then white desk success success3. visit brown wooden desk but only after bookshelf success misunderstand the task4. go from brown bookshelf to white metal deskand only visit each landmark one time success misunderstand the task5. go to brown wooden desk exactly onceand do not visit brown desk before bookshelf success inexecutable6. go to white desk at least three times success inexecutable7. go to wooden bookshelf at least five times success success8. visit bookshelf at most three times success success9. visit counter at most 5 times success success10. go to wooden desk exactly three times success misunderstand the task11. move to brown wooden desk exactly 5 times success inexecutable12. go to doorway exactly two times,in addition always avoid the table success success13. 
go to brown desk only after visiting bookshelf,in addition go to brown desk only after visiting white desk success misunderstand the task14. visit wooden desk exactly two times,in addition do not go to wooden desk before bookrack success inexecutable15. visit wooden desk at least two times,in addition do not go to wooden desk before bookshelf success inexecutable16. visit the blue IKEA couch, in additionnever go to the big steel door success success17. visit white kitchen counter then go to brown desk,in addition never visit white table success success18. go to the grey door, and only then go to the bookshelf,in addition always avoid the table success misunderstand the task19. go to kitchen counter then wooden desk,in addition after going to counter, you must avoid white table success misunderstand the task20. Go to bookshelf, alternatively go to metal desk success misunderstand the task21. Go to counter, alternatively go to metal desk success misunderstand the task22. Go to the counter, but never visit the counter unsatisfiable. abort correctly stop execution correctly23. do not go to the wooden desk until bookshelf,and do not go to bookshelf until wooden desk unsatisfiable. abort correctly stop execution correctly24. go to brown desk exactly once,in addition go to brown desk at least twice unsatisfiable. abort correctly misunderstand the task25. find the kitchen counter, in addition avoid the doorway unsatisfiable. abort correctly stop execution correctly26. move to couch exactly twice,in addition pass by counter at most once unsatisfiable. abort correctly stop execution correctly27. navigate to the counter then the brown desk,in addition after going to the counter, you must avoid doorway unsatisfiable. abort correctly misunderstand the task28. Visit the counter at least 2 times and at most 5 times incorrect grounding. OOD inexecutable29. visit counter at least six times incorrect grounding. OOD success30. either go to bookshelf then desk A, or go to couch incorrect grounding. OOD misunderstand the task24Table 3: Commands for Robot Demonstration in Indoor Environment #2Navigational Command Lang2LTL Result Code as Policies Result1. navigate to the office with the wooden door, the classroom withglass door and the table in the atrium, kitchen counter,and the blue couch in any order success success2. go down the hallway decorated with paintings,then find the kitchen table, then front desk, then staircase success success3. navigate to classroom but do not visit classroombefore the white table in atrium success misunderstand the task4. only visit classroom once, and do not visit classroomuntil you visit elevator first success success5. Go to the staircase, front desk and the white table in the atriumin that exact order. You are not permitted to revisit any of these locations success inexecutable6. go to the purple elevator at least five times success inexecutable7. visit the kitchen table at most three times success success8. navigate to the classroom exactly four times success inexecutable9. go to the front desk then the yellow office door,in addition do not visit the classroom with glass door success success10. go to the stairs then the front desk,in addition avoid purple elevator success success11. move to elevator then front desk,in addition avoid staircase success success12. go to front desk exactly two times,in addition avoid elevator success inexecutable13. Go to elevator, alternatively go to staircase success misunderstand the task14. 
Go to the front desk at least two different occasions,in addition you are only permitted to visit the staircase at most once success misunderstand the task15. Visit the elevator exactly once, in addition visit the front deskon at least 2 separate occasions success inexecutable16. Go to the office, in addition avoid visiting the elevator and the classroom success success17. Visit the front desk, in additionyou are not permitted to visit elevator and staircase success success18. Visit the purple door elevator, then go to the front deskand then go to the kitchen table,in addition you can never go to the elevator once you’ve seen the front desk success inexecutable19. Visit the front desk then the white table, in addition if you visitthe staircase you must avoid the elevator after that success inexecutable20. Go to the classroom with glass door,but never visit the classroom with glass door unsatisfiable. abort correctly stop execution correctly21. do not go to the white table until classroom,and do not go to the classroom until white table unsatisfiable. abort correctly stop execution correctly22. go to kitchen table exactly once,in addition go to kitchen table at least twice unsatisfiable. abort correctly misunderstand the task23. find the office, in addition avoid visiting the front deskand the classroom and the table in atrium unsatisfiable. abort correctly stop execution correctly24. move to the kitchen table exactly twice,in addition pass by hallway decorated by paintings at most once unsatisfiable. abort correctly misunderstand the task25. navigate to the kitchen table then the front desk,in addition after going to the kitchen table,you must avoid hallway decorated with paintings unsatisfiable. abort correctly misunderstand the task26. Go to the front desk at least 4 different occasions,additionally, you are only permitted to visit the staircase at most once incorrect grounding. OOD inexecutable27. Visit the front desk, additionally if you visit the elevator you must visit the office after that incorrect grounding. OOD success28. Visit the front desk, additionally you visitthe elevator you must visit the office after thatthe white table and the classroom incorrect grounding. OOD misunderstand the task25Table 4: Specification Patterns for Lang2LTLSpecification Type Explanation FormulaVisit Visit a set of waypoints {p1, p2. . . , p n}in any orderVni=1FpiSequence Visit Visit a set of waypoints {p1, p2. . . , p n}, but ensurethatp2is visited at least once after visiting p1, and so onF(p1∧F(p2∧. . .∧F(pn)). . .)Ordered Visit Visit a set of waypoints {p1, p2. . . , p n}, but ensurethatp2is never visited before visiting p1F(pn)∧Vn−1i=1(¬pi+1Upi)Strictly Ordered Visit Visit a set of waypoints {p1, p2. . . , p n}, but ensurethatp2is never visited before visiting p1, additionally,ensure that p1is only visited on a single distinct visitbefore completing the rest of the taskF(pn)∧Vn−1i=1(¬pi+1Upi)∧Vn−1i=1(¬piU(piU(¬piUpi+1)))Patrolling Visit a set of wwaypoints {p1, p2. . . 
, p n}infinitelyoftenVni=1GFpiBound Delay If and only if the proposition ais ever observed, then theproposition bmust hold at the very next time stepG(a↔Xb)Delayed Reaction If the proposition ais ever observed, then its response isto ensure that the proposition bholds at some point in thefutureG(a→Fb)Prompt Reaction If the proposition ais ever observed, then the propositionbmust hold at the very next time stepG(a→Xb)Wait The proposition amust hold till the proposition bbecomestrue, and bmay never holdaWbPast Avoidance The proposition amust not become true until the proposi-tionbholds first. bmay never hold¬aWbFuture Avoidance Once the proposition ais observed to be true, the propo-sitionbmust never be allowed to become true from thatpoint onwards.G(a→XG¬b)Global Avoidance The set of propositions {p1, p2. . . , p n}must never beallowed to become trueVni=1G(¬pi)Upper Restricted Avoidance The waypoint acan be visited on at most nseparate visits For n = 1 ,¬F(a∧(aU(¬a∧(¬a UFa))))Forn= 2 ,¬F(a∧(a U (a∧(¬aUF (a∧(aU(¬a∧(¬aUFa))))))))Lower Restricted Avoidance The waypoint amust be visited on at least nseparatevisitsForn= 1,¬Faforn= 2,F(a∧(aU(¬a∧(¬a UFa))))Exact Restricted Avoidance The waypoint amust be visited on exactly nseparatevisitsForn= 1,aM(¬a∨G(a∨G¬a))Forn= 2,(a∧F(¬a∧Fa))M(¬a∨G(a∨G(¬a∨G(a∨G¬a))))Table 5: Example Commands from OpenStreetMap DatasetLTL Type Command (with two referring expressions)Visit move to Thai hot pot restaurant on Kneeland Street, and Vietnamese restaurant on Washington StreetSequence Visit visit Subway sandwich shop on The Plaza, followed by Zada Jane’s Cafe on Central AvenueOrdered Visit find Local Goods Chicago gift shop, but not until you find Currency exchange bureau, firstStrictly Ordered Visit reach Citibank branch, and then Cutler Majestic Theater on Tremont Street, in that exact order without repetitionsPatrolling keep on visiting US Post Office on West Devon Avenue, and Kanellos Shoe Repair shopBound Delay you must go to Purple Lot parking area, immediately after you visit Royal Nails & Spa on South Main Street,and you can not go to Purple Lot parking area, any other timeDelayed Reaction you must visit Peruvian restaurant on Virginia Avenue, once you visit PNC BankPrompt Reaction immediately after you go to Beachside Resortwear clothing store, you must go to Walgreens PharmacyWait you can not go to other place from Publix supermarket, unless you see Beaches MuseumPast Avoidance avoid visiting IES Test Prep school, till you observe bookstore on Elizabeth StreetFuture Avoidance never go to Commercial building on 5th Avenue, once you go to Cafe MetroGlobal Avoidance make sure to never reach either Citibank, or Seybolt ParkUpper Restricted Avoidance go to Cocktail bar, at most twiceLower Restricted Avoidance you have to visit Main Branch of CoGo Bike Share Library for bicycle rental, two or more than two timesExact Restricted Avoidance navigate to Art shop on Bannock Street, exactly twice26Table 6: Results of Recognizing Referring Expression with Spatial RelationsNavigational Command Referring Expression(s) Correctness1. go to back of Common Market back of Common Market correct2. always avoid entrance and exit of Little Sugar Creek, but visit left and right of Little Sugar Creek entrance and exit of LittleSugar Creek |left and rightof Little Sugar Creekcorrect3. stay at intersection of Thayer street and Waterman street intersection of Thayer streetand Waterman streetcorrect4. 
move forward to the south of Edgebrook Coffee Ship. Referring expression(s): south of Edgebrook Coffee Ship. Result: correct
5. go to east of Chinatown, without visiting west of New Saigon Sandwich, then go to front of New Saigon Sandwich, without visiting rear of Dumpling Cafe, then go to rear of Dumpling Cafe, without visiting north of Emerson College - Little Building, finally go to south Emerson College - Little Building, while only visiting each location once. Referring expression(s): east of Chinatown | west of New Saigon Sandwich | front of New Saigon Sandwich | rear of Dumpling Cafe | rear of Dumpling Cafe | north of Emerson College - Little Building | south Emerson College - Little Building. Result: correct
6. go around big blue box. Referring expression(s): big blue box. Result: incorrect
7. go to exit of blue area through between red room and blue one. Referring expression(s): exit of blue area | red room | blue one. Result: incorrect
8. go to left of CVS and the stay on bridge. Referring expression(s): left of CVS | bridge. Result: incorrect
9. go pass right of Dairy Queen to left of Harris Teeter, end up at entrance of Wells Fargo. Referring expression(s): Dairy Queen | Harris Teeter | Wells Fargo. Result: incorrect
10. move to My Sister’s Closet and stop close to bus stop near Ace Hardware. Referring expression(s): My Sister’s Closet | bus stop near Ace Hardware. Result: incorrect |
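For reference, the specification patterns of Table 4 can be instantiated mechanically from a list of atomic propositions. The sketch below builds a few of them as plain formula strings in an ASCII LTL syntax (F, G, X, &, !, ->); the waypoint proposition names are hypothetical stand-ins for the grounded landmarks appearing in the commands above, and the string forms are an illustration rather than the exact translation pipeline.

```python
def visit(props):
    """Visit each waypoint at least once: the conjunction of F p_i (Table 4, Visit)."""
    return " & ".join(f"F({p})" for p in props)

def sequence_visit(props):
    """Visit waypoints in sequence: F(p1 & F(p2 & ... F(pn))) (Table 4, Sequence Visit)."""
    formula = f"F({props[-1]})"
    for p in reversed(props[:-1]):
        formula = f"F({p} & {formula})"
    return formula

def global_avoidance(props):
    """Never visit any listed waypoint: the conjunction of G(!p_i) (Table 4, Global Avoidance)."""
    return " & ".join(f"G(!{p})" for p in props)

def future_avoidance(a, b):
    """Once a is seen, b must never become true afterwards: G(a -> X G !b)."""
    return f"G({a} -> X(G(!{b})))"

# hypothetical waypoint propositions
print(visit(["front_desk", "kitchen_table"]))
print(sequence_visit(["front_desk", "white_table", "kitchen_table"]))
print(global_avoidance(["elevator", "staircase"]))
print(future_avoidance("front_desk", "elevator"))
```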
mTZcxs2O7k | Batch Differentiable Pose Refinement forIn-The-Wild Camera/LiDAR Extrinsic CalibrationLanke Frank Tarimo FuUniversity of Oxfordfu@robots.ox.ac.ukMaurice FallonUniversity of Oxfordmfallon@robots.ox.ac.ukFigure 1: Coarse-to-fine refinement of the LiDAR-to-camera extrinsic parameters. The matchingappearance between the LiDAR and camera features is trained using only ground-truth extrinsicparameters for self-supervision. During training, batched refinement helps retain difficult samplesthat individually would have been discarded. During inference, we show that batched refinementachieves state-of-the-art zero-shot transfer. The rightmost column shows the refined overlay of Li-DAR points in the image.Abstract: Accurate camera to LiDAR (Light Detection and Ranging) extrinsiccalibration is important for robotic tasks carrying out tight sensor fusion — suchas target tracking and odometry. Calibration is typically performed before de-ployment in controlled conditions using calibration targets, however, this limitsscalability and subsequent recalibration. We propose a novel approach for target-free camera-LiDAR calibration using end-to-end direct alignment which doesn’tneed calibration targets. Our batched formulation enhances sample efficiency dur-ing training and robustness at inference time. We present experimental results,on publicly available real-world data, demonstrating 1.6cm/0.07◦median accuracywhen transferred to unseen sensors from held-out data sequences. We also showstate-of-the-art zero-shot transfer to unseen cameras, LiDARs, and environments.Keywords: Sensor Fusion, Extrinsic Calibration, Differentiable Optimization1 IntroductionIn many multi-sensor robotic setups, information fusion between any two sensors requires accurateknowledge of the relative transformation between the sensors — the extrinsic parameters. In the caseof sensor fusion between a camera and a LiDAR, the extrinsic parameters along with the intrinsicparameters of the camera are used to determine point-to-pixel correspondence displayed in Fig. 1.This correspondence enables fusion in downstream tasks such as object detection, tracking, and ego-motion estimation. Camera/LiDAR extrinsic calibration ‘in-the-wild’ — meaning in uncontrolledenvironments without specialized targets — is difficult due to the domain gap between the twosensors. Cameras register textural information but can’t directly measure geometry whereas LiDARs7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.measure geometry but can’t register texture. Additionally, cameras are passive sensors and maycapture illumination variations like shadows. In contrast, LiDARs, which produce their own light,don’t detect shadows in the same visual range. Given these challenges, in-the-wild camera/LiDARextrinsic calibration is still an active research question.In this work, we present a framework for camera/LiDAR extrinsic calibration that:• Can recover the accurate extrinsic parameters from a wide range of initial estimates.• Learns relevant features for pose alignment from both camera and LiDAR input automati-cally.• Generalizes to sensors and environments not encountered during training.We formulate the camera/LiDAR extrinsic calibration problem as one of batch differentiable directalignment – aligning a batch of learned features from the LiDAR domain to their correspondingbatch of deep image features, guided by deep feature gradients in the image. The end-to-end dif-ferentiable nature of our formulation relaxes the need for manual feature tuning. 
We show in ourexperiments that not only do we achieve state-of-the-art performance on the training sensor suite(with median translation and rotation errors of 1.6cm and 0.07◦), but our method also generalizesto unseen sensor models and environments — with results demonstrated using a variety of commondatasets.2 Related WorksClassical solutions to the problem of target-free camera/LiDAR extrinsic calibration form two cat-egories: The first category exploits correlations between the image intensity value and the LiDARreflectance value [1, 2, 3]. These methods perform well under assumptions of uniform lighting in theenvironment but still require an initial guess which is close to the ground truth extrinsic parameters.The other category of methods maps geometric (e.g. depth or normal) discontinuities in the LiDARscan to image intensity discontinuities i.e. image gradients [4, 5, 6]. This use of local image gradientinformation can be augmented by local intensity normalization, which improves the robustness ofthese methods in scenes with varying illumination sources. Still, their performance in cluttered en-vironments is limited by the abundance of local minima. Both these categories of classical methodstypically require manual parameter tuning for each unique sensor suite and as such struggle to beadapted to different configurations and manufacturers.More recent works tackle the target-free calibration problem using deep-learned approaches. Weclassify these works into two categories: i) regression-based methods [7, 8] that align camera andLiDAR features using parameters regressed by a deep neural network. While these methods demon-strate impressive accuracy on held-out datasets of their training environment, their regression-basednature makes a zero-shot transfer to other datasets difficult since the neural network is biased tooutput extrinsic parameters in the distribution seen during training. Another, newer category ofmethods employs differentiable optimization for either (ii) indirect feature point matching via non-linear perspective-n-point [9]; or (iii) calibration-flow refinement [10] to align LiDAR-extracteddeep features to their corresponding pixel locations in the image. While both these methods gener-alize better than regression-based methods, [10] only recovers calibration over a narrow region ofoperation and [9] achieves significantly less accurate results.RGKCNet [9] and DXQNet [10] are of a different category of differentiable pose alignment com-pared to our method, in that they are forms of indirect alignment. RGKCNet learns 3-D point to 2-Dpixel correspondences, and thus incorrectly models degenerate features such as lamp posts and thintrees which are line constraints and not point constraints. DXQNet learns a 2-dimensional weightingterm which is used to weight the per-point alignment along XY-axis aligned directions. This putsthe burden on the network to not just learn features but to also accurately model their orientations.Our method overcomes this challenge by explicitly computing deep image gradients, naturally dis-2Figure 2: An overview of one iteration of the batched differentiable alignment. Batches of pairs ofimages and voxelized point clouds are passed through their respective U-Net feature extractors. Eachsparse voxel feature is warped to the image using the latest transform parameters and the camera in-trinsic parameters and registers a residual for its difference to the image feature at the correspondingpixel location. 
This residual along with the image gradient and the Jacobian of the projected pixellocation with respect to the transform parameters forms the signal for the optimization.tinguishing between line and point constraints — allowing the network to focus simply on learninguseful features.3 MethodWe take inspiration from methods for scene-agnostic visual localization. Like PixLoc [11], we avoidoverfitting pose estimation into a scene memorization problem by performing deep feature alignmentusing differentiable optimization. This allows us to decouple the task of projective geometry, in poserefinement, from the task of learning features. In doing so, we translate the task of achieving zero-shot transfer for camera/LiDAR extrinsic calibration, into the task of learning camera and LiDARfeatures that transfer to unseen environments, captured by different sensors.3.1 Problem formulationGiven a point cloud, a set of N3-D points, PL∈R3×Nrepresented in the LIDAR reference frame(denoted by the subscript L) and a set of images, we want to determine the extrinsic parametersRCL∈SO(3),tCL∈R3, such that,PC=RCLPL+tCL. (1)Where PCare the corresponding coordinates of PL, from the camera’s reference frame. For brevity,we use (RCL,tCL) =TCL∈SE(3)to denote the rigid body transform and omit the reference framescripts i.e. T≜TCL.3.2 Camera/LiDAR extrinsic calibration as differentiable direct alignmentWith direct alignment, the transform parameters are optimized to minimize the misalignment in theappearance of the signals registered by different sensors. In our case, our two sensors are the LiDARand the camera. Instead of attempting to match the raw outputs from sensors, such as the intensityfrom LiDAR to image intensity, which has proven problematic in difficult lighting situations [3], webegin by extracting deep features from the raw inputs.From the LiDAR side, we map the 3-D points using the initial guess of the extrinsic parametersto get ˆP=ˆRPL+ˆt.We then voxelize these points and extract, using a sparse 3-D CNN, aset(ˆPp,FpL,wpL)for each p∈[1, ..., P ]of aP-level multi-scale feature pyramid. At each level,ˆPp∈R3×Npare the Npvoxel centroids, and FpL∈RDp×Npare the Npcorresponding deep featurevectors at each voxel centroid of dimension Dp. Lastly, wpLis the vector of Nplearned weights withelements in the range [0,1].3From the camera side, we use a 2-D CNN to extract from the camera image I∈RW×H×3a pyramidof deep features FpC∈RWp×Hp×Dp, and corresponding weights WpC∈RWp×Hp. Similarly,p∈[1, ..., P ]stands for one of the pyramid levels, each matching in scale with their correspondingLiDAR extractor pyramid levels.At each i-th iteration of the optimization, the misalignment between the deep features extracted fromthej-th LiDAR point and camera features at its corresponding projected point is given by,rpj= (FpLj−FpC[Πp(RiˆPpj+ti)])∈RDp. (2)Πpis the projection function of a 3-D point in the camera reference frame onto the image planeat level p, and [·]denotes sub-pixel lookup by bi-linear interpolation. We’ve also introduced newvariables Riandtiwhich are optimization parameters that we iterate and initialize as R0=I3×3andt0=0.Similar to [11], we formulate a total cost function from these residuals in the form,Ep(Ri,ti) =NpXjwljwcjρ(∥rpj∥2), (3)where wljis the j-th element of wpL, the LiDAR-learned weighting, and wcjis the camera-learnedweighting sub-pixel interpolated at the pixel location of the j-th LiDAR point projected into theimage plane at level p. Lastly, ρis the learnable robust cost function [12]. 
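To make the residual and cost of Eqs. (2)-(3) concrete, the following minimal NumPy sketch evaluates them at a single pyramid level. The pinhole projection, the random feature maps, the uniform weights, and the log-based robust kernel are illustrative stand-ins only; the actual method uses learned multi-scale features, learned per-point and per-pixel weights, and the learnable robust cost of [12].

```python
import numpy as np

def project(K, pts_cam):
    """Pinhole projection of 3-D points (N,3) in the camera frame to pixel coordinates (N,2)."""
    uvw = pts_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def bilinear_lookup(feat_map, uv):
    """Sub-pixel lookup of a (H,W,D) feature map at pixel locations (N,2)."""
    H, W, _ = feat_map.shape
    u, v = uv[:, 0], uv[:, 1]
    u0 = np.clip(np.floor(u).astype(int), 0, W - 2)
    v0 = np.clip(np.floor(v).astype(int), 0, H - 2)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    f00, f01 = feat_map[v0, u0], feat_map[v0, u0 + 1]
    f10, f11 = feat_map[v0 + 1, u0], feat_map[v0 + 1, u0 + 1]
    return (1 - dv) * ((1 - du) * f00 + du * f01) + dv * ((1 - du) * f10 + du * f11)

def direct_alignment_cost(R, t, K, pts_lidar, feat_lidar, w_lidar, feat_img, w_img):
    """Eq. (2)-(3): per-point feature residuals and the robust weighted cost at one level.
    The log-based kernel below is a placeholder for the learnable robust cost of [12]."""
    pts_cam = pts_lidar @ R.T + t                       # warp voxel centroids into the camera frame
    uv = project(K, pts_cam)                            # projected pixel locations
    r = feat_lidar - bilinear_lookup(feat_img, uv)      # feature residual, Eq. (2)
    rho = np.log1p(np.sum(r * r, axis=1))               # placeholder robust cost rho(||r||^2)
    w_c = bilinear_lookup(w_img[..., None], uv)[:, 0]   # camera weight at the projected pixel
    return np.sum(w_lidar * w_c * rho)                  # weighted sum, Eq. (3)

# toy usage with random stand-in features
rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = rng.uniform([-2, -1, 4], [2, 1, 8], size=(50, 3))      # points in front of the camera
E = direct_alignment_cost(np.eye(3), np.zeros(3), K, pts,
                          rng.normal(size=(50, 16)), np.ones(50),
                          rng.normal(size=(480, 640, 16)), np.ones((480, 640)))
print(E)
```

In the full pipeline this cost is not evaluated in isolation; its residuals and Jacobians drive the coarse-to-fine Gauss-Newton updates described next.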
To robustly minimizethis non-linear least-squares problem we perform the alignment in a coarse-to-fine fashion, usingsolutions from the previous coarser level of the pyramid as the starting point in the problem of thefiner level. At each level, we use the learned Levenberg-Marquardt algorithm [11], parameterizingtransform updates using ξ∈se(3). As such, we stack all the residual terms at level pintorp∈RDpNpand formulate each row of the Jacobian J∈RDpNp ×6and the Hessian H∈R6×6withrespect to ξas,Jk+(j−1)Dp=∂rpjk∂ˆp∂ˆp∂ξ,andH=J⊺WJ, (4)where j∈[1, Np]is the j-th 3-D point, k∈[1, Dp]denotes the k-th dimension of the deep-learnedfeatures at level p, and W∈RDpNp×DpNpis a block diagonal matrix with Npblocks of sizeDp×Dpeach, where the j-th block has uniform diagonal weights wljwcj. With these defined, wecompute a Gauss-Newton gradient step in the direction of decreasing cost with ξ=−H−1J⊺Wr,and update our transform estimate using,Ri+1ti+10 1= exp(bξ)⊺Riti01. (5)We visualize one of these steps in Fig. 2The training loss: During training, we perform these alignment updates for fixed Msteps at eachpyramid level, yielding for each level, TpM. We supervise these transforms with the ground truthusing the reprojection error of each 3-D point:L=XpXjρ(∥Πp( ̄RˆPpj+ ̄t)−Πp(RpMˆPpj+tpM)∥2). (6)Note that since the points ˆPpjare LiDAR points projected by the initial guess of the extrinsic pa-rameters, our supervision signal ̄Tis given by, ̄T=TˆT−1, where Tis the ground truth extrinsicparameters and ˆTis the initial guess applied to the LiDAR points before voxelization.3.3 Batch SE(3) alignmentAt test time: When we make the added assumption that we are solving for the same transformparameters across a batch of image/point cloud pairs at test time, the change to our algorithm is4Figure 3: PCA feature visualization of a feature-deprived scene (top) and a feature-rich scene (bot-tom). During training, our batch formulation keeps the harder example at the top from diverging.Consequently, we can still learn the sparse amount of features it does have e.g. the ground to hedgedifferences in the middle columns, and the salient post on the rightmost column.rather straightforward. Where we used to have, at each pyramid level, the residual rp∈RDpNp,we now stack all the residuals across the batch together and get the vector rpB∈RDpNBwhereNB=PBbNpbis the total number of 3-D points in the batch at level p. Then, we update Eq. (4)accordingly and solve the optimization steps just as we did in the single sample case. This techniqueis deployed in most existing non-learning-based target-free calibration works [4, 3, 2], where itis interpreted as making the calibration cost function Eq. (3) smoother and more convex. To oursurprise, this simple yet very effective scheme has so far not been deployed in deep-learned camera-LiDAR extrinsic calibration.At training time While at first glance it may seem impossible to perform batch pose alignmentduring training since we might encounter camera/LiDAR pairs with heterogeneous extrinsic param-eters, note that the pose alignment is performed relative to the initial guess (Eq. (5)). So while theactual camera/LiDAR relative transforms may differ across the batch, we can independently pickinitial guesses for each sample in the batch such that the relative transform from the initial guess tothe ground truth is identical. Mathematically, this gives for each b-th sample in the batch:ˆTb= ∆T−1Tb. (7)Initializing the initial guess of each sample using Eq. 
(7) enables joint optimization across all sam-ples of the entire batch, allowing us to learn features from even hard examples where individuallythe optimization would have diverged (see Fig. 3).4 Training SetupTo test the capability of our framework to perform accurate zero-shot transfer to unseen environ-ments, we’ve set up our experiments to train solely on one dataset, and subsequently evaluate per-formances using other datasets with different sensors and test environments.Dataset: Due to its popularity as a benchmark for learning-based camera/LiDAR calibration, weuse the KITTI Odometry dataset [13] as our sole training dataset. It consists of 22 sequences of sub-urban driving scenarios. During training, we only use camera “2” which is a front-facing coloredperspective camera and the top-mounted Velodyne HDL-64E LiDAR. Of the 22 sequences, we usesequences “01” – “21”, leaving sequence “00” out for validation and testing.Setup for LiDAR and camera input: Different LiDARs exhibit different spatial coverage, inten-sity profiles, and reference coordinate frames. To make our framework robust to these variationsduring zero-shot transfer, we perform pose and intensity augmentations to the LiDAR point cloudand crop augmentations to the camera image. We specify these details in Appendix A.1.Model: To facilitate robust calibration from large initial offsets, we use a coarse-to-fine alignmentscheme with 3 levels. Two U-Net [14] architectures extract deep features from LiDAR and cameraseparately for each of these levels. To aid the correspondence of similar features from differentdomains, we further adapt the features from each domain with a single multi-layer perceptron. Weprovide details about our model and its weight initialization in Appendix A.2. During training, we5Table 1: Results on the same camera of a held-out sequence (values show component-wisemean/median absolute values of translation/rotation along each axis).InitialerrorMethodMean/Median ∆t (cm) Mean/Median ∆R (◦)x y z roll pitch yaw±1.5m±20◦LCCNet 0.24/0.26 0.38 /0.36 0.46 /0.35 0.03/0.03 0.01/0.00 0.04 /0.02DXQNet / / / / / /Ours (1) 8.77/1.76 5.50/1.45 9.25/1.80 0.36/0.08 0.43/0.07 0.53/0.07Ours (8) 2.26/0.51 2.02/0.87 1.24/0.58 0.10/ 0.02 0.21/0.04 0.16/0.03±0.1m±5◦LCCNet 0.24/0.15 0.48/0.26 1.11/0.47 0.02/0.02 0.17/0.10 0.03/0.03DXQNet 0.75/0.53 0.48/0.51 1.09/0.78 0.05/0.03 0.05/ 0.03 0.03/ 0.02Ours (1) 3.23/0.94 2.58/1.04 3.42/1.18 0.09/0.05 0.13/0.05 0.15/0.04Ours (8) 0.42/0.32 0.82/0.83 0.59/0.46 0.02/0.02 0.04 /0.04 0.02/0.02use the Adam [15] optimizer with a learning rate of 10−5to train for 20 epochs with a batch size of8.5 ResultsAll results showcased here are derived from models trained on a single camera/LiDAR pair as de-tailed in Section 4. To assess the capacity for zero-shot transfer, we incrementally test on morechallenging scenarios, starting with a held-out sequence of the training data and culminating in testson completely different cameras, LiDARs, and environments.5.1 Extrinsic calibration in settings similar to trainingTesting using the same camera: In this simple setting, we use images from the training camera(camera “02”) but from a held-out sequence “00” of the KITTI Odometry dataset which comprises4541 image/point cloud samples. 
The differences between the extrinsic parameters in this sequenceand the training sequences are negligible, so the key distinction from the training data is the novelscenes in this held-out sequence.We compare against the regression-based LCCNet [8] and the differentiable calibration flow methodDXQNet [10]. When comparing against LCCNet, we pass to our model initial extrinsic parameterguesses sampled uniformly [0, 20]◦and [0, 1.5]m around the ground truth value, using the schemepresented in Appendix A.1. Note that the initial angular errors used in this experiment are evenlarger than the values we used during training.The upper section of Table 1 shows LCCNet outperforming our method. Our method performspoorly on single image/point cloud pairs due to scenes with insufficient data for full 6-DoF poseobservability. However, our method substantially improves when run in batch optimization using 8pairs, achieving sub-centimeter median absolute error on each translation axis and even surpassingLCCNet in median roll rotation accuracy. Remarkably, our model achieves this despite the fact thatit has never encountered rotation perturbations up to 20◦during training.DXQNet is designed to only recover calibration from small drifts [10] in the range [0, 5]◦and [0,0.1]m, so we sample initial calibration parameters in this same range when comparing our methodagainst DXQNet. The lower section of Table 1 shows that our method’s single-sample performanceis slightly worse than DXQNet in median metrics and notably worse in mean metrics since ourmethod diverges when a single scene lacks sufficient structure. However, performing batch op-timization significantly enhances our method, achieving lower error than DXQNet along all axesexcept for Y-axis translation and median pitch-angle error.While DXQNet is designed only for calibration from small initial errors of [0, 5]◦and [0, 0.1]m [10],as seen in Table 1, our method competes even against state-of-the-art regression-based methods likeLCCNet and recovers calibration from large initial errors of [0, 20]◦and [0, 1.5]m.6Table 2: The performance change: trained on camera “2” and tested on camera “3” of a held-outsequence (values show the mean/median magnitudes of the translation/rotation vector).InitialerrorMethodMean ∆t (cm) Mean ∆R (◦) Median ∆t (cm) Median ∆R (◦)seen unseen seen unseen seen unseen seen unseen±1.5m±20◦LCCNet 1.59 52.5 0.16 1.54 1.01 52.5 0.12 1.47DXQNet / / / / / / / /Ours (1) 15.98 20.15 0.92 1.07 3.60 4.65 0.16 0.20Ours (8) 3.09 3.77 0.15 0.30 1.39 1.69 0.07 0.07±0.1m±5◦LCCNet 1.29 52.5 0.18 1.52 0.61 52.5 0.12 1.47DXQNet 1.43 2.94 0.08 0.16 0.81 2.28 0.07 0.13Ours (1) 6.25 8.32 0.25 0.31 2.21 2.86 0.10 0.13Ours (8) 1.20 1.76 0.05 0.07 1.12 1.65 0.06 0.07Testing using a different camera at a different vantage point: While all the methods are trainedon camera “2” (seen), in this test, we perform calibration between the LiDAR and camera “3”(unseen). This is significant for generalization because cameras “3” and “2” are separated 50 cmapart. We also use two ranges of initial errors in this experiment, a larger one to compare againstLCCNet, and a smaller one for DXQNet. 
In doing so, we test our model’s ability to both recovercalibration from large initial errors and also transfer to new sensors.Unlike the case of testing on the seen camera, the upper section of Table 2 shows that, when testedon the unseen camera, our method consistently performs better than LCCNet — whose mean andmedian translation error magnitudes (of more than 50cm) are more than a magnitude higher than the1cm error achieved with the seen camera. While our method also experiences a drop in accuracywhen tested on the unseen camera, our drop is significantly lower (from 3.60cm to 4.65cm mediantranslation error), and even lower when run in batch optimization, from 1.39cm to 1.69cm mediantranslation error magnitude – only a 3mm drop.Meanwhile, DXQNet, the learned calibration flow method, is more robust when transferred to theunseen camera. Seen on the lower section of Table 2, the median translation accuracy of DXQNetdrops to 2.28cm on the unseen camera, which is still relatively accurate when compared to LCCNet.While DXQNet performed better than our method in the single image/point cloud pair setting ontheseen camera, once transferred to the unseen camera, the performance gap narrows down. In fact,shown on the lower section of Table 2, our method and DXQNet achieve the same level of medianrotation accuracy.We see that running batched optimization with our method achieves the best mean/median rotationand translation accuracy compared to all other methods when transferred to the unseen camera.Additionally, the drop in accuracy (transferring from the seen to the unseen camera) is lower usingour method – especially so in the batched alignment case. We highlight these facts in Fig. 4, wherethe slope of the graphs highlights the drop in accuracy.Seen Unseen1cm10cm100cmTranslation errorLCCNetOurs SingleOurs Batch(a)Seen Unseen0.1°1.0°Rotation errorLCCNetOurs SingleOurs Batch (b)Seen Unseen1cm2cm3cmTranslation errorDXQNetOurs SingleOurs Batch (c)Seen Unseen0.0°0.1°0.2°Rotation errorDXQNetOurs SingleOurs Batch (d)Figure 4: Slope graphs highlighting the error increase when the methods are tested on an unseencamera, having trained on the seen camera. All plots show median metrics: (a) and (b) comparingtranslation and rotation errors of our method versus LCCNet, and (c) and (d) comparing againstDXQNet. Note that our method exhibits gentler slopes compared to the other methods, showing amore robust transfer to the unseen camera.7Table 3: Zero-shot transfer performance on different datasets.Method ∆t (cm) ∆R (◦)DXQNet 5.65/4.70 2.89/1.03Ours Single 28.0/5.99 1.70/0.55Ours Batch 3.67/3.61 0.51/0.51(a) KITTI-360 dataset. Initial error in the range:±5◦±0.1m. Ours Batch uses a batch size of 8Method ∆t (cm) ∆R (◦)LCCNet 324/318 20.8/18.1Ours Single 102/16.8 3.60/0.64Ours Batch 6.97/3.87 0.44/0.43(b) Waymo dataset. Initial error in the range:±20◦±1.5m. Ours Batch uses a batch size of 45.2 Zero-shot transfer to different environments with different sensorsTransfer to a different camera in a different environment: The KITTI-360 [16] dataset is cap-tured in Karlruhe, just like the KITTI Odometry dataset we used during training. However, thecamera setup is different both in intrinsic parameters and its relative transform to the LiDAR.We compare our calibration accuracy to DXQNet, as they have also reported their zero-shot transfermetrics in the KITTI-360 setup. 
In Table 3a, we show that overall, the accuracy achieved by bothmethods is worse in rotation and in translation compared to their respective performances on theKITTI Odometry dataset. While our single sample optimization method performs better in rota-tion but worse in translation than DXQNet, our batched optimization method (batch of 8) performssignificantly better than DXQNet in both rotation and translation.Transfer to a different camera, LiDAR, and environment: We also test generalizability to theWaymo dataset [17], which has a higher resolution camera and a custom LiDAR. To match thetraining data image resolution, we halve the image size and camera intrinsics, allowing for consistentfeature extraction. This flexibility further distinguishes optimization-based calibration methods fromregression-based methods which can’t explicitly reconfigure the camera projection parameters.In Table 3b, we see that LCCNet does not generalize, with translation errors over 3m and rotationerrors at 20.8◦(mean) and 18.1◦(median). Our method, run on a single sample, is poor in the meanmetric (just above 1m and 3.6◦), but is better in the median metric (16.8cm, 0.64◦), suggesting thatsome outlier scenes impact calibration performance. Notably, batch optimizing image/point cloudpairs improves our performance significantly, reducing the median translation error to 3.87cm androtation errors to 0.44◦(mean) and 0.43◦(median).6 LimitationsCurrently, our model assumes shared visibility between the LiDAR and the camera, enabling posealignment from simultaneous image and point cloud pairs. In sensor settings without shared vis-ibility, existing literature resolves this by creating a local map from several images and LiDARscans [18]. Our model further presumes simultaneous image pixel and LiDAR point registration,necessitating ego-motion compensation for rotating LiDAR models. To overcome these limitations,in future work, we aim to tackle ego-motion estimation, inter-sensor temporal calibration and Li-DAR/camera extrinsic calibration jointly using differentiable representations of sensor relative poseas explored in [19].7 ConclusionWe have presented a method for in-the-wild camera/LiDAR calibration that both recovers calibrationfrom large initial errors ([0, 20]◦and [0, 1.5]m) and transfers to unseen sensors and environments —a trait that no existing method has demonstrated. While these are promising results for in-the-wildcalibration, our accuracy still falls short of target-based calibration methods. In future work, weaim to incorporate more geometric priors such as mapping/ego-motion consistency into our featurelearning to facilitate online tasks that require higher degrees of accuracy.8AcknowledgmentsSupport for this work has been provided by the Horizon Europe project DigiForest (101070405) anda Royal Society University Research Fellowship (M. Fallon). This work has been carried out withinthe framework of the EUROfusion Consortium, funded by the European Union via the EuratomResearch and Training Programme (Grant Agreement No 101052200 — EUROfusion). Views andopinions expressed are however those of the author(s) only and do not necessarily reflect those ofthe European Union or the European Commission. Neither the European Union nor the EuropeanCommission can be held responsible for them.References[1] N. Williams, K.-L. Low, C. Hantak, M. Pollefeys, and A. Lastra. Automatic image alignmentfor 3d environment modeling. pages 388– 395, 11 2004. ISBN 0-7695-2227-0. doi:10.1109/SIBGRA.2004.1352985.[2] G. Pascoe, W. 
Maddern, and P. Newman. Direct visual localisation and calibration for roadvehicles in changing city environments. In Proceedings of the IEEE International Conferenceon Computer Vision (ICCV) Workshops , December 2015.[3] G. Pandey, J. McBride, S. Savarese, and R. Eustice. Automatic targetless extrinsic calibrationof a 3d lidar and camera by maximizing mutual information. Twenty-Sixth AAAI Conferenceon Artificial Intelligence , 26, 01 2012. doi:10.1609/aaai.v26i1.8379.[4] J. Levinson and S. Thrun. Automatic online calibration of cameras and lasers. In Robotics:Science and Systems , 2013.[5] A. Napier, P. Corke, and P. Newman. Cross-calibration of push-broom 2d lidars and camerasin natural scenes. In 2013 IEEE International Conference on Robotics and Automation , pages3679–3684, 2013. doi:10.1109/ICRA.2013.6631094.[6] X. Liu, C. Yuan, and F. Zhang. Targetless extrinsic calibration of multiple small fov lidars andcameras using adaptive voxelization. IEEE Transactions on Instrumentation and Measurement ,71:1–12, 2022. doi:10.1109/TIM.2022.3176889.[7] G. Iyer, R. KarnikRam, K. M. Jatavallabhula, and K. M. Krishna. Calibnet: Geometricallysupervised extrinsic calibration using 3d spatial transformer networks. 2018 IEEE/RSJ Inter-national Conference on Intelligent Robots and Systems (IROS) , pages 1110–1117, 2018.[8] X. Lv, B. Wang, D. Ye, and S. Wang. Lidar and camera self-calibration using cost volumenetwork. arXiv preprint arXiv:2012.13901 , 2020.[9] C. Ye, H. Pan, and H. Gao. Keypoint-based lidar-camera online calibration with robust ge-ometric network. IEEE Transactions on Instrumentation and Measurement , 71:1–11, 2022.doi:10.1109/TIM.2021.3129882.[10] X. Jing, X. Ding, R. Xiong, H. Deng, and Y . Wang. Dxq-net: Differentiable lidar-cameraextrinsic calibration using quality-aware flow, 2022.[11] P.-E. Sarlin, A. Unagar, M. Larsson, H. Germain, C. Toft, V . Larsson, M. Pollefeys, V . Lepetit,L. Hammarstrand, F. Kahl, and T. Sattler. Back to the Feature: Learning Robust CameraLocalization from Pixels to Pose. In CVPR , 2021.[12] J. T. Barron. A general and adaptive robust loss function. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 4331–4339, 2019.[13] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti visionbenchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR) , 2012.9[14] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical imagesegmentation, 2015.[15] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization, 2017.[16] Y . Liao, J. Xie, and A. Geiger. Kitti-360: A novel dataset and benchmarks for urban sceneunderstanding in 2d and 3d, 2022.[17] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V . Patnaik, P. Tsui, J. Guo, Y . Zhou, Y . Chai,B. Caine, V . Vasudevan, W. Han, J. Ngiam, H. Zhao, A. Timofeev, S. Ettinger, M. Krivokon,A. Gao, A. Joshi, Y . Zhang, J. Shlens, Z. Chen, and D. Anguelov. Scalability in perception forautonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition (CVPR) , June 2020.[18] T. Scott, A. A. Morye, P. Pini ́es, L. M. Paz, I. Posner, and P. Newman. Choosing a time andplace for calibration of lidar-camera systems. 2016 IEEE International Conference on Roboticsand Automation (ICRA) , pages 4349–4356, 2016.[19] Q. Herau, N. Piasco, M. Bennehar, L. Rold ̃ao, D. Tsishkou, C. Migniot, P. Vasseur, and C. De-monceaux. 
Moisst: Multimodal optimization of implicit scene for spatiotemporal calibration,2023.[20] M. E. Muller. A note on a method for generating points uniformly on n-dimensional spheres.Commun. ACM , 2(4):19–20, apr 1959. ISSN 0001-0782. doi:10.1145/377939.377946. URLhttps://doi.org/10.1145/377939.377946 .[21] B. Graham, M. Engelcke, and L. van der Maaten. 3d semantic segmentation with submanifoldsparse convolutional networks. In Proceedings of the IEEE Conference on Computer Visionand Pattern Recognition (CVPR) , June 2018.[22] S. Contributors. Spconv: Spatially sparse convolution library. https://github.com/traveller59/spconv , 2022.[23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recog-nition, 2015.[24] T. Sattler, W. Maddern, C. Toft, A. Torii, L. Hammarstrand, E. Stenborg, D. Safari, M. Oku-tomi, M. Pollefeys, J. Sivic, F. Kahl, and T. Pajdla. Benchmarking 6dof outdoor visual local-ization in changing conditions, 2018.[25] X. Lai, Y . Chen, F. Lu, J. Liu, and J. Jia. Spherical transformer for lidar-based 3d recognition,2023.[26] J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall. Se-manticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences. In Proc. ofthe IEEE/CVF International Conf. on Computer Vision (ICCV) , 2019.10A Details on experimental setupA.1 Setup for LiDAR and camera inputSetup for LiDAR input: To learn robust features for the calibration of different transforms, we trainour network to recover the true calibration from a variety of initial guesses ˆT=TpT. We perturbthe ground truth Tusing uniformly sampled Tp, modeled by a translation and angle-axis vector.We sample these vectors uniformly on the 3-sphere using [20] and scale them with a uniformlydistributed scalar. For training, the translation and rotation magnitudes are in the range [0,1.5]m and[0,15]◦, respectively.To account for different intensity profiles produced by different LiDAR models, and to cater for thefact that some LiDAR drivers don’t provide intensity readings, we augment the intensity channel ofour LiDAR data to minimize our dependence on intensity information. We apply uniform randomscalar perturbations in the range [0,1.0]to the intensity channel of the LiDAR points, meaning thatin some samples the intensity information is close to dropped out.Lastly, to process the LiDAR data using our sparse 3-D CNN, we voxelize the points using isotropicvoxels of 2 cm per side. We found this resolution to be reasonable since it is in the range of themeasurement error reported by most automotive LiDAR manufacturers.Setup for camera input: We’ve experimentally found that the camera feature extractor fails tolearn generalizable geometric features unless spatial augmentations are applied. In our training, weperformed random crop augmentations of [512, 256] pixels in width and height to the input image,and updated the camera intrinsic parameters accordingly.A.2 Model setupBoth image and point cloud feature extractors in our model follow the U-Net [14] architecture with5 layers of coarse-to-fine features, each layer being a factor of 2 finer than the previous layer. Weuse features from 3 layers for our alignment, the 1/16-scale, the 1/4-scale, and the 1-scale. Thedimensionality of the feature at these layers are 128, 128, and 32, respectively. 
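Referring back to the pose augmentation of Appendix A.1 above, the initial-guess perturbations can be sketched as follows: directions are drawn uniformly on the sphere by normalizing Gaussian samples (the method of [20]) and magnitudes are drawn uniformly within the stated training ranges. The Rodrigues conversion and default ranges below are a minimal illustration under those assumptions, not the exact training code.

```python
import numpy as np

def sample_perturbation(rng, max_trans=1.5, max_rot_deg=15.0):
    """Random SE(3) perturbation Tp: uniform directions on the sphere, uniform magnitudes."""
    def unit_vector():
        v = rng.normal(size=3)          # Gaussian-normalization trick of [20]
        return v / np.linalg.norm(v)

    t = unit_vector() * rng.uniform(0.0, max_trans)      # translation perturbation
    axis = unit_vector()
    angle = np.deg2rad(rng.uniform(0.0, max_rot_deg))    # rotation magnitude
    # Rodrigues' formula: angle-axis to rotation matrix
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

rng = np.random.default_rng(0)
T_gt = np.eye(4)                                  # stand-in ground-truth extrinsics
T_init = sample_perturbation(rng) @ T_gt          # perturbed initial guess, as in A.1
print(T_init)
```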
To aid adaptation, ateach pyramid level, the features from both domains are passed through a shared 2-layer MLP withthe same input and output dimensions at each layer and Leaky ReLu activation.With recent advancements in 3-D convolutions [21], we’ve chosen the spconv [22] implementationof sparse 3-D CNNs for our LiDAR feature extractor, as it can efficiently handle large point clouds,and can effectively manage the sparsity pattern of the data.Learning features in the image domain is relatively straightforward, we’ve found that using the sameU-Net [14] architecture (similar to Pixloc [11]) with the 2-D convolutional VGG [23] backbone wassufficient for image feature learning.We initialize the visual extractor using weights from a pre-trained PixLoc [11] model trained onthe CMU Seasons dataset [24], and the LiDAR extractor using only the sparse 3D CNN weightsof a SphereFormer [25] model pre-trained on Semantic KITTI [26]. For the pose optimization, wetrained with M= 5iterations at every pyramid level.11B Additional EvaluationsB.1 Calibration recall as a function of initial errorTo gauge the robustness of our method to initial calibration errors, we conducted an experiment tomeasure the percentage of calibration trial that achieve errors less than 2cm and 0.1◦in both transla-tion and rotation, respectively. In plotting this percentage as a function of the initial calibration error,we aim to show the sensitivity of our method to initial calibration errors both in the single sampleoptimization setting and in the batched optimization setting. We used the unseen camera from theheld-out KITTI Odometry data sequence. The results plotted in Figure 5, show that batched opti-mization significantly boosts the robustness when there are large initial errors, achieving accuratecalibration over 60 percent of the time even when the initial error is in the ±2m,±20◦range unseenduring training.2.0m 20° 1.5m 15° 1.0m 10° 0.5m 5° 0.1m 1°Initial Error Range10203040506070Percentage (%)Percentage of accurately calibrated samplesbatchsingleFigure 5: A plot of the percentage of calibration results that have errors less than 2cm and 0.1◦,as a function of the initial error range, tested on the unseen camera from the held-out sequencesof the KITTI Odometry dataset. As seen, the batched optimization (batch size of 8) can estimatean accurate calibration over 60 percent of the time, even when tested in the initial error range of±2m,±20◦which was not encountered during training.B.2 Comparison against additional methodsWe compared our method to LCCNet and DXQNet in our evaluations as they have the best perfor-mance for two different key traits — LCCNet excels in recovering from large initial errors, and canDXQNet transfer to unseen environments. Of our cited works, there is only one other method basedon differentiable alignment, RGKCNet, but this method has reported lower calibration accuracythan DXQNet, in a slightly different experiment setting. For completeness, we also show the per-formance of our method tested in the RGKCNet experiment setting on the KITTI Odometry dataset.The results, summarized in Table 4 show that our method significantly outperforms RGKCNet inboth translation and rotation metrics. Note that the median metrics for CalibNet are not includedbecause they were not made available by the authors.Table 4: Comparison against RGKCNet and CalibNet in the KITTI Odometry setting. 
Initial errorin the range: ±7.5◦±0.2mMethodMean/Median ∆t (cm) Mean/Median ∆R (◦)x y z roll pitch yawCalibNet 12.0/ 3.5/ 7.9/ 0.18/ 0.9/ 0.15/RGKCNet 5.0/2.8 4.0/2.6 5.9/3.4 0.16/0.09 0.15/0.10 0.17/0.11Ours (1) 3.2/1.0 2.7/1.0 3.4/1.2 0.09/0.05 0.13/0.05 0.14/0.04Ours (8) 0.4/0.3 0.8/0.8 0.6/0.5 0.02/0.02 0.04/0.04 0.02/0.0212B.3 The impact of batch optimizationB.3.1 A closer look at results from Table 2In the case of using only a single image and point cloud pair, the direct alignment can encounteroutlier cases where the optimization diverges. In the absence of other sources of error, the medianmetric can be robust to these outlier cases. However, in addition to spurious diverging outlier cases,direct alignment with batch size = 1 is also affected by the frequent convergence to local optima. Inautonomous driving scenarios, these local optima are prevalent along the translation axes. Due tothe lack of features close to the camera, changes to the translation parameters create minimal visualparallax which can make the optimization less sensitive to translation errors.This shortcoming of direct alignment is consistent with our findings reported in Table 2. Referringto the median rotation metric on the unseen camera, we see that Ours (1) performs just as wellas DXQNet, and nearly an order of magnitude better than LCCNet (the regression-based method).On the other hand, on the median translation metric for the unseen camera, Ours (1) performs anorder of magnitude better than the regression-based method LCCNet (as we expected), but is still 6millimeters less accurate than DXQNet (the sparse flow-based approach).In this context, optimizing over a batch with size greater than 1, not only decreases the impact ofoutlier diverging samples but also improves the accuracy of all samples altogether. This can beseen in the histograms of the error distributions for translation and rotation in Figure 6. The errordistribution of the batch optimization (size 8) exhibits a shorter tail, it also has a sharper peak closerto zero.Distribution of absolute value of translation error (cm)01000200030000.00 1.00 2.00 3.00 4.00 5.00 6.00 7.00 8.00 9.00 10.00 10.00Translation Batch 8 (count) Translation Single (count)Histogram of Translation ErrorsDistribution of absolute value of rotation error (degrees)010002000300040000.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 0.50Rotation Batch 8 (count) Rotation Single (count)Histogram of Rotation ErrorsFigure 6: The error histograms of our method on the unseen camera referred to in Table 2. In bothtranslation and rotation error, the batched approach exhibits a shorter tail and also peaks closer tozero than the single sample optimization. The last red bin in each plot aggregates all larger (outlier)values.13B.3.2 Experiment with 3 simple scenesTo demonstrate how batch optimization can improve accuracy, even on non-diverging samples, weperformed a small but instructive experiment using 3 image/point-cloud pairs from the validation setof KITTI odometry. Each of these samples contains sufficient information such that direct alignmentdoes not diverge to extreme errors. 
To further avoid divergence, we sampled initial guess transformsrelatively close to the ground truth (within 50 cm and 5 degrees).(a) Scene 1(b) Scene 2(c) Scene 3Figure 7: Scenes used in experiment B.3.2: Scene 1 has relatively more features close to the camera,whereas Scene 3 has the fewest features due to a large patch of under-exposed bush in the image.Running individual optimizations from 100 different initializations yields the distribution of trans-lation errors seen in Figure 8.Of the three scenes, performance is best on Scene 1 as it has more features closer to the camerawhereas performance is poorest in Scene 3 as it has the sparsest features. Aggregated statistics areshown in Table 5. The last column shows the mean and median values found by stacking all errorvalues from scenes 1, 2, and 3 together. Note that the values in the last column are close to the valuescomputed from Scene 2, the mid ranking scene.Table 5: Aggregate error statistics over 100 different runs of individual optimization on each of the3 scenes from Figure 7Translation Errors on the Experiment with 3 Scenes (cm)Scene 1 Scene 2 Scene 3 Scenes StackedMean 1.49 3.10 6.52 3.72Median 1.11 3.05 6.36 2.96Performing the same experiment as above, only running batch optimization with all 3 scenes yieldssignificantly better results, as seen in Figure 9.14Distribution of absolute value of translation errors (cm)0204060800.00 1.50 3.00 4.50 6.00 7.50 9.00 10.50 12.00 13.50 15.00 16.50 17.50Scene 1 (count) Scene 2 (count) Scene 3 (count)Histogram of Translation errorsFigure 8: Error distribution of optimizing 100 different initializations of each of the scenes in Fig-ure 7 individually.Distribution of absolute value of translation errors (cm)0204060800.00 1.50 3.00 4.50 6.00 7.50 9.00 10.50 12.00 12.50Batch Optimization (count) Scene 1 (count) Scene 2 (count) Scene 3 (count)Histogram of Translation errorsFigure 9: Error distribution of optimizing 100 different initializations of all scenes in Figure 7 jointly(Batch Optimization) compared against the individual optimizations. The distribution of the batchoptimization errors exhibits a shorter tail and is closer to zero.By fusing information from all scenes, the overall error distribution is sharper and closer to zerothan the error distribution of any of the individual scenes. Table 6 shows the aggregate statisticsfound by performing batch optimization and compares it to the individual optimization result shownpreviously.The last two columns of Table 6 show a very significant difference in mean and median error betweenperforming batch optimization and individual per scene optimization. The mean error with batchoptimization is better than the mean error achieved by any of the individual scenes. While the15Table 6: Aggregate error statistics over 100 different runs of batch optimization of all 3 scenesfrom Figure 7 jointly (Scenes Batch Optimized), contrasted against individual optimization.Translation Errors on the Experiment with 3 Scenes (cm)Scene 1 Scene 2 Scene 3 Scenes StackedScenesBatch OptimizedMean (cm) 1.49 3.10 6.52 3.72 1.38Median (cm) 1.11 3.05 6.36 2.96 1.28median error of batch optimization is slightly worse than that of the best-performing individualscene, it is still significantly better than the median error of any other individual scene.To conclude, the reason that batch optimization so significantly improves performance is not due tothe rejection of outlier cases alone. 
Owing to the effective fusion of features in direct alignment, batch optimization yields solutions that are close in accuracy to the best individual sample in the batch. This effectively raises the performance of many samples, not just the outlier cases. |
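A compact numerical sketch of the batched update discussed in Sec. 3.3 and in this appendix is given below: residuals and Jacobians from every image/point cloud pair in the batch are accumulated into one set of normal equations, and a single shared 6-DoF increment is applied through the SE(3) exponential map. The random residuals, fixed damping, and plain Gauss-Newton step are simplifications of the learned Levenberg-Marquardt solver used in the paper.

```python
import numpy as np

def so3_hat(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def se3_exp(xi):
    """Exponential map from a 6-vector xi = (rho, phi) to a 4x4 rigid transform."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    K = so3_hat(phi)
    if theta < 1e-8:
        R, V = np.eye(3) + K, np.eye(3) + 0.5 * K
    else:
        R = np.eye(3) + np.sin(theta) / theta * K + (1 - np.cos(theta)) / theta**2 * (K @ K)
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * K
             + (theta - np.sin(theta)) / theta**3 * (K @ K))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T

def batched_gauss_newton_step(T, residuals, jacobians, weights, damping=1e-4):
    """One damped Gauss-Newton increment shared by all samples in the batch.
    residuals[b]: (n_b,), jacobians[b]: (n_b, 6), weights[b]: (n_b,)."""
    H = damping * np.eye(6)
    g = np.zeros(6)
    for r, J, w in zip(residuals, jacobians, weights):   # stack normal equations across the batch
        JW = J * w[:, None]
        H += JW.T @ J
        g += JW.T @ r
    xi = -np.linalg.solve(H, g)                          # single 6-DoF increment for the whole batch
    return se3_exp(xi) @ T                               # left-multiplicative pose update

# toy usage: three scenes of different sizes contribute to one shared update
rng = np.random.default_rng(0)
sizes = [40, 60, 25]
res = [rng.normal(size=n) for n in sizes]
jac = [rng.normal(size=(n, 6)) for n in sizes]
wts = [rng.uniform(0.5, 1.0, size=n) for n in sizes]
print(batched_gauss_newton_step(np.eye(4), res, jac, wts))
```

Because the normal equations are summed over all pairs, a feature-deprived scene simply contributes a weaker, partial constraint instead of pulling the solution toward its own local optimum, which is the effect observed in the experiments above.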
0hQMcWfjG9 | α-MDF: An Attention-based MultimodalDifferentiable Filter for Robot State EstimationXiao Liu1, Yifan Zhou1, Shuhei Ikemoto2, and Heni Ben Amor11Interactive Robotics Lab, Arizona State University2Kyushu Institute of Technology1{xliu330,yzhou298,hbenamor}@asu.edu2ikemoto@brain.kyutech.ac.jpForce/Torque Joints Depth RGBRigid Body Dynamics State t-1 State t"Pick" "Push" "Put down"Soft Robot Dynamics t-1 t t+10 100 200 300020 50 100 150010RGB Depth IMUs0 100 200 300 4001000State t-1 State t-MDF -MDF -MDF -MDF -MDF -MDFFigure 1: The attention-based Multimodal Differentiable Filter ( α-MDF) framework enables robot state esti-mation in multimodal settings, applicable to both rigid body robots and soft robots.Abstract: Differentiable Filters are recursive Bayesian estimators that derive thestate transition and measurement models from data alone. Their data-driven na-ture eschews the need for explicit analytical models, while remaining algorith-mic components of the filtering process intact. As a result, the gain mecha-nism – a critical component of the filtering process – remains non-differentiableand cannot be adjusted to the specific nature of the task or context. In this pa-per, we propose an attention-based Multimodal Differentiable Filter ( α-MDF)which utilizes modern attention mechanisms to learn multimodal latent repre-sentations. Unlike previous differentiable filter frameworks, α-MDF substitutesthe traditional gain, e.g., the Kalman gain, with a neural attention mechanism.The approach generates specialized, context-dependent gains that can effectivelycombine multiple input modalities and observed variables. We validate α-MDFon a diverse set of robot state estimation tasks in real world and simulation.Our results show α-MDF achieves significant reductions in state estimation er-rors, demonstrating nearly 4-fold improvements compared to state-of-the-art sen-sor fusion strategies for rigid body robots. Additionally, the α-MDF consis-tently outperforms differentiable filter baselines by up to 45% in soft roboticstasks. The project is available at alpha-mdf.github.io and the codebase is atgithub.com/ir-lab/alpha-MDFKeywords: Differentiable Filters, Sensor Fusion, Multimodal Learning.1 IntroductionRecursive Bayesian filters, in particular Kalman filters, are a core component of many robotic andautonomous systems [1]. These filters offer a probabilistic framework that enables effective state es-timation and allowing robots to perceive and respond to dynamic environmental conditions [2, 3, 4].7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Constructing the analytical models and characterizing their noise profiles can be an overwhelm-ing undertaking and requires supplementary measures, e.g., system identification [5]. In addition,scalability still poses a significant obstacle, particularly when dealing with non-linear and high-dimensional systems. Advanced techniques such as the ensemble Kalman filter [6] have been de-veloped to tackle this challenge. However, they may still require careful manual design of the datapipeline and filtering process, especially in the presence of multimodal data sources. A potentialalternative methodology is to derive the underlying models for filtering from data alone. 
Recent ad-vancements in Deep state-space models (DSSMs) [7] provide effective solutions for understandingthe state and measurement estimation from observed sequences as data-driven approaches [7, 8, 9].Such approaches do not need to derive explicit system dynamics, which is essential and challengingin traditional filtering techniques. A subclass of algorithms derived from DSSMs, called Differ-entiable Filters (DFs), focus on learning state transition and measurement models from data whileretaining the fundamental principles of Bayesian recursive filtering. This combination of proper-ties renders DFs particularly well-suited for systems with complex dynamics and diverse sensorobservations.In this paper, we introduce a novel class of differentiable filters built upon neural attention mecha-nisms. The key innovation lies in the substitution of the traditional Kalman gain with an attentionmechanism for filtering with multimodal observations. This approach allows for the learning of ahighly specialized and task-specific gain mechanism. The utilization of multimodal observations,also known as multimodal learning [10], has shown substantial advantages in various robotics appli-cations [11, 12, 13]. By harnessing information from diverse modalities such as vision [14, 15, 16],language [17, 18, 19, 20], and tactile sensing [21, 22], robots can learn to better interpret their sur-roundings, produce more accurate estimates of their own internal state, and consequently improvethe overall decision-making process. We propose the attention-based Multimodal Differentiable Fil-ters (α-MDF) framework, as shown in Fig. 2, each module is learnable and operates in latent space.The primary contributions are: (1) Attention Gain : Our approach is an attention-based strategy thatreplaces the conventional Kalman gain in the measurement update step, as depicted by the coloredblocks in Fig. 2. The gain mechanism is learned to update the current state based on multimodalobservations. (2) Latent Space Filtering : our proposed differentiable framework operates in a la-tent space, learning high-dimensional representations of system dynamics and capturing intricatenonlinear relationships. This approach proves particularly advantageous for highly nonlinear sys-tems. (3) Empirical evaluations :α-MDF achieves significant reductions in state estimation errors,demonstrating nearly 4-fold improvements compared to state-of-the-art sensor fusion strategies inmultimodal manipulation tasks. Furthermore, α-MDF accurately models the non-linear dynamicsof soft robots, consistently surpassing differentiable filter baselines by up to 45%.2 Related WorkDifferentiable Filters (DFs) have garnered attention as learnable non-linear state-space mod-els [23, 24, 25]. Previous works [26, 27] have integrated neural network components into roboticalgorithms, such as BackpropKF, which combines backpropagation with neural networks to trainKalman Filters. Similarly, research in Differentiable Particle Filters (DPFs) [28, 29] has also lever-aged learnable modules to address the challenges of filtering and state tracking. Algorithmic priorshave been utilized to improve the learning efficiency of DPFs, and adversarial methods have beenused for posterior estimation [30]. However, the gain mechanism in the traditional Kalman filter isnot differentiable and has not been incorporated into the learning procedure of the DFs mentionedearlier. 
In a recent study [9], the effectiveness of DFs in training and modeling uncertainty with noiseprofiles has been demonstrated. Typically, multi-layer perceptrons integrated with an RNN layer areemployed in the implementation of DFs. Performance on real-world tasks has shown considerableimprovement in state tracking accuracy [9, 22, 24, 31, 32], and results indicate that the adoption ofend-to-end learning is crucial for accurately learning noise models. However, as noted in [8], theuse of RNN has been shown to be “a limiting factor for learning accurate models” and may “leadto a non-Markovian state-space”. Furthermore, the traditional Kalman gain for DFs in [9, 22, 24]remains non-learnable, despite the numerous advancements made thus far in differentiable filters2Transformer Process ModelStateSensor EncodersAuxiliary modelobservation attention state attentionAttention GainRGB Depth JointsAttention map Keys KeysvaluesLatent DecoderStatelatent observation latent state latent observation latent observationAuxiliary querylatent statelatent stateAction-MDFFigure 2: The α-MDF framework consists of a transformer process model, sensor encoders, and an attentiongain update model. The transformer process model predicts the following latent state, while the sensor encoderslearn latent representations from observations. The attention gain model then corrects the predicted latent stateusing the learned representations.and particle filters. While DFs have demonstrated considerable promise as differentiable sequen-tial models, their application in multimodal settings has thus far received limited attention. Onenotable exception is the recent work in [22], which proposes sensor fusion strategies associatedwith DFs. Employing multimodal environments often necessitates the adoption of representationlearning methodologies [33], which require learning a latent representation for capturing intricatestatic and dynamic features [8, 12, 34]. Studies such as [11] have highlighted the effectiveness ofblending networks in learning a shared representation based on Conditional Neural Processes [35].Other studies, including [21, 12], have adopted a self-supervised approach involving variationalautoencoders (V AE) [36] to discover latent representations for stable manipulation policies. Themajority of multimodal latent representations are obtained through policy learning. However, incertain situations, a gating technique [37] that focuses on specific modalities can be employed to en-hance policy robustness. In light of this, we propose an alternative approach which utilizes modernattention mechanisms to learn multimodal latent representations at both the level of modalities andobservations . The novel approach fuses multiple types of modalities and variables in a dynamic,context-dependent fashion, thereby enabling synergies between their respective qualities, structures,relevance, and degrees of redundancy.3 Multimodal Differentiable FiltersWe introduce a novel approach called attention-based Multimodal Differentiable Filters ( α-MDF ),which combines differentiable filters with insights from transformer models. This approach per-forms state estimation and fusion of multiple sensor modalities in a unified and differentiable man-ner. We first discuss Recursive Bayesian filtering as the general technique used to estimate the statextof a discrete-time dynamical system. 
Thereafter, we provide the details of our specific algorithm.Given a sequence of actions a1:tand noisy observations y1:t, the posterior distribution of the statecan be represented by the following equation:p(xt|a1:t,y1:t,x1:t−1)∝p(yt|at,xt)p(xt|a1:t−1,y1:t−1,x1:t−1). (1)We can denote the belief of the state as bel (xt) =p(xt|a1:t,y1:t,x1:t−1). Assuming the Markovproperty, where the next state is dependent only on the current state, we get the following expression:bel(xt) =η p(yt|xt)|{z}observation modeltYt=1state transition modelz}|{p(xt|at,xt−1)bel(xt−1), (2)where ηis a normalization factor, p(yt|xt)is the observation model and p(xt|at,xt−1) is the tran-sition model. The transition model describes the laws that govern the evolution of the system state,while the observation model identifies the relationship between the hidden, internal state of the sys-tem and observed, noisy measurements.33.1 α-MDFWe utilize an ensemble method for Bayesian filtering wherein each ensemble member representsa compact robot state. Figure 2 shows the procedural steps of how this compact representation,known as the latent state, is obtained and get updated. The filtering process includes two essentialsteps, namely prediction andupdate , both of which are also implemented through neural networks.Most importantly, we replace the Kalman gain step with an attention mechanism, which is trained toweigh observations against predictions based on the current context. Additionally, we demonstratethat attention can be used to balance and weigh different modalities, e.g., video, depth, inertialmeasurements, against each other. We will see that both steps can be naturally integrated into asingle attention gain (AG) module.LetX0:Ndenote the latent states with dimension dxofNsteps in twith number of Eensemblemembers, X0:N= [x10:N, . . . ,xE0:N], where E∈Z+.Prediction step : In this step, the state transition model takes the previous states with the current ac-tion, and predicts the next state. To this end, we leverage the capabilities of transformer-style neuralnetworks [38]. In addition, we generate a probability distribution over the posterior by implement-ing the state transition model as a stochastic neural network. Therefore, we can use the followingprediction step to update each ensemble member, given a sequence of latent states Xt−N:t−1:xit|t−N:t−1∼fθθθ(xit|t−N:t−1|at,xit−N:t−1),∀i∈E. (3)Where fθθθis a transformer-style neural network with multiple attention layers. In our framework,the latent state and the action at tare processed by positional and type embedding layers [38] priorto being fed into fθθθ. Matrix Xt|t−N:t−1∈Rdx×Eholds the updated ensemble members whichare propagated one step forward in latent space. For simplicity, we represent Xt|t−N:t−1asXtto denote the predicted state. Further elaboration on positional embeddings, type embeddings, andfilter initialization can be found in Appendix A.1 for more comprehensive details.Auxiliary query01Causality-enforced Attention Map KeysEnsembleLatent state Latent observation Latent observationFigure 3: Attention gain (AG) module uses a learnedcausality-enforced attention map to replace Kalman gain.Update step : A crucial step of the filter-ing process is the update step, which in-volves calculating the gain value. Tradi-tional KF uses the Kalman gain to correctthe state by comparing the uncertainty orcovariance obtained from state space andobservation space, it requires an explicitfunction to map the state to the measure-ment. 
As a result, some sensor measure-ments like images or deep-learned featuresare unable to be used in the formulation di-rectly. The proposed attention gain (AG)module, on the other hand, eliminates theneed for an explicit observation model and can directly utilize high-dimensional features. By lever-aging this approach, our framework enables a more flexible and efficient integration of measure-ments without the explicit requirement of a mapping function from the state to the measurement do-main. Instead of using one sensor encoder, we use multiple sensor encoders [s1(·), s2(·),···, sM(·)]to learn latent observations from each modality: ̃y(i,m)t∼sm( ̃y(i,m)t|ymt),∀i∈E, m∈M. (4)Mis the number of modalities in the system, M∈Z+. The encoders generate a series of latentobservations, ̃Yt= [ ̃Y1t,···, ̃YMt]∈RMdx×E, where ̃Ymt= [ ̃y(1,m)t,···, ̃y(E,m )t ]∈Rdx×E.The latent observations are then concatenated with predicted state Xtas input to the AG model:ˆXt=softmax QQQ(X′t⊕ ̃Y′t)T√E◦ ̃M ̃M ̃M!(Xt⊕ ̃Yt), (5)4where “ ⊕” denotes the concatenation and “ ◦” is the Hadamard product, and ˆXtis the final output.In general, an attention module typically receives three sequences of tokens: queries QQQ, keys KKKandvalues VVV. In our case, we define (X′t⊕ ̃Y′t)as the KKKtokens, where X′tand ̃Y′tare obtained byzero-centering, and the actual values of (Xt⊕ ̃Yt)are regarded as the VVVtokens. As illustrated inFig. 3, the length of the KKKtokens is denoted as dk= (M+ 1)dx, where each token has a dimensionofE, representing the distribution along this particular token index.In a traditional attention mechanism, the proximity of QQQandKKKis measured, and VVVthat is associatedwithKKKis utilized to generate outputs. However, we posit that within each latent vector, every indexis probabilistically independent, and index iof a latent state should only consider index iof eachlatent observation. To accomplish this, we utilize matrix ̃M ̃M ̃Mto retain only the diagonal elementsof each (dx×dx)attention map, which enforces causality and allows the attention weights to bedetermined according to the corresponding indices. As depicted in Fig. 3, the red line represents themapping for a single latent state token index. Auxiliary query tokens QQQ∈Rdx×Eare introduced astrainable parameters in the neural network to facilitate learning. It is important to note that both theQQQandKKKtokens undergo positional embedding before being fed into the AG module.Placing Conditions on the Latent Space: Within the framework of Kalman filters, the updatestep plays a crucial role in aligning the predicted observation with the observations obtained fromsensors. Within the framework of α-MDF, we ensure consistency in the latent space by introduc-ing a decoder model D. This decoder model, implemented using multilayer perceptrons, projectsthe latent space onto the actual state space. By doing so, we resolve the alignment challenges inmultimodal learning [39], and gain meaningful comparisons when conducting sensor fusion andmeasurement update. Let xxxtbe the ground truth state at t, the loss functions are defined as:Lfθθθ=∥D(fθθθ(Xt))−xxxt∥22,Le2e=∥D(ˆXt)−xxxt∥22,Ls=∥D(sm(ymt))−xxxt∥22. (6)The final loss function is L=Lfθθθ+Le2e+Ls, where Le2eis the end-to-end loss. Lfθθθis used tosupervise the state transition model. The latent observation conditioning is provided with Lsduringthe training process, note that the conditioning operation is applied when the modalities collectivelyprovide information about the complete state. 
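To make the update step concrete, the following is a minimal PyTorch sketch of the attention gain in Eq. (5). The class name AttentionGain, the optional modality_on gate, and the final renormalization are assumptions of this sketch (positional embeddings are omitted), not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGain(nn.Module):
    """Illustrative sketch of the attention gain (AG) update in Eq. (5)."""
    def __init__(self, dx: int, num_modalities: int, ensemble_size: int):
        super().__init__()
        self.dx = dx
        # Auxiliary query tokens Q in R^{dx x E}, trainable as described above.
        self.query = nn.Parameter(torch.randn(dx, ensemble_size))
        # Diagonal mask M~: index i of the latent state only attends to index i
        # of the prediction and of each latent observation.
        mask = torch.eye(dx).repeat(1, num_modalities + 1)      # (dx, (M+1)*dx)
        self.register_buffer("mask", mask)

    def forward(self, x_pred, y_latent, modality_on=None):
        # x_pred:   (dx, E) predicted latent ensemble X_t
        # y_latent: list of M tensors, each (dx, E), latent observations ~Y_t
        E = x_pred.shape[-1]
        tokens = torch.cat([x_pred] + list(y_latent), dim=0)    # values V, ((M+1)dx, E)
        keys = tokens - tokens.mean(dim=-1, keepdim=True)       # zero-centred keys K
        scores = (self.query @ keys.T) / E ** 0.5               # (dx, (M+1)dx)
        attn = F.softmax(scores, dim=-1) * self.mask            # keep per-index entries only
        if modality_on is not None:
            # Zero the attention assigned to missing modalities (state column stays on).
            flags = [1.0] + [float(on) for on in modality_on]
            gate = torch.cat([torch.full((self.dx,), f) for f in flags]).to(attn.device)
            attn = attn * gate
        # Renormalize so each latent index mixes prediction and observations with
        # weights summing to one -- a choice of this sketch, not stated in Eq. (5).
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        return attn @ tokens                                     # corrected ensemble \hat{X}_t, (dx, E)

In use, x_pred would be the ensemble produced by the transformer process model and y_latent the outputs of the sensor encoders s_m; zeroing entries of the gate corresponds to disabling columns of M̃ for missing modalities, as discussed next.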
The modular architecture of α-MDF provides a keyadvantage in facilitating training and testing with masked modalities. The attention matrix ̃M ̃M ̃Mcan bedisabled (set the attention values to zero) based on different input sensor modalities, thus improvingthe model’s resilience to missing modalities.4 ExperimentsWe conduct a series of experiments to evaluate the efficacy of the α-MDF framework. Specifically,we aim to answer the following questions: (a) Can the α-MDF framework generalize across varioustasks? (b) To what extent does the new filtering mechanism improve state tracking performancewhen compared to the current state-of-the-art? (c) How does the use of multiple modalities compareto a subset of modalities for state estimation with differentiable filters? Therefore, we evaluatethe effectiveness of α-MDF across multiple robotics tasks, each with distinct setups: (1) Visualodometry for autonomous driving, (2) Robot manipulation employing multi-modalities in bothreal-world and simulation, and (3) Soft robot modeling task. Our study examines two categories ofbaselines: (a) DF baselines such as those proposed in [9, 28, 26], including dEKF [9], DPF [28], anddPF-M-lrn [9]; and (b) sensor fusion baselines proposed in [22]. Additional details on the baselinescan be found in Appendix A.3.4.1 Visual Odometry TaskIn this experiment, we evaluate the performance of α-MDF on the popular KITTI Visual Odometrydataset [40]. Since the visual odometry task uses a single modality, we only consider RGB imagesas the input modality in order to make a fair comparison with the baselines [9, 28, 26]. The actualstate is defined as a 5-dimensional vector xxx= [x, y, θ, v, ̇θ]T, including the position and orientationof the vehicle, and the linear and angular velocity. We use the latent state x∈R256forα-MDF.In comparison to dEKF, DPF, and dPF-M-lrn, we observe a reduction in the translational error of501attention mapt = 4 sec t = 0 sec t = 12 secAction: the robot puts down the Pepsi can Action: the robot picks up the milk carton01attention mapt = 4 sec t = 12 sec t = 20 secState RGB Depth Joint State RGB JointFigure 4: Learned attention gain. Left: manipulation in a simulated environment with modalities [ y1,y2,y3],andright : real robot manipulation with modalities [ y1,y3]. The attention maps indicate the attention weightsassigned to each modality during model inference. In the visualization, regions in red correspond to lowattention values, while those in blue indicate high attention values.approximately 88%, 83%, and 79% for Test 100/200/400/800. The results also reflect a considerablereduction in rotational error of approximately 64%, 54%, and 46% as compared to each of thebaselines. We report a detailed experimental setup and thorough results in Appendix B.1.4.2 Multimodal Manipulation TaskThis experiment aims to evaluate the effectiveness of the α-MDF framework in a robot manipulationscenario. Specifically, we use α-MDF for monitoring the state of a UR5 robot during tabletop ar-rangement tasks. Similar to behavioral cloning from observation tasks [41], actions are not availableas inputs for this study. Instead, we train α-MDF to learn how to propagate the state of the robotover time. 
The evaluation involves three manipulation tasks, namely: (1) estimating the state of therobot in a simulated environment, (2) estimating the state of the real-world robot, and (3) estimatingthe joint state of the robot and the object being manipulated.Task Setup and Data: Forα-MDF, we define the latent state x∈R256for all the sub-tasks. Theactual state of the UR5 robot is defined by xxxR, which consists of the joint angles ( J1-J7) and theCartesian coordinates (x, y, z )of the robot’s end-effector (EE). xxxOdenotes the state of the object,which only includes the location (x, y, z )of the object. The complete set of modalities comprises[y1,y2,y3,y4], where y1∈R224×224×3represents RGB images, y2∈R224×224represents depthmaps, y3∈R7represents proprioceptive inputs (joint angles), and y4∈R6represents Force/torque(F/T) sensor readings. However, the input modalities for each of the three tasks may differ; for task(1), it involves [ y1,y2,y3], for task (2), it comprises [ y1,y3], while task (3) has [ y1,y2,y3,y4]. Amore detailed description of the task setup and data collection is supplied in Appendix B.2.Table 1: Result evaluations on UR5 manipulation taskMethodReal-world (MAE) Simulation (MAE)Joint (deg) EE (cm) Joint (deg) EE (cm)dEKF [9] 16.08 ±0.1 5.67 ±0.1 4.93 ±0.2 1.91 ±0.1DPF [28] 15.93 ±0.1 5.08 ±0.3 4.46 ±0.2 1.51 ±0.2dPF-M-lrn [9] 12.83 ±0.1 3.95 ±0.4 3.82 ±0.2 1.26 ±0.1α-MDF 7.49±0.1 3.81±0.2 2.84±0.1 1.06±0.1Means ±standard errors.Results: Using the same comparisonprotocol as in [9, 28, 26], Table 1 com-pares the proposed framework’s perfor-mance with other DF baselines. Notethat all baselines perform tracking inactual space, therefore, we use a pre-trained sensor encoder to process RGBmodality for all DFs and supplied the la-tent embedding to α-MDF, as DF base-lines only take one modality. α-MDF outperforms dEKF and DPF, reducing errors by 33% and25% in real-world and 45% and 30% in simulation, with an average MAE of 3.81cm and a de-viation of 1.06cm from ground truth for end-effector positions. Additionally, α-MDF exhibits a42% and 26% improvement in estimating joint angles compared to dPF-M-lrn. In the case of fil-tering with multiple modalities , results presented in Table 2 show clear improvements achieved byα-MDF in comparison to other sensor fusion techniques. The baselines are reproduced followingthe procedure of [22] by providing the same pretrained sensor encoder to each modality. α-MDFoutperforms all other methods across all three manipulation tasks. In particular, it cuts the posi-tional error of the end-effector (EE) in half when compared to the crossmodal fusion strategy [22]6(a) UR5 manipulation task in real-worldSteps 0501001502002503003501.00.50.0Joint-1Joint (deg)Joint (deg)0501001502002503003500.000.250.50Joint-205010015020025030035021Joint-3EnsembleGTPred0501001502002503003500.40.20.0Joint-10501001502002503003500.00.5Joint-2050100150200250300321Joint-3EnsembleGTPred(b) UR5 manipulation task in simulation050100150200250300Steps808590attention StateRGBProprioceptionSteps050100150200250300Steps 050100150200attention Missing modalityStateRGBDepthProprioceptionNo RGB No DepthNo ProprioceptionFigure 5: Predicted joint angle trajectories and the corresponding accumulated attention values for each modal-ity. 
(a) represents the results attained from the actual robot, whereas (b) illustrates attention values for allmodalities both with and without masking certain modalities.Table 2: Result evaluations on UR5 manipulation task with multimodal sensor fusion baselines.MethodSimulation (MAE) Real-world (MAE) Simulation with F/T (MAE)Joint (deg) EE (cm) Joint (deg) EE (cm) Joint (deg) EE (cm) Obj (cm)Feature Fusion [22] 7.58 ±0.12 3.15 ±0.16 11.25 ±1.17 5.65 ±0.01 3.62 ±0.09 2.72 ±0.02 8.36 ±0.06Unimodal [22] 7.46 ±0.32 3.18 ±0.03 11.02 ±0.08 9.52 ±0.07 3.97 ±0.08 3.63 ±0.05 10.23 ±0.10Crossmodal [22] 3.64 ±0.34 1.91 ±0.04 5.98 ±0.08 7.35 ±0.05 3.12 ±0.02 3.25 ±0.02 5.54 ±0.02α-MDF 2.19±0.09 0.75±0.01 5.24±0.04 3.04±0.01 1.41±0.04 0.90±0.01 1.65±0.01Means ±standard errors.on the real-robot ( 7.35cm→3.04cm). In simulation tasks, it achieves an even better reduction intracking error ( 3.25cm→0.90cm in simulation with F/T sensing). We present a visualization of thelearned attention gain in the filtering process (Fig. 4) and state tracking results with and without cer-tain modalities (Fig. 5). Despite the attention values changing when certain modalities are missing,α-MDF still achieves stable results. Further results and explanations can be found in AppendixB.2.4.3 Soft robot ModelingLayer 1Layer 2Layer 3Layer 4Layer 5IMU 5IMU 1MoCapInter-layer ActuatorStrutIntra-layer ActuatorCableFigure 6: The tenseg-rity robot structure.Tensegrity structures [42] have become popular in recent years since theybridge the gap between an inherently flexible system and the ability to userigid components [43, 44, 45]. However, modeling such a complex systemcontinues to pose considerable challenges due to the high nonlinearity. Thisexperiment involves implementing the α-MDF to model the dynamics of asoft robot system, especially Tensegrity robot [45].Task Setup and Data: The robot structure is shown in Fig. 6 with 5 layersof tensegrity. The actual state of a soft robot at time tis represented by a7-dimensional vector xxxt= [x, y, z, qx,qy,qz,qw]T, which denotes the posi-tion and orientation of the robot’s hand tip. The quaternion vector qrepresentsthe posture of the robot w.r.t the base (layer 1’s bottom). In this task, we de-finex∈R256as the latent state. The complete set of modalities comprises[y1,y2,y3], where y1∈R224×224×3represents RGB images, y2∈R224×224is depth maps, andy3∈R30is proprioceptive inputs (IMUs). The action atof the system is the pressure vector ofthe 40 pneumatic cylinder actuators, where at∈R40. In this experiment, synthetic depth maps aregenerated offline using the DPT model [46]. Figure. 7 shows the recordings of RGB and the depthmodalities, further details regarding the task setup and data collection is in Appendix B.3.7Pick up the red can Put down the Pepsi Push the red can to the left (a) Task 1 (b) Task 2 (c) Task 3 101101025050075010001250150017500.51.0EnsembleGTPredt=0.0s t=0.99s t=1.32s Timestep x (m) y (m) z (m) Figure 7: Estimated end-effector (EE) positions for tensegrity robot. Left: the RGB and depth modalities[y1,y2], and right : state estimation results with ensemble distribution.Results: The soft robot modeling task is evaluated using 10-fold cross-validation and the meanabsolute error (MAE) metric, and the results are presented in Table 3. Our results demon-strate that α-MDF outperforms the state-of-the-art methods in terms of DFs, achieving a MAEof 8.99cm. 
Specifically, our approach yields an MAE on the end-effector (EE) position estimationthat is 45%, 34%, and 29% lower than that obtained by dEKF, DPF, and dPF-M-lrn, respectively.Table 3: Result evaluation on soft robot modeling task.RGB Depth IMUs EE (cm) q(101)dEKF [9] ✓ 16.38±0.10 1.01 ±0.03DPF [28] ✓ 13.68±0.02 0.96 ±0.03dPF-M-lrn [9] ✓ 12.66±0.09 1.10 ±0.03α-MDF ✓ 8.99±0.02 0.79±0.03Feature Fusion [22] ✓ ✓ ✓ 8.35±0.22 0.60 ±0.03Unimodal [22] ✓ ✓ ✓ 2.78±0.05 0.25 ±0.02Crossmodal [22] ✓ ✓ ✓ 2.14±0.05 0.15 ±0.02α-MDF ✓ ✓ ✓ 1.67±0.09 0.12±0.01Means ±standard errors.Of the sensor fusion baselines, cross-modal fusion [22] exhibits marginallybetter outcomes than others, althoughit do not show any advantages overα-MDF in predicting EE positions(2.14cm →1.67cm). Notably, α-MDFsurpasses the feature fusion strategy bya significant margin of 4-fold. Addition-ally, appendix B.3 delves into an explo-ration of the potential benefits of modal-ity selection for state estimation, whereoptimal combinations can be selected to achieve even higher accuracy. The results presented in Fig. 7demonstrate the efficacy of α-MDF in accurately estimating the state of soft robots in a multimodalsetting, the ensemble distribution is indicated by gray shade representing the model uncertainty.With stable performance achieved over an extended duration of inference, α-MDF has shown thepotential in modeling dynamics for various complex non-linear systems.5 ConclusionThis paper illustrates how utilizing attention as a gain mechanism in differentiable Bayesian filteringand multimodal learning can significantly enhance the accuracy of robot state estimation in numer-ous tasks. Proposed α-MDF is a unique differentiable filter that conducts filtering on a compressedmultimodal latent representation, while preserving the integrity of the Kalman filter algorithm com-ponent. Our experiments demonstrate that α-MDF is appropriate for learning both rigid body andsoft robot dynamics, exceeding baseline performance by up to 4-fold. Moving forward, we plan toinvestigate the value of incorporating additional modalities, such as sound, temperature, and prox-imity sensing, into α-MDF.Limitation: An obvious difference of α-MDF when compared to traditional filters is the requiredlearning process – this typically takes multiple hours of training on current machines. In a similarvain, an inherent assumption is that the training and test distributions do not differ substantially,i.e., the problem of concept drift. To date, we have successfully tested the algorithm with latentstates consisting of several hundred variables. We use 256 dimensions in the experiments for con-sistency. However, more research is required to understand α-MDF’s performance when filteringover thousands of variables. As with any deep learning approaches, hyper-parameter tuning may berequired to produce high-performing models. Another practical observation is that utilizing moremodalities does not always translate to improved performance, which is consistent with findingsin [47]. Including redundant modalities can impose longer training times and pose greater difficultyfor the model in extracting valuable information from the input modalities. A pre-processing stepfor feature selection may be advisable.8AcknowledgmentsThis work has received partial support from the National Science Foundation under grants CNS-1932068, IIS-1749783. Additionally, partial support has been provided by JSPS KAKENHI GrantNumbers 22H03671 and 22K19815. 
We would like to sincerely acknowledge the valuable com-ments and feedback provided by the reviewers. Our gratitude also goes to Yuhei Yoshimitsu forassisting in the data collection with the tensegrity robot. Furthermore, we would like to express ourappreciation for the insightful discussions and constructive feedback received from Fabian Weigendand Shubham Sonawani during the review process.References[1] S. Thrun, W. Burgard, and D. Fox. Probabilistic robotics . MIT Press, Cambridge, Mass., 2005.ISBN 0262201623 9780262201629.[2] L. Wang, G. Wang, S. Jia, A. Turner, and S. Ratchev. Imitation learning for coordinatedhuman–robot collaboration based on hidden state-space models. Robotics and Computer-Integrated Manufacturing , 76:102310, 2022. ISSN 0736-5845. doi:https://doi.org/10.1016/j.rcim.2021.102310.[3] S. Chen. Kalman filter for robot vision: a survey. IEEE Transactions on industrial electronics ,59(11):4409–4420, 2011.[4] J. Reher, W.-L. Ma, and A. D. Ames. Dynamic walking with compliance on a cassie bipedalrobot. In 2019 18th European Control Conference (ECC) , pages 2589–2595. IEEE, 2019.[5] L. Ljung. System identification . Springer, 1998.[6] G. Evensen. The ensemble kalman filter: Theoretical formulation and practical implementa-tion. Ocean dynamics , 53(4):343–367, 2003.[7] S. S. Rangapuram, M. W. Seeger, J. Gasthaus, L. Stella, Y . Wang, and T. Januschowski. Deepstate space models for time series forecasting. In S. Bengio, H. Wallach, H. Larochelle,K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Pro-cessing Systems , volume 31. Curran Associates, Inc., 2018.[8] A. Klushyn, R. Kurle, M. Soelch, B. Cseke, and P. van der Smagt. Latent matters: Learningdeep state-space models. Advances in Neural Information Processing Systems , 34, 2021.[9] A. Kloss, G. Martius, and J. Bohg. How to train your differentiable filter. Autonomous Robots ,pages 1–18, 2021.[10] D. Ramachandram and G. W. Taylor. Deep multimodal learning: A survey on recent advancesand trends. IEEE signal processing magazine , 34(6):96–108, 2017.[11] M. Y . Seker, A. Ahmetoglu, Y . Nagai, M. Asada, E. Oztop, and E. Ugur. Imitation and mirrorsystems in robots through deep modality blending networks. Neural Networks , 146:22–35,2022.[12] M. A. Lee, Y . Zhu, P. Zachares, M. Tan, K. Srinivasan, S. Savarese, L. Fei-Fei, A. Garg, andJ. Bohg. Making sense of vision and touch: Learning multimodal representations for contact-rich tasks. IEEE Transactions on Robotics , 36(3):582–596, 2020.[13] T. Xue, W. Wang, J. Ma, W. Liu, Z. Pan, and M. Han. Progress and prospects of multimodalfusion methods in physical human–robot interaction: A review. IEEE Sensors Journal , 20(18):10355–10370, 2020.[14] S. Sonawani, Y . Zhou, and H. B. Amor. Projecting robot intentions through visual cues: Staticvs. dynamic signaling. arXiv preprint arXiv:2308.09871 , 2023.9[15] Z. Yu, M. Chen, Z. Zhang, S. You, and F. Ren. Transupr: A transformer-based uncertain pointrefiner for lidar point cloud semantic segmentation. arXiv preprint arXiv:2302.08594 , 2023.[16] J. Huang, A. Mishra, B. C. Kwon, and C. Bryan. Conceptexplainer: Interactive explanationfor deep neural networks from a concept perspective. IEEE Transactions on Visualization andComputer Graphics , 29(1):831–841, 2022.[17] Y . Zhou, S. Sonawani, M. Phielipp, H. Ben Amor, and S. Stepputtis. Learning modularlanguage-conditioned robot policies through attention. Autonomous Robots , pages 1–21, 2023.[18] M. Shridhar, L. Manuelli, and D. Fox. 
Perceiver-actor: A multi-task transformer for roboticmanipulation. In Conference on Robot Learning , pages 785–799. PMLR, 2023.[19] Y . Zhou, S. Sonawani, M. Phielipp, S. Stepputtis, and H. B. Amor. Modularity through atten-tion: Efficient training and transfer of language-conditioned policies for robot manipulation.arXiv preprint arXiv:2212.04573 , 2022.[20] S. Stepputtis, J. Campbell, M. Phielipp, S. Lee, C. Baral, and H. Ben Amor. Language-conditioned imitation learning for robot manipulation tasks. Advances in Neural InformationProcessing Systems , 33:13139–13150, 2020.[21] M. A. Lee, Y . Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg.Making sense of vision and touch: Self-supervised learning of multimodal representations forcontact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA) ,pages 8943–8950. IEEE, 2019.[22] M. A. Lee, B. Yi, R. Martín-Martín, S. Savarese, and J. Bohg. Multimodal sensor fusion withdifferentiable filters. In 2020 IEEE/RSJ International Conference on Intelligent Robots andSystems (IROS) , pages 10444–10451. IEEE, 2020.[23] A. Corenflos, J. Thornton, G. Deligiannidis, and A. Doucet. Differentiable particle filteringvia entropy-regularized optimal transport. In International Conference on Machine Learning ,pages 2100–2111. PMLR, 2021.[24] N. A. Piga, U. Pattacini, and L. Natale. A differentiable extended kalman filter for objecttracking under sliding regime. Frontiers in Robotics and AI , 8:686447, 2021.[25] W. Li, X. Chen, W. Wang, V . Elvira, and Y . Li. Differentiable bootstrap particle filters forregime-switching models. arXiv preprint arXiv:2302.10319 , 2023.[26] T. Haarnoja, A. Ajay, S. Levine, and P. Abbeel. Backprop kf: Learning discriminative deter-ministic state estimators. In Advances in neural information processing systems , pages 4376–4384, 2016.[27] P. Karkus, X. Ma, D. Hsu, L. P. Kaelbling, W. S. Lee, and T. Lozano-Pérez. Differentiablealgorithm networks for composable robot learning. arXiv preprint arXiv:1905.11602 , 2019.[28] R. Jonschkowski, D. Rastogi, and O. Brock. Differentiable particle filters: End-to-end learningwith algorithmic priors. arXiv preprint arXiv:1805.11122 , 2018.[29] X. Chen, H. Wen, and Y . Li. Differentiable particle filters through conditional normalizingflow. In 2021 IEEE 24th International Conference on Information Fusion (FUSION) , pages1–6. IEEE, 2021.[30] Y . Wang, B. Liu, J. Wu, Y . Zhu, S. S. Du, L. Fei-Fei, and J. B. Tenenbaum. Dualsmc:Tunneling differentiable filtering and planning under continuous pomdps. arXiv preprintarXiv:1909.13003 , 2019.10[31] X. Liu, G. Clark, J. Campbell, Y . Zhou, and H. B. Amor. Enhancing state estimation inrobots: A data-driven approach with differentiable ensemble kalman filters. arXiv preprintarXiv:2308.09870 , 2023.[32] X. Liu, S. Ikemoto, Y . Yoshimitsu, and H. B. Amor. Learning soft robot dynamics using dif-ferentiable kalman filters and spatio-temporal embeddings. arXiv preprint arXiv:2308.09868 ,2023.[33] Y . Bengio, A. Courville, and P. Vincent. Representation learning: A review and new per-spectives. IEEE transactions on pattern analysis and machine intelligence , 35(8):1798–1828,2013.[34] G.-H. Liu, A. Siravuru, S. Prabhakar, M. Veloso, and G. Kantor. Learning end-to-end mul-timodal sensor policies for autonomous navigation. In Conference on Robot Learning , pages249–261. PMLR, 2017.[35] M. Garnelo, D. Rosenbaum, C. Maddison, T. Ramalho, D. Saxton, M. Shanahan, Y . W. Teh,D. Rezende, and S. A. Eslami. 
Conditional neural processes. In International conference onmachine learning , pages 1704–1713. PMLR, 2018.[36] H. Van Hoof, N. Chen, M. Karl, P. van der Smagt, and J. Peters. Stable reinforcement learningwith autoencoders for tactile and visual data. In 2016 IEEE/RSJ international conference onintelligent robots and systems (IROS) , pages 3928–3934. IEEE, 2016.[37] J. Hansen, F. Hogan, D. Rivkin, D. Meger, M. Jenkin, and G. Dudek. Visuotactile-rl: Learningmultimodal manipulation policies with deep reinforcement learning. In 2022 InternationalConference on Robotics and Automation (ICRA) , pages 8298–8304. IEEE, 2022.[38] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polo-sukhin. Attention is all you need. Advances in neural information processing systems , 30,2017.[39] P. P. Liang, A. Zadeh, and L.-P. Morency. Foundations and recent trends in multimodal machinelearning: Principles, challenges, and open questions. arXiv preprint arXiv:2209.03430 , 2022.[40] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti visionbenchmark suite. In 2012 IEEE conference on computer vision and pattern recognition , pages3354–3361. IEEE, 2012.[41] F. Torabi, G. Warnell, and P. Stone. Behavioral cloning from observation. arXiv preprintarXiv:1805.01954 , 2018.[42] R. E. Skelton and M. C. Oliveira. Tensegrity Systems . Springer Nature, 2009. ISBN 978-1-4419-4491-7.[43] E. Jung, V . Ly, N. Cessna, M. L. Ngo, D. Castro, V . SunSpiral, and M. Teodorescu. Bio-inspired tensegrity flexural joints. In 2018 IEEE International Conference on Robotics andAutomation (ICRA) , pages 5561–5566. IEEE, 2018.[44] K. Kim, A. K. Agogino, and A. M. Agogino. Rolling locomotion of cable-driven soft sphericaltensegrity robots. Soft robotics , 7(3):346–361, 2020.[45] S. Ikemoto, K. Tsukamoto, and Y . Yoshimitsu. Development of a modular tensegrity robot armcapable of continuous bending. Frontiers in Robotics and AI , 8, 2021.[46] R. Ranftl, A. Bochkovskiy, and V . Koltun. Vision transformers for dense prediction. In Pro-ceedings of the IEEE/CVF International Conference on Computer Vision , pages 12179–12188,2021.11[47] W. Wang, D. Tran, and M. Feiszli. What makes training multi-modal classification networkshard? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni-tion, pages 12695–12705, 2020.[48] L. V . Jospin, H. Laga, F. Boussaid, W. Buntine, and M. Bennamoun. Hands-on bayesian neuralnetworks—a tutorial for deep learning users. IEEE Computational Intelligence Magazine , 17(2):29–48, 2022. doi:10.1109/MCI.2022.3155327.[49] Y . Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncer-tainty in deep learning. In international conference on machine learning , pages 1050–1059.PMLR, 2016.[50] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameteriza-tion trick. Advances in neural information processing systems , 28, 2015.[51] K. Lenac, J. ́Cesi ́c, I. Markovi ́c, and I. Petrovi ́c. Exactly sparse delayed state filter on liegroups for long-term pose graph slam. The International Journal of Robotics Research , 37(6):585–610, 2018.[52] J. Zhang and S. Singh. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In2015 IEEE International Conference on Robotics and Automation (ICRA) , pages 2174–2181.IEEE, 2015.[53] C.-C. Chou and C.-F. Chou. Efficient and accurate tightly-coupled visual-lidar slam. IEEETransactions on Intelligent Transportation Systems , 2021.[54] I. 
Cviši ́c, J. ́Cesi ́c, I. Markovi ́c, and I. Petrovi ́c. Soft-slam: Computationally efficient stereo vi-sual simultaneous localization and mapping for autonomous unmanned aerial vehicles. Journalof field robotics , 35(4):578–595, 2018.[55] E. Todorov, T. Erez, and Y . Tassa. Mujoco: A physics engine for model-based control. In2012 IEEE/RSJ International Conference on Intelligent Robots and Systems , pages 5026–5033. IEEE, 2012.[56] Y . Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín. robosuite: A modular simulationframework and benchmark for robot learning. In arXiv preprint arXiv:2009.12293 , 2020.[57] J. Lu, A. Liu, F. Dong, F. Gu, J. Gama, and G. Zhang. Learning under concept drift: A review.IEEE transactions on knowledge and data engineering , 31(12):2346–2363, 2018.[58] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ComputerVision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14 , pages 630–645. Springer, 2016.[59] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprintarXiv:1711.05101 , 2017.12A Details in α-MDFThis section provides a detailed overview of the previously mentioned α-MDF modules, and de-scribes differentiable Ensemble Kalman filters as the underlying DFs framework for α-MDF.A.1 Model Initialization and Embedding FunctionsAn auxiliary model Ais supplied in the filtering process to support training by starting the filtervia projecting the actual state xxxt−N:t−1from low-dimensional space to latent space. The model isimplemented using stochastic neural networks (SNNs) [48],xit−N:t−1∼A(xit−N:t−1|xxxt−N:t−1),∀i∈E, (7)where xit−N:t−1is one latent state, the latent state ensemble is obtained by sampling AforEtimes.During inference, we employ the trained sensor encoders’ output, which is the latent representationof RGB, depth, or proprioception, as the initial state to initiate the filtering process.Regarding the prediction step of α-MDF, we apply positional embedding layers (sinusoidal func-tions) [38] in the transformer process model (Eq. 3) to generate eeet−N:t−1as the embedding fortime-series data, eeet−N:t−1=fL(Xt−N:t−1)∈Rdx×(N−1). The positional embedding layer is uti-lized to label the state by index it with time t. When activating the action atin the process model,we also utilize a type embedding layer that indexes eeet−N:t−1andatwith 0 and 1, and then fedto sinusoidal functions. Subsequently, the element-wise summation of outputs obtained from theaforementioned procedures serve as input to the transformer process model for further processing.A.2 Differentiable Ensemble Kalman FilterUnlike prior proposals for differentiable filters, such as dEKF [9] and DPF [28], Differentiable En-semble Kalman Filter [6] leverages recent advancements in stochastic neural networks (SNNs) [48].Specifically, we draw inspiration from the work in [49], which established a theoretical connectionbetween the Dropout training algorithm and Bayesian inference in deep Gaussian processes. As aresult, we can use stochastic forward passes to produce empirical samples from the predictive poste-rior of a neural network trained with Dropout. Hence, for the purposes of filtering, we can implicitlymodel the process noise by sampling state from a neural network trained on the transition dynamics,i.e.,xt∼fθθθ(xt−1). 
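A minimal sketch of this sampling scheme is shown below: a transition network with dropout kept active at inference produces a different stochastic forward pass for every ensemble member, implicitly sampling process noise. The residual parameterization, layer widths, and dropout rate are assumptions of the sketch (the main method uses a transformer-style process model), not the paper's exact architecture.

import torch
import torch.nn as nn

class StochasticTransition(nn.Module):
    """Sketch of a dropout-based stochastic transition model f_theta."""
    def __init__(self, dx: int, hidden: int = 128, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dx, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, dx),
        )

    def forward(self, x_prev):               # x_prev: (E, dx) ensemble at t-1
        # Keep dropout active even at test time so each ensemble member
        # receives a different stochastic forward pass.
        self.train()
        return x_prev + self.net(x_prev)     # residual update (assumption), (E, dx)

# Propagating an ensemble of E members one step:
E, dx = 32, 256
f_theta = StochasticTransition(dx)
X_prev = torch.randn(E, dx)                  # ensemble X_{t-1}
X_pred = f_theta(X_prev)                     # predicted ensemble X_{t|t-1}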
In contrast to previous approaches [28, 9], the transition network fθθθ(·)modelsthe system dynamics, as well as the inherent noise model in a consistent fashion without imposingdiagonality.Prediction Step : Similar to α-MDF, we use an initial ensemble of Emembers to represent theinitial state distribution X0= [x10, . . . ,xE0],E∈Z+. We leverage the stochastic forward passesfrom a trained state transition model to update each ensemble member:xit|t−1∼fθθθ(xit|t−1|xit−1|t−1),∀i∈E. (8)Matrix Xt|t−1= [x1t|t−1,···,xEt|t−1]holds the updated ensemble members which are propagatedone step forward through the state space. Note that sampling from the transition model fθθθ(·)(usingthe SNN methodology described above) implicitly introduces a process noise.Update step : Given the updated ensemble members Xt|t−1, a nonlinear observation model hψψψ(·)isapplied to transform the ensemble members from the state space to observation space. Following ourmain rationale, the observation model is realized via a neural network with weights ψψψ. Accordingly,the update equations become:HtAt=HtXt−"1EEXi=1hψψψ(xit),···,1EEXi=1hψψψ(xit)#,(9) ̃yit∼s( ̃yit|yt),∀i∈E. (10)HtXtis the predicted observation, and HtAtis the sample mean of the predicted observation at t.Traditional Ensemble Kalman Filter treats observations as random variables. Hence, the ensemble13can incorporate a measurement perturbed by a small stochastic noise to reflect the error covarianceof the best state estimate [6]. In differentiable Ensemble Kalman Filter, we incorporate a Bayesiansensor encoder s(·). Sensor encoder serves to learn projections from observation space to latentspace as in Eq. 10, where ytrepresents the noisy sensor observation. Sampling from sensor encoderyields latent observations ̃Yt= [ ̃y1t,···, ̃yE)t]. The KF update step can then be continued by usingthe learned observation and predicted observation:Kt=1E−1At(HtAt)T(1E−1(HtAt)(HtAt)T+R)−1. (11)The measurement noise model Ris implemented using a multilayer perceptron (MLP), similarto the implementation in [9]. The MLP takes a learned observation ̃Ytat time tand producesa noise covariance matrix. The final estimate of the ensemble ˆXtis obtained by performing themeasurement update step, given by:ˆXt=Xt+Kt( ̃Yt−HtXt). (12)In inference, the ensemble mean ̄ xt|t=1EPEi=1xit|tis used as the updated state.A.3 BaselinesIn our study, we examine two categories of baselines: (a) DFs baselines, which consist of existingmethods such as those proposed in [9, 28, 26], and (b) sensor fusion strategies, as proposed in [22].Table 4: Dimensions pertinent to each of the robot state estimation tasks.MethodVisual Odometry UR5 Manipulation Soft RobotState Observation State Observation State Observation ActiondEKF [9] 5 2 10 10 7 7 40DPF [28] 5 2 10 10 7 7 40dPF-M-lrn [9] 5 2 10 10 7 7 40Feature Fusion [22] - - 10/13 10/13 7 7 40Unimodal [22] - - 10/13 10/13 7 7 40Crossmodal [22] - - 10/13 10/13 7 7 40α-MDF 256 256 256 256 256 256 40Dimensionality: Table 4 presents the dimensions for the state, observations, and actions utilizedfor each of the tasks. To ensure consistency, we opt for a dimension of 256 for α-MDF in all tasks,thus, enabling filtering over high-dimensional spaces. Unlike the baseline methods, which use low-dimensional state definitions, we filter over higher dimension spaces with α-MDF.Differentiable Filters: To maintain consistency in the comparison of results against the DFs base-lines, we train α-MDF with a single modality. 
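Before turning to the individual baselines, the ensemble update of the differentiable Ensemble Kalman Filter described above (Eqs. 9–12) can be summarized in a short sketch. Here the measurement noise covariance R is passed in directly, whereas the paper learns it with an MLP, and the helper name enkf_update is illustrative.

import torch

def enkf_update(X, h_psi, Y_tilde, R):
    """Sketch of the ensemble Kalman update in Eqs. (9)-(12).
    X:       (dx, E) predicted state ensemble
    h_psi:   observation model mapping (dx, E) -> (dy, E)
    Y_tilde: (dy, E) latent observations sampled from the sensor encoder
    R:       (dy, dy) measurement noise covariance (assumed given here)
    """
    E = X.shape[-1]
    HX = h_psi(X)                                    # predicted observations H_t X_t
    A = X - X.mean(dim=-1, keepdim=True)             # state anomalies A_t
    HA = HX - HX.mean(dim=-1, keepdim=True)          # observation anomalies H_t A_t (Eq. 9)
    S = HA @ HA.T / (E - 1) + R                      # innovation covariance
    K = (A @ HA.T / (E - 1)) @ torch.linalg.inv(S)   # Kalman gain K_t (Eq. 11)
    X_hat = X + K @ (Y_tilde - HX)                   # measurement update (Eq. 12)
    return X_hat, X_hat.mean(dim=-1)                 # updated ensemble and its mean

In α-MDF, this gain computation is replaced by the attention gain of Eq. (5), which operates directly on latent ensembles without an explicit observation model h_psi.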
The baselines in this category include the differen-tiable Extended Kalman filter (dEKF) [9], differentiable particle filter (DPF) [28], and the modifieddifferentiable particle filter (dPF-M-lrn) [9], which uses learned process and process noise models.For dEKF, the Jacobian matrix in the prediction step can either be learned end-to-end or supplied ifthe motion model is known. DPF employs 100 particles for both training and testing and also in-corporates an observation likelihood estimation model l. This module takes in an image embeddingand produces a likelihood that updates each particle’s weight. Unlike DPF, dPF-M-lrn implementsa learnable process noise model. It also adopts a Gaussian Mixture Model for calculating the like-lihood for all particles. It is worth noting that all the baseline methods perform Kalman filteringon low-dimensional actual state space, whereas α-MDF executes the filtering process in the latentspace.Sensor Fusion: Regarding sensor fusion baselines, we use three strategies discussed in [22], namely,Feature Fusion, Unimodal Fusion, and Crossmodal Fusion. The Feature Fusion strategy aims toprocess each modality individually and subsequently merge the modalities to generate a multimodalfeature set using neural networks, which is then used for state estimation. The Unimodal Fusion14treats each modality N ∼ (μμμM1t,ΣΣΣM1t)andN ∼ (μμμM2t,ΣΣΣM2t)as distributions and fuse two uni-modal distribution as one normally distributed multimodal distribution N ∼ (μμμt,ΣΣΣt):μμμt=(ΣΣΣM1t)−1μμμM1t+ (ΣΣΣM2t)−1μμμM2t(ΣΣΣM1t)−1+ (ΣΣΣM2t)−1,ΣΣΣt= ((ΣΣΣM1t)−1+ (ΣΣΣM2t)−1)−1, (13)where the associative property can be used for fusing more than two modalities. For CrossmodalFusion, information from one modality can be used to determine the uncertainty of the other ones,two coefficients are proposed as βββM1tandβββM2t, where each coefficient has the same dimension ofthe state, the fused distribution is:μμμt=βββM1t◦μμμM1t+βββM2t◦μμμM2tβββM1t+βββM2t,ΣΣΣt=BBBM1t◦ΣΣΣM1t+BBBM2t◦ΣΣΣM2tBBBM1t+BBBM2t, (14)where BBBMt= (βββMt)TβββMt. As mentioned in [22], each sensor encoder was independently trainedand subsequently used for end-to-end training with DFs. We adopt a similar approach, but with adifferentiable Ensemble Kalman Filter backbone in place instead. The resampling procedure fromthe fused distribution in this scenario is achieved by using the reparematerization trick [50].B Additional ExperimentsThis section presents supplementary experimental results for each task. For (1) Visual OdometryTasks, we offer full detailed experiments; however, for (2) Multimodal Manipulation Tasks and (3)Soft Robot Modeling Tasks, we concentrate mainly on ablation studies.B.1 Visual Odometry TasksIn this experiment, we investigate the performance of α-MDF on the popular KITTI Visual Odom-etry dataset [40]. We only consider RBG images as the input modality in order to make a fair com-parison with the baselines [9, 28, 26]. Following the same evaluation procedure as our baselines, wedefine the actual state of the moving vehicle as a 5-dimensional vector xxx= [x, y, θ, v, ̇θ]T, includingthe position and orientation of the vehicle, and the linear and angular velocity w.r.t. the current head-ing direction θ. The raw observation ycorresponds to the RGB camera image of the current frameand a difference image between the current frame and the previous frame, where y∈R150×50×6asshown in Fig. 8. 
The learned observation ỹ is defined as ỹ = [v, θ̇]^T, since only the relative changes of position and orientation can be captured between two frames. We use the latent state x ∈ R^256 for α-MDF.

Figure 8: KITTI visual inputs — (a) RGB images, (b) difference maps.

Data: The KITTI Visual Odometry dataset includes 11 trajectories capturing the ground truth pose (translation and rotation matrices) of a vehicle navigating urban areas at a data collection rate of approximately 10 Hz. To facilitate the learning process, we standardize the data by normalizing each dimension to have a mean of 0 and a standard deviation of 1 during training. To process the provided pose data, we convert them to quaternions to capture the minimal changes between consecutive quaternion pairs. Subsequently, the results are converted back to radians to represent the angular velocity θ̇. This conversion ensures that the angular velocity remains minimal and falls within the range of [−π, π].

B.1.1 Results

The performance of state estimation is evaluated using an 11-fold cross-validation, whereby 1 trajectory is withheld at each time. The standard KITTI benchmark metrics, namely the translational error (m/m) and rotational error (deg/m), are reported in Table 5. The error metrics are computed from the test trajectory over all subsequences of 100 timesteps, as well as all subsequences of 100,

Table 5: Result evaluations on the KITTI Visual Odometry task; m/m and deg/m denote the translational and rotational errors.
In order to pro-vide a fair comparison, we do not includeunstructured LSTM models as baselinessince prior works [26, 9] have shown thatthey do not achieve comparable results.The pre-trained sensor encoder with thesame visual inputs is used and integratedinto all the DF frameworks evaluated. Inthis experiment, the motion model of thevehicle is known, and the only unknownpart of the state is the velocities. In light of the above, we adopt a learnable process model to updatestate variables alongside an established motion model to update the ( x,y,θ) variables. While thecomputed Jacobian matrix is supplied in training and testing for dEKF, our α-MDF demonstratessignificant improvements compared to dEKF, DPF, and dPF-M-lrn. Specifically, we observed a re-duction in the translational error of approximately 88%, 83%, and 79% for Test 100/200/400/800.The results also reflect a considerable reduction in rotational error of approximately 64%, 54%, and46% as compared to each of the baselines. Our analysis of α-MDF reveals that conducting filteringon high-dimensional observations in the latent space yields better results than conducting filteringon the actual state space.B.1.2 Compare to EKFIn this section, we present a comparison of the results obtained from a non-learning ExtendedKalman Filter (EKF) and α-MDF on the KITTI Visual Odometry task. As previously mentioned,the actual state of the moving vehicle is represented by a 5-dimensional vector xxx= [x, y, θ, v, ̇θ]T,while the observation ̃yis defined as ̃y= [v, ̇θ]T. The EKF can be formulated using the providedanalytical model.xxxt=f(xxxt−1) =Axxxt−1+qtqt∼N(0,Qt), ̃y=h(xxxt) +rt=Hxxxt+rtrt∼N(0,Rt).(15)where His identity matrix H= [0 0 0 1 00 0 0 0 1 ]. The EKF prediction step is:ˆxˆxˆxt=Axxxt−1+qt,ˆΣˆΣˆΣt=FΣΣΣt−1FT+Qt. (16)16where the Jacobian of the process model can be supplied via Taylor expansion,A=1 0 0 sin θ∆t00 1 0 cos θ∆t00 0 1 0 ∆ t0 0 0 1 00 0 0 0 1,F=∂f(xxxt−1)∂xxxt−1=1 0 vcosθ∆tsinθ∆t00 1 −vsinθ∆tcosθ∆t00 0 1 0 ∆ t0 0 0 1 00 0 0 0 1.(17)The update step for EKF is:St=HˆΣˆΣˆΣtHT+Rt,Kt=ˆΣˆΣˆΣtHTS−1t,xxxt= ˆxˆxˆxt+Kt( ̃y−Hˆxˆxˆxt),ΣΣΣt= (I−KtH)ˆΣˆΣˆΣt.(18)To ensure a fair and unbiased comparison, both the Extended Kalman Filter (EKF) and α-MDFmodels are provided with the same low-dimensional observation ̃y+ε, where εis a noise sampleobtained from a Gaussian distribution N ∼ (0,[1.5 00 0.1]).Table 6: Comparison between EKF and α-MDF.MethodTest 100 Test 100/200/400/800m/m deg/m m/m deg/mEKF 0.2391 ±0.02 0.1548 ±0.02 0.2757 ±0.03 0.0623 ±0.01α-MDF 0.1642 ±0.02 0.0593 ±0.01 0.1509 ±0.01 0.0327 ±0.01Means ±standard errors.We report a comparison viatranslational and rotational errorsin Table 6. For the EKF model,the noise covariance matrices QtandRtare manually fine-tuned.Additionally, we initialize the fil-ter with ΣΣΣ0=I. For α-MDF,we keep the same framework aswhen filtering over a latent state with 256 dimensions. However, we substitute the sensor encoderfrom s1tos2(refer to Table 12). This modification allows for projecting the low-dimensional ob-servation into the latent space. Our observations indicate that α-MDF, when utilizing attention gain,reduces the error over the EKF. α-MDF has the additional benefit of automatically learning noiseprofiles during the training process, thereby eliminating the manual tuning step required by the EKF.B.1.3 6D Motion StateTo conduct a more comprehensive investigation into the visual odometry task, we extended ouranalysis by employing a larger state space. 
In this section, we consider the 6D motion of the vehiclewhere 3 different heading directions are defined namely yaw θ, pitch ψ, and roll φ. The actual stateis defined as xxx= [x, y, z, φ, ψ, θ, v 1, v2, v3, ̇φ, ̇ψ, ̇θ]T. Similar to the previous setup, α-MDF takesthe image pair at t−1andtas input, with the observation ̃yis defined as ̃y= [v1, v2, v3, ̇φ, ̇ψ, ̇θ]T.Table 7: Using 6D motion state for α-MDF.Method AxisTest 100 Test 100/200/400/800m/m deg/m m/m deg/mα-MDF Yaw θ 0.072±0.001 0.097 ±0.004 0.041 ±0.003 0.033 ±0.001α-MDF Pitch ψ 0.032±0.002 0.013 ±0.001 0.028 ±0.003 0.019 ±0.001α-MDF Roll φ 0.033±0.004 0.032 ±0.010 0.049 ±0.001 0.029 ±0.002Means ±standard errors.The comparison resultsfor translational and rota-tional errors, with an in-creased state space, arepresented in Table 7. No-tably, the performance ofα-MDF remains stablealong the yaw axis, as ob-served in comparison tothe results reported in Table 5. Additionally, we observe smaller translational and rotational er-rors on the pitch and roll axes. This observation can be attributed to the relatively minor deviationson the zaxis, even during inclined maneuvers such as ascending or descending a hill. In conclusion,when the search space expands to include a larger state space, α-MDF demonstrates comparableresults, indicating its ability to handle increased complexity and maintain performance.B.2 Multimodal Manipulation TasksTask Setup: Forα-MDF, we define the latent state x∈R256for all the manipulation tasks. Theactual state of the UR5 robot is described by xxxR, which consists of the seven joint angles ( J1-J7)17Table 8: Ablation study on UR5 manipulation task with different combination of the modalities.RGB Depth Joint F/T Joint (deg) EE (cm) Obj (cm)Task (1)✓ 2.78±0.09 1.06 ±0.01 -✓ 3.65±0.10 1.38 ±0.05 -✓ 9.53±0.20 3.22±0.14 -✓ ✓ 2.39±0.11 1.01 ±0.02 -✓ ✓ 2.69±0.01 1.09 ±0.03 -✓ ✓ 1.91±0.08 0.64 ±0.03 -✓ ✓ ✓ 2.19±0.09 0.75±0.01 -Task (2)✓ 7.49±0.06 3.81±0.17 -✓ 5.47±0.08 3.32 ±0.04 -✓ ✓ 5.24±0.04 3.04±0.01 -Task (3)✓ ✓ ✓ 2.93±0.01 2.26 ±0.02 3.26 ±0.01✓ ✓✓ 3.16±0.20 2.34±0.04 3.66±0.30✓ ✓ ✓ 1.42±0.08 0.93 ±0.01 1.47±0.02✓ ✓ ✓ 1.37±0.02 0.94±0.01 1.78 ±0.06✓ ✓ ✓✓ 1.41±0.04 0.90±0.01 1.65±0.01Means ±standard errors.and the Cartesian coordinates (x, y, z )of the robot’s end-effector. This Cartesian coordinate sys-tem is centered at the manipulation platform’s origin point (0,0,0). On the other hand, the state ofthe object being manipulated is represented by xxxO, which only includes the Cartesian coordinates(x, y, z )of the object. The input modalities for each of the three tasks differ. In task (1), input isgiven through three modalities: y1,y2, andy3. The first modality y1∈R224×224×3is a camera im-age captured from a frontal angle. The second modality y2∈R224×224×1depicts depth maps fromthe same camera view. Lastly, y3is a proprioceptive input source with dimensions R7, representingthe joint angles’ values. In this task, the proprioceptive input specifically refers to the joint anglesas the source. In task (2), input is given by only two modalities: y1andy3, but from a real-worldperspective. 
In task (3), input is received from four modalities: y1,y2,y3, andy4.y4containsthe Force/torque (F/T) sensor readings from the robot gripper, where y4∈R6, while the first twomodalities are identical to task (1).Pick up the red can Put down the Pepsi Push the red can to the left (a) Task 1 (b) Task 2 (c) Task 3 Figure 10: The multimodal manipulation experiment involves the following subtasks: (a) Task 1 utilizing RGB,depth, and joint modalities, (b) Task 2 utilizing only RGB and joint modality, and (c) Task 3 utilizing RGB,depth, joint, and Force/torque (F/T) sensor modalities. The F/T sensor is mounted on the grabber, as depictedby the orange box.Data: Data collection is conducted for both simulation with MuJoCo [55] and real-world scenarios.We record the UR5 robot operating on a random object by performing one of “pick”, “push”, and“put down” actions. We collect 2,000 demonstrations in simulation for task (1), and 100 on thereal robot for task (2), with changing the location of each object for each demonstration. For task(3), we collect 2,000 demonstrations in simulation with adding the tactile sensors. We use ABRcontrol and robosuite [56] in addition to MuJoCo to ensure rigorous dynamics in the simulator. Eachdemonstration sequence has a length of approximately 350 steps with a timestep of 0.08 seconds.An 80/20 data split is utilized for training and testing each task. It should be noted that in all tasks,we normalize the joint modality y3and apply Gaussian noise to each joint angle, drawn from thedistribution N ∼ (0, σ2I)where σ2= 0.1. We collect the F/T sensor readings directly fromMuJoCo’s native touch sensor. Moreover, the depth maps obtained from MuJoCo are with no noisetherefore can be regarded as high-fidelity data.18B.2.1 Ablation StudyIn addition to the findings presented in Section 4.2, we perform a comprehensive ablation analysisfor each manipulation task to address the question, “How does the use of multiple modalities com-pare to a subset of modalities for state estimation with differentiable filters?”. Table 8 displays theoutcome for each task with various number of modalities using MAE metric. The highest marginof error is indicated by the red shading, while the complete modality is labeled by green shadingfor each task. Interestingly, even though using all modalities can generate comparable results, incertain tasks, utilizing all modalities does not necessarily guarantee superior performance comparedto utilizing a subset of modalities. Through our experiments in Task (1), it becomes apparent thatthe optimal performance is achieved by utilizing the subset of modalities [ y1,y2], which yieldsan improvement of joint angles ( 2.19◦→1.91◦). In Task (3), we observe that diverse subsets ofmodalities lead to superior state estimation results for joint angles, EE, and the object locationsrespectively. Analysis of Table 8 indicates an important role played by the depth map y2whenconsidering all observations. This suggests that y2is treated as high-fidelity data during training,thereby contributing the most towards the final results.(5x5) (7x7) (11x11) (13x13) (15x15)Gaussian Blurring0.02.55.07.510.012.515.017.5Error rateNo Proprioception Joint (deg)No Proprioception EE (cm)All Modality Joint (deg)All Modality EE (cm)Figure 11: State estimation results are shown after introducing di-verse levels of noise to [ y1,y2]. 
The red group depicts resultsusing [ y1,y2] modality, while the blue group represents resultsusing [ y1,y2,y3] modality.Henceforth, we conduct an additionalablation analysis to ascertain whetheror not the use of a combination ofhigh-fidelity and low-fidelity sensorinputs offers a potential benefit. Asnoted during data collection, the pro-prioceptive input y3comprising jointangles is obtained via adding Gaus-sian noise and is therefore considereda low-fidelity input. Figure 11 illus-trates the scenario of using y3and notusing y3while applying distinct lev-els of Gaussian blur in the image anddepth space. Notably, without em-ploying y3, the state estimation per-formance deteriorates as the level of blur increases. On the other hand, y3- despite being classifiedas a low-fidelity modality - contributes to the final state estimation. In particular, at the highest levelof blur, incorporating y3yields a 29% improvement in joint angle estimation and a 17% improve-ment for end-effector locations.B.2.2 Sensitivity AnalysisIn this study, we analyze the effects of three key factors on the performance of α-MDF. These fac-tors are latent dimensions, the length of previous states, the number of latent ensemble members,whether using Transformers or Multilayer Perceptrons (MLPs) as process model, and with or with-out the matrix ̃M ̃M ̃M. Our investigation focuses on understanding how these factors impact the overallperformance of the α-MDF framework. In this experiment, we use robot manipulation task (1) asan example.The findings from the sensitivity analysis are summarized in Table 9. Regarding latent dimensions,we observe that a larger latent dimension does not consistently yield better error metrics. The op-timal latent dimension may vary for different tasks. Regarding the length of the previous state, wefind that using Xt−10:t−1leads to more accurate results compared to using Xt−30:t−1. This sug-gests that a longer history of states may not significantly contribute to estimating the current state.Therefore, we recommend using a smaller or medium window size for state transition models. Asfor the number of ensemble members, using a larger value for Edoes improve accuracy. How-ever, it is worth noting that increasing the number of ensemble members can result in a larger statespace, which may introduce inefficiency. In terms of utilizing Transformer-style neural networksfor process models, the results from Table 9 indicate an advantage for this approach, as indicated19Table 9: Sensitivity analysis within the α-MDF framework, focusing on three factors: latent dimen-sions, length of previous states, and the number of ensemble members.RGB Depth Joint F/T Joint (deg) EE (cm)α-MDF with 64 latents ✓ ✓ ✓ 2.54±0.06 0.87±0.04α-MDF with 256 latents ✓ ✓ ✓ 2.19±0.09 0.75±0.01α-MDF with 512 latents ✓ ✓ ✓ 2.38±0.01 0.82±0.01α-MDF with Xt−5:t−1✓ ✓ ✓ 2.19±0.09 0.75±0.01α-MDF with Xt−10:t−1✓ ✓ ✓ 2.16±0.06 0.83±0.04α-MDF with Xt−30:t−1✓ ✓ ✓ 2.72±0.03 0.79±0.07α-MDF with E= 23✓ ✓ ✓ 2.67±0.12 1.10±0.02α-MDF with E= 25✓ ✓ ✓ 2.19±0.09 0.75±0.01α-MDF with E= 27✓ ✓ ✓ 1.77±0.05 0.67±0.01α-MDF with MLPs ✓ ✓ ✓ 2.45±0.05 0.83±0.02α-MDF with no ̃M ̃M ̃M ✓ ✓ ✓ 7.25±0.05 2.23±0.09Means ±standard errors.by the green-shaded row. However, it is important to acknowledge that for certain non-complextasks, employing lightweight MLPs as process models can also be a suitable option. 
It is crucial toconsider the specific task requirements and complexity when deciding between Transformer-styleneural networks and MLPs as process models.B.2.3 Mask in Attention GainWithin the attention gain module, we incorporate a matrix ̃M ̃M ̃Mthat selectively preserves diagonalelements of the attention map. This approach is based on the assumption that within each latentvector, each index possesses probabilistic independence. To empirically verify this assumption, weconducted an additional experiment where we trained an alternative α-MDF framework. In thisframework, we deliberately excluded the matrix ̃M ̃M ̃Min the attention gain module for the specificpurpose of evaluating the effect on robot manipulation task (1). The results of our experiment arereported in Table 9 as indicated by the red-shaded row. It shows a significant increase in the MeanAbsolute Error (MAE) for joint angle estimation when the causality-enforced map ̃M ̃M ̃Mwas excludedfrom the attention gain module. Specifically, the MAE increased from 2.19◦to7.25◦. Moreover, theMAE for tracking the end effector also deteriorated from 0.75cm to 2.23cm. Based on these results,it is strongly recommended to utilize the causality-enforced map ̃M ̃M ̃Mwithin the attention gain modulefor improved performance in both joint angle estimation and end effector tracking.B.3 Soft Robot Modeling TasksLayer 1 Layer 2 Layer 3 Layer 4 Layer 5 IMU 5IMU 1MoCap Inter-layer Actuator Strut Intra-layer Actuator Cable Figure 12: The tensegrity robot features 5 flexible layers,each a tensegrity module with struts, cables, and actuators.This section presents a comprehensiveanalysis of the tensegrity robot structure,the bending motion mechanism, and per-tinent sensory information, followed by adescription of additional experimental out-comes related to this task.Preliminaries : Our research utilizes atensegrity robot arm (developed in [45])that follows a strict tensegrity structurefeaturing struts, cables (including spring-loaded and actuated cables), and five lay-ers of arm-like tensegrity structures, whichproduce continuous bending postures when exposed to external forces. The longitudinal length ismaintained by stiff cables, while the bending direction is solely determined by external forces. Wedetermine the robot’s kinematics through data from Inertial Measurement Units (IMUs), optical mo-tion capture (MoCap), and proportional pressure control valves, with each of the five struts in each20101020.51.0101010.50.00.50 250 500 750 1000 1250 1500 1750 200001EnsembleGTPredx (m) y (m) z (m) Timestep t = 13.2s t = 26.4s t = 39.6s t = 52.8s t = 66.0s Figure 13: Predicted end-effector (EE) positions and quaternion vectors qin the soft robot modeling task. Thetoprow displays the actual robot posture at the corresponding time, with the orange circle indicating the EEpositions, which are not included in the RGB modality input.layer featuring an IMU. We also record the video by placing a camera in front of the robot whilecollecting all sensory data.A soft robot’s state at tis a 7-dimensional vector xt= [x, y, z, qx,qy,qz,qw]T, indicating its po-sition and orientation relative to the base frame (layer 1’s bottom). qrepresents the robot’s posture.The system’s action is the pressure vector of its 40 pneumatic cylinder actuators ( at∈R40). Its rawobservation is comprised of 5 IMU readings ( y3t∈R30), with each IMU measuring a 6-dimensionalvector of accelerations and angular velocities relative to its location. Fig. 
12 illustrates the locationsof the IMUs on the struts (blue cubes) in each layer.Data : The complete set of modalities comprises [ y1,y2,y3], where y1∈R224×224×3representsRGB images, y2∈R224×224is synthetic depth maps which we generate from DPT repo [46]utilizing “Intel/dpt-large”, and y3∈R30is proprioceptive inputs (IMUs). The dataset is generatedby performing optical motion capture on the real tensegrity robot hand tip while randomly supplyingdesired pressure vectors to the pneumatic cylinder actuators. The action at∈R40, 5 IMU readingsy3t∈R30, and a 7-dimensional state xtare recorded, with 40-dimensional pressure vectors beingused as a control signal. A total of 12,000 trials of robot motion are collected, with each trialinvolving moving the robot from its current equilibrium posture to the next equilibrium posture byapplying the new desired pressure. All data are collected via a ROS2 network with a samplingfrequency of 30Hz and are synchronized using the “message_filters” package.21B.3.1 Ablation StudyTable 10: Ablation study on Tensegrity robot.RGB Depth IMUs EE (cm) q(101)✓ 2.07±0.03 0.31 ±0.08✓ 2.77±0.01 0.19 ±0.05✓ 8.99±0.02 0.79±0.03✓ ✓ 2.08±0.03 0.14 ±0.02✓ ✓ 1.73±0.05 0.12 ±0.02✓ ✓ 1.74±0.06 0.10±0.02✓ ✓ ✓ 1.67±0.09 0.12±0.01Means ±standard errors.In addition to the results presented inSection 4.3, we evaluate various com-binations of modalities to determinewhether an optimal subset of modali-ties can be identified to attain compara-ble outcomes without using all modal-ities during the filtering operation. Asdemonstrated in Table10, utilizing onlyone modality fails to achieve compara-ble results, with the highest accuracy(2.07cm) exclusively from employingy1(RGB). The lowest error in pos-ture estimation for the robot is obtained by leveraging [ y1,y2], showing slight improvement(0.10→0.12) over leveraging the full modalities [ y1,y2,y3]. However, the lowest MAE error for theEE position persists even when all modalities are employed. Interestingly, using solely y3results inthe highest state estimation error, which aligns with the lowest attention value visualized in Fig 14.As depicted in Fig. 14, it is evident that α-MDF prioritizes y1over other modalities. Interestingly,the attention values change upon turning off certain modalities while the system remains stable andfunctional.101020.51.0101010.50.00.502505007501000125015001750200001EnsembleGTPredx (m) y (m) z (m) Timestep 02505007501000125015001750Timestep 0100200300attentionMissing modalityStateRGBDepthIMUsNo Depth No RGBOnly IMUsFigure 14: The corresponding accumulated attention values for each modality during testing. The gray areasshow certain modalities are selected or not selected.B.3.2 Concept DriftsTo investigate the effects of concept drift and contextual changes [57] on the α-MDF framework,we incorporated a background change at inference time. In particular, image blending is used tooverlay a different RGB picture into the background. The objective of the experiment is to inferencebehavior when changes to the environment occur. We evaluated the tracking performance at variousblending levels, as illustrated in Fig. 15. The results provide an understanding of how effectivelytheα-MDF framework handles concept drift at different levels of intensity. It is noteworthy thatdespite substantially affecting the visual representation of the scene the achieved results (6.54cm)are comparable to utilizing only IMUs (8.99cm). 
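The background overlay used for this concept-drift test can be reproduced with simple alpha blending; a minimal sketch (assuming the RGB frame and the distractor background are float arrays of the same size with values in [0, 1]):

import numpy as np

def blend_background(frame, background, drift_intensity):
    """Overlay a distractor background into an RGB frame at a given blending level."""
    assert frame.shape == background.shape
    blended = (1.0 - drift_intensity) * frame + drift_intensity * background
    return np.clip(blended, 0.0, 1.0)

# Example: corrupt the RGB modality at increasing drift levels (cf. Figure 15).
# for intensity in (0.1, 0.3, 0.5, 0.7, 0.9):
#     corrupted = blend_background(rgb_frame, distractor, intensity)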
This experiment provides an early insight into the utility of multimodality for mitigating the adverse effects due to contextual changes and concept drift.

Figure 15: Concept-drift analysis by adding a background change with increasing scale in RGB space ("Sensor Drifts vs. Accuracy": EE error (cm) and q error (1e2) plotted against sensor-drift intensity from 0.1 to 0.9, with example frames at drift intensities 0.3, 0.5, and 0.7).

C Complexity and Training Details
In this section, we present an analysis of the computational complexity associated with each task by measuring the wall-clock time. Additionally, we provide comprehensive information regarding the model hyper-parameters and the training curriculum employed for the experiments. These details offer insights into the computational requirements and settings utilized for training the models in our study.

C.1 Complexity
To assess the computational complexity of the proposed α-MDF framework alongside the baseline differentiable filters (DFs), we measured the wall-clock time during inference. The results, provided in Table 11, report the computational time for each approach. In the comparison with the DF baselines, we only considered a single modality. It is worth noting that in the multimodal setting, we observed only a marginal increase in elapsed time (0.03 sec) when handling multiple types of observations. This indicates that the proposed framework, α-MDF, is efficient and capable of handling various modalities without significantly compromising computational performance.

Table 11: Wall-clock time (sec) for each task (means ± standard errors).
Method         Modality   Visual Odometry   Manipulation (1)   Manipulation (2)   Manipulation (3)   Soft Robot
dEKF [9]       1          0.0463 ±0.004     0.0469 ±0.003      0.0472 ±0.002      -                  0.0474 ±0.003
DPF [28]       1          0.0486 ±0.005     0.0515 ±0.002      0.0509 ±0.002      -                  0.0600 ±0.004
dPF-M-lrn [9]  1          0.0693 ±0.011     0.0854 ±0.001      0.0844 ±0.002      -                  0.0590 ±0.002
α-MDF          1          0.0547 ±0.002     0.0554 ±0.011      0.0524 ±0.003      -                  0.0633 ±0.003
α-MDF          ≥2         -                 0.0836 ±0.002      0.0873 ±0.004      0.0890 ±0.005      0.0910 ±0.004

C.2 Training Details
Table 12 provides an exhaustive enumeration of all learnable modules in α-MDF, which comprise three primary components: the state transition model fθ, the sensor encoders [s1(·), s2(·), · · · , sM(·)], and the attention gain (AG) module. We adopt self-attention layers with dimension 256 and 8 heads, denoted "Self Attn", in the state transition model. The cross-attention layers, denoted "Cross Attn", have dimension 32 and 4 heads in the AG module. The sensor encoders used in our approach and in all baseline models are identical, with s1 acting on image-like modalities, using ResNet18 [58] to learn high-dimensional observation representations, while s2 handles low-dimensional modalities such as joint angles. The auxiliary model A and the decoder D share a similar structure with s2, but with different numbers of neurons.
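As a concrete illustration of the AG module and the diagonal mask M̃ of Appendix B.2.3, the sketch below shows one possible realization of the masked cross-attention. It is a reconstruction under assumptions (per-index tokens, standard multi-head attention), not the released code, with dimensions matching the Cross Attn(32, 4, mask) entry in Table 12 below:

import torch
import torch.nn as nn

class MaskedAttentionGain(nn.Module):
    """Cross-attention between predicted and observation latents with a diagonal mask.

    Off-diagonal entries of the (dx x dx) attention map are removed, reflecting the
    per-index independence assumption behind M~ (Appendix B.2.3).
    """
    def __init__(self, latent_dim, attn_dim=32, heads=4):
        super().__init__()
        self.q_proj = nn.Linear(1, attn_dim)     # assumption: each latent index is one token
        self.kv_proj = nn.Linear(1, attn_dim)
        self.attn = nn.MultiheadAttention(embed_dim=attn_dim, num_heads=heads, batch_first=True)
        self.register_buffer("off_diag", ~torch.eye(latent_dim, dtype=torch.bool))  # True = masked

    def forward(self, pred_latent, obs_latent):   # both (B, dx)
        q = self.q_proj(pred_latent.unsqueeze(-1))    # (B, dx, attn_dim)
        kv = self.kv_proj(obs_latent.unsqueeze(-1))   # (B, dx, attn_dim)
        out, attn_map = self.attn(q, kv, kv, attn_mask=self.off_diag)
        return out, attn_map                          # only diagonal attention weights survive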
Note that x is the dimension of the actual state.

Table 12: α-MDF's learnable sub-modules.
fθ:  3 × SNN(256, ReLU), Positional Embedding, 3 × Self Attn(256, 8), 2 × SNN(256, ReLU), 1 × SNN(dx, -)
s1:  1 × ResNet18(h, w, ch), 2 × fc(2048, ReLU), 1 × SNN(512, ReLU), 1 × SNN(dx, -)
s2:  1 × SNN(128, ReLU), 1 × SNN(256, ReLU), 1 × SNN(512, ReLU), 1 × SNN(dx, -)
AG:  Positional Embedding, 1 × Cross Attn(32, 4, mask)
A:   1 × SNN(128, ReLU), 1 × SNN(256, ReLU), 1 × SNN(512, ReLU), 1 × SNN(1024, ReLU), 1 × SNN(dx, -)
D:   1 × fc(256, ReLU), 1 × SNN(128, ReLU), 1 × SNN(32, ReLU), 1 × SNN(x, -)
fc: fully connected layer; SNN: stochastic neural network.

During α-MDF training, we employ the curriculum outlined in Algorithm 1. Note that some tasks may require pre-training the sensor encoders before training the entire framework end-to-end. For each task, we train the α-MDF model with a batch size of 64 on a single NVIDIA A100 GPU for roughly 48 hours. For all tasks, we use the AdamW [59] optimizer with a learning rate of 1e-4.

Algorithm 1: Condition in Latent Space (training algorithm, returns the weights ω)
Input: α-MDF, dataloader {x_t}_{t-N}^{t+1}, {y_t^m}_{m=1}^{M}, {y_{t+1}^m}_{m=1}^{M}, {a_t}_{t-1}^{t+1}
Output: weights ω
while not converged do
    Call the dataloader with a random timestep t.
    for timestep t ← t to t+1 do
        e1 ← Σ_{m=1}^{M} ||D(s_m(y_t^m)) − x_t||_2^2    (according to Eq. 6)
        e2 ← L_{fθ}(X_t) + L_{e2e}(X̂_t)                 (according to Eq. 6)
        e_t ← e1 + e2
    end for
    ω ← Train(α-MDF, e_t + e_{t+1})
end while
return ω
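A compact PyTorch-style rendering of this curriculum is given below for reference. It is a sketch, not the released training code: transition_loss and filtering_loss are hypothetical stand-ins for the two loss terms of Eq. 6, and the dataloader is assumed to yield states and per-modality observations for timesteps t and t+1:

import torch

def train_alpha_mdf(alpha_mdf, encoders, decoder, dataloader, epochs, lr=1e-4):
    params = list(alpha_mdf.parameters()) + list(decoder.parameters())
    for enc in encoders:
        params += list(enc.parameters())
    opt = torch.optim.AdamW(params, lr=lr)           # AdamW with lr 1e-4, as above

    for _ in range(epochs):
        for batch in dataloader:                     # sampled at a random timestep t
            loss = 0.0
            for step in (0, 1):                      # timesteps t and t+1
                x_gt = batch["state"][step]          # ground-truth state x_t
                obs = batch["observations"][step]    # list of modalities [y_t^1, ..., y_t^M]
                # e1: reconstruct the state from every modality encoder (Eq. 6)
                e1 = sum(((decoder(enc(y)) - x_gt) ** 2).sum(-1).mean()
                         for enc, y in zip(encoders, obs))
                # e2: transition loss plus end-to-end filtering loss (Eq. 6)
                e2 = alpha_mdf.transition_loss(batch, step) + alpha_mdf.filtering_loss(batch, step)
                loss = loss + e1 + e2
            opt.zero_grad()
            loss.backward()
            opt.step()
    return alpha_mdf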
RaNAaxZfKi8 | One-shot Imitation Learning via Interaction WarpingOndrej Biza1, Skye Thompson2, Kishore Reddy Pagidi1, Abhinav Kumar1,Elise van der Pol3,∗, Robin Walters1,∗, Thomas Kipf4,∗Jan-Willem van de Meent1,5,†, Lawson L.S. Wong1,†, Robert Platt1,†1Northeastern University,2Brown University,3Microsoft Research,4Google DeepMind,5University of Amsterdam∗Equal contribution.†Equal advising.biza.o@northeastern.eduAbstract: Learning robot policies from few demonstrations is crucial in open-ended applications. We propose a new method, Interaction Warping, for one-shotlearning SE(3) robotic manipulation policies. We infer the 3D mesh of each ob-ject in the environment using shape warping, a technique for aligning point cloudsacross object instances. Then, we represent manipulation actions as keypointson objects, which can be warped with the shape of the object. We show suc-cessful one-shot imitation learning on three simulated and real-world object re-arrangement tasks. We also demonstrate the ability of our method to predict objectmeshes and robot grasps in the wild. Webpage: https://shapewarping.github.io.Keywords: 3D manipulation, imitation learning, shape warping1 IntroductionFigure 1: The Mug Tree task.In one-shot imitation learning, we are given a sin-gle demonstration of a desired manipulation behav-ior and we must find a policy that can reproduce thebehavior in different situations. A classic exampleis the Mug Tree task, where a robot must grasp amug and hang it on a tree by its handle. Given asingle demonstration of grasping a mug and hangingit on a tree (top row of Figure 1), we want to ob-tain a policy that can successfully generalize acrossobjects and poses, e.g. differently-shaped mugs andtrees (bottom row of Figure 1). This presents twokey challenges: First, the demonstration must gener-alize to novel object instances, e.g. different mugs.Second, the policy must reason in SE(3) , rather thaninSE(2) where the problem is much easier [1].To be successful in SE(3) manipulation, it is generally necessary to bias the model significantlytoward the object manipulation domains in question. One popular approach is to establish a corre-spondence between points on the surface of the objects in the demonstration(s) with the same pointson the objects seen at test time. This approach is generally implemented using keypoints , point de-scriptors that encode the semantic location of the point on the surface of an object and transfer wellbetween different novel object instances [2, 3, 4]. E.g., points on handles from different mugs shouldbe assigned similar descriptors, thereby helping to correspond handles on different mug instances. Akey challenge therefore becomes how to learn semantically meaningful keypoint descriptors. Earlywork used hand-coded feature labels [4]. More recent methods learn a category-level object descrip-tor models during a pre-training step using implicit object models [5] or point models [2].7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.This paper proposes a different approach to the point correspondence problem based on CoherentPoint Drift (CPD) [6], a point-cloud warping algorithm. We call this method Interaction Warping .Using CPD, we train a shape-completion model to register a novel in-category object instance to acanonical object model in which the task has been defined via a single demonstration. The canonicaltask can then be projected into scenes with novel in-category objects by registering the new objectsto the canonical models. 
Our method has several advantages over the previous work mentionedabove [2, 3, 4]. First, it performs better in terms of its ability to successfully perform a novel instanceof a demonstrated task, both in simulation and on robotic hardware. Second, it requires an order-of-magnitude fewer object instances to train each new object category – tens of object instances ratherthan hundreds. Third, our method is agnostic to the use of neural networks – the approach presentedis based on CPD and PCA models, though using neural networks is possible.2 Related WorkWe draw on prior works in shape warping [7, 8] and imitation learning via keypoints [4]. Shapewarping uses non-rigid point cloud registration [9], a set of methods for aligning point clouds ormeshes of objects, to transfer robot skills across objects of different shape. Our paper is the firstto use shape warping to perform relational object re-arrangement and to handle objects in arbitraryposes. Second, keypoints are a state abstraction method that reduces objects states to the poses of aset of task-specific keypoints. We use keypoints to transfer robot actions. The novelty in our workis that our interaction points are found automatically and warped together with object shape.Few-shot Learning of Manipulation Policies: Keypoint based methods have been used in few-shot learning of object re-arrangement [4, 10, 11]. These methods rely on human-annotated objectkeypoints. Follow-up works proposed learned keypoints for learning tool affordances [12, 13, 14]and for model-based RL [15]. A related idea is the learning of 2D [16] and 3D [5, 17, 18, 19]descriptor fields, which provide semantic embeddings for arbitrary points. A keypoint can then bematched across object instances using its embedding. We specifically compare to Simeonov et al.[5, 17] and show that our method requires fewer demonstrations. In separate lines of works, Panet al. [2] (also included in our comparison) tackled object re-arrangement using cross-attention [20]between point clouds and Wen et al. [21] used pose estimation to solve precise object insertion.Shape Warping and Manipulation: We use a learned model of in-class shape warping originallyproposed by Rodriguez et al. [22]. This model was previously used to transfer object grasps [7, 23,24] and parameters for skills such as pouring liquids [25, 8]. Our method jointly infers the shapeand pose of an object; prior work assumed object pose to be either given [8] or detected using aneural pose detector [24]. Gradient descent on both the pose and the shape was previously used byRodriguez et al. [7], Rodriguez and Behnke [23], but only to correct for minor deviations in pose.A second line of work transfers grasps by warping contact points [26, 27, 28, 29, 30, 22, 31, 32].Finally, point-cloud warping has been used to manipulate deformable objects [33, 34].3 BackgroundSource over targetInitial FinalFigure 2: Coherent Point Drift warping.Coherent Point Drift (CPD) Given two pointclouds, X(i)∈Rn×3andX(j)∈Rm×3, Coher-ent Point Drift (CPD) finds a displacement Wi→j∈Rn×3of the points X(i)that brings them as close aspossible (in an L2sense) to the points X(j)[6]. CPDis a non-rigid point-cloud registration method – eachpoint in X(i)can be translated independently. CPDminimizes the following cost function,J(Wi→j) =−mXk=1lognXl=1exp−12σ2X(i)l+ (Wi→j)l−X(j)k+α2φ(Wi→j), (1)2using expectation maximization over point correspondences and distances (see [6] for details). 
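For reference, a small numpy sketch that evaluates this objective for a candidate displacement is given below; the EM updates that actually minimize it are omitted, and the smoothness prior φ(W) is written in the Gaussian-kernel form tr(WᵀGW) of Myronenko and Song [6], which is an assumption about the exact form used here:

import numpy as np

def cpd_cost(X_src, X_tgt, W, sigma2, alpha, beta=2.0):
    """Evaluate the CPD objective (Eq. 1) for source X_src (n, 3), target X_tgt (m, 3),
    displacement W (n, 3), Gaussian width sigma2 and regularization weight alpha."""
    moved = X_src + W
    d2 = ((X_tgt[:, None, :] - moved[None, :, :]) ** 2).sum(-1)          # (m, n)
    data_term = -np.log(np.exp(-d2 / (2.0 * sigma2)).sum(axis=1) + 1e-12).sum()
    G = np.exp(-((X_src[:, None, :] - X_src[None, :, :]) ** 2).sum(-1) / (2.0 * beta ** 2))
    smoothness = np.trace(W.T @ G @ W)                                   # phi(W), assumed form
    return data_term + 0.5 * alpha * smoothness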
Thiscan be viewed as fitting a Gaussian Mixture Model of ncomponents to the data X(j). Here, φ(Wi→j)is a prior on the point displacements that regularizes nearby points in X(i)to move coherently,preventing the assignment of arbitrary correspondences between points in X(i)andX(j).Generative Object Modeling Using CPD: CPD can be used as part of a generative modelfor in-category object shapes as follows [7]. Assume that we are given a set of point clouds,{X(1), . . . , X(K)}, that describe Kobject instances that all belong to a single category, e.g. a set ofpoint clouds describing different mug instances. Each of these point clouds must be a full point cloudin the sense that it covers the entire object. Select a “canonical” object X(C), C∈ {1,2, ..., K}anddefine a set of displacement matrices WC→i= CPD(X(C),X(i)), i∈ {1,2, ..., K}. The choice ofCis arbitrary, but we heuristically choose the Cthat is the most representative (Appendix A.2). Now,we calculate a low rank approximation of the space of object-shape deformations using PCA. Foreach matrix WC→i∈Rn×3, let ̄WC→i∈R3n×1denote the flattened version. We form the 3n×Kdata matrix ̄WC= ̄WC→1, . . . , ̄WC→Kand calculate the d-dimensional PCA projection matrixW∈R3n×d. This allows us to approximate novel in-category objects using a low-dimensionallatent vector vnovel∈Rd, which can be used to compute a point cloudY= X(C)+ Reshape( Wvnovel), (2)where the Reshape operator casts back to an n×3matrix.Shape Completion From Partial Point Clouds: In practice, we want to be able to approximatecomplete point clouds for objects for which we only have a partial view [8]. This can be accom-plished using the generative model by solving forL(Y) =D(Y,X(partial)), (3)using gradient descent on v. Essentially, we are solving for the latent vector that gives us a recon-struction closest to the observed points. To account for the partial view, Thompson et al. [8] use theone-sided Chamfer distance [35],DX(i),X(j)=1mmXk=1minl∈{1,...,n}X(i)l−X(j)k2. (4)Note that X(i)∈Rn×3andX(j)∈Rm×3do not need to have the same number of points ( n̸=m).4 Interaction WarpingThis section describes Interaction Warping (IW) , our proposed imitation method (Figure 3). Weassume that we have first trained a set of category-level generative object models of the form de-scribed in Section 3. Then, given a single demonstration of a desired manipulation activity, weobserved point cloudv1, s1, t1, R1object shapeand poseinference v2, s2, t2, R2graspprediction placementpredictiongrasp demo placement demo inferr ed sceneTgraspTplace1st Principal Axis. 2nd Principal Axis.0.0 -1.0 1.0warping modelFigure 3: Interaction Warping pipeline for predicting grasp and placement poses from point clouds.3detect the objects in the demonstration using off-the-shelf models. For each object in the demon-stration that matches a previously trained generative model, we fit the model to the object in order toget the pose and completed shape of the object (Section 4.1 and 4.2). Next, we identify interactionpoints on pairs of objects that interact and corresponding those points with the matching points inthe canonical object models. Finally, we reproduce the demonstration in a new scene with novelin-category object instances by projecting the demonstrated interaction points onto the completedobject instances in the new scene (Section 4.3).4.1 Joint Shape and Pose InferenceIn order to manipulate objects in SE(3) , we want to jointly infer the pose and shape of an objectrepresented by a point cloud X(partial). 
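Before turning to that optimization, the background machinery from Section 3 that it builds on — the PCA warp space of Eq. 2 and the one-sided Chamfer distance of Eq. 4 — can be sketched as follows (illustrative only; the number of latent dimensions and the use of scikit-learn's PCA, whose inverse transform also adds back the mean displacement, are assumptions):

import numpy as np
from sklearn.decomposition import PCA

def fit_warp_space(displacements, d=8):
    """Fit the d-dimensional latent space of shape deformations from the flattened
    CPD displacements W_{C->i} (each of shape (n, 3)); d must not exceed the number
    of training objects minus one."""
    data = np.stack([W.reshape(-1) for W in displacements])    # (K-1, 3n)
    return PCA(n_components=d).fit(data)

def decode_shape(canonical, pca, v):
    """Eq. 2: reconstruct a complete point cloud from a latent vector v."""
    delta = pca.inverse_transform(v[None, :])[0].reshape(-1, 3)
    return canonical + delta

def one_sided_chamfer(partial, full):
    """Eq. 4: mean squared distance from each observed point to its nearest decoded point."""
    d2 = ((partial[:, None, :] - full[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()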
To do so, we warp and transform point cloud Y∈Rn×3tominimize a loss function,L(Y) =D(Y,X(partial)) +βmaxk∥Yk∥22, (5)which is akin to Equation 3 with the addition of the second term, a regularizer on the size of thedecoded object. Our implementation regularizes the object to fit into the smallest possible ball. Themain reason for the regularizer is to prevent large predicted meshes in real-world experiments, whichmight make it impossible to find collision-free motion plans.We parameterize Yas a warped, scaled, rotated and translated canonical point cloud,Y= [(X(C)+ Reshape( Wv))| {z }Equation 2⊙s]RT+t. (6)Here, X(C)is a canonical point cloud and v∈Rdparameterizes a warped shape (as describedin Section 3), s∈R3represents scale, R∈SO(3) is a rotation matrix and t∈R3representstranslation. We treat sandtas row vectors in this equation.We directly optimize Lwith respect to v, sandtusing the Adam optimizer [36]. We parameterizeRusing ˆR∈R2×3, an arbitrary matrix, and perform Gram-Schmidt orthogonalization (Algorithm5) to compute a valid rotation matrix R. This parameterization has been shown to enable stablelearning of rotation matrices [37, 38]. We run the optimization with many initial random restarts,please see Appendix A.4 for further details. The inferred v, srepresent the shape of the objectcaptured by X(partial)andR, trepresent its pose.4.2 From Point Clouds to MeshesWe infer the shape and pose of objects by warping point clouds. But, we need object meshes toperform collision checking for finding contacts between objects and motion planning (Section 4.3).We propose a simple approach for recovering the mesh of a warped object based on the vertices andfaces of the canonical object.First, we warp the vertices of the canonical object. To do so, the vertices need to be a part of X(C)because our model only knows how to warp points in X(C)(Section 3). However, these vertices(extracted from meshes made by people) are usually very biased (e.g. 90% of the vertices might bein the handle of a mug), which results in learned warps that ignore some parts of the object. Second,we add points to X(C)that are randomly sampled on the surface of the canonical mesh. X(C)is thencomposed of approximately the same number of mesh vertices and random surface samples, leadingto a better learned warping. We construct X(C)such that the first Vpoints are the vertices; note thatthe ordering of points in X(C)does not change as it is warped.Given a warped, rotated and translated point cloud Y(Equation 6), the first Vpoints are the warpedmesh vertices. We combine them with the faces of the canonical object to create a warped mesh M.Faces are represented as triples of vertices and these stay the same across object warps.4.3 Transferring Robot Actions via Interaction Points4(a) (b) (c)Figure 4: (a) Contacts between a gripper and abowl extracted from a demonstration. (b) Nearbypoints between a mug and a tree extracted froma demonstration of hanging the mug on the tree.(c) A virtual point (red) representing the branchof the tree intersecting the handle of the mug. Thered point is anchored to the mug using k nearestneighbors on the mug (four are shown in green).Consider the example of a point cloud of a mugYthat is warped using Equation 6. We can se-lect any point Yiand track it as the mug changesits shape and pose. For example, if the pointlies on the handle of the mug, we can use it toalign handles of mugs of different shapes andsizes. That can, in turn, facilitate the transferof manipulation policies across mugs. 
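The joint shape-and-pose optimization of Section 4.1 (Eqs. 5-6) that produces these trackable warped point clouds can be written compactly. The PyTorch sketch below uses a single initialization with an identity initial rotation for brevity, whereas the full procedure runs many random restarts over initial rotations (Appendix A.4):

import torch

def gram_schmidt(R_hat):
    """Build a rotation matrix from an arbitrary 2x3 matrix (Algorithm 5)."""
    u = R_hat[0] / R_hat[0].norm()
    v = R_hat[1] - (u @ R_hat[1]) * u
    v = v / v.norm()
    return torch.stack([u, v, torch.linalg.cross(u, v)])

def infer_shape_and_pose(canonical, W, partial, d, steps=100, lr=1e-2, beta=1e-2):
    """Minimize Eq. 5 over latent shape v, scale s, translation t and rotation R_hat.
    canonical: (n, 3) canonical points; W: (3n, d) PCA projection; partial: (m, 3)."""
    v = torch.zeros(d, requires_grad=True)
    s = torch.ones(3, requires_grad=True)
    t = torch.zeros(3, requires_grad=True)
    R_hat = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], requires_grad=True)
    opt = torch.optim.Adam([v, s, t, R_hat], lr=lr)

    for _ in range(steps):
        R = gram_schmidt(R_hat)
        Y = (canonical + (W @ v).reshape(-1, 3)) * s       # warped and scaled (Eq. 6)
        Y = Y @ R.T + t                                    # rotated and translated (Eq. 6)
        d2 = ((partial[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        loss = d2.min(dim=1).values.mean()                 # one-sided Chamfer (Eq. 4)
        loss = loss + beta * (Y ** 2).sum(-1).max()        # object-size regularizer (Eq. 5)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return v.detach(), s.detach(), t.detach(), gram_schmidt(R_hat).detach()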
The keyquestion is how to find the points Yirelevantto a particular task. We call these interactionpoints .Grasp Interaction Points: We define the graspinteraction points as the pairs of contact pointsbetween the gripper and the object at the pointof grasp. Let Y(A)andM(A)be the point cloudand mesh respectively for the grasped object inferred by our method (Section 4.1, 4.2). Let M(G)be a mesh of our gripper and TGthe pose of the grasp. We use pybullet collision checking to findPpairs of contact points (p(A)j, p(G)j)Pj=1, where p(A)jis on the surface of M(A)andp(G)jis on thesurface of M(G)in pose TG(Figure 4a). We want to warp points p(A)jonto a different shape, but ourmodel only knows how to warp points in Y(A). Therefore, we find a set of indices IG={i1, ..., i P},where Y(A)ijis the nearest neighbor of p(A)j.Transferring Grasps: In a new scene, we infer the point cloud of the new object Y(A′)(Eq.6). We solve for the new grasp as the optimal transformation T∗Gthat aligns the pairs of points(Y(A′)ij, p(G)j), j∈ {1, ..., P}, ij∈IG. Here, Y(A′)ijare the contact points from the demonstrationwarped to a new object instance. Note that there is a correspondence between the points in Y(A)andY(A′); shape warping does not change their order. We predict the grasp T∗G(Figure 5a) thatminimizes the pairwise distances analytically using an algorithm from Horn et al. [39].Placement Interaction Points: For placement actions, we look at two objects being placed inrelation to each other, such as a mug being placed on a mug-tree. Here, we define interaction pointsas pairs of nearby points between the two object, a generalization of contact points. We use nearbypoints so that the two objects do not have to make contact in the demonstration; e.g., the mug mightnot be touching the tree before it is released from the gripper. Similarly, the demonstration of anobject being dropped into a container might not include contacts.LetY(A)andY(B)be the inferred point clouds of the two objects. We capture the original pointclouds from a demonstration right before the robot opens its gripper. We find pairs of nearby pointswithL2distance below δ,{(p(A)∈Y(A), p(B)∈Y(B)):p(A)−p(B)< δ}. Since there mightbe tens of thousands of these pairs, we find a representative sample using farthest point sampling[40]. We record the indices of points p(B)jinY(B)asIP={i1, i2, ..., i P}.We further add p(B)jasvirtual points intoY(A)– this idea is illustrated in Figure 4 (b) and (c). Forexample, we wish to solve for a pose that places a mug on a tree, such that the branch of the treeintersects the mug’s handle. But, there is no point in the middle of the mug’s handle that we can use.Hence, we add the nearby points p(B)j(e.g. points on the branch of the tree) as virtual points q(A)jtoY(A). We anchor q(A)jusing L-nearest-neighbors so it warps together with Y(A). Specifically, foreach point p(B)jwe find Lnearest neighbors (nj,1, ..., n j,L)inY(A)and anchor q(A)jas follows,q(A)j=1LLXk=1Y(A)nj,k+ (p(B)j−Y(A)nj,k)|{z }∆j,k=p(B)j. (7)To transfer the placement, we save the neighbor indices nj,kand the neighbor displacements ∆j,k.5(a) (b)Figure 5: Predicting grasps using interaction point warping. (a) the predicted grasp for a bowl/platechanges based on the curvature of the object. 
(b) the placement of a mug on a mug tree changes asthe mug grows larger so that the branch of the tree is in the middle of the handle.Figure 6: Example of an episode of putting a mug on a tree starting from a tilted mug pose.Transferring Placements: We infer the point clouds of the pair of new objects Y(A′)andY(B′).We calculate the positions of the virtual points with respect to the warped nearest neighbors,q(A′)j=1LLXk=1Y(A′)nj,k+ ∆ j,k. (8)We then construct pairs of points (q(A′)j, Y(B′)ij), j∈ {1, ..., P}, ij∈IPand find the optimal trans-formation of the first object T∗Pthat minimizes the distance between the point pairs. Since we knowhow we picked up the first object, we can transform T∗Pinto the coordinate frame of the robot handand execute the action of placing object A′onto object B′(Figure 5b).5 ExperimentsWe evaluate both the perception and imitation learning capabilities of Interaction Warping. In Sec-tion 5.1, we perform three object re-arrangement tasks with previously unseen objects both in sim-ulation and on a physical robot. In Section 5.2, we show our system is capable of proposing graspsin a cluttered kitchen setting from a single RGB-D image.We use ShapeNet [41] for per-category (mug, bowl, bottle and box) object pre-training (required byour method and all baselines). We use synthetic mug-tree meshes provided by [17]. Our method(IW) uses 10 training example per class, whereas all baselines use 200 examples. The trainingmeshes are all aligned in a canonical pose.5.1 Object Re-arrangementSetup: We use an open-source simulated environment with three tasks: mug on a mug-tree, bowlon a mug and a bottle in a container [17]. Given a segmented point cloud of the initial scene, thegoal is to predict the pose of the child object relative to the parent object (e.g. the mug relative tothe mug-tree). A successful action places the object on a rack / in a container so that it does notfall down, but also does not clip within the rack / container. The simulation does not test graspprediction. The three tasks are demonstrated with objects unseen during pre-training. We describedhow our method (IW) uses a single demonstration in Section 4.3; to use multiple demonstration, IWuses training prediction error to select the most informative one (Appendix A.5).6# # Train. Mug on Tree Bowl on Mug Bottle in ContainerMethod Demo Meshes Upright Arbitrary Upright Arbitrary Upright ArbitraryR-NDF [17] 1 200 60.0 51.0 69.0 68.0 19.0 8.0TAX-Pose [2] 1 200 61.0 41.0 16.0 9.0 4.0 1.0IW (Ours) 1 10 86.0 83.0 82.0 84.0 62.0 60.0R-NDF [17] 5 200 88.0 89.0 53.0 46.0 78.0 47.0TAX-Pose [2] 5 200 82.0 51.0 29.0 14.0 6.0 2.0IW (Ours) 5 10 90.0 87.0 75.0 77.0 79.0 79.0R-NDF [17] 10 200 71.0 70.0 69.0 60.0 81.0 59.0TAX-Pose [2] 10 200 82.0 52.0 20.0 20.0 2.0 1.0IW (Ours) 10 10 88.0 88.0 83.0 86.0 70.0 83.0Table 1: Success rates of predicted target poses of objects in simulation. Upright and Arbitrary referto the starting pose of the manipulated object. Measured over 100 trials with unseen object pairs.Mug on Tree Bowl on Mug Bottle in Container MeanMethod Pick Pick&Place Pick Pick&Place Pick Pick&Place Pick Pick&PlaceNDF1[5] 93.3 26.7 75.0 33.3 20.0 6.7 62.8 22.2R-NDF [17] 64.0 12.0 37.5 37.5 26.7 20.0 42.7 23.2IW (Ours) 96.0 92.0 87.5 83.3 86.7 83.3 90.1 86.2Table 2: Success rates of real-world pick-and-place experiments with a single demonstration. Themanipulated object (e.g. a mug) starts in an arbitrary pose (we use a stand to get a range of poses)and the target object (e.g. 
a mug-tree) starts in an arbitrary upright pose.1The target object (e.g.the mug tree) is in a fixed pose for this experiment, as NDF does not handle target object variation.Each entry is measured over 25 - 30 trials with unseen object pairs.In our real-world experiment, we perform both grasps and placements based on a single demonstra-tion. We capture a fused point cloud using three RGB-D cameras. We use point-cloud clustering andheuristics to detect objects in the real-world scenes (details in Appendix B.1) and perform motionplanning with collision checking based on the meshes predicted by our method. We evaluate theability of each method to pick and place unseen objects with a varying shape and pose (Figure 8).We provide a single demonstration for each task by teleoperating the physical robot. We do not haveaccess to the CAD models of objects used in the real-world experiment.Result: We find that our method (IW) generally outperforms R-NDF [5] and TAX-Pose [2] onthe simulated relational-placement prediction tasks (Table 1) with 20 times fewer training objects.We chose these two baselines as recent state-of-the-art SE(3) few-shot learning methods. IW canusually predict with above 80% success rate even with 1 demo, whereas R-NDF and TAX-Pose canonly occasionally do so with 5+ demos, and often fail to reach 80% success rate at all. We use anopen-source implementation of R-NDF provided by the authors [42], which differs in performancefrom the results reported in [17]. TAX-Pose struggles with precise object placements in the bowlon mug and bottle in box tasks; it often places the pair of objects inside one another. Occasionally,adding more demonstrations decreases the success rate because some demonstrations are of lowquality (e.g. using decorative mugs with strange shapes).In real-world pick and place experiments, we demonstrate the ability of IW to solve the three objectre-arrangement tasks – mug on tree, bowl on mug and bottle in box – with unseen objects (Figure8) and variation in the starting pose of objects (Table 2). We find that NDF and R-NDF [5, 17]struggle with the partial and noisy real-world point clouds. This often results in both the pick andplace actions being too imprecise to successfully solve the task. Pre-training (R-)NDF on real-worldpoint clouds could help, but note that IW was also pre-trained on simulated point clouds. We findthat the warping of canonical objects is more robust to noisy and occluded point clouds. We showan example episode of placing a mug on a tree in Figure 6.7(a) (b) (c) (d) (e)Figure 7: Grasp prediction in the wild: (a) an RGB-D (depth not shown) image, (b) open-vocabularyobject detection and segmentation using Detic [43] and Segment Anything [44], (c) object meshespredicted by our method based on segmented point clouds (we filter out distant and small objects),(d) meshes projected into the original image, (e) grasps predicted by Interaction Warping projectedinto the original image. Figure 9 has additional examples.We use the meshes predicted by IW to perform collision checking during motion planning. 
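Both the grasp and the placement transfer of Section 4.3 reduce to the same primitive: finding the rigid transform that best aligns corresponding point pairs. A standard SVD-based closed form is sketched below; it is given for illustration and uses the Kabsch-style formulation rather than necessarily the exact algorithm of Horn et al. [39]:

import numpy as np

def best_rigid_transform(P, Q):
    """Return R (3x3) and t (3,) minimizing sum_i ||R @ P[i] + t - Q[i]||^2,
    e.g. aligning warped contact points P to the demonstrated gripper points Q."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t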
We donot perform collision checking (other than to avoid contact with the table) when using (R-)NDF asthese methods do not predict object meshes, but failures due to a collision between the robot andone of the object were infrequent in real-world (R-)NDF trials.5.2 Grasp Prediction in the WildSetup: In this experiment, we show that we can combine our method with a state-of-the-art objectdetection and segmentation pipeline to predict object meshes and robot grasps from a single RGB-Dimage. We use an open-vocabulary object detector Detic [43] to predict bounding boxes for commonhousehold objects and Segment Anything [44] to predict segmentation masks within these boundingboxes. We turn the predicted RGB-D images into point clouds and use our shape warping model topredict a mesh for each object. Finally, we use interaction warping to predict a robot grasp based ona single demonstration per each object class (details in Appendix B.2).Result: We show the results for two example scenes in Figure 7 and 9. Our perception pipeline cansuccessfully detect objects in images with cluttered backgrounds. Our warping algorithm accountsfor the variation in the shape and size of objects and our interaction warping algorithm can generalizethe demonstrated grasps to the novel objects.6 Limitations and ConclusionWe introduced Interaction Warping, a method for one-shot learning of SE(3) robotic manipulationpolicies. We demonstrated that warping of shapes and interaction points leads to successful one-shot learning of object re-arrangement policies. We also showed that we can use open-vocabularydetection and segmentation models to detect objects in the wild and predict their meshes and grasps.Limitations: Our method requires segmented point clouds of objects. We demonstrated a real-worldobject detection pipeline in Section 5.2, but it can be difficult to capture clean point clouds alignedwith image-based segmentations. The joint inference of shape and pose of an object takes around25 seconds per object on an NVIDIA RTX 2080 Ti GPU. Future work could train an additionalneural network to amortize the inference, or to predict favorable initialization. We use a PCA modelof shape warps for simplicity; this model cannot capture the details of objects, such as the detailedshape of the head of a bottle. A model with higher capacity should be used for tasks that requirehigh precision. Finally, our predicted policy is fully determined by the shape warping model and asingle demonstration; our method does not learn from its failures, but it is fully differentiable.8AcknowledgmentsThis work was supported in part by NSF 1724191, NSF 1750649, NSF 1763878, NSF 1901117, NSF2107256, NSF 2134178, NASA 80NSSC19K1474 and NSF GRFP awarded to Skye Thompson. Wewould like the thank the CoRL reviewers and area chair for their feedback.References[1] D. Wang, R. Walters, and R. Platt. So(2)-equivariant reinforcement learning. In The TenthInternational Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022 . OpenReview.net, 2022.[2] C. Pan, B. Okorn, H. Zhang, B. Eisner, and D. Held. TAX-Pose: Task-Specific Cross-PoseEstimation for Robot Manipulation. In 6th Annual Conference on Robot Learning , Nov. 2022.[3] Y . Wang, Y . Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graphcnn for learning on point clouds. Acm Transactions On Graphics (tog) , 38(5):1–12, 2019.[4] L. Manuelli, W. Gao, P. R. Florence, and R. Tedrake. KPAM: KeyPoint Affordances forCategory-Level Robotic Manipulation. In T. 
Asfour, E. Yoshida, J. Park, H. Christensen, andO. Khatib, editors, Robotics Research - The 19th International Symposium ISRR 2019, Hanoi,Vietnam, October 6-10, 2019 , volume 20 of Springer Proceedings in Advanced Robotics , pages132–157. Springer, 2019.[5] A. Simeonov, Y . Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V . Sitz-mann. Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation.In2022 International Conference on Robotics and Automation, ICRA 2022, Philadelphia, PA,USA, May 23-27, 2022 , pages 6394–6400. IEEE, 2022.[6] A. Myronenko and X. Song. Point-Set Registration: Coherent Point Drift. IEEE Trans. PatternAnal. Mach. Intell. , 32(12):2262–2275, Dec. 2010. ISSN 0162-8828.[7] D. Rodriguez, C. Cogswell, S. Koo, and S. Behnke. Transferring Grasping Skills to NovelInstances by Latent Space Non-Rigid Registration, Sept. 2018.[8] S. Thompson, L. P. Kaelbling, and T. Lozano-Perez. Shape-Based Transfer of Generic Skills.In2021 IEEE International Conference on Robotics and Automation (ICRA) , pages 5996–6002, May 2021.[9] X. Huang, G. Mei, J. Zhang, and R. Abbas. A comprehensive survey on point cloud registra-tion. CoRR , abs/2103.02690, 2021.[10] W. Gao and R. Tedrake. kPAM 2.0: Feedback Control for Category-Level Robotic Manipula-tion. IEEE Robotics and Automation Letters , 6(2):2962–2969, Apr. 2021. ISSN 2377-3766.[11] W. Gao and R. Tedrake. kPAM-SC: Generalizable Manipulation Planning using KeyPointAffordance and Shape Completion. In 2021 IEEE International Conference on Robotics andAutomation (ICRA) , pages 6527–6533, May 2021.[12] Z. Qin, K. Fang, Y . Zhu, L. Fei-Fei, and S. Savarese. KETO: learning keypoint representationsfor tool manipulation. In 2020 IEEE International Conference on Robotics and Automation,ICRA 2020, Paris, France, May 31 - August 31, 2020 , pages 7278–7285. IEEE, 2020.[13] M. Vecerik, J.-B. Regli, O. Sushkov, D. Barker, R. Pevceviciute, T. Roth ̈orl, C. Schuster,R. Hadsell, L. Agapito, and J. Scholz. S3K: Self-Supervised Semantic Keypoints for RoboticManipulation via Multi-View Consistency, Oct. 2020.[14] D. Turpin, L. Wang, S. Tsogkas, S. Dickinson, and A. Garg. GIFT: Generalizable Interaction-aware Functional Tool Affordances without Labels, June 2021.9[15] L. Manuelli, Y . Li, P. Florence, and R. Tedrake. Keypoints into the Future: Self-SupervisedCorrespondence in Model-Based Reinforcement Learning, Sept. 2020.[16] P. R. Florence, L. Manuelli, and R. Tedrake. Dense Object Nets: Learning Dense Visual ObjectDescriptors By and For Robotic Manipulation. In 2nd Annual Conference on Robot Learning,CoRL 2018, Z ̈urich, Switzerland, 29-31 October 2018, Proceedings , volume 87 of Proceedingsof Machine Learning Research , pages 373–385. PMLR, 2018.[17] A. Simeonov, Y . Du, Y .-C. Lin, A. R. Garcia, L. P. Kaelbling, T. Lozano-P ́erez, and P. Agrawal.SE(3)-Equivariant Relational Rearrangement with Neural Descriptor Fields. In 6th AnnualConference on Robot Learning , Nov. 2022.[18] H. Ryu, J. Lee, H. Lee, and J. Choi. Equivariant descriptor fields: Se(3)-equivariant energy-based models for end-to-end visual robotic manipulation learning. CoRR , abs/2206.08321,2022.[19] E. Chun, Y . Du, A. Simeonov, T. Lozano-Perez, and L. Kaelbling. Local Neural DescriptorFields: Locally Conditioned Object Representations for Manipulation, Mar. 2023.[20] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polo-sukhin. Attention is All you Need. 
In Advances in Neural Information Processing Systems ,volume 30. Curran Associates, Inc., 2017.[21] B. Wen, W. Lian, K. Bekris, and S. Schaal. You Only Demonstrate Once: Category-LevelManipulation from Single Visual Demonstration. In Robotics: Science and Systems XVIII .Robotics: Science and Systems Foundation, June 2022. ISBN 978-0-9923747-8-5.[22] D. Rodriguez, A. Di Guardo, A. Frisoli, and S. Behnke. Learning Postural Synergies for Cat-egorical Grasping Through Shape Space Registration. In 2018 IEEE-RAS 18th InternationalConference on Humanoid Robots (Humanoids) , pages 270–276, Nov. 2018.[23] D. Rodriguez and S. Behnke. Transferring Category-Based Functional Grasping Skills byLatent Space Non-Rigid Registration. IEEE Robotics and Automation Letters , 3(3):2662–2669, July 2018. ISSN 2377-3766.[24] T. Klamt, D. Rodriguez, M. Schwarz, C. Lenz, D. Pavlichenko, D. Droeschel, and S. Behnke.Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-Like Robot. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 1–8, Oct. 2018.[25] S. Brandi, O. Kroemer, and J. Peters. Generalizing pouring actions between objects usingwarped parameters. In 2014 IEEE-RAS International Conference on Humanoid Robots , pages616–621, Nov. 2014.[26] Y . Li, J. L. Fu, and N. S. Pollard. Data-Driven Grasp Synthesis Using Shape Matching andTask-Based Pruning. IEEE Trans. Visual. Comput. Graphics , 13(4):732–747, July 2007. ISSN1077-2626.[27] H. Ben Amor, O. Kroemer, U. Hillenbrand, G. Neumann, and J. Peters. Generalization ofhuman grasping for multi-fingered robot hands. In 2012 IEEE/RSJ International Conferenceon Intelligent Robots and Systems , pages 2043–2050, Oct. 2012.[28] U. Hillenbrand and M. A. Roa. Transferring functional grasps through contact warping and lo-cal replanning. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems ,pages 2963–2970, Oct. 2012.[29] R. J ̈akel, S. R. Schmidt-Rohr, S. W. R ̈uhl, A. Kasper, Z. Xue, and R. Dillmann. Learning ofPlanning Models for Dexterous Manipulation Based on Human Demonstrations. Int J of SocRobotics , 4(4):437–448, Nov. 2012. ISSN 1875-4805.10[30] T. Stouraitis, U. Hillenbrand, and M. A. Roa. Functional power grasps transferred throughwarping and replanning. In 2015 IEEE International Conference on Robotics and Automation(ICRA) , pages 4933–4940, May 2015.[31] D. Pavlichenko, D. Rodriguez, C. Lenz, M. Schwarz, and S. Behnke. Autonomous BimanualFunctional Regrasping of Novel Object Class Instances. In 2019 IEEE-RAS 19th InternationalConference on Humanoid Robots (Humanoids) , pages 351–358, Oct. 2019.[32] H. Tian, C. Wang, D. Manocha, and X. Zhang. Transferring Grasp Configurations using ActiveLearning and Local Replanning. In 2019 International Conference on Robotics and Automa-tion (ICRA) , pages 1622–1628, May 2019.[33] A. X. Lee, A. Gupta, H. Lu, S. Levine, and P. Abbeel. Learning from multiple demonstrationsusing trajectory-aware non-rigid registration with applications to deformable object manipula-tion. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) ,pages 5265–5272, Hamburg, Germany, Sept. 2015. IEEE. ISBN 978-1-4799-9994-1.[34] J. Schulman, J. Ho, C. Lee, and P. Abbeel. Learning from Demonstrations Through the Useof Non-rigid Registration. In M. Inaba and P. Corke, editors, Robotics Research , volume 114,pages 339–354. Springer International Publishing, Cham, 2016. ISBN 978-3-319-28870-3978-3-319-28872-7.[35] H. G. Barrow, J. M. 
Tenenbaum, R. C. Bolles, and H. C. Wolf. Parametric correspondence andchamfer matching: Two new techniques for image matching. In R. Reddy, editor, Proceed-ings of the 5th International Joint Conference on Artificial Intelligence. Cambridge, MA, USA,August 22-25, 1977 , pages 659–663. William Kaufmann, 1977.[36] D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization, Jan. 2017.[37] L. Falorsi, P. de Haan, T. R. Davidson, N. D. Cao, M. Weiler, P. Forr ́e, and T. S. Cohen.Explorations in homeomorphic variational auto-encoding. CoRR , abs/1807.04689, 2018.[38] J. Y . Park, O. Biza, L. Zhao, J. van de Meent, and R. Walters. Learning symmetric embeddingsfor equivariant world models. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesv ́ari, G. Niu,and S. Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July2022, Baltimore, Maryland, USA , volume 162 of Proceedings of Machine Learning Research ,pages 17372–17389. PMLR, 2022.[39] B. K. P. Horn, H. M. Hilden, and S. Negahdaripour. Closed-form solution of absolute orienta-tion using orthonormal matrices. J. Opt. Soc. Am. A , 5(7):1127–1135, Jul 1988.[40] Y . Eldar, M. Lindenbaum, M. Porat, and Y . Y . Zeevi. The farthest point strategy for progressiveimage sampling. In 12th IAPR International Conference on Pattern Recognition, ConferenceC: Signal Processing / Conference D: Parallel Computing, ICPR 1994, Jerusalem, Israel, 9-13October, 1994, Volume 3 , pages 93–97. IEEE, 1994.[41] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva,S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An Information-Rich 3D Model Reposi-tory, Dec. 2015.[42] A. Simeonov, Y . Du, L. Yen-Chen, , A. Rodriguez, , L. P. Kaelbling, T. L. Perez,and P. Agrawal. Se(3)-equivariant relational rearrangement with neural descriptor fields.https://github.com/anthonysimeonov/relational ndf, 2022.[43] X. Zhou, R. Girdhar, A. Joulin, P. Kr ̈ahenb ̈uhl, and I. Misra. Detecting twenty-thousand classesusing image-level supervision. In S. Avidan, G. J. Brostow, M. Ciss ́e, G. M. Farinella, andT. Hassner, editors, Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Is-rael, October 23-27, 2022, Proceedings, Part IX , volume 13669 of Lecture Notes in ComputerScience , pages 350–368. Springer, 2022.11[44] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead,A. C. Berg, W. Lo, P. Doll ́ar, and R. B. Girshick. Segment anything. CoRR , abs/2304.02643,2023.[45] K. He, G. Gkioxari, P. Doll ́ar, and R. B. Girshick. Mask R-CNN. In IEEE InternationalConference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017 , pages 2980–2988. IEEE Computer Society, 2017.[46] B. Cheng, I. Misra, A. G. Schwing, A. Kirillov, and R. Girdhar. Masked-attention MaskTransformer for Universal Image Segmentation, June 2022.[47] T.-Y . Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan,C. L. Zitnick, and P. Doll ́ar. Microsoft COCO: Common Objects in Context, Feb. 2015.[48] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba. Scene Parsing throughADE20K Dataset. In 2017 IEEE Conference on Computer Vision and Pattern Recognition(CVPR) , pages 5122–5130. IEEE, July 2017.[49] M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Ma-hendran, A. Arnab, M. Dehghani, Z. Shen, X. Wang, X. Zhai, T. Kipf, and N. Houlsby. 
SimpleOpen-V ocabulary Object Detection with Vision Transformers, July 2022.12A Method DetailsWe included the code for both our simulated and real-world experiments for reference. Please findit in the supplementary material under iwcode . Algorithms 1 and 2 describe our warp learning andinference.Algorithm 1 Warp LearningInput: Meshes of Kexample object instances {obj1,obj2, ...,objK}.Output: Canonical point cloud, vertices and faces and a latent space of warps.Parameters: Smoothness of CPD warping αand number of PCA components L.1:PCD =SampleS(obji)Ki=1. ▷Sample a small point cloud per object (Appendix A.1).2:C= SelectCanonical(PCD) . ▷Select a canonical object with index C(Appendix A.2).3:canon = Concat(objC.vertices ,SampleL(objC)).▷Use both vertices and surface samples.4:fori∈ {1,2, ..., K}, i̸=Cdo5: WC→i= CPD(canon ,PCD i, α). ▷Coherent Point Drift warping (Section 3).6:end for7:DW={Flatten( WC→i)}Ki=1,i̸=C. ▷Dataset of displacements of canon .8:PCA = FitPCA( DW,ncomponents = L).▷Learn a latent space of canonical object warps.9:return Canon(points = canon ,vertices = objC.vertices ,faces = objC.faces) ,PCA .Algorithm 2 Warp Inference and Mesh ReconstructionInput: Observed point cloud pcd, canonical object canon and latent space PCA .Output: Predicted latent shape vand pose T.Parameters: Number of random starts S, number of gradient descent steps T, learning rate ηand object size regularization β.1:tg=1|pcd|P|pcd|i=1pcdi.2:pcd = pcd −tg. ▷Center the point cloud.3:fori= 1toSdo4: Rinit=Random initial 3D rotation matrix.5: Initialize v=0 0 ...0, s=1 1 1, tl=0 0 0,ˆR=1 0 00 1 0.6: Initialize Adam [36] with parameters v, s, t l, rand learning rate η.7: forj= 1toTdo8: δ= Reshape( Wv).9: X= canon .points + δ. ▷Warped canonical point cloud.10: R= GramSchmidt( ˆR).11: X= (X⊙s)RTinitRT+tl. ▷Scaled, rotated and translated point cloud.12: L=1|pcd|P|pcd|kmin|X|l∥pcdk−Xl∥22. ▷One-sided Chamfer distance.13: L=L+βmax|X|l∥Xl∥22. ▷Object size regularization.14: Take a gradient descent step to minimize Lusing Adam .15: end for16:end for17:Find parameters v∗, s∗, t∗l, R∗init, R∗with the lowest final loss across i∈ {1,2, ..., S}.18:X= canon .points + Reshape( Wv∗).19:X= (X⊙s∗)(R∗init)T(R∗)T+t∗l+tg.▷Complete point cloud in workspace coordinates.20:vertices = ⟨X1, X2, ..., X |canon .vertices |⟩.▷First|canon .vertices |points of Xare vertices.21:return Mesh(vertices = vertices ,faces = canon .faces) . ▷Warped mesh.13A.1 Point Cloud SamplingWe use trimesh1to sample the surface of object meshes. The functiontrimesh.sample.sample surface even samples a specified number of points and thenrejects points that are too close together. We sample 2k points for small point clouds ( SampleS )and 10k point for large point clouds ( SampleL ).A.2 Canonical Object SelectionAmong the Kexample objects, we would like to find the one that is the easiest to warp to the otherobjects. For example, if we have ten examples of mugs, but only one mug has a square handle,we should not choose it as it might be difficult to warp it to conform to the round handles of theother nine mugs. We use Algorithm 3, which computes K∗K−1warps and picks the object thatwarps to the other K−1objects with the lowest Chamfer distance. We also note an alternative andcomputationally cheaper algorithm from Thompson et al. [8], Algorithm 4. 
This algorithm simplyfinds the object that is the most similar to the other K−1objects without any warping.Algorithm 3 Exhaustive Canonical Object SelectionInput: Point clouds of Ktraining objects ⟨X(1),X(2), ...,X(K)⟩.Output: Index of the canonical object.1:fori= 1toKdo2: forj= 1toK,j̸=ido3: Wi→j= CPD(X(i),X(j)) ▷Warp point cloud ito point cloud j.4: Ci,j=1|X(j)|P|X(j)|k=1min|X(i)|l=1X(j)k−(X(i)+Wi→j)l225: end for6:end for7:fori= 1toKdo8: Ci=PKj=1,j̸=iCi,j ▷Cumulative cost of point cloud iwarps.9:end for10:return arg minKi=1Ci ▷Pick point cloud that is the easiest to warp.Algorithm 4 Approximate Canonical Object Selection [8]Input: Point clouds of Ktraining objects ⟨X(1),X(2), ...,X(K)⟩.Output: Index of the canonical object.1:fori= 1toKdo2: forj= 1toK,j̸=ido3: Ci,j=1|X(j)|P|X(j)|k=1min|X(i)|l=1X(j)k−X(i)l224: end for5:end for6:fori= 1toKdo7: Ci=PKj=1,j̸=iCi,j8:end for9:return arg minKi=1CiA.3 Gram-Schmidt OrthogonalizationWe compute a rotation matrix from two 3D vectors using Algorithm 5 [38].1https://github.com/mikedh/trimesh14Algorithm 5 Gram-Schmidt OrthogonalizationInput: 3D vectors uandv.Output: Rotation matrix.1:u′=u/∥u∥2:v′=v−(u′·v)u′∥v−(u′·v)u′∥3:w′=u′×v′4:return Stack( u′, v′, w′)A.4 Shape and Pose Inference DetailsThe point clouds Y∈Rn×3starts in its canonical form with the latent shape vequal to zero. We setthe initial scale sto one, translation tto zero and rotation ˆRto identity,v=0 0 ...0|{z }d, s =1 1 1, t=0 0 0,ˆR=1 0 00 1 0. (9)ˆRis then transformed into R∈SO(3) using Algorithm 5. We minimize Lwith respect to v, s, tandˆRusing the Adam optimizer [36] with learning rate 10−2for 100 steps. We set β= 10−2. Wefound the optimization process is prone to getting stuck in local minima; e.g., instead of aligningthe handle of the decoded mug with the observed point cloud, the optimizer might change the shapeof the decoded mug to hide its handle. Hence, we restart the process with many different randominitial rotations and pick the solution with the lowest loss function. Further, we randomly subsampleYto 1k points at each gradient descent step – this allows us to run 12 random starting orientationsat once on an NVIDIA RTX 2080Ti GPU.A.5 Using Multiple DemonstrationsOur method transfers grasps and placements from a single demonstration, but in our simulated ex-periment, we have access to multiple demonstrations. We implement a simple heuristic for choosingthe demonstration that fits our method the best: we make a prediction of the relational object place-ment from the initial state of each demonstration and select the demonstration where our predictionis closest to the demonstrated placement. The intuition is that we are choosing the demonstrationwhere our method was able to warp the objects with the highest accuracy (leading to the best place-ment prediction). This is especially useful in filtering out demonstrations with strangely shapedobjects.B Experiment DetailsB.1 Object re-arrangement on a physical robotWe use a UR5 robotic arm with a Robotiq gripper. We capture the point cloud using three RealSenseD455 camera with extrinsics calibrated to the robot. For motion planning, we use MoveIt withROS1. To segment the objects, we use DBSCAN to cluster the point clouds and simple heuristics(e.g. height, width) to detect the object class.B.2 Grasp prediction in the wildWe use a single RealSense D435 RGB-D camera. Our goal is to be able to demonstrate any taskin the real world without having to re-train our perception pipeline. 
Therefore, we chose an open-vocabulary object detection model Detic [43], which is able to detect object based on natural lan-guage descriptions. We used the following classes: ”cup”, ”bowl”, ”mug”, ”bottle”, ”cardboard”,”box”, ”Tripod”, ”Baseball bat”, ”Lamp”, ”Mug Rack”, ”Plate”, ”Toaster” and ”Spoon”. We use15(a) (b) (c)Figure 8: Objects used for the real-world tasks: (a) mug on tree, (b) bowl (or plate) on mug and (c)bottle in box. We use a single pair of objects to generate demonstrations and test on novel objects.the predicted bounding boxes from Detic to condition a Segment Anything model [44] to get ac-curate class-agnostic segmentation masks. Both Detic2and Segment Anything3come with severalpre-trained models and we used the largest available. Finally, we select the pixels within each seg-mentation mask and use the depth information from our depth camera to create a per-object pointcloud. We use DBSCAN to clouster the point cloud and filter out outlier points. Then, we performmesh warping and interaction warping to predict object meshes and grasps.Previously, we experimented with Mask R-CNN [45] and Mask2Former [46] trained on standardsegmentation datasets, such as COCO [47] and ADE20k [48]. We found that these dataset lack thewide range of object classes we would see in a household environment and that the trained modelsstruggle with out-of-distribution viewing angles, such as looking from a steep top-down angle. Wealso experimented with an open-vocabulary object detection model OWL-ViT [49] and found it tobe sensitive to scene clutter and the viewing angle.C Additional ResultsTraining and inference times: We measure the training and inference times of TAX-Pose, R-NDFand IW (Table 3). Both R-NDF and IW take tens of seconds to either perceive the environment orto predict an action. This is because both of these methods use gradient descent with many randomrestarts for inference. On the other hand, TAX-Pose performs inference in a fraction of second butrequires around 16 hours of training for each task. Neither R-NDF nor IW require task-specifictraining. We do not include the time it takes to perform pre-training for each class of objects, whichis required by all three methods, because we used checkpoints provided by the authors of TAX-Poseand R-NDF.Additional real-world grasp predictions: We include additional examples of real-world objectsegmentation, mesh prediction and grasp prediction in Figure 9.D LimitationsLimitations of shape warping: Shape warping works well when we can smoothly warp shapesbetween object instances, but it would struggle with a changing number of object parts. For example,if we had a set of mug trees that have between one and six branches, shape warping would pick oneof these trees as the canonical object and it would not be able to change the number of branches inthe canonical tree.Further, many object-oriented point cloud based methods (like IW and NDF) are limited by thereceptive field of the point cloud they model. For example, if we wanted to perform a cooking task,both of these methods would not be able to model the entire kitchen aisle or the entire stove. 
We2https://github.com/facebookresearch/Detic3https://github.com/facebookresearch/segment-anything16Method Training Perception Grasp prediction Placement predictionTAX-Pose [2] 16.5 ±1.3 h - 0.02 ±0.01 s 0.02 ±0.01 sR-NDF [17] - - 21.4 ±0.5 s 42.5 ±1.8 sIW (Ours) - 29.6 ±0.2 s 0.01 ±0.01 s 0.003 ±0.004 sTable 3: Approximate training and inference times for our method and baselines measured over fivetrials. R-NDF and IW do not have an explicit training phase, as they use demonstrations nonpara-metrically during inference. Only IW has a perception step that is separate from the action predictionstep. We do not include the time it takes to capture a point cloud or to move the robot. Training andinference times were measured on a system with a single NVIDIA RTX 2080Ti GPU and an Inteli7-9700K CPU.(a) (b) (c) (d) (e)Figure 9: Additional examples, please see Figure 7.Figure 10: Example of mug on tree episode.17Figure 11: Example of bowl/plate on mug episode.Figure 12: Example of bottle in box episode.would have to manually crop the point cloud only to the top of the stove or the particular burner wewant to place a pan onto.Limitations of joint shape and pose inference: Joint shape and pose inference is prone to gettingstuck in local minima. For example, instead of rotating a mug to match its handle to the observedpoint cloud, our inference method might change the shape of the mug to make the handle verysmall. We address this problem by using many random starting orientations – the full inferenceprocess takes 25 seconds per object on an NVIDIA RTX 2080 Ti GPU.Pose inference might also fail when we do not see the bottom of the object. We subtract the tablefrom the point cloud, so an observed point cloud of a mug might have an opening both at the topand at the bottom. Then, the inference process might not be able to tell if the mug is right side up orupside down.18 |
_gZLyRGGuo | Learning Efficient Abstract Planning Modelsthat Choose What to PredictNishanth Kumar∗ †njk@csail.mit.eduWillie McClinton∗ †wbm3@csail.mit.eduRohan Chitnis‡ronuchit@meta.comTom Silver†tslvr@csail.mit.eduTom ́as Lozano-P ́erez†tlp@csail.mit.edu†MIT CSAIL,‡Meta AILeslie Pack Kaelbling†lpk@csail.mit.eduAbstract: An effective approach to solving long-horizon tasks in robotics do-mains with continuous state and action spaces is bilevel planning, wherein a high-level search over an abstraction of an environment is used to guide low-leveldecision-making. Recent work has shown how to enable such bilevel planningby learning abstract models in the form of symbolic operators and neural sam-plers. In this work, we show that existing symbolic operator learning approachesfall short in many robotics domains where a robot’s actions tend to cause a largenumber of irrelevant changes in the abstract state. This is primarily because theyattempt to learn operators that exactly predict all observed changes in the abstractstate. To overcome this issue, we propose to learn operators that ‘choose whatto predict’ by only modelling changes necessary for abstract planning to achievespecified goals. Experimentally, we show that our approach learns operators thatlead to efficient planning across 10 different hybrid robotics domains, including4 from the challenging BEHA VIOR-100 benchmark, while generalizing to novelinitial states, goals, and objects.Keywords: Learning for TAMP, Abstraction Learning, Long-horizon Problems1 IntroductionSolving long-horizon robotics problems in domains with continuous state and action spaces is ex-tremely challenging, even when the transition function is deterministic and known. One effectivestrategy is to learn abstractions that capture the essential structure of the domain and then leveragehierarchical planning approaches like task and motion planning (TAMP) to solve new tasks. A typi-cal approach is to first learn state abstractions in the form of symbolic predicates (classifiers on thelow-level state, such as InGripper ), then learn operator descriptions andsamplers in terms ofthese predicates [1, 2]. The operators describe a partial transition model in the abstract space, whilethe samplers enable the search for realizations of abstract actions in terms of primitive actions. Inthis paper, we focus on the problem of learning operator descriptions from very few demonstrationsgiven a set of predicates, an accurate low-level transition model, and a set of parameterized con-trollers (such as Pick(x, y, z) ) that serve as primitive actions. We hope to leverage search-then-sample bilevel planning [3, 2, 4] using these operators to aggressively generalize to a highlyvariable set of problem domain sizes, initial states, and goals in challenging robotics tasks.A natural objective for the problem of learning a good abstract model is prediction accuracy [3, 1],which would be appropriate if we were using the abstract model to make precise predictions. Instead,our objective is to find an abstract model that maximally improves the performance of the bilevel∗Equal contribution7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Ground Atoms Reachable( game5 )Reachable( book1 )MoveTo(book7) Ground Atoms (None) Ground Atoms Reachable( book4 )Reachable( book7 )Reachable( game1 )Reachable( game2 )Reachable( video4 )Figure 1: Example demonstration transition and evaluation task from BEHA VIOR-100 task . 
(Left) Visu-alization and high-level states for an example transition where the robot moves from being in range of pickinga board game box and a book to picking 2 books, 2 board game boxes and a video game box. (Right) Visual-ization of an evaluation task where the robot starts out not in range of picking any objects.planning algorithm, given the available data. The difference between these objectives is stark inmany robotics domains where robot actions can affect many relationships among the robot andobjects in the world. Making precise predictions about these state changes might require a veryfine-grained abstract model with many complex operators. Such a model would require a lot of datato learn reliably, be very slow to plan with, and also be unlikely to generalize to novel tasks.For example, consider the Sorting Books task from the BEHA VIOR-100 benchmark [5]. Thegoal is to retrieve a number of books strewn about a living room and place them on a shelf. AReachable(?object) predicate is given to indicate when the robot is close enough to an objectto pick it up. When the robot moves to pick a particular object, the set of objects that are reachablevaries depending on their specific configuration. Figure 1 shows a transition where the robot movesto put itself in range of picking up book7 , but happens to also be in range of picking up severalother items. Optimizing prediction error on this transition would yield a very complex operator thatis overfit to this specific situation and thus neither useful for efficient high-level planning nor gener-alizable to new tasks with different object configurations (e.g. the test situation depicted in the rightpanel of Figure 1, where the robot is initially not in reachable range of any objects).Op−MoveToBook −Prediction −Error :Args : ?objA ?objB ?objC ?objD ?objE?objF ?objGPreconditions : (and (Reachable ?objA)(Reachable ?objB))Add Effects : (and (Reachable ?objC)(Reachable ?objD)(Reachable ?objE)(Reachable ?objF)(Reachable ?objG))Delete Effects : (and (Reachable ?objA)(Reachable ?objB))Op−MoveToBook −Necessary −Changes :Args : ?objAPreconditions : ()Add Effects : (Reachable ?objA)Delete Effects : (∀?x. ?objA ̸=?x⇒(Reachable ?x))In this work, we take seriously the objective of learning a set of operators that optimizes overallplanning performance. We observe that operators need only model predicate changes necessary forhigh-level search . Optimizing this objective enables learned operators to yield better generalizationand faster planning. Our main contributions are (1) the formulation of an operator learning objectivebased on planning performance, (2) a procedure for distinguishing necessary changes within thehigh-level states of demonstrations, and (3) an algorithm that leverages (1) and (2) to learn opera-tors via a hill-climbing search. We test our method on a wide range of complex robotic planningproblems and find that our learned operators enable bilevel planning to solve challenging tasks andgeneralize substantially from a small number of examples.2 Problem SettingWe aim to develop an efficient method for solving TAMP problems in complex domains with long-horizon solutions given structured low-level continuous state and action spaces [4, 1, 2]. We as-sume an ‘object-oriented’ state space: a state x∈ X is characterized by the continuous proper-2ties (e.g. pose, color, material) of a set of objects. 
Actions, u∈ U , are short-horizon policieswith both discrete and continuous parameters, which accomplish a desired change in state (e.g.,Pick(block, θ)where θis a grasp transform). These parameterized controllers can be imple-mented via learning or classical approaches (e.g. motion planning). We opt for the latter in alldomains in this paper. Transitions are deterministic and a simulator f:X × U → X predictsthe next state given current state and action. The state and action representations, as well as thetransition function can be acquired by engineering or learning [6, 4, 7].PREIMAGE BACKCHAINING ((Ω,(x,u),O, g))1 n←length (x);αn←g2 fori←n−1, n−2, . . . ,0do3 si, si+1←ABSTRACT (xi),ABSTRACT (xi+1)4 ωbest←FINDBESTCONSISTENT OP(Ω,si,si+1,αi+1,O)5 ifωbest=Null thenbreak6 ωi+1←ωbest7 αi←Pi+1∪(αi+1\E+i+1)8 return (ωi+1, . . . , ω n),(αi, . . . , α n)Algorithm 1: Preimage backchaining procedure(details in (§C)).HILLCLIMBING SEARCH (D)1 Jlast← ∞ ;Jcurr←J(Ω,D)2 Ω← ∅3 while Jcurr< J lastdo4 Ω′←IMPROVE COVERAGE (Ω,D)5 ifJ(Ω′,D)< J currthen6 Ω←Ω′;Jcurr←J(Ω,D)7 Ω′←REDUCE COMPLEXITY (Ω,D)8 ifJ(Ω′,D)< J currthen9 Ω←Ω′;Jcurr←J(Ω,D)10 Jlast←Jcurr;Jcurr←J(Ω,D)11 return ΩAlgorithm 2: HILLCLIMBING SEARCHlearns operators that optimize objective J.Since planning in this low-level space can be expensive and unreliable [8, 1, 2], we pursue a search-then-sample bilevel planning strategy in which search in an abstraction is used to guide low-levelplanning. We assume we are given a set of predicates Ψto define discrete properties of and relationsbetween objects (e.g., On) via a classifier function that outputs true or false for a tuple of objects ina low-level state. Predicates induce a state abstraction ABSTRACT :X → S where ABSTRACT (x)isthe set of true ground atoms inx(e.g.,{HandEmpty(robot), On(b1, b2) , . . .}). The low-level state space, action space, and simulator together with the predicates comprise an environment .To enable bilevel planning for an environment, we must learn a partial abstract transition model overthe predicates in the form of symbolic operators.We consider a standard learning-from-demonstration setting where we are given a set of trainingtasks with demonstrations, and must generalize to some held-out test tasks. A taskT∈ T ischaracterized by a set of objects O, an initial state x0∈ X, and a goal g. The goal is a set of groundatoms and is achieved inxifg⊆ABSTRACT (x). A solution to a task is a sequence of actionsu= (u1, . . . , u n)that achieve the goal ( g⊆ABSTRACT (xn), and xi=f(xi−1, ui)for1≤i≤n)from the initial state. Each environment is associated with a task distribution . Our objective is tomaximize the likelihood of solving tasks from this distribution within a planning time budget.3 Operators for Bilevel PlanningSymbolic operators, defined in PDDL [9] to support efficient planning, specify a transition func-tion over our state abstraction. Formally, an operator ωhasarguments v,preconditions P,addeffects E+,delete effects E−, and a controller C. The preconditions and effects are each expres-sions over the arguments that describe conditions under which the operator can be executed (e.g.Reachable(?book1) ) and resulting changes to the abstract state (e.g. Holding(?book1) )respectively. The controller is a policy (from the environment’s action space), parameterized bysome of the discrete operator arguments, as well as continuous parameters whose values will bechosen during the sampling phase of bilevel planning. 
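As a concrete, purely illustrative rendering of this operator structure, the sketch below represents a lifted operator with arguments, preconditions, add and delete effects, and an associated controller, and grounds it by substituting objects for its arguments. The class and field names are hypothetical and are not taken from the implementation accompanying the paper.

from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

Atom = Tuple[str, Tuple[str, ...]]  # e.g., ("Reachable", ("?book",))

@dataclass(frozen=True)
class LiftedOperator:
    arguments: Tuple[str, ...]         # typed variables, e.g., ("?book",)
    preconditions: FrozenSet[Atom]     # P, expressed over the arguments
    add_effects: FrozenSet[Atom]       # E+
    delete_effects: FrozenSet[Atom]    # atomic part of E-
    controller: str                    # name of the parameterized controller, e.g., "MoveTo"

    def ground(self, substitution: Dict[str, str]) -> "GroundOperator":
        # Substitute concrete objects for the operator's variables.
        def sub(atoms):
            return frozenset((p, tuple(substitution.get(a, a) for a in args)) for p, args in atoms)
        return GroundOperator(
            objects=tuple(substitution[v] for v in self.arguments),
            preconditions=sub(self.preconditions),
            add_effects=sub(self.add_effects),
            delete_effects=sub(self.delete_effects),
            controller=self.controller,
        )

@dataclass(frozen=True)
class GroundOperator:
    objects: Tuple[str, ...]
    preconditions: FrozenSet[Atom]
    add_effects: FrozenSet[Atom]
    delete_effects: FrozenSet[Atom]
    controller: str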
A substitution of arguments to objects in-duces a ground operator ω=⟨P, E+, E−, C⟩. Given a ground operator ω=⟨P, E+, E−, C⟩, ifP⊆s, then the successor abstract state s′is(s\E−)∪E+where \is set difference. We useF(s, ω) =s′to denote this (partial) abstract transition function.3Figure 2: Environments . Visualizations for Screws ,Satellites ,Painting ,Collecting Cans , and Sorting Books .In previous work on operator learning for bilevel planning [3, 2], operators are connected to theunderlying environment via the following semantics: if F(s, ω) =s′, then there exists some low-level transition (x, u, x′)where ABSTRACT (x) =s,ABSTRACT (x′) =s′, andu=C(θ)for some θ.These semantics embody the “prediction error” view in that ABSTRACT must predict the entire nextstate ( ABSTRACT (x′) =s′). Towards implementing the alternative “necessary changes” view, wewill instead only require the abstract state output by Fto be a subset of the atoms in the next state,that is, ABSTRACT (x′)⊆s′. Thus, we permit our operators to have universally quantified single-predicate delete effects (e.g., ∀?v. Reachable(?v) ). Intuitively, Fis now responsible onlyfor predicting abstract subgoals to guide low-level planning, rather than predicting entire successorabstract states. Under these new semantics, Fencodes an abstract transition model where the outputatoms s′represent the set of possible abstract states where these atoms hold.Given operators, we can solve new tasks via search-then-sample bilevel planning (decribed in detailin appendix (§A) and previous work [10, 2, 3, 4, 1]). Given the task goal g, initial state x0, andcorresponding abstract state s0=ABSTRACT (x0), bilevel planning uses AI planning techniques(e.g., Helmert [11]) to generate candidate abstract plans . An abstract plan is a sequence of groundoperators (ω1, . . . , ωn)where g⊆snandsi=F(si−1, ωi)for1≤i≤n. The correspondingabstract state sequence (s1, . . . , s n)serves as a sequence of subgoals for low-level planning. Thecontroller sequence (C1, . . . , Cn)provides a plan sketch , where all that remains is to refine the planby “filling in” the continuous parameters. We sample continuous parameters θfor each controllerstarting from the first and checking if the controller achieves its corresponding subgoal si. If wecannot sample such parameters within a constant time budget, we backtrack and resample, andeventually even generate a new abstract plan. We adapt previous work [2] to learn neural networksamplers after learning operators (see (§D) in the appendix for more details).4 Learning Operators from DemonstrationsTo enable efficient bilevel planning, operators must generate abstract plans corresponding to plansketches that are likely to be refinable into low-level controller executions. Given demonstrationsDconsisting of a goal g, action sequence u, corresponding state sequence x, and set of objects O,we wish to find the simplest set of operators such that for every training task, abstract planning withthese operators is able to generate the plan sketch corresponding to the demonstration for this task.Specifically, we minimize the following objective:J(Ω)≜(1−coverage (D,Ω)) + λcomplexity (Ω) (1)where coverage is the fraction of demonstration plan sketches that planning with operator set Ωis able to produce and complexity is the number of operators. 
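Building on the sketch above, the hypothetical helpers below show the partial transition F(s, ω) = (s \ E−) ∪ E+, with universally quantified delete effects handled by predicate name, together with the learning objective of Equation 1. For brevity the quantified predicates are passed as a separate argument rather than stored on the operator, and the λ value is a placeholder; none of this is meant to match the paper's actual code.

def apply_ground_operator(abstract_state, op, quantified_delete_predicates=frozenset()):
    # Partial abstract transition F(s, op) = (s \ E-) U E+.
    # A universally quantified delete effect removes every atom of the named
    # predicate; atoms the operator still commits to are re-added via E+.
    # Returns None if the preconditions do not hold in `abstract_state`.
    if not op.preconditions <= abstract_state:
        return None
    deleted = set(op.delete_effects)
    deleted |= {atom for atom in abstract_state if atom[0] in quantified_delete_predicates}
    return frozenset((abstract_state - frozenset(deleted)) | op.add_effects)

# e.g., with quantified_delete_predicates={"Reachable"}, a MoveTo-style operator like the
# "necessary changes" one from the Sorting Books example deletes every Reachable atom and
# re-adds only the one for its argument object through its add effects.

def operator_learning_objective(coverage_fraction, num_operators, lam=1e-3):
    # J(Omega) = (1 - coverage) + lambda * complexity (Equation 1); lam is a placeholder value.
    return (1.0 - coverage_fraction) + lam * num_operators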
In our experiments, we set λto besmall enough so that complexity is never decreased at the expense of coverage.We will approach this optimization problem using a hill-climbing search (Algorithm 2), which ben-efits from having a more fine-grained interpretation of coverage defined over transitions within atrajectory instead of entire trajectories. To develop this definition, we will first introduce the notionofnecessary atoms .Definition 4.1. Given an operator set Ω, and an abstract plan (ω1, . . . , ω n)in terms of Ωthatachieves goal g, the necessary atoms at step nareαn≜g, and at step 0≤i < n areαi≜Pi+1∪(αi+1\E+i+1).4In other words, an atom is necessary if it is mentioned either in the goal, or the preconditions of afuture operator in a plan. If operators in Ωmodel each of the necessary atoms for every timestep,then these operators must be sufficient for producing the corresponding abstract plan via symbolicplanning. Moreover, each necessary atom set is minimal in that no atoms can be removed withoutviolating either the goal or a future operator’s preconditions. Thus, we need only learn operators tomodel changes in these necessary atoms.Given necessary atoms, we can now define what it means for some sequence of operators to beconsistent with some part of a demonstration.Definition 4.2. An abstract plan suffix (ωk, . . . , ωn)using operators from set Ωand with necessaryatoms (αk−1, . . . , α n)isconsistent with a demonstration (x,u)for goal gand timesteps 1≤k≤n,where x= (x0, . . . , x n)andu= (u1, . . . , u n)if for k≤i≤n, (1) the states are consistent: αi⊆F(ABSTRACT (xi−1), ωi)⊆ABSTRACT (xi); and (2) the actions are consistent: if the controller forωiisCi, then ui=Ci(θ)for some θ.Note that consistency is defined at the level of individual transitions within a demonstration. Thus,it is useful even if Ωdoes not contain sufficient operators to produce a plan that is consistent with anentire demonstration. Given this notion, we can now define coverage for a particular demonstration.Definition 4.3. Thedemonstration coverage ,η(Ω,(x,u, g,O))of demonstration (x,u)of length nfor goal gand objects Oby operators Ω, is a number from 0tonindicating the length of the longestconsistent suffix for the demonstration that can be generated with Ωground with objects O.Now, we define a fine-grained notion of coverage for optimization via hill-climbing:coverage (Ω,D) = Σ (x,u,g,O)∈Dη(ω,(x,u, g,O))|u|(2)Computing coverage implicitly requires that we compute an abstract plan suffix, as well as nec-essary atoms (in order to compute η). To do this efficiently, we will leverage the fact that we knowthe necessary atoms for the final timestep of every demonstration (they are simply the goal atoms!),and compute necessary atoms and plan suffixes for previous timesteps via a preimage backchainingprocedure that starts at the goal and works backwards given operators in Ω. Pseudocode for thisalgorithm is shown in Algorithm 1. We pass backward through each transition in a demonstrationtrajectory while attempting to choose an operator in Ωto cover (per Definition 4.2) the transition andthen updating the transition’s necessary atoms using Definition 4.1. If there are multiple operatorsinΩthat all satisfy the conditions of Definition 4.2, we use a heuristic that selects the operator thatbest matches this particular transition (see (§C) in the appendix for details).Hill-Climbing Search. We perform a hill-climbing search on J(Ω)(Equation 1), starting with theempty operator set. 
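As a small worked example of Definition 4.1, the function below walks backward through a ground plan and accumulates the necessary atoms at each step, using the illustrative operator representation from the earlier sketches (goal and atoms are sets of ground atoms; the plan is a sequence of ground operators).

def necessary_atoms(goal, ground_plan):
    # Backward pass of Definition 4.1:
    #   alpha_n = g, and alpha_i = P_{i+1} | (alpha_{i+1} - E+_{i+1}) for 0 <= i < n.
    # Returns the list [alpha_0, ..., alpha_n].
    alphas = [frozenset(goal)]
    for op in reversed(ground_plan):
        alphas.append(frozenset(op.preconditions | (alphas[-1] - op.add_effects)))
    return list(reversed(alphas))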
We have two search-step generators: the I MPROVE COVERAGE generator picksa demonstration that is not completely covered by the current operator set and proposes a changethat will increase the coverage term of the objective, and the R EDUCE COMPLEXITY generatorsimply proposes to delete operators to decrease the complexity term.IMPROVE COVERAGE generator. Given a candidate operator set Ω, preimage backchaining is usedto identify an abstract plan suffix of length kwith corresponding necessary atoms (αk−1, . . . , α n)that is consistent with a demonstration (x,u)for goal g, where x= (x0, . . . , x n)andu=(u1, . . . , u n), and where 1≤k < n . Since k < n , we know the transition (xk−2, uk−1, xk−1)with necessary atoms αk−1is not covered by any operator in Ω. To improve the coverage termin our objective, we wish to generate at least one new operator ωto cover this transition withoutuncovering any others. Definition 4.2 gives us the following constraints on the new operator ωwithgrounding ω:1. The controller Cmust match uk−1.2.P⊆sk−2, i.e., the preconditions must be satisfied in the previous state.3.((sk−2\sk−1)∩αk−1)⊆E+, i.e., the add effects must include the added necessary atoms.4.F(sk−2, ω)⊆sk−1, i.e., the delete effects should remove all atoms in sk−2but not in sk−1.5.vmust at least include one variable (with corresponding type) for each of the objects in 1–4.5Environment Ours LOFT LOFT+Replay CI CI + QE GNN Shoot GNN MFPainting 98.80 (0.42) 0.00 (0.00) 98.20 (0.91) 99.00 (0.31) 93.40 (1.47) 36.00 (3.39) 0.60 (0.29)Satellites 93.40 (3.52) 0.00 (0.00) 34 (5.28) 91.60 (2.68) 95.20 (1.30) 40.40 (3.04) 11.00 (1.44)Cluttered 1D 100.00 (0.00) 17.20 (5.46) 0.00 (0.00) 17.40 (5.52) 92.80 (0.90) 98.60 (0.63) 98.60 (0.63)Screws 100.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 50.00 (15.81) 95.60 (3.05) 95.80 (3.07)Cluttered Satellites 95.20 (0.75) 0.00 (0.00) 0.00 (0.00) 1.60 (0.61) 6.00 (1.57) 4.80 (1.27) 0.00 (0.00)Cluttered Painting 99.20 (0.42) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 4.60 (1.16) 0.00 (0.00)Opening Presents 100.00 (0.00) 0.00 (0.00) - 83.00 (10.77) 83.00 (10.77) 28.00 (5.96) 0.00 (0.00)Locking Windows 100.00 (0.00) 0.00 (0.00) - 90.00 (4.47) 88.00 (4.42) 0.00 (0.00) 0.00 (0.00)Collecting Cans 77.00 (11.75) 0.00 (0.00) - 0.00 (0.00) 1.00 (0.94) 0.00 (0.00) 0.00 (0.00)Sorting Books 69.00 (11.61) 0.00 (0.00) - 0.00 (0.00) 0.00 (0.00) 0.00 (0.00) 0.00 (0.00)Table 1: Percentage success rate on test tasks for all domains. Note that BEHA VIOR domains use trainingand testing sizes of 10 tasks, while all other domains use 50 tasks. The percentage standard error is shown inbrackets. All entries within one standard error from the best mean success rate are bolded.These constraints exactly determine ω’s controller, but there are many possible choices for the othercomponents that would satisfy 2–5. One straightforward choice would be to optimize predictionerror over the single transition (sk−2, uk−1, sk−1)and necessary atoms αk−1, but this would leadto a hyper-specific operator.Instead, we construct a more general operator that covers this transition as well as other transitionsin the data set. Specifically, we set ω’s ground controller C=uk−1to satisfy condition 1 above,and ground add effects E+= ((sk−2\sk−1)∩αk−1)to satisfy condition 3. We then constructthe controller Cand add effects E+by ‘lifting’: i.e, replacing all objects that appear in CandE+by variables of the same type. We record the substitution necessary between the variables andobjects for this transition as δτ. 
The operator’s arguments are then the union of the variables thatappear here. To find preconditions and delete effects that satisfy conditions 2 and 4, we will useeach substitution δτassociated with each transition in Dωtolifteach transition τ, replacing allobjects with arguments while discarding any atom with objects not in δτ. Following Chitnis et al.[2], the preconditions are then: P←Tτ=(si,·,·)∈Dωδτ(si). Similarly, the atomic delete effects are:E−◦←Sτ=(si,·,si+1)∈Dωδτ(si+1)\δτ(si). Finally, the operator’s quantified delete effects are setto be the set of predicates that appear in the current predicted abstract state, but not the observedabstract state: F(si, ω)\si−1. The final delete effects are the union of the atomic and quantified.Some additional bookkeeping is necessary once this new operator has been induced to guarantee thatthis generator decreases the coverage term (we must recompute preconditions and delete effects forallcurrent candidate operators, since their respective transition datasets might have changed, etc.),and we detail these, along with a proof of termination, in the appendix (§B.1).REDUCE COMPLEXITY generator. Given a candidate operator set Ω, we simply delete a singleoperator from the set and then recompute preconditions and delete effects for all remaining operatorsas described above (since now, their associated transition datasets Dωmight change).5 Experimental ResultsOur experiments are designed to empirically answer the following questions:•Q1. Does our approach learn operators capable of generalizing to novel goals and situationswith different objects than seen during training?•Q2.How effective is bilevel planning using our learned operators vs. existing baselines?•Q3.How does the performance of our approach vary with the amount of training data?For additional analyses, including learning efficiency and scalability, complexity of learned opera-tors, task planning efficiency, and ablations of our approach, see (§G) and (§H) in the appendix.Environments. We provide high-level environment details, with specifics in the appendix (§E). Wedeliberately include a few simple environments to highlight differences between methods.•Painting : a challenging robotics environment used by Silver et al. [3, 1]. A robot in 3D mustpick, wash, dry, paint, and then place various objects.•Satellites : a 2D environment inspired by the Satellites domain from Bacchus [12], but aug-mented with collisions and realistic sensing constraints. See appedix (§E) for further details.6•Cluttered 1D : a simple environment where the robot must move and collect objects clutteredalong a 1D line. An object can only be collected if it is reachable.•Screws : a 2D environment where the agent controls a magnetic crane and must pick specificscrews from clutter to place them into a receptacle.•Cluttered Satellites : same as “Satellites”, except readings must be taken from multiple objects.•Cluttered Painting : same as Painting, except the robot can be next to many objects at a time.•BEHAVIOR-100 Tasks : a set of complex, long-horizon household robotic tasks simulated withrealistic 3D models of objects and homes [5]. In Opening Presents , the robot must open anumber of boxes. In Locking Windows , the robot must close a number of open windows. InCollecting Cans , the robot must pick up a number of empty soda cans strewn amongst the houseand throw them into a trash can. In Sorting Books , the robot must find books in a living roomand place them each onto a cluttered shelf.Approaches. 
We describe the approaches we compare against, with details in the appendix (§F).•Cluster and Intersect (CI) [1]: Induces a different operator for every set of unique lifted effects.•LOFT [3]: Optimizes prediction error, but with a more general class of preconditions. Weinclude a version that only learns from demonstrations (LOFT) and a version that collects ad-ditional transitions (2500) in each domain (LOFT+Replay) as in the original work. Collectingreplay data in BEHA VIOR was intractable due to the size and complexity of the simulation.•Cluster and Intersect with Quantified Delete Effects (CI + QE) : A variant of Cluster and Inter-sect that is capable of learning operators that have both atomic and quantified delete effects. Itfirst runs Cluster and Intersect, and then induces quantified delete effects by optimizing predic-tion error via a hill-climbing search.•GNN Shooting : Trains an abstract GNN policy with behavioral cloning and uses it for trajectoryoptimization. This is inspired by a baseline from [1], see (§F) in the appendix for details.•GNN Model-Free : Uses the same trained GNN as above, but directly executes the policy.Experimental Setup. For the non-BEHA VIOR environments, we run all methods on up to 50training demonstrations. For BEHA VIOR environments, we use 10, since collecting training datain these complex environments is very time and memory intensive. Training demonstrations werecollected by a hand-coded ‘oracle’ policy that was specific to each environment. All experimentswere conducted on a quad-core Intel Xeon Platinum 8260 processor with a 192GB RAM limit, andall results are averaged over 10 random seeds. For each seed, we sample a set of evaluation tasksfrom the task distribution T.The evaluation tasks have more objects, different initial states, andmore atoms in the goal than seen during training. Our key measure of effective bilevel planningis success rate within a timeout (10 seconds for non-BEHA VIOR environments, 500 seconds for 3BEHA VIOR environments, and 1500 seconds for “Sorting Books”: the most complex environment).Figure 3: Data-efficiency of main approach.Results and Analysis. Table 1 shows the success rate achieved on testing tasks for all our envi-ronments and methods. Our method solves many more held-out tasks within the timeout than thebaselines, demonstrating its efficacy even in environments where an assumption on downward refin-ablity might not hold. In our simpler environments (Painting and Satellites; Opening Presents andLocking Windows for BEHA VIOR), the controllers cause a small number of changes in the abstractstate, and baseline approaches (CI, CI + QE) perform reasonably well. In all the other environments,the controllers cause a large number of changes in the abstract state, and the performance of opera-tor learning baselines degrades substantially, though GNN-baselines perform well on Cluttered 1D7and Screws. Despite the increased complexity, our approach learns operators that are resilient to thelack of downward refinability, enabling bilevel planning to achieve a substantial test-time successrate under timeout. The performance in Collecting Cans and Sorting Books is especially notable; allbaselines achieve a negligible success rate, while our approach achieves a near 70% rate on testingtasks. Upon investigation, we found that failures are due to local minima during learning for certainrandom seeds. Figure 3 shows our method’s testing success rate as a function of the size of its train-ing set for our non-BEHA VIOR environments. 
Our approach’s performance improves with moredata, though as the dataset size increases, the impact of additional data on performance reduces.6 Related WorkOur work continues a long line of research in learning operators for planning [13, 14, 15, 16, 17,18, 19, 20, 21]; see Arora et al. [22] for a recent survey. This previous work focuses on learningoperators from purely discrete plan traces in the context of classical (not bilevel) planning.Other work has considered learning symbolic planning models in continuous environments [23, 24,25, 26, 27, 28, 29, 30, 31], but typically the interface between symbolic planner and low-level poli-cies assumes downward refinability, which requires that every valid high-level plan must be refinableinto low-level steps [32], a critical assumption we do not make. Therefore, our efforts are most di-rectly inspired by LOFT [3] and learning Neuro-Symbolic Relational Transition Models [2], whichoptimize prediction error to learn operators for bilevel planning. Like our method, LOFT performsa search over operator sets, but commits to modeling all effects seen in the data and searches onlyover operator preconditions. We point out the limitations of optimizing prediction error in com-plex robotics environments, and take inspiration from Silver et al. [1] who show that optimizing forplanning efficiency can enable good predicate invention. We include LOFT and Cluster and Intersect(used by [2, 1]) as baselines representative of these previous methods in our experiments.The bilevel planner used in our work can be viewed as a search-then-sample solver for (TAMP) [33,34, 10]. This bilevel strategy allows for fast planning in continuous state and action spaces, whileavoiding the downward refinability assumption. To that end, our work also contributes to a recentline of work on learning for TAMP. Other efforts in this line include sampler learning [35, 36, 37, 38],heuristic learning [39, 40, 41], and abstract plan feasibility estimation [42, 43].7 Limitations, Conclusions and Future WorkOur method assumes that the provided predicates Ψcomprise a good state abstraction given thetask distribution for operator learning. With random or meaningless predicates, our approach islikely to learn complex operators such that planning with these is unlikely to outperform non-symbolic behavior-cloning baselines. Fortunately, prior work [1] suggests such ‘good’ predicatescan be learned from data. There is no guarantee that our overall hill-climbing procedure will con-verge quickly; the I MPROVE COVERAGE successor generator is especially inefficient and has a highworst-case computational complexity. In practice, we find its learning time to be comparable orfaster than baseline methods in our domains (see (§G) in the appendix), though this may not holdin more complex domains where a very large (e.g. greater than 100) number of operators need tobe learned. Additionally, our I MPROVE COVERAGE successor generator is rather complicated, andthere is perhaps a simpler and more efficient way to optimize the coverage term in our objective.Overall, we proposed a new objective for operator learning that is specifically tailored to bilevelplanning, and a search-based method for optimizing this objective. Experiments confirmed that op-erators learned with our new method lead to substantially better generalization and planning thanthose learned by optimizing prediction error and other reasonable baselines. 
Important next stepsinclude learning all components necessary to enable bilevel planning, including the predicates [1],controllers [4], and object-oriented state-space, as well as handling stochasticity and partial observ-ability. We believe that pursuing these steps will yield important progress toward solving sparse-feedback, long-horizon decision-making problems at scale via modular learning.8AcknowledgementsWe gratefully acknowledge support from NSF grant 2214177; from AFOSR grant FA9550-22-1-0249; from ONR MURI grant N00014-22-1-2740; from ARO grant W911NF-23-1-0034; from theMIT-IBM Watson Lab; from the MIT Quest for Intelligence; and from the Boston Dynamics Ar-tificial Intelligence Institute. Nishanth, Willie, and Tom are supported by NSF Graduate ResearchFellowships. Any opinions, findings, and conclusions or recommendations expressed in this mate-rial are those of the authors and do not necessarily reflect the views of our sponsors. We thank JorgeMendez, Aidan Curtis, and anonymous conference reviewers for helpful comments on earlier draftsof this work.References[1] T. Silver, R. Chitnis, N. Kumar, W. McClinton, T. Lozano-P ́erez, L. P. Kaelbling, and J. Tenen-baum. Predicate invention for bilevel planning. Proceedings of the AAAI Conference on Arti-ficial Intelligence , 2023.[2] R. Chitnis, T. Silver, J. B. Tenenbaum, T. Lozano-P ́erez, and L. P. Kaelbling. Learning neuro-symbolic relational transition models for bilevel planning. In The IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , 2022.[3] T. Silver, R. Chitnis, J. Tenenbaum, L. P. Kaelbling, and T. Lozano-P ́erez. Learning sym-bolic operators for task and motion planning. In The IEEE/RSJ International Conference onIntelligent Robots and Systems (IROS) , 2021.[4] T. Silver, A. Athalye, J. B. Tenenbaum, T. Lozano-Perez, and L. P. Kaelbling. Learning neuro-symbolic skills for bilevel planning. In Conference on Robot Learning (CoRL) , 2022.[5] S. Srivastava, C. Li, M. Lingelbach, R. Mart ́ın-Mart ́ın, F. Xia, K. E. Vainio, Z. Lian, C. Gok-men, S. Buch, K. Liu, S. Savarese, H. Gweon, J. Wu, and L. Fei-Fei. BEHA VIOR: Benchmarkfor everyday household activities in virtual, interactive, and ecological environments. In Con-ference on Robot Learning (CoRL) , 2021.[6] J. Brady, R. S. Zimmermann, Y . Sharma, B. Sch ̈olkopf, J. von K ̈ugelgen, and W. Brendel.Provably learning object-centric representations. The International Conference on MachineLearning (ICML) , 2023.[7] M. Chang, T. Ullman, A. Torralba, and J. Tenenbaum. A compositional object-based approachto learning physical dynamics. In The International Conference on Learning Representations(ICLR) , 2017.[8] T. Kurutach, A. Tamar, G. Yang, S. J. Russell, and P. Abbeel. Learning plannable representa-tions with causal infogan. Advances in Neural Information Processing Systems , 31, 2018.[9] A. Howe, C. Knoblock, I. D. McDermott, A. Ram, M. Veloso, D. Weld, D. W. SRI, A. Barrett,D. Christianson, et al. PDDL— the planning domain definition language. Technical Report,Tech. Rep. , 1998.[10] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-P ́erez.Integrated task and motion planning. Annual review of control, robotics, and autonomoussystems , 2021.[11] M. Helmert. The fast downward planning system. The Journal of Artificial Intelligence Re-search (JAIR) , 2006.[12] F. Bacchus. Aips 2000 planning competition: The fifth international conference on artificialintelligence planning and scheduling systems. 
AI magazine , 2001.9[13] G. L. Drescher. Made-up minds: a constructivist approach to artificial intelligence . MIT press,1991.[14] E. Amir and A. Chang. Learning partially observable deterministic action models. The Journalof Artificial Intelligence Research (JAIR) , 2008.[15] N. Kr ̈uger, C. Geib, J. Piater, R. Petrick, M. Steedman, F. W ̈org ̈otter, A. Ude, T. Asfour,D. Kraft, D. Omr ˇcen, et al. Object–action complexes: Grounded abstractions of sensory–motorprocesses. Robotics and Autonomous Systems , 2011.[16] T. Lang, M. Toussaint, and K. Kersting. Exploration in relational domains for model-basedreinforcement learning. The Journal of Machine Learning Research (JMLR) , 2012.[17] K. Mourao, L. S. Zettlemoyer, R. Petrick, and M. Steedman. Learning strips operators fromnoisy and incomplete observations. arXiv preprint arXiv:1210.4889 , 2012.[18] H. M. Pasula, L. S. Zettlemoyer, and L. P. Kaelbling. Learning symbolic models of stochasticdomains. The Journal of Artificial Intelligence Research (JAIR) , 2007.[19] C. Rodrigues, P. G ́erard, C. Rouveirol, and H. Soldano. Active learning of relational actionmodels. In International Conference on Inductive Logic Programming , 2011.[20] S. N. Cresswell, T. L. McCluskey, and M. M. West. Acquiring planning domain models usingLOCM. The Knowledge Engineering Review , 2013.[21] D. Aineto, S. Jim ́enez, and E. Onaindia. Learning strips action models with classical planning.InThe International Conference on Automated Planning and Scheduling (ICAPS) , 2018.[22] A. Arora, H. Fiorino, D. Pellier, M. M ́etivier, and S. Pesty. A review of learning planningaction models. The Knowledge Engineering Review , 2018.[23] N. Jetchev, T. Lang, and M. Toussaint. Learning grounded relational symbols from continuousdata for abstract reasoning. In ICRA Workshop on Autonomous Learning , 2013.[24] E. Ugur and J. Piater. Bottom-up learning of object categories, action effects and logical rules:From continuous manipulative exploration to symbolic planning. In The IEEE InternationalConference on Robotics and Automation (ICRA) , 2015.[25] A. Ahmetoglu, M. Y . Seker, J. Piater, E. Oztop, and E. Ugur. Deepsym: Deep symbol gen-eration and rule learning from unsupervised continuous robot interaction for planning. arXivpreprint arXiv:2012.02532 , 2020.[26] M. Asai and A. S. Fukunaga. Classical planning in deep latent space: Bridging thesubsymbolic-symbolic boundary. In The AAAI Conference on Artificial Intelligence (AAAI) ,2018.[27] B. Bonet and H. Geffner. Learning first-order symbolic representations for planning from thestructure of the state space. In The European Conference on Artificial Intelligence (ECAI) ,2020.[28] M. Asai and C. Muise. Learning neural-symbolic descriptive planning models via cube-spacepriors: The voyage home (to STRIPS). arXiv preprint arXiv:2004.12850 , 2020.[29] E. Umili, E. Antonioni, F. Riccio, R. Capobianco, D. Nardi, and G. De Giacomo. Learning asymbolic planning domain through the interaction with continuous environments. ICAPS PRLWorkshop , 2021.[30] G. Konidaris, L. P. Kaelbling, and T. Lozano-P ́erez. From skills to symbols: Learning symbolicrepresentations for abstract high-level planning. The Journal of Artificial Intelligence Research(JAIR) , 2018.10[31] A. Curtis, T. Silver, J. B. Tenenbaum, T. Lozano-P ́erez, and L. Kaelbling. Discovering stateand action abstractions for generalized task and motion planning. In The AAAI Conference onArtificial Intelligence (AAAI) , 2022.[32] B. Marthi, S. J. Russell, and J. A. Wolfe. 
Angelic semantics for high-level actions. In TheInternational Conference on Automated Planning and Scheduling (ICAPS) , pages 232–239,2007.[33] S. Srivastava, E. Fang, L. Riano, R. Chitnis, S. Russell, and P. Abbeel. Combined task andmotion planning through an extensible planner-independent interface layer. In The IEEE In-ternational Conference on Robotics and Automation (ICRA) , 2014.[34] N. T. Dantam, Z. K. Kingston, S. Chaudhuri, and L. E. Kavraki. Incremental task and motionplanning: A constraint-based approach. In Robotics: Science and Systems (R:SS) , 2016.[35] R. Chitnis, D. Hadfield-Menell, A. Gupta, S. Srivastava, E. Groshev, C. Lin, and P. Abbeel.Guided search for task and motion plans using learned heuristics. In The IEEE InternationalConference on Robotics and Automation (ICRA) , 2016.[36] A. Mandalika, S. Choudhury, O. Salzman, and S. Srinivasa. Generalized lazy search for robotmotion planning: Interleaving search and edge evaluation via event-based toggles. In TheInternational Conference on Automated Planning and Scheduling (ICAPS) , 2019.[37] Z. Wang, C. R. Garrett, L. P. Kaelbling, and T. Lozano-P ́erez. Learning compositional modelsof robot skills for task and motion planning. The International Journal of Robotics Research(IJRR) , 2021.[38] J. Ortiz-Haro, J.-S. Ha, D. Driess, and M. Toussaint. Structured deep generative models forsampling on constraint manifolds in sequential manipulation. In Conference on Robot Learning(CoRL) , 2022.[39] W. Shen, F. Trevizan, and S. Thi ́ebaux. Learning domain-independent planning heuristics withhypergraph networks. In The International Conference on Automated Planning and Scheduling(ICAPS) , 2020.[40] B. Kim and L. Shimanuki. Learning value functions with relational state representations forguiding task-and-motion planning. Conference on Robot Learning (CoRL) , 2019.[41] C. Paxton, V . Raman, G. D. Hager, and M. Kobilarov. Combining neural networks and treesearch for task and motion planning in challenging environments. In The IEEE/RSJ Interna-tional Conference on Intelligent Robots and Systems (IROS) , 2017.[42] D. Driess, J.-S. Ha, and M. Toussaint. Deep visual reasoning: Learning to predict actionsequences for task and motion planning from an initial scene image. In Robotics: Science andSystems (R:SS) , 2020.[43] M. Noseworthy, C. Moses, I. Brand, S. Castro, L. Kaelbling, T. Lozano-P ́erez, and N. Roy.Active learning of abstract plan feasibility. In Robotics: Science and Systems (R:SS) , 2021.[44] M. Helmert and C. Domshlak. Landmarks, critical paths and abstractions: what’s the differenceanyway? In The International Conference on Automated Planning and Scheduling (ICAPS) ,2009.[45] P. W. Battaglia, J. B. Hamrick, V . Bapst, A. Sanchez-Gonzalez, V . Zambaldi, M. Malinowski,A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al. Relational inductive biases, deeplearning, and graph networks. arXiv preprint arXiv:1806.01261 , 2018.11A Bilevel Planning Algorithm DetailsAlgorithms 3 and 4 provide pseudocode and additional details necessary for high-level and low-levelsearch respectively in bilevel planning (Algorithm 5).GENABSTRACT PLAN(s0,g,Ω,nabstract )1 Ω←GROUND (Ω)//Search over ground operators from s0 to goal (returns top nplans).2 ˆπ←SEARCH (s0, g,Ω, nabstract )3 return ˆπAlgorithm 3: This is G ENABSTRACT PLAN which finds a high-level plan by creating operators for allpossible groundings then uses search to find nabstract plans. 
It returns a list of plans ˆπ, which the R EFINEprocedure below will attempt to turn into executable trajectories.REFINE ((ˆπ,x0,Ψ,Σ,nsamples ))1 state←ABSTRACT (x0)//While current state is not goal, sample and run current operatoron current state and check ground atoms. If is passes continueif not backtrack.2 curr idx←0while curr idx< len (ˆπ)do3 samples [curr idx]←samples [curr idx] + 14 state current,Ω←ˆπ[curr idx]5 Ω.C.Θ∼Ω.Σ6 π[curr idx]←Ω.C7 curr idx←curr idx+ 1ifΩ.C.initiable (state current )then8 state next←Simulate(statecurrent,Ω.C)9 state expected ,←ˆπ[curr idx]ifstate next⊆state expected then10 cancontinue on←Trueifcurr idx==len(skeleton )then11 return sucess, πelse12 canContinueOn ←Falseelse13 canContinueOn ←Falseifnot canContinueOn then14 curr idx←curr idx−1ifsamples [curr idx] == max samples then15 return failure, π16 return success, πAlgorithm 4: This is R EFINE which turns a task plan ˆπinto a sequence of ground skills. It gets the stateand operators from ˆπand adds the controller with newly sampled continuous parameters to π. After this itchecks to see if the added controller is initiable from the current state in the plan and we simulate the skillexecution to verify it reached the expected state we predicted next in ˆπ. If the controller is not initiable orfails the expected atoms check we backtrack and resample a new continuous parameter for this controlleruntil either we reach the max number of samples or we successfully refine our final controller. In thisway, the performance of the low-level controllers in the environment strongly impacts the overall planningprocess.12BILEVEL PLANNING (O,x0,g,Ψ,Ω,Σ)//Parameters: nabstract ,nsamples .1 s0←ABSTRACT (x0)//Outer Planning Loop2 forˆπinGENABSPLAN(s0,g,Ω,nabstract )do//Inner Refinement Loop3 ifREFINE (ˆπ,x0,Ψ,Ω,nsamples ) succeeds w/ π 4 return πAlgorithm 5: Pseudocode for bilevel planning algorithm, adapted from Silver et al. [1]. The inputs areobjects O, initial state x0, goal g, predicates Ψ, operators Ω, and samplers Σ; the output is a plan π. Theouter loop G ENABSPLAN generates high-level plans that guide our inner loop, which samples continuousparameters from our samplers Σto concretize each abstract plan ˆπinto a plan π. If the inner loop succeeds,then the found plan πis returned as the solution; if it fails, then the outer G ENABSTRACT PLAN continues.IMPROVE COVERAGE (Ω,D)1 cov init,Dα, τunc, αunc←COMPUTE COVERAGE (Ω,D)2 ifcov init=|D|thenreturn Ω3 cov curr←cov init4 Ω′←Ω5 while cov curr≥cov initdo6 ωnew←INDUCE OPTOCOVER (τunc, αunc)7 Ω′←REMOVE PRECANDDELEFFS(Ω′)∪ωnew8 (Dω1. . .Dωm),(δτ1, . . . , δ τj)←PARTITION DATA(Ω′,D)9 Ω′←INDUCE PRECANDDELEFFS(Ω′,(Dω1. . .Dωm),(δτ1, . . . , δ τj))10 Ω′←Ω′∪ENSURE NECATOMS SAT(ωnew,Dα)11 (Dω1. . .Dωl),(δτ1, . . . , δ τj)←PARTITION DATA(Ω′,D)12 Ω′←INDUCE PRECANDDELEFFS(Ω′,(Dω1. . .Dωl),(δτ1, . . . , δ τj))13 cov curr,Dα, τunc, αunc←COMPUTE COVERAGE (Ω′,D)14 Ω′←PRUNE NULLDATAOPERATORS (Ω′)15 return Ω′Algorithm 6: Pseudocode for our improve-coverage successor generator. The inputs are a set ofoperators Ω, the set of all training demonstrations D, and the corresponding set of training tasks Ttrain. Theoutput is a set of operators Ω′such that coverage (Ω′)≤coverage (Ω).B Detailed Description of Successor GeneratorsB.1 I MPROVE COVERAGEThe pseudocode for the improve-coverage successor generator is shown in Algorithm 6. Giventhe current candidate operator set Ω, training demonstrations Dand corresponding tasks Ttrain, wefirst attempt to compute the current coverage of ΩonD. 
We do this by calling the C OMPUTE -COVERAGE method. This method simply calls Algorithm 1 on every demonstration (x,u)inD(the set of objects Oand goal grequired by Algorithm 1 are obtained from the training tasks). TheCOMPUTE COVERAGE method then returns the number of covered transitions2(cov init), a datasetof necessary atoms sequences for each demonstration Algorithm 1 is able to cover ( Dα), the firstuncovered transition encountered ( τunc= (sk, uk+1, sk+1)), and the corresponding necessary atomsfor the transition ( αunc). If the number of covered transitions is the same as the size of the trainingdataset, then all transitions must be covered and the coverage term in our objective (Equation2The total number of transitions in abstract plan suffixes that Algorithm 1 is able to find when run on eachdemonstration in D.131) must be 0. We thus just return the current operator set Ωwith no modifications. Otherwise, wecompute a new set of operators Ω′with a lower coverage value.To generate Ω′, we first create a new operator with preconditions, add effects and arguments set tocover the transition τuncand corresponding necessary atoms αunc. The operator’s ground controllerC(θ) =uk+1is determined directly from the transition’s action uk+1. The operator’s ground addeffects are set to be E+= (sk+1\sk)∩αunc. The controller and add effects are lifted by creatinga variable vifor every distinct object that appears in C∪E+. The operator’s arguments vare set tothese variables.Next, we must induce the preconditions and delete effects of this new operator ωnew. To this end,we add ωnewto our current candidate set, and partition all data in our training set Dinto operatorspecific datasets Dωfor each operator ωin our current candidate set. Since operator preconditionsand delete effects depend on the partitioning, we first remove these from all operators that are notωnew(REMOVE PRECANDDELEFFS). We perform this partitioning by running the F INDBESTCON-SISTENT OPmethod from Algorithm 8 on this new operator set for every transition in the dataset,though we do not check the condition spredi+1⊆si+1, since the operators do not yet have delete effectsspecified. While performing this step, we save a mapping δτifrom the operator’s arguments to thespecific objects used to ground it for every transition in the dataset (this will be used for lifting thepreconditions and delete effects of each operator below). We assign each transition to the dataset as-sociated with the operator returned by F INDBESTCONSISTENT OP. We return the operator specificdatasets (Dω1. . .Dωl), as well as the saved object mappings for each transition (δτ1, . . . , δ τj).We now induce preconditions and delete effects using (Dω1. . .Dωl)and(δτ1, . . . , δ τj). Before wedo this, we delete any operator whose corresponding dataset is empty. Similar to Chitnis et al.[2], we set the preconditions to:P←Tτ=(si,·,·)∈Dωδτ(si). We also set the atomic delete effectstoE−◦←Sτ=(si,·,si+1)∈Dωδτ(si+1)\δτ(si). For every transition (si, ui+1, si+1), letspredi+1=(si\E−◦)∪E+. Then, we set smispred =Sτ=(·,·,si+1)∈Dωspredi+1\si+1. We induce a quantifieddelete effect for every predicate corresponding to atoms in smispred . We then set each operator’sdelete effects to be the union of E−◦and the quantified delete effects.Now that all operators have preconditions and delete effects specified, we must ensure that thenewly-added operator ( ωnew) is able to satisfy the necessary atoms for each of its transitions inDωnew. 
Recall that we set the operator’s add effects to be the necessary atoms that changed in thefirst uncovered transition τunc. Given the way partitioning is done (specifically the conditions in theFINDBESTCONSISTENT OPmethod in Algorithm 8), we know that these add effects must satisfyαi+1⊆si+1∪E+for all transitions (si, ui+1, si+1)∈ Dωnewwith corresponding necessary atomsαi+1for state si+1. However, the delete effects may cause the necessary atoms to become violatedfor certain transitions: i.e, αi+1⊈(si+1\E−)∪E+. For every such transition, we let αmissi+1=αi+1\((si+1\E−)∪E+). We then create a new operator ωmissiby copying all components of ωnew, andadding lifted atoms from αmissi+1to both the preconditions and add effects. We modify the operator’sarguments to contain new variables accordingly. This now ensures that the necessary atoms are notviolated for any transition in Dωnew. We add these new operators to the current candidate operatorset.After having added new operators to our candidate set in the above step, we must re-partition dataand consequently re-induce preconditions and delete effects to match this new partitioning (lines11-12 of Algorithm 6). We now have a new operator set that is guaranteed to cover the transitionτuncthat was initially uncovered. We check whether this new set achieves a lower value for thecoverage term of our objective, and iterate the above steps until it does.Finally, after the while loop terminates, we remove all operators from Ω′that have associateddatasets that are empty . This corresponds exactly to removing operators that are not used in anyabstract plan suffix computed by C OMPUTE COVERAGE and are thus unnecessary for planning.Proof of termination To see that the main loop of Algorithm 6 is guaranteed to terminate, considerthat the operator set Ω′strictly grows larger at every loop iteration (no operators are deleted). Since14the predicates are fixed, there is a finite number of possible operators. Thus, at some finite iteration,Ω′will contain every possible operator. At this point, it must contain an operator that covers everytransition and the loop must terminate.Anytime Removal of Operators with Null Data In the I MPROVE COVERAGE procedure as illus-trated in Algorithm 6, we only prune out operators that do not have any data associated with themafter the main while loop has terminated. However, we note here that we can remove such operatorsfrom the current operator set ( Ω′) at any time during the algorithm’s loop.This property arises because the amount of data associated with a particular operator will onlydecrease over time . To see this, note that (1) the number of operators in Ω′only increases over time,and (2) data is assigned to the ‘best covering’ operator as judged by our heuristic in Equation 3.Given a particular operator ωat some iteration iof the loop, suppose there are dtransitions fromDassociated with it (i.e, |Dω|=d). During future (i.e > i) loop iterations, new operators will beadded to Ω′. For any of the dtransitions in Dω, these new operators can either be a worse match (inwhich case, the transition will remain in Dω), or a better match (in which case, the transition willbecome associated with the new operator). Thus, for any operator ω, once there is no longer anydata associated with it, there will never be any data associated with it, and it will simply be prunedafter the while loop terminates.As a result, we can prune operators from our current set whenever there is no data associated withthem. 
We do this in our implementation, since it improves our algorithm’s wall-clock runtime.B.2 R EDUCE COMPLEXITYREDUCE COMPLEXITY (Ω,D,Ttrain)1 Ω′←DELETE OPERATOR (Ω)2 (Dω1. . .Dωm),(δτ1, . . . , δ τj)←PARTITION DATA(Ω′,D)3 Ω′←INDUCE PRECANDDELEFFS(Ω′,(Dω1. . .Dωm),(δτ1, . . . , δ τj))4 return Ω′Algorithm 7: Pseudocode for our reduce-complexity successor generator. The inputs are a set ofoperators Ω, the set of all training demonstrations D, and the corresponding set of trainign tasks Ttrain. Theoutput is a set of operators Ω′such that complexity (Ω′)≤complexity (Ω).The pseudocode for our reduce-complexity generator is shown in Algorithm 7. As canbe seen, the generator is rather simple: we simply delete an operator from the current set(DELETE OPERATOR ) and return the remaining operators. Since we’ve changed the operator set,we must recompute the partitioning and re-induce preconditions and delete effects accordingly.This generator clearly reduces the complexity term from our objective (Equation 1), since |Ω′|<|Ω|.C Associating Transitions with OperatorsFINDBESTCONSISTENT OP((Ω,si,si+1,α,O))1 Ωcon← ∅2 forω∈Ωdo3 forω∈GETALLGROUNDINGS (ω,O)do4 spredi+1←((si\E−)∪E+)5 ifP⊆siAND α⊆spredi+1AND spredi+1⊆si+1AND∃θ:C(θ) =uithen6 Ωcon←Ωcon∪ω7 ifΩcon̸=∅thenreturn FINDBESTCOVER (Ωcon,(si, si+1)).Algorithm 8: Pseudocode for the FindBestConsistentOp helper method used in Algorithm 115A key component of our algorithm is the F INDBESTCOVER method from Algorithm 8, which inturn is used in Algorithm 1 and the P ARTITION DATA method of Algorithm 6. The purpose of thismethod is to associate a transition with a particular operator when multiple operators satisfy theconditions necessary to ‘cover’ it. Intuitively, we wish to assign a transition to the operator whoseprediction best matches the observed effects in the transition. We can do this by simply measuringthe discrepancy between the operator’s add and delete effects, and the observed add and deleteeffects in the transition. We make two minor changes to this simple measure that are appropriateto our setting. First, we only use the atomic delete effects as part of our measure. We exclude thequantified delete effects because these exist in order to enable our operators to decline to predictparticular changes in state. Second, we favor operators that correctly predict which atoms will notchange. Recall that the E NSURE NECATOMS SATmethod in Algorithm 6 induces such operators byplacing the same atoms in the add effects and preconditions.Given some transition (si, ui+1, si+1), and some ground operator ωwith atomic delete effects E−◦,our heuristic for data partitioning is represented by the score function shown in equation 3.K=E+∩PC=E+\ Kscore =|C\(si+1\si)|+|(si+1\si)\ C|+|(E−◦\(si\si+1))|+|(si\si+1)\E−◦| − C(3)Once all eligible operators have been scored, we simply pick the lowest-scoring operator to associatewith this transition. If multiple operators achieve the same score, we break ties arbitrarily.D Learning SamplersIn addition to operators, we must also learn samplers to propose continuous parameters for con-trollers during plan refinement. We directly adapt existing approaches [2, 1] to accomplish this andlearn one sampler per operator of the following form: σ(x, o 1, . . . , o k) =sσ(x[o1]⊕ ··· ⊕ x[ok]),where x[o]denotes the feature vector for oinx, the⊕denotes concatenation, and sσis the model tobe learned. Specifically, we treat the problem as one of supervised learning on each of the datasetsassociated with each operator: Dω. 
Recall that for every transition (xi, ui+1, xi+1)inDω, we save amapping δ:v→ O τfrom the operator’s arguments vto objects to ground the operator with. Recallalso that every action is a hybrid controller with discrete parameters and continuous parameters θ.To create a datapoint that can be used for supervised learning for the associated sampler, we canreuse this substitution to create an input vector x[δτ(v1)]⊕ ··· ⊕ x[δ(vk)], where (v1, . . . , v k) =v.The corresponding output for supervised learning is the continuous parameter vector θin the actionui+1.Following previous work by Silver et al. [1] and Chitnis et al. [2], we learn two neural networksto parameterize each sampler. The first neural network takes in x[o1]⊕ ··· ⊕ x[ok]and regressesto the mean and covariance matrix of a Gaussian distribution over θ. We assume that the desireddistribution has nonzero measure, but the covariances can be arbitrarily small in practice. To improvethe representational capacity of this network, we learn a second neural network that takes in x[o1]⊕··· ⊕ x[ok]andθ, and returns true or false. This classifier is then used to rejection sample from thefirst network. To create negative examples, we use all transitions such that the controller used in thetransition matches the current controller, but the transition is not in the operator’s dataset Dω.16E Additional Experiment DetailsHere we provide detailed descriptions of each of experiment environments. See (§5) for high-leveldescriptions and the accompanying code for implementations.E.1 Screws Environment Details•Types:–Thescrew type has features x,y,held .–Thereceptacle type has features x,y.–Thegripper type has features x,y.•Predicates: Pickable(?x0:gripper, ?x1:receptacle) ,AboveReceptacle(?x0:gripper, ?x1:receptacle) ,HoldingScrew(?x0:gripper, ?x1:screw) ,ScrewInReceptacle(?x0:screw,?x1:receptacle) .•Actions:–MoveToScrew(?x0: gripper, ?x1: screw) : moves the gripper to beNear the screw ?x1.–MoveToReceptacle(?x0: gripper, ?x1: receptacle) : moves thegripper to be AboveReceptacle(?x0:gripper, ?x1:receptacle)–MagnetizeGripper(?x0: gripper) : Magnetizes the gripper at the currentlocation, which causes all screws that the gripper is Near to be held by the gripper.–DemagnetizeGripper(?x0: gripper) : Demagnetizes the gripper at thecurrent location, which causes all screws that are being held by the gripper to fall.•Goal: The agent must make ScrewInReceptacle(?x0:screw,?x1:receptacle) true for a particular screw that varies per task.E.2 Cluttered 1D Environment Details•Types:–Therobot type has features x.–Thedot type has features x,grasped .•Predicates: NextTo(?x0:robot, ?x1:dot) ,NextToNothing(?x0:robot) ,Grasped(?x0:robot, ?x1:dot) .•Actions:–MoveGrasp(?x0: robot, ?x1: dot, [move orgrasp, x]) : Asingle controller that performs both moving and grasping. If move orgrasp <0.5,then the controller moves the robot to a continuous position y. 
Else, the controllergrasps the dot ?x1 if it is within range.•Goal: The agent must make Grasped(?x0:robot, ?x1:dot) true for a particularset of dots that varies per task.E.3 Satellites Environment Details•Types:–The satellite type has features x,y,theta ,instrument ,calibration objid,iscalibrated ,read objid,shoots chem x,shoots chem y.17–Theobject type has features id,x,y,haschem x,haschem y.•Predicates: Sees(?x0:satellite, ?x1:object) ,CalibrationTarget(?x0:satellite, ?x1:object) ,IsCalibrated(?x0:satellite) , HasCamera(?x0:satellite) ,HasInfrared(?x0:satellite) , HasGeiger(?x0:satellite) ,ShootsChemX(?x0:satellite) , ShootsChemY(?x0:satellite) ,HasChemX(?x0:satellite) , HasChemY(?x0:satellite) ,CameraReadingTaken(?x0:satellite, ?x1:object) ,InfraredReadingTaken(?x0:satellite, ?x1:object) ,GeigerReadingTaken(?x0:satellite, ?x1:object) .•Actions:–MoveTo(?x0:satellite, ?x1:object, [x, y]) : Moves the satellite?x0 to be at x, y .–Calibrate(?x0:satellite, ?x1:object) : Tries to calibratethe satellite ?x0 against object ?x1. This will only succeed (i.e, makeIsCalibrated(?x0:satellite) true) if ?x1 is the calibration target of?x0.–ShootChemX(?x0:satellite, ?x1: object) : Tries to shoot a pellet ofchemical X from satellite ?x0. This will only succeed if ?x0 both has chemical X andis capable of shooting it.–ShootChemY(?x0:satellite, ?x1:object) : Tries to shoot a pellet ofchemical Y from satellite ?x0. This will only succeed if ?x0 both has chemical Yand is capable of shooting it.–UseInstrument(?x0:satellite, ?x1:object) : Tries to use the instru-ment possessed by ?x0 on object ?x1 (note that we assume ?x0 only possesses asingle instrument).•Goal: The agent must take particular readings (i.e some combination ofCameraReadingTaken(?x0:satellite, ?x1:object) ,InfraredReadingTaken(?x0:satellite, ?x1:object) ,GeigerReadingTaken(?x0:satellite, ?x1:object) ) from a specific set ofobjects that varies per task.E.4 Painting Environment Details•Types:–Theobject type has features x,y,z,dirtiness ,wetness ,color ,grasp ,held .–Thebox type has features x,y,color .–Thelid type has features open .–Theshelf type has features x,y,color .–Therobot type has features x,y,fingers .•Predicates: InBox(?x0:obj) , InShelf(?x0:obj) ,IsBoxColor(?x0:obj, ?x1:box) , IsShelfColor(?x0:obj,?x1:shelf) , GripperOpen(?x0:robot) , OnTable(?x0:obj) ,NotOnTable(?x0:obj) ,HoldingTop(?x0:obj) ,HoldingSide(?x0:obj) ,Holding(?x0:obj) , IsWet(?x0:obj) , IsDry(?x0:obj) ,IsDirty(?x0:obj) ,IsClean(?x0:obj) .•Actions:18–Pick(?x0:robot, ?x1:obj, [grasp]) : picks up a particular object, ifgrasp >0.5it performs a top grasp otherwise a side grasp.–Wash(?x0:robot) : washes the object in hand, which is needed to clean the object.–Dry(?x0:robot) : drys the object in hand, which is needed after you wash theobject.–Paint(?x0:robot, [color]) : paints the object in hand a particular colorspecified by the continuous parameter.–Place(?x0:robot, [x, y, z]) : places the object in hand at a particular x, y,z location specified by the continuous parameters.–OpenLid(?x0:robot, ?x1:lid) : opens a specific lid, which is need to placeobjects inside the box.•Goal: A robot in 3D must pick, wash, dry, paint, and then place various objectsin order to get InBox(?x0:obj) andIsBoxColor(?x0:obj, ?x1:box) , orInShelf(?x0:obj) andIsShelfColor(?x0:obj, ?x1:shelf) true for par-ticular goal objects.E.5 Cluttered Painting Environment Details•Types:–Theobject type has features x,y,z,dirtiness ,wetness ,color ,grasp ,held .–Thebox type has features x,y,color .–Thelid type has 
features open .–Theshelf type has features x,y,color .–Therobot type has features x,y,fingers .•Predicates: InBox(?x0:obj) , InShelf(?x0:obj) ,IsBoxColor(?x0:obj, ?x1:box) , IsShelfColor(?x0:obj,?x1:shelf) , GripperOpen(?x0:robot) , OnTable(?x0:obj) ,NotOnTable(?x0:obj) ,HoldingTop(?x0:obj) ,HoldingSide(?x0:obj) ,Holding(?x0:obj) , IsWet(?x0:obj) , IsDry(?x0:obj) ,IsDirty(?x0:obj) ,IsClean(?x0:obj) , along with RepeatedNextTo Predicates:NextTo(?x0:robot, ?x1:obj) , NextToBox(?x0:robot, ?x1:box) ,NextToShelf(?x0:robot, ?x1:shelf) , NextToTable(?x0:robot,?x1:table) .•Actions:–Pick(?x0:robot, ?x1:obj, [grasp]) : picks up a particular object, ifgrasp >0.5 it performs a top grasp otherwise a side grasp.–Wash(?x0:robot) : washes the object in hand, which is needed to clean the object.–Dry(?x0:robot) : drys the object in hand, which is needed after you wash theobject.–Paint(?x0:robot, [color]) : paints the object in hand a particular colorspecified by the continuous parameter.–Place(?x0:robot, [x, y, z]) : places the object in hand at a particular x, y,z location specified by the continuous parameters.–OpenLid(?x0:robot, ?x1:lid) : opens a specific lid, which is need to placeobjects inside the box.–MoveToObj(?x0:robot, ?x1:obj, [x]) : moves to a particular object withcertain displacement x.19–MoveToBox(?x0:robot, ?x1:box, [x]) : moves to a particular box withcertain displacement x.–MoveToShelf(?x0:robot, ?x1:shelf, [x]) : moves to a particular shelfwith certain displacement x.•Goal: A robot in 3D must pick, wash, dry, paint, and then place various objectsin order to get InBox(?x0:obj) andIsBoxColor(?x0:obj, ?x1:box) , orInShelf(?x0:obj) andIsShelfColor(?x0:obj, ?x1:shelf) true for par-ticular goal objects. In contrast to the previous painting environment, we also need tonavigate to the right objects (i.e. all objects are not always reachable from any states). Thisversion of the environment requires operators with quantified delete effects.E.6 BEHA VIOR Environment Details•Types:–Many object types that range from relevant types like hardbacks andnotebooks to many irrelevant types like toys andjars . Allobject types havefeatures from location andorientation tograspable andopen . For acomplete list of object types and features see [5].•Predicates: Inside(?x0:obj, ?x1:obj) ,OnTop(?x0:obj, ?x1:obj) ,Reachable-Nothing() ,HandEmpty() ,Holding(?x0:obj) ,Reachable() ,Openable(?x0:obj) , Not-Openable(?x0:obj) , Open(?x0:obj) ,Closed(?x0:obj) .•Actions:–NavigateTo(?x0:obj) : navigates to make a particular object reachable.–Grasp(?x0:obj, [x, y, z]) : picks up a particular object with the hand start-ing at a particular relative x, y, z location specified by the continuous parameters.–PlaceOnTop(?x0:obj) : places the object in hand ontop of another object as longas the agent is holding an object and is in range of the object to be placed onto.–PlaceInside(?x0:obj) : places the object in hand inside another object as longas the agent is holding an object and is in range of the object to be placed into.–Open(?x0:obj)) : opens a specific object (windows, doors, boxes, etc.) if it is‘openable’.–Close(?x0:obj)) : closes a specific object (windows, doors, boxes, etc.) if it iscurrently in an ‘open’ state.•Goal: InOpening Presents , the robot must Open(?x0:package) a number of boxes oftypepackage around the room. In Locking Windows , the robot must navigate aroundthe house to Close(?x0:window) a number of windows. 
In Collecting Cans , therobot must pick up a number of empty soda cans of type pop strewn amongst the houseand throw them into a trash can of type bucket . This will satisfy the goal of gettingInside(?x0:pop, ?x1:bucket) for every soda can around the house. In SortingBooks , the robot must find books of type hardback andnotebook in a living room andplace them each onto a cluttered shelf (i.e. satisfy the goal of OnTop(?x0:hardback,?x1:shelf) andOnTop(?x0:notebook, ?x1:shelf) for a number of books).F Additional Approach DetailsHere we provide detailed descriptions of each approach evaluated in experiments. For the ap-proaches that learn operators, we use A∗search with the lmcut heuristic [44] as the high-levelplanner for bilevel planning in non-BEHA VIOR environments, and use Fast Downward [11] in a20configuration with minor differences from lama-first as the high-level planner in BEHA VIORenvironments, since A∗search was unable to find abstract plans given the large state and actionspaces of these tasks. All approaches also iteratively resample until the simulator fverifies that thetransition has been achieved, except for GNN Model-Free, which is completely model-free. See(§5) for high-level descriptions and the accompanying code for implementations.F.1 Ours•Operator Learning: We learn operators via the hill-climbing search described in Section4.3. For our objective (Equation 1), we set the λterm to be 1/|D|, where |D|represents thenumber of transitions in the training demonstrations.•Sampler Learning: As described in Section D, each sampler consists of two neural net-works: a generator and a discriminator. The generator outputs the mean and diagonalcovariance of a Gaussian, using an exponential linear unit (ELU) to assure PSD covari-ance. The generator is a fully-connected neural network with two hidden layers of size 32,trained with Adam for 50,000 epochs with a learning rate of 1e−3using Gaussian nega-tive log likelihood loss. The discriminator is a binary classifier of samples output by thegenerator. Negative examples for the discriminator are collected from other skill datasets.The classifier is a fully-connected neural network with two hidden layers of size 32, trainedwith Adam for 10,000 epochs with a learning rate of 1e−3using binary cross entropy loss.During planning, the generator is rejection sampled using the discriminator for up to 100tries, after which the last sample is returned.•Planning: The number of abstract plans for high-level planning was set to Nabstract = 8for our non-BEHA VIOR domains, and Nabstract = 1 for our BEHA VIOR domains. Thesamples per step for refinement was set to Nsamples = 10 for all environments.F.2 Cluster and Intersect:This is the operator learning approach used by Silver et al. [1].•Operator Learning: This approach learns STRIPS operators by attempting to induce adifferent operator for every set of unique lifted effects (See Silver et al. [1] for more infor-mation).•Sampler Learning and Planning: Same as Ours (See (§F.1) for more details).F.3 LOFT:This is the operator learning approach used by Silver et al. [3]. We include a version(‘LOFT+Replay’) that is allowed to mine additional negative data from the environment to matchthe implementation of the original authors. 
We also include a version (‘LOFT’) that is restricted tolearning purely from the demonstration data.•Operator Learning: This approach learns operators similar to the Cluster and Intersectbaseline, except that it uses search to see if it can modify the operators after performingCluster and Intersect (See Silver et al. [3] for more information).•Sampler Learning and Planning: Same as Ours (See (§F.1) for more details).F.4 CI + QE:A baseline variant of Cluster and Intersect that is capable of learning operators that have quantifieddelete effects in addition to atomic delete effects.21•Operator Learning: This approach first runs Cluster and Intersect, then attempts to in-duce quantified delete effects by performing a hill-climbing search over possible choicesof quantified delete effects using prediction error as the metric to be optimized.•Sampler Learning and Planning: Same as Ours (See (§F.1) for more details).F.5 GNN Shooting:This approach trains a graph neural network (GNN) [45] policy. This GNN takes in the currentstatex, abstract state s=ABSTRACT (x,ΨG), and goal g. It outputs an action via a one-hot vectoroverCcorresponding to which controller to execute, one-hot vectors over all objects at each discreteargument position, and a vector of continuous arguments. We train the GNN using behavior cloningon the dataset D. At evaluation time, we sample trajectories by treating the GNN’s output continuousarguments as the mean of a Gaussian with fixed variance. We use the known transition model ftocheck if the goal is achieved, and repeat until the planning timeout is reached.•Planning: Repeat until the goal is reached: query the model on the current state, abstractstate, and goal to get a ground skill. Invoke the ground skill’s sampler up to 100 times tofind a subgoal that leads to the abstract successor state predicted by the skill’s operator. Ifsuccessful, simulate the state forward; otherwise, terminate with failure.•Learning: This approach essentially learns a TAMP planner in the form of a GNN. Follow-ing the baselines presented in prior work [2], the GNN is a standard encode-process-decodearchitecture with 3 message passing steps. Node and edge modules are fully-connectedneural networks with two hidden layers of size 16. We follow the method of Chitnis et al.[2] for encoding object-centric states, abstract states, and goals into graph inputs. To getgraph outputs, we use node features to identify the object arguments for the skill and aglobal node with a one-hot vector to identify the skill identity. The models are trained withAdam for 1000 epochs with a learning rate of 1e−3and batch size 128 using MSE loss.F.6 GNN Model-Free:A baseline that uses the same trained GNN as above, but at evaluation time, directly executes thepolicy instead of checking execution using f. 
This has the advantage of being more efficient toevaluate than GNN Shooting, but is less effective.G Additional Experimental Results and AnalysesEnvironment Ours LOFT LOFT+replay CI CI + QE GNNPainting 69.35 (3.58) 92.26 (11.41) 135.73 (6.45) 70.95 (5.07) 67.08 (5.86) 2220.19 (181.29)Satellites 19.38 (7.83) 52.73 (18.35) 438.44 (51.62) 23.29 (5.38) 15.96 (4.70) 1625.69 (218.88)Clutter 1D 17.98 (1.06) 68.04 (17.68) 366.89 (146.09) 62.68 (14.89) 28.58 (3.68) 1164.92 (84.74)Screws 1.31 (0.04) 143.60 (49.10) 5712.80 (736.84) 0.32 (0.02) 708.98 (1023.02) 1369.59 (68.44)Cluttered Satellites 16.12 (0.55) 353.67 (52.78) 902.99 (148.22) 107.04 (11.94) 87.24 (10.49) 3043.62 (285.27)Cluttered Painting 131.68 (5.05) 1699.52 (216.71) 7364.03 (532.67) 470.32 (40.38) 2788.74 (1330.38) 4615.70 (334.11)Opening Presents 28.91 (11.26) 106.57 (27.72) - 100.62 (23.66) 92.63 (17.01) 185.53 (6.63)Locking Windows 16.77 (1.55) 62.55 (10.12) - 61.71 (8.95) 45.51 (5.74) 319.09 (7.61)Collecting Cans 3728.73 (9544.75) 1520.93 (354.20) - 576.89 (100.57) 781.38 (350.46) 2121.86 (120.51)Sorting Books 4981.79 (14460.37) 6423.03 (602.44) - 1528.18 (111.18) - 5359.99 (170.46)Table 2: Learning times in seconds on training data for all domains. Note that BEHA VIOR domains (bottom4) use training set sizes of 10 tasks, while all other domains use training and testing set sizes of 50 tasks. Thestandard deviation is shown in parentheses.We have already established that our approach learns operators that lead to more effective bilevelplanning than baselines. In this section, we are interested in comparing our approach with baselineson three additional metrics: (1) the efficiency of high-level planning using learned operators, (2) theefficiency of the learning algorithm itself, and (3) the simplicity of operator sets we learn. We alsorun ablations of our method to investigate the importance of optimizing the complexity term, as wellas downward-refinability to our method.22Figure 4: Nodes Created by Operator Learning Approaches . We show scatter plots of the nodes created(x-axis) for each operator learning approach (y-axis). We also include a violin graph to visualize the densityof points throughout the graph. If bilevel planning failed, we set the nodes created to 106for non-BEHA VIORdomains and 103for BEHA VIOR domains. Our approach achieves a low number of nodes created across whencompared to baselines in most domains.Environment Ours LOFT LOFT+replay CI CI + QEPainting 10.00 (0.00) 13.60 (0.80) 19.20 (0.39) 11.00 (0.00) 10.20 (0.40)Satellites 7.40 (0.79) 10.90 (1.44) 33.80 (3.70) 10.40 (1.20) 9.30 (0.9)Clutter 1D 2.00 (0.00) 7.10 (1.64) 16.10 (2.11) 7.10 (1.64) 3.00 (0.44)Screws 4.0 (0.00) 14.80 (1.98) 91.14 (5.11) 14.80 (1.98) 4.80 (0.97)Cluttered Satellites 7.00 (0.00) 19.80 (2.60) 59.60 (4.45) 16.30 (1.10) 13.9 (0.83)Cluttered Painting 13.00 (0.00) 28.00 (0.00) 157.8 (6.49) 25.20 (2.31) 20.70 (1.61)Opening Presents 2.30 (0.90) 10.80 (2.99) - 10.80 (2.99) 9.80 (1.83)Locking Windows 2.00 (0.00) 6.10 (0.70) - 6.10 (0.70) 4.70 (0.64)Collecting Cans 6.10 (5.37) 57.40 (9.43) - 52.90 (8.41) 13.40 (2.33)Sorting Books 14.70 (7.57) 76.70 (5.62) - 75.80 (5.79) -Table 3: Average number of operators learned for all domains. Note that BEHA VIOR domains (bottom 4) usetraining set sizes of 10 tasks, while all other domains use training and testing set sizes of 50 tasks. The standarddeviation is shown in parentheses.Figure 4 shows the nodes created during high-level planning for each of our various environmentsand operator learning methods. 
We can see that operators learned by our approach generally lead to comparable or fewer node creations during planning when compared to baselines. In many of the environments where baseline methods are able to achieve a number of points with fewer node creations (Cluttered 1D, Opening Presents, and Locking Windows), our method has a significantly higher success rate.

Table 2 shows the learning times for all methods in all domains³. Our approach achieves the lowest learning time in 7/10 domains. Upon inspection of our method's performance on the 'Locking Windows' and 'Collecting Cans' domains, we discovered that the high average learning times are caused by a few outlier seeds encountering local minima during learning, yielding large and complex operator sets (this is the reason for the extremely high standard deviation).

³Note that there is no entry for 'CI + QE' for Sorting Books because learning exceeded the memory limit of our hardware (192 GB).

Environment | Ours | No complexity | Down Eval
Painting | 98.80 (1.33) | 98.80 (1.33) | 26.60 (6.52)
Satellites | 93.40 (11.14) | 81.20 (19.40) | 84.20 (16.88)
Cluttered 1D | 100.00 (0.00) | 100.00 (0.00) | 100.00 (0.00)
Screws | 100.00 (0.00) | 100.00 (0.00) | 100.00 (0.00)
Cluttered Satellites | 95.20 (2.40) | 94.80 (2.72) | 94.00 (3.22)
Cluttered Painting | 99.20 (1.33) | 99.00 (1.84) | 23.80 (7.56)
Opening Presents | 100.00 (0.00) | 100.00 (0.00) | 100.00 (0.00)
Locking Windows | 100.00 (0.00) | 100.00 (0.00) | 100.00 (0.00)
Collecting Cans | 77.00 (37.16) | 75.00 (38.30) | 75.00 (38.80)
Sorting Books | 69.00 (36.73) | 52.00 (34.00) | 67.00 (37.20)

Table 4: Percentage success rate on test tasks for our original method, as well as ablations where we set the λ parameter from Equation 1 to 0 (No complexity) and enforce downward refinability at evaluation time (Down Eval), for all domains. Note that BEHAVIOR domains use training and testing set sizes of 10 tasks, while all other domains use training and testing set sizes of 50 tasks. The percentage standard deviation is shown in parentheses.

Table 3 shows the number of operators learned by all operator learning methods in all domains. Our approach learns the smallest operator sets across all environments and massively outperforms the other approaches on this metric in Collecting Cans and Sorting Books. These results further highlight our ability to learn operator sets that are not only efficient for high-level planning but also simpler and, therefore, more likely to generalize to new environments.

Table 4 shows the success rate of our method when the λ parameter from Equation 1 is set to 0 (the 'No complexity' column), thereby effectively removing any impact of the complexity term in the objective on the optimization. In most environments (Painting, Cluttered 1D, Screws, Cluttered Satellites, Cluttered Painting, Opening Presents, Locking Windows, and Collecting Cans), this change has minimal impact on the method's success rate. However, in two environments (Satellites and Sorting Books), the change causes a significant reduction in success rate. Upon inspection, we found that our approach learned a number of very complex operators in these domains. When λ was set to be greater than 0, our approach would delete a number of these operators, but in this ablation it was unable to, and thus planning performance suffered.
We can conclude from these experiments that optimizing the complexity term is a key component of our approach on particular domains. In fact, we believe that more aggressively optimizing the complexity term, perhaps by increasing the sophistication of our ReduceComplexity step, could enable us to improve performance significantly even on the two complex BEHAVIOR tasks, since we found that our approach learned overly complex operators for a few random seeds.

Table 4 also shows the success rate of our method when bilevel planning is only allowed to produce 1 abstract plan during refinement. This effectively enforces that the learned operators must yield downward-refinable plans. It causes a significant reduction in success rate in many environments, showing that the ability to evaluate multiple abstract plans is important when planning with operators learned by our method. Indeed, all the environments that exhibit significant reductions in success rate do not, to our knowledge, admit a downward-refinable high-level theory over the provided skills and predicates.

H Learned Operator Examples

Finally, we provide operator examples to demonstrate our approach's ability to overcome overfitting to specific situations. Figure 5 shows a comparison of the operators learned for Open in the Opening Packages environment and NavigateTo in the Collecting Cans environment across our approach and 'CI + QE' (the most competitive baseline in these environments). As shown, by optimizing prediction error, 'CI + QE' learns a number of operators to describe the same set of transitions that is covered by the single operator our approach learns. Upon inspection, 'CI + QE' learns overly specific operators when trying to cluster effects that predict the entire state, to the point where 'Quantified Delete Effects' are not fully utilized. For the full set of operators learned by our algorithm on the 'Sorting Books' task, see Figure 6.

Figure 5: Operator Comparison. Operators learned by our approach (left) and 'CI + QE' (right) for Open in the Opening Packages environment (top) and NavigateTo in the Collecting Cans environment (bottom). Our approach learns fewer operators that are generally simpler, and thus more conducive to effective bilevel planning and generalization. (The listings show that our approach covers these transitions with a single NavigateTo-pop operator and a single Open-package operator, while 'CI + QE' learns several more specific variants of each.)

Figure 6: The full set of operators learned by our algorithm on the Sorting Books task (Grasp, NavigateTo, and PlaceOnTop operators for notebooks, hardbacks, and the shelf).
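The operator listings referenced in Figures 5 and 6 all share the same structure: arguments, preconditions, add effects, atomic delete effects, quantified delete effects, and a controller. The sketch below is a minimal Python dataclass rendering of that structure, populated with the Open-package0 operator from Figure 5; the class and field names are illustrative and are not taken from the released code.

```python
from dataclasses import dataclass, field
from typing import Tuple

@dataclass(frozen=True)
class LearnedOperator:
    """A learned operator of the kind shown in Figures 5 and 6 (illustrative sketch)."""
    name: str                      # e.g. "Open-package0"
    arguments: Tuple[str, ...]     # typed variables, e.g. ("?x0:package",)
    preconditions: frozenset       # atoms that must hold before the controller runs
    add_effects: frozenset         # atoms predicted to become true
    delete_effects: frozenset      # atomic delete effects (atoms predicted to become false)
    quantified_delete_effects: frozenset = field(default_factory=frozenset)
    controller: str = ""           # the parameterized controller, e.g. "Open-package(?x0:package)"

open_package0 = LearnedOperator(
    name="Open-package0",
    arguments=("?x0:package",),
    preconditions=frozenset({"closed-package(?x0:package)", "handempty()",
                             "openable-package(?x0:package)", "reachable-package(?x0:package)"}),
    add_effects=frozenset({"open-package(?x0:package)"}),
    delete_effects=frozenset({"closed-package(?x0:package)"}),
    quantified_delete_effects=frozenset({"ontop-package-room_floor"}),
    controller="Open-package(?x0:package)",
)
```

The quantified delete effects are stored at the predicate level, reflecting that the operator declines to predict exactly which atoms of those predicates change.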
i84V7i6KEMd | Sample-Efficient Preference-based ReinforcementLearning with Dynamics Aware RewardsKatherine Metcalf Miguel Sarabia Natalie Mackraz Barry-John TheobaldApple, California, USA{kmetcalf, miguelsdc, natalie mackraz, barryjohn theobald }@apple.comAbstract: Preference-based reinforcement learning (PbRL) aligns a robot behav-ior with human preferences via a reward function learned from binary feedbackover agent behaviors. We show that dynamics-aware reward functions improvethe sample efficiency of PbRL by an order of magnitude. In our experiments weiterate between: (1) learning a dynamics-aware state-action representation zsaviaa self-supervised temporal consistency task, and (2) bootstrapping the preference-based reward function from zsa, which results in faster policy learning and betterfinal policy performance. For example, on quadruped-walk, walker-walk, andcheetah-run, with 50 preference labels we achieve the same performance as ex-isting approaches with 500 preference labels, and we recover 83% and 66% ofground truth reward policy performance versus only 38% and 21%. The per-formance gains demonstrate the benefits of explicitly learning a dynamics-awarereward model. Repo: https://github.com/apple/ml-reed .Keywords: human-in-the-loop learning, preference-based RL, RLHF1 IntroductionThe quality of a reinforcement learned (RL) policy depends on the quality of the reward func-tion used to train it. However, specifying a reliable numerical reward function is challenging.For example, a robot may learn to maximize a defined reward function without actually com-pleting a desired task, known as reward hacking [1, 2]. Instead, preference-based reinforcementlearning (PbRL) infers the reward values by way of preference feedback used to train a policy[3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. Using preference feedback avoids the need to manually defineabsolute numerical reward values (e.g. in TAMER [13]) and is easier to provide than correctivefeedback (e.g. in DA GGER [14]). However, many existing PbRL methods require either demonstra-tions [5], which are not always feasible to provide, or an impractical number of feedback samples[3, 4, 11, 12, 15].We target sample-efficient reward function learning by exploring the benefits of dynamics-awarepreference-learned reward functions or Rewards Encoding Environment Dynamics (REED) (Sec-tion 4.1). Fast alignment between robot behaviors and human needs is essential for robots operatingon real world domains. Given the difficulty people face when providing feedback for a single state-action pair [13], and the importance of defining preferences over transitions instead of single state-action pairs [4], it is likely that people’s internal reward functions are defined over outcomes ratherthan state-action pairs . We hypothesize that: (1) modelling the relationship between state, action,and next-state triplets is essential to learn preferences over transitions, (2) encoding awareness ofdynamics with a temporal consistency objective will allow the reward function to better generalizeover states and actions with similar outcomes, and (3) exposing the reward model to all transitionsexperienced by the policy during training will result in more stable reward estimations during re-ward and policy learning. 
Therefore, we incorporate environment dynamics via a self-supervisedtemporal consistency task using the state-of-the-art self-predictive representations (SPR) [16] as onesuch method for capturing environment dynamics.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.We evaluate the benefits of dynamics-awareness using the current state-of-the-art in preferencelearning [3, 4]. In our experiments, which follow Lee et al. [10], REED reward functions out-perform non-REED reward functions across different preference dataset sizes, quality of preferencelabels, observation modalities, and tasks (Section 5). REED reward functions lead to faster policytraining and reduce the number of preference samples needed (Section 6) supporting our hypothesesabout the importance of environments dynamics for preference-learned reward functions.2 Related WorkLearning from Human Feedback. Learning reward functions from preference-based feedback[7, 17, 18, 19, 20, 21, 22, 23] has been used to address the limitations of learning policies directlyfrom human feedback [24, 25, 26] by inferring reward functions from either task success [27, 28, 29]or real-valued reward labels [30, 31]. Learning policies directly from human feedback is inefficientas near constant supervision is commonly assumed. Inferring reward functions from task successfeedback requires examples of success, which can be difficult to acquire in complex and multi-steptask domains. Finally, people have difficulty providing reliable, real-valued reward labels. PbRLwas extended to deep RL domains by Christiano et al. [3], then improved upon and made more effi-cient by PEBBLE [4] followed by SURF [11], Meta-Reward-Net (MRN) [12], and RUNE [15]. Toreduce the feedback complexity of PbRL, PEBBLE [4] sped up policy learning via (1) intrinsically-motivated exploration, and (2) relabelling the experience replay buffer. Both techniques improvedthe sample complexity of the policy and the trajectories generated by the policy, which were thenused to seek feedback. SURF [11] reduced feedback complexity by incorporating augmentations andpseudo-labelling into the reward model learning. RUNE [15] improved feedback sample complexityby guiding policy exploration with reward uncertainty. MRN [12] incorporated policy performancein reward model updates, but further investigation is required to ensure that the method does notallow the policy to influence and bias how the reward function is learned, two concerns called outin [32]. SIRL [33] adds an auxiliary contrastive objective to encourage the reward function to learnsimilar representations for behaviors human labellers consider to be similar. However, this approachrequires extra feedback from human teachers to provide information about which behaviors are sim-ilar to one another. Additionally, preference-learning has also been incorporated into data-drivenskill extraction and execution in the absence of a known reward function [34]. Of the extensions andimprovements to PbRL, only MRN [12] and SIRL [33], like REED, explore the benefits of auxiliaryinformation.Encoding Environment Dynamics. Prior work has demonstrated the benefits of encoding environ-ment dynamics in the state-action representation of a policy [16, 35, 36], and reward functions forimitation learning [37, 38] and inverse reinforcement learning [39, 40]. 
Additionally, it is common for dynamics models to predict both the next state and the environment's reward [35], which suggests it is important to imbue the reward function with awareness of the dynamics. The primary self-supervised approach to learning a dynamics model is to predict the latent next state [16, 35, 41, 42], and the current state of the art in data-efficient RL [16, 36] uses SPR [16] to do exactly this. Unlike prior work in imitation and inverse reinforcement learning, we explicitly evaluate the benefits of dynamics-aware auxiliary objectives versus the regularization effect induced by auxiliary objectives.

3 Preference-based Reinforcement Learning

RL trains an agent to achieve tasks via environment interactions and reward signals [43]. For each time step t the environment provides a state s_t used by the agent to select an action according to its policy a_t ∼ π_φ(a|s_t). Then a_t is applied to the environment, which returns a next state according to its transition function s_{t+1} ∼ τ(s_t, a_t) and a reward r(s_t, a_t). The agent's goal is to learn a policy π_φ maximizing the expected discounted return, Σ_{k=0}^{∞} γ^k r(s_{t+k}, a_{t+k}). In PbRL [3, 4, 5, 7, 8, 9, 10], π_φ is trained with a reward function r̂_ψ distilled from preferences P_ψ iteratively queried from a teacher, where r_ψ is assumed to be a latent factor explaining the preference P_ψ. A buffer B of transitions is accumulated as π_φ learns and explores.

A labelled preference dataset D_pref is acquired by querying a teacher for preference labels every K steps of policy training and is stored as triplets (σ1, σ2, y_p), where σ1 and σ2 are trajectory segments (sequences of state-action pairs) of length l, and y_p is a preference label indicating which, if any, of the trajectories is preferred [10]. To query the teacher, the M maximally informative pairs of trajectory segments (e.g. pairs that most reduce model uncertainty) are sampled from B, sent to the teacher for preference labelling, and stored in D_pref [10, 23, 44, 45]. Typically D_pref is used to update r̂_ψ on a schedule conditioned on the training steps for π_φ (e.g. every time the teacher is queried).

The preference triplets (σ1, σ2, y_p) create a supervised preference prediction task to approximate r_ψ with r̂_ψ [3, 4, 19]. The prediction task follows the Bradley-Terry model [46] for a stochastic teacher and assumes that the preferred trajectory has a higher cumulative reward according to the teacher's r_ψ. The probability of the teacher preferring σ1 over σ2 (σ1 ≻ σ2) is formalized as:

P_ψ[σ1 ≻ σ2] = exp( Σ_t r̂_ψ(s^1_t, a^1_t) ) / Σ_{i∈{1,2}} exp( Σ_t r̂_ψ(s^i_t, a^i_t) ),    (1)

where s^i_t is the state at time step t of trajectory i ∈ {1, 2}, and a^i_t is the corresponding action taken. The parameters ψ of r̂_ψ are optimized such that the binary cross-entropy over D_pref is minimized:

L_ψ = − E_{(σ1, σ2, y_p) ∼ D_pref} [ y_p(0) log P_ψ[σ2 ≻ σ1] + y_p(1) log P_ψ[σ1 ≻ σ2] ].    (2)

While P_ψ[σ1 ≻ σ2] and L_ψ are defined over trajectory segments, r̂_ψ operates over individual (s_t, a_t) pairs. Each reward estimation in Equation 1 is made independently of the other (s_t, a_t) pairs in the trajectory, and P_ψ[σ1 ≻ σ2] simply sums the independently estimated rewards. Therefore, environment dynamics, or the outcome of different actions in different states, are not explicitly encoded in the reward function, limiting its ability to model the relationship between state-action pairs and the values associated with their outcomes.
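As a concrete illustration of Equations 1 and 2, the following is a minimal PyTorch-style sketch of the preference loss over a batch of segment pairs. The reward-network interface, tensor shapes, and variable names are assumptions made for the example, not the authors' implementation, and equally-preferred pairs are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_net, seg1, seg2, labels):
    """Bradley-Terry preference loss over trajectory segments.

    seg1, seg2: dicts with 'states' of shape (B, l, state_dim) and 'actions' of shape (B, l, action_dim).
    labels: (B,) long tensor with 1 if segment 1 is preferred and 0 if segment 2 is preferred.
    reward_net(states, actions) is assumed to return per-step rewards of shape (B, l).
    """
    ret1 = reward_net(seg1["states"], seg1["actions"]).sum(dim=1)  # summed predicted reward, segment 1
    ret2 = reward_net(seg2["states"], seg2["actions"]).sum(dim=1)  # summed predicted reward, segment 2
    # Softmax over the two summed returns gives P[seg2 > seg1] (index 0) and P[seg1 > seg2] (index 1),
    # so cross-entropy against the preference label reproduces Equation 2 for one-hot labels.
    logits = torch.stack([ret2, ret1], dim=1)                      # shape (B, 2)
    return F.cross_entropy(logits, labels)
```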
By supplementing the supervised preference prediction task with a self-supervised temporal consistency task (Section 4.1), we take advantage of all transitions experienced by π_φ to learn a state-action representation that explicitly encodes environment dynamics and can be used to learn to solve the preference prediction task.

4 Dynamics-Aware Reward Function

In this section, we present our approach to encoding dynamics-awareness, via a temporal consistency task, into the state-action representation of a preference-learned reward function. There are many methods for encoding dynamics, and we show results using SPR, the current state of the art. The main idea is to learn a state-action representation that is predictive of the latent representation of the next state, using a self-supervised temporal consistency task and all transitions experienced by π_φ. Preferences are then learned with a linear layer over the state-action representation.

4.1 Rewards Encoding Environment Dynamics (REED)

We use the SPR [16] self-supervised temporal consistency task to learn state-action representations that are predictive of likely future states and thus environment dynamics. The state-action representations are then bootstrapped to solve the preference prediction task in Equation 1 (see Figure 1 for an overview of the architecture). The SPR network is parameterized by ψ and θ, where ψ is shared with r̂_ψ and θ is unique to the SPR network. At train time, batches of (s_t, a_t, s_{t+1}) triplets are sampled from a buffer B and encoded: f_s(s_t, ψ_s) → z^s_t, f_a(a_t, ψ_a) → z^a_t, f_sa(z^s_t, z^a_t, ψ_sa) → z^sa_t, and f_s(s_{t+1}, ψ_s) → z^s_{t+1}. The embedding z^s_{t+1} is used to form our target for Equations 3 and 4. A dynamics function g_d(z^sa_t, θ_d) → ẑ^s_{t+1} then predicts the latent representation of the next state z^s_{t+1}. The functions f_s(·), f_a(·), and g_d(·) are multi-layer perceptrons (MLPs), and f_sa(·) concatenates z^s_t and z^a_t along the feature dimension before encoding them with an MLP. To encourage knowledge of environment dynamics in z^sa_t, g_d(·) is kept simple, e.g. a linear layer.

Following [16], a projection head h_pro(·, θ_pro) is used to project both the predicted and target next-state representations to smaller latent spaces via a bottleneck layer, and a prediction head h_pre(·, θ_pre) is used to predict the target projections: ŷ^d_{t+1} = h_pre(h_pro(ẑ^s_{t+1}, θ_pro), θ_pre) and y^d_{t+1} = h_pro(z^s_{t+1}, θ_pro). Both h_pro and h_pre are modelled using linear layers.

Figure 1: Architecture for the self-predictive representation (SPR) objective [16] (in yellow) and the preference-learned reward function (in blue). Modules in green are shared between SPR and the preference-learned reward function.

The benefits of REED should be independent of the self-supervised objective function. Therefore, we present results for two different self-supervised objectives: Distillation (i.e. SimSiam with loss L_SS) [36, 47] and Contrastive (i.e. SimCLR with loss L_C) [48, 49, 50], referred to as Distillation REED and Contrastive REED respectively. L_SS and L_C are defined as:

L_SS = −cos( ŷ^d_{t+1}, sg(y^d_{t+1}) ),    (3)

L_C = −log [ exp( cos(ŷ^d_{t+1}, sg(y^d_{t+1})) / τ ) / Σ_{k=1}^{2N} 1[s_k ≠ s_{t+1}] exp( cos(y^d_{t+1}, ŷ^d_k) / τ ) ].    (4)
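A minimal PyTorch-style sketch of how these two objectives can be computed from a batch of predicted and target projections is shown below. It uses a simplified in-batch construction of the contrastive negatives (the batch plays the role of the 2N candidates in Equation 4), so the shapes and details are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(pred_proj, target_proj):
    """SimSiam-style L_SS: negative cosine similarity with a stop-gradient on the target."""
    target_proj = target_proj.detach()                       # sg(y_{t+1})
    return -F.cosine_similarity(pred_proj, target_proj, dim=-1).mean()

def contrastive_loss(pred_proj, target_proj, temperature=0.1):
    """SimCLR/NT-Xent-style L_C: each predicted projection should identify its own
    target among the other targets in the batch (a simplified negative set)."""
    pred = F.normalize(pred_proj, dim=-1)
    target = F.normalize(target_proj.detach(), dim=-1)       # stop-gradient on targets
    logits = pred @ target.t() / temperature                 # (B, B) cosine similarities
    labels = torch.arange(pred.shape[0], device=pred.device) # positive pairs lie on the diagonal
    return F.cross_entropy(logits, labels)
```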
In L_SS, a stop gradient operation, sg(·), is applied to y^d_{t+1}, and then ŷ^d_{t+1} is pushed to be consistent with y^d_{t+1} via a negative cosine similarity loss. In L_C, a stop gradient operation is applied to y^d_{t+1}, and then ŷ^d_{t+1} is pushed to be predictive of which candidate next state is the true next state via the NT-Xent loss.

Rather than applying augmentations to the input, temporally adjacent states are used to create the different views [16, 36, 49, 50]. Appendix D details the architectures for the SPR component networks.

State-Action Fusion Reward Network. REED requires a modification to the reward network architecture used by Christiano et al. [3] and PEBBLE [4], as latent state representations are compared. Instead of concatenating raw state-action features, we separately encode the state, f_s(·), and action, f_a(·), before concatenating the embeddings and passing them to the body of our reward network. For the purposes of comparison, we refer to the modified reward network as the state-action fusion (SAF) reward network. For architecture details, see Appendix D.

4.2 Incorporating REED into PbRL

The self-supervised temporal consistency task is used to update the parameters ψ and θ each time the reward network is updated (every K steps of policy training, Section 3). All transitions in the buffer B are used to update the state-action representation z_sa, which effectively increases the amount of data used to train the reward function from M·K preference triplets to all state-action pairs experienced by the policy¹. REED precedes selecting and presenting the M queries to the teacher for feedback. Updating ψ and θ prior to querying the teacher exposes z_sa to a larger space of environment dynamics (all transitions collected since the last model update), which enables the model to learn more about the world prior to selecting informative trajectory pairs for the teacher to label. The state-action representation z_sa and a linear prediction layer are used to solve the preference prediction task (Equation 1). After each update to r̂_ψ, π_φ is trained on the updated r̂_ψ. See Appendix C for REED incorporated into PrefPPO [3] and PEBBLE [4].

¹Note the reward function is still trained with M·K triplets, but the state-action encoder has the opportunity to better capture the dynamics of the environment.

5 Experimental Setup

Our experimental results in Section 6 demonstrate that learning a dynamics-aware reward function explicitly improves PbRL policy performance for both state-space and image-space observations. To verify that the performance improvements are due to dynamics-awareness rather than just the inclusion of a self-supervised auxiliary task, we compared against an image-augmentation-based auxiliary task (Image Aug.). The experiments and results are provided in Appendix G and show that the performance improvements are indeed due specifically to encoding environment dynamics in the reward function. Additionally, we compare against the SURF [11], RUNE [15], and MRN [12] extensions to PEBBLE.

We follow the experiments outlined by the B-Pref benchmark [10]. Models are evaluated on the DeepMind Control Suite (DMC) [51] and MetaWorld [52] environment simulators. DMC provides locomotion tasks with varying degrees of difficulty and MetaWorld provides object manipulation tasks. For each DMC and MetaWorld task, we evaluate performance on varying amounts of feedback, i.e.
different preference dataset sizes, and different labelling strategies for the synthetic teacher.The number of queries ( M) presented to the teacher every Ksteps is set such that for a given task,teacher feedback always stops at the same episode. Feedback is provided by simulated teachersfollowing [3, 4, 10, 11, 34], where six labelling strategies are used to evaluate model performanceunder different types and amounts of label noise. The teaching strategies were first proposed byB-Pref [10]. An overview of the labelling strategies is provided in Appendix B.Following Christiano et al. [3] and PEBBLE [4], ˆrψis modelled with an ensemble of three networkswith a corresponding ensemble for the SPR auxiliary task. The ensemble is key for disagreement-based query sampling (Appendix A) and has been shown to improve final policy performance [10].All queried segments are of a fixed length ( l= 50 )2. The Adam optimizer [53] with β1= 0.9,β2= 0.999, and no L2-regularization [54] is used to train the reward functions. For all PEBBLE-related methods, intrinsic policy training is reported in the learning curves and occurs over thefirst 9000 steps. The batch size for training on the preference dataset is M, matching the number ofqueries presented to the teacher, and varies based on the amount of feedback. For details about modelarchitectures, hyper-parameters, and the image augmentations used in the image-augmentation self-supervised auxiliary task, refer to Appendices D, E, and G. None of the hyper-parameters nor archi-tectures are altered from the original SAC [55], PPO [56], PEBBLE [4], PrefPPO [4], Meta-Reward-Net [12], RUNE [15], nor SURF [11] papers. The policy and preference learning implementationsprovided in the B-Pref repository [57] are used for all experiments.6 ResultsThe synthetic preference labellers allow policy performance to be evaluated against the ground truthreward function and is reported as mean and standard deviation over 10 runs. Both learning curvesand mean normalized returns are reported, where mean normalized returns [10] are given by:normalized returns =1TXtrψ(st, πˆrψφ(at))rψ(st, πrψφ(at)), (5)where Tis the number of policy training training steps or episodes, rψis the ground truth rewardfunction, πˆrψφis the policy trained on the learned reward function, and πrψφis the policy trained onthe ground truth reward function.2Fixed segments lengths are not strictly necessary, and, when evaluating with simulated humans, are harmfulwhen the reward is a constant step penalty.5Table 1: Mean and ±s.d. normalized return (Equation 5) over 10 random seeds with the oraclelabeller and disagreement sampling. The best result for each condition is in bold . BASErefers to thePEBBLE or PrefPPO baseline, +D ISTILL distillation REED, and +C ONTRAST contrastive REED.SURF, RUNE, and MRN are baselines. Results are normalized relative to SAC. 
See AppendicesH.2 and H.4 for all tasks, feedback amounts, and P REFPPO results.TASK FEED.PEBBLEBASE +DISTILL +CONTRAST SURF [11] RUNE [15] MRN [12]WALKERWALK500 0.74±0.18 0 .86±0.20 0.90±0.17 0.78±0.12 0 .76±0.20 0 .77±0.2050 0.21±0.10 0.66±0.24 0.62±0.22 0 .47±0.13 0 .23±0.12 0 .38±0.12QUADRUPEDWALK500 0.56±0.21 1.10±0.21 1 .10±0.21 0.80±0.18 1.10±0.20 1 .10±0.2150 0.38±0.26 0 .65±0.16 0 .31±0.18 0 .48±0.19 0 .44±0.21 0.83±0.12CHEETAHRUN500 0.86±0.14 0 .88±0.22 0.94±0.21 0.56±0.16 0 .61±0.17 0 .80±0.1650 0.35±0.11 0 .63±0.23 0.70±0.28 0.55±0.18 0 .32±0.12 0 .38±0.16BUTTONPRESS10K 0.66±0.26 Collapses 0.65±0.27 0.68±0.29 0.45±0.21 0 .59±0.272.5K 0.37±0.18 Collapses 0.49±0.25 0.40±0.18 0 .22±0.10 0 .35±0.15SWEEPINTO10K 0.28±0.12 Collapses 0.47±0.23 0.48±0.26 0.29±0.15 0 .28±0.252.5K 0.15±0.09 Collapses 0.21±0.13 0.25±0.13 0.16±0.11 0 .22±0.12MEAN - 0.46 0.47 0.64 0.55 0.46 0.57Figure 2: Learning curves for three DMC and two MetaWorld tasks with 50 and 500 (DMC) and2.5k and 10k (MetaWorld) pieces of feedback, for state-space observations, disagreement sampling,and oracle labels. Refer to Appendices H.1 and H.3 for more tasks and feedback amounts.505001000Feedback Amount0.00.51.01.5Normalized ReturnWalker Walk505001000Feedback AmountQuadruped Walk505001000Feedback AmountCheetah Run25001000020000Feedback AmountButton Press25001000020000Feedback AmountSweep IntoFigure 3: Mean normalized return across or-acle, noisy, mistake, and equal labellers Leeet al. [10] on quadruped-walk with state-space observations for 50, 500, and 1000pieces of feedback.Learning curves for state-space observations forSAC and PPO trained on the ground truth reward,PEBBLE, PrefPPO, PEBBLE + REED, PrefPPO +REED, Meta-Reward-Net, SURF, and RUNE areshown in Figure 2, and mean normalized returns[10] are shown in Table 1 (Appendix H.2 and H.4for PrefPPO). The learning curves show that rewardfunctions learned using REED consistently outper-form the baseline methods for locomotive tasks,and for manipulation tasks REED methods are con-sistently a top performer, especially for smalleramounts of feedback. On average across tasks andfeedback amounts, REED methods outperform baselines (M EAN in Table 1).Learning curves for image-space observations are presented for the PEBBLE and PEBBLE+REEDmethods in Figure 4, and mean normalized returns [10] in Appendix H.4 . The impact of preferencelabel quality on policy performance for PEBBLE and PEBBLE+REED is shown in Figure 3 andTable 2. On average, across labeller strategies, REED-based methods outperform baselines.Figures 2, 3 and 4 show that REED improves the speed of policy learning and the final performanceof the learned policy relative to non-REED methods. The increase in policy performance is observedacross environments, labelling strategies, amount of feedback, and observation type. We ablated the6Apple Confidential–Internal Use OnlyApple Confidential–Internal Use Onlypebble+contr.pebble+dist.pebblepposacprefppoprefppo+distill.prefppo+contr.Figure 4: Learning curves for three DMC and two MetaWorld tasks with 50 and 500 (DMC) and2.5k and 10k (MetaWorld) pieces of feedback, for image–space observations, disagreement sam-pling, and oracle labels. Only PEBBLE is evaluated for the image-space due to the poor state-spaceperformance of PrefPPO. Results for more tasks and feedback amounts are available in AppendicesH.1 and H.3.impact of the modified reward architecture and found that the REED performance improvements arenot due to the modified reward network architecture (Appendix F). 
Refer to Appendix H for resultsacross all combinations of task, feedback amount, and teacher labelling strategies for state-spaceobservations (H.1 and H.2), and image-space MetaWorld results on more tasks (drawer open, drawerclose, window open, and door open) and feedback amounts (H.3 and H.4). The trends observed inthe subset of tasks included in Table 1, and Figures 2 and 4 are also observed in the additional tasksand experimental conditions in the Appendices.6.1 Source of ImprovementsTable 2: Mean normalized returns acrossfeedback amounts, tasks, and labeller types(oracle, mistake, noisy, and equal).PEBBLE +D ISTILL +CONTRAST SURF RUNE MRN0.53 0.47 0.68 0.54 0.46 0.50There is no clear advantage between the Distillationand Contrastive REED objectives on the DMC loco-motion tasks, suggesting the improved policy perfor-mance stems from encoding awareness of dynamicsrather than any particular self-supervised objective.However in the MetaWorld object manipulation tasks, Distillation REED tends to collapse with Con-trastive REED being the more robust method. From comparing SAC, PEBBLE, PEBBLE+REED,and PEBBLE+Image Aug. (Appendix G.3), we see that PEBBLE+Image Aug. improves perfor-mance over PEBBLE with large amounts of feedback (e.g. 4.2 times higher mean normalized re-turns for walker-walk at 1000 pieces of feedback), but does not have a large effect on performancefor lower-feedback regimes (e.g. 5.6% mean normalized returns with PEBBLE+Image Aug. versus5.5% with PEBBLE for walker-walk at 50 pieces of feedback). In contrast, incorporating REEDalways yields higher performance than both the baseline and PEBBLE+Image Aug. regardless ofthe amount of feedback. For results analyzing the generalizability and stability of reward functionlearning when using a dynamics-aware auxiliary objective, see Appendix I.7 Discussion and LimitationsThe benefits of dynamics awareness are especially pronounced for labelling types that introduce in-correct labels (i.e. mistake and noisy) (Figure 3 and Appendix H) and smaller amounts of preferencefeedback. For example, on state-space observation DMC tasks with 50 pieces of feedback, REEDmethods more closely recover the performance of the policy trained on the ground truth reward re-covering 62 – 66% versus 21% on walker-walk, and 65 – 85% versus 38% on quadruped-walk forPEBBLE-based methods (Table 1). Additionally, PEBBLE+REED methods retain policy perfor-mance with a factor of 10 fewer pieces of feedback compared to PEBBLE. Likewise, when con-sidering image-space observations, PEBBLE+REED methods trained with 10 times less feedbackexceed the performance of base PEBBLE on all DMC tasks. For instance, PEBBLE+ContrastiveREED achieves a mean normalized return of 53% with 50 pieces of feedback whereas baselinePEBBLE reaches 36% on the same task with 500 pieces of feedback.7The policy improvements are smaller for REED reward functions on MetaWorld tasks than they arefor DMC tasks and are generally smaller for PrefPPO than PEBBLE due to a lack of data diversity inthe buffer Bused to train on the temporal consistency task. For PrefPPO lack of data diversity is dueto slow learning and for MetaWorld a high similarity between observations. In particular, DistillationREED methods on state-space observations frequently suffer representation collapse and are notreported here. 
The objective, in this case SimSiam, learns a degenerate solution, where states areencoded by an constant function and actions are ignored due to the source and target views having anear perfect cosine similarity. However, representation collapse is not observed for the image-spaceobservations, and baseline performance is retained with one quarter the amount of feedback whentraining with PEBBLE+REED methods. If the amount of feedback is kept constant, we notice a25% to 70% performance improvement over the baseline for all PEBBLE+REED methods in theButton Press task.The benefits of dynamics awareness can be compared against the benefits of other approaches toimproving feedback sample complexity, specifically pseudo-labelling (in SURF) [11], guiding pol-icy exploration with reward uncertainty (in RUNE) [15], and incorporating policy performance intoreward updates (in MRN) [12]. REED methods consistently outperform SURF, RUNE, and MRNon the DMC tasks demonstrating the importance of dynamics awareness for locomotion tasks. Onthe MetaWorld object manipulation tasks, REED frequently outperforms SURF, RUNE, and MRN,especially for smaller amounts of feedback, but the performance gains are smaller than for DMC.Smaller performance gains on MetaWorld relative to other sample efficiency methods is in linewith general REED findings for MetaWorld (above) that relate to the slower environment dynamics.However, it is important to call out that all four methods are complementary and can be combined.Across tasks and feedback amounts, policy performance is higher for rewards that are learned onthe state-space observations compared to those learned on image-space observations. There areseveral tasks, such as cheetah-run and sweep into, for which PEBBLE, and therefore all REEDexperiments that build on PEBBLE, are not able to learn reward functions that lead to reasonablepolicy performance when using the image-space observations.The results demonstrate the benefits and importance of environment dynamics to preference-learnedreward functions.Limitations The limitations of REED are: (1) more complex tasks still require a relatively largenumber of preference labels, (2) extra compute and time are required, (3) Distillation REED cancollapse when observations have high similarity, and (4) redundant transitions in the buffer Bfromslow policy learning or state spaces with low variability result in over-fitting on the temporal consis-tency task.8 ConclusionWe have demonstrated the benefits of dynamics awareness in a preference-learned reward for PbRL,especially when feedback is limited or noisy . Across experimental conditions, we found REEDmethods retain the performance of PEBBLE with a 10-fold decrease in feedback. The benefits areobserved across tasks, observation modalities, and labeller types. Additionally, we found that, com-pared to the other PbRL extensions targeting sample efficiency, REED most consistently producedthe largest performance gains, especially for smaller amounts of feedback. The resulting sample ef-ficiency is necessary for learning reward functions aligned with user preferences in practical roboticsettings.8References[1] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Man ́e. Concrete prob-lems in AI safety. arXiv preprint arXiv:1606.06565 , 2016.[2] D. Hadfield-Menell, S. Milli, P. Abbeel, S. Russell, and A. Dragan. Inverse reward design.Advances in Neural Information Processing Systems , 30, 2017.[3] P. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei. 
Deep reinforcementlearning from human preferences. volume 30, 2017.[4] K. Lee, L. Smith, and P. Abbeel. PEBBLE: Feedback-efficient interactive reinforcement learn-ing via relabeling experience and unsupervised pre-training. In Proceedings of the Interna-tional Conference on Machine Learning , pages 6152–6163. PMLR, 2021.[5] B. Ibarz, J. Leike, T. Pohlen, G. Irving, S. Legg, and D. Amodei. Reward learning from humanpreferences and demonstrations in Atari. Advances in Neural Information Processing Systems ,31, 2018.[6] D. Hadfield-Menell, S. Russell, P. Abbeel, and A. Dragan. Cooperative inverse reinforcementlearning. Advances in Neural Information Processing Systems , 29, 2016.[7] J. Leike, D. Krueger, T. Everitt, M. Martic, V . Maini, and S. Legg. Scalable agent alignmentvia reward modeling: A research direction. arXiv preprint arXiv:1811.07871 , 2018.[8] N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. V oss, A. Radford, D. Amodei, andP. Christiano. Learning to summarize with human feedback. Advances in Neural InformationProcessing Systems , 33:3008–3021, 2020.[9] J. Wu, L. Ouyang, D. Ziegler, N. Stiennon, R. Lowe, J. Leike, and P. Christiano. Recursivelysummarizing books with human feedback. arXiv preprint arXiv:2109.10862 , 2021.[10] K. Lee, L. Smith, A. Dragan, and P. Abbeel. B-Pref: Benchmarking preference-based rein-forcement learning. Neural Information Processing Systems , 2021.[11] J. Park, Y . Seo, J. Shin, H. Lee, P. Abbeel, and K. Lee. SURF: Semi-supervised reward learningwith data augmentation for feedback-efficient preference-based reinforcement learning. arXivpreprint arXiv:2203.10050 , 2022.[12] R. Liu, F. Bai, Y . Du, and Y . Yang. Meta-Reward-Net: Implicitly differentiable reward learn-ing for preference-based reinforcement learning. Advances in Neural Information ProcessingSystems , 35, 2022.[13] W. Knox and P. Stone. Tamer: Training an agent manually via evaluative reinforcement. InProceedings of the International Conference on Development and Learning , pages 292–297.IEEE, 2008.[14] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predic-tion to no-regret online learning. In Proceedings of the International Conference on ArtificialIntelligence and Statistics , pages 627–635, 2011.[15] X. Liang, K. Shu, K. Lee, and P. Abbeel. Reward uncertainty for exploration in preference-based reinforcement learning. 2022.[16] M. Schwarzer, A. Anand, R. Goel, R. Hjelm, A. Courville, and P. Bachman. Data-efficientreinforcement learning with self-predictive representations. In Proceedings of the InternationalConference on Learning Representations , 2020.[17] R. Akrour, M. Schoenauer, and M. Sebag. Preference-based policy learning. In Proceedings ofthe Joint European Conference on Machine Learning and Knowledge Discovery in Databases ,pages 12–27. Springer, 2011.9[18] R. Akrour, M. Schoenauer, and M. Sebag. April: Active preference learning-based reinforce-ment learning. In Proceedings of the Joint European Conference on Machine Learning andKnowledge Discovery in Databases , pages 116–131. Springer, 2012.[19] A. Wilson, A. Fern, and P. Tadepalli. A Bayesian approach for policy learning from trajectorypreference queries. Advances in Neural Information Processing Systems , 25, 2012.[20] H. Sugiyama, T. Meguro, and Y . Minami. Preference-learning based inverse reinforcementlearning for dialog control. In Proceedings of Interspeech , 2012.[21] C. Wirth and J. F ̈urnkranz. Preference-based reinforcement learning: A preliminary survey. 
InProceedings of the Workshop on Reinforcement Learning from Generalized Feedback: BeyondNumeric Rewards , 2013.[22] C. Wirth, J. F ̈urnkranz, and G. Neumann. Model-free preference-based reinforcement learning.InProceedings of the Conference on Artificial Intelligence (AAAI) , 2016.[23] D. Sadigh, A. Dragan, S. Sastry, and S. Seshia. Active preference-based learning of rewardfunctions. 2017.[24] P. Pilarski, M. Dawson, T. Degris, F. Fahimi, J. Carey, and R. Sutton. Online human trainingof a myoelectric prosthesis controller via actor-critic reinforcement learning. In Proceedingsof the International Conference on Rehabilitation Robotics , pages 1–7. IEEE, 2011.[25] J. MacGlashan, M. Ho, R. Loftin, B. Peng, G. Wang, D. Roberts, M. Taylor, and M. Littman.Interactive learning from policy-dependent human feedback. In Proceedings of the Interna-tional Conference on Machine Learning , pages 2285–2294. PMLR, 2017.[26] D. Arumugam, J. Lee, S. Saskin, and M. Littman. Deep reinforcement learning from policy-dependent human feedback. arXiv preprint arXiv:1902.04257 , 2019.[27] M. Zhang, S. Vikram, L. Smith, P. Abbeel, M. Johnson, and S. Levine. Solar: Deep structuredrepresentations for model-based reinforcement learning. In Proceedings of the InternationalConference on Machine Learning , pages 7444–7453. PMLR, 2019.[28] A. Singh, L. Yang, K. Hartikainen, C. Finn, and S. Levine. End-to-end robotic reinforcementlearning without reward engineering. arXiv preprint arXiv:1904.07854 , 2019.[29] L. Smith, N. Dhawan, M. Zhang, P. Abbeel, and S. Levine. A VID: Learning multi-stage tasksvia pixel-level translation of human videos. arXiv preprint arXiv:1912.04443 , 2019.[30] W. Knox and P. Stone. Interactively shaping agents via human reinforcement: The TAMERframework. In Proceedings of the International Conference on Knowledge Capture , pages9–16, 2009.[31] G. Warnell, N. Waytowich, V . Lawhern, and P. Stone. Deep Tamer: Interactive agent shapingin high-dimensional state spaces. In Proceedings of the Conference on Artificial Intelligence(AAAI) , volume 32, 2018.[32] S. Armstrong, J. Leike, L. Orseau, and S. Legg. Pitfalls of learning a reward function online.arXiv preprint arXiv:2004.13654 , 2020.[33] A. Bobu, Y . Liu, R. Shah, D. S. Brown, and A. D. Dragan. Sirl: Similarity-based implicitrepresentation learning. In Proceedings of the 2023 ACM/IEEE International Conference onHuman-Robot Interaction , pages 565–574, 2023.[34] X. Wang, K. Lee, K. Hakhamaneshi, P. Abbeel, and M. Laskin. Skill preferences: Learning toextract and execute robotic skills from human feedback. In Proceedings of the Conference onRobot Learning , pages 1259–1268. PMLR, 2021.10[35] A. Zhang, R. McAllister, R. Calandra, Y . Gal, and S. Levine. Learning invariant representationsfor reinforcement learning without reconstruction. arXiv preprint arXiv:2006.10742 , 2020.[36] W. Ye, S. Liu, T. Kurutach, P. Abbeel, and Y . Gao. Mastering Atari games with limited data.Advances in Neural Information Processing Systems , 34, 2021.[37] D. Brown, R. Coleman, R. Srinivasan, and S. Niekum. Safe imitation learning via fast bayesianreward inference from preferences. In International Conference on Machine Learning , pages1165–1177. PMLR, 2020.[38] H. Sikchi, A. Saran, W. Goo, and S. Niekum. A ranking game for imitation learning. arXivpreprint arXiv:2202.03481 , 2022.[39] D. Brown, W. Goo, P. Nagarajan, and S. Niekum. Extrapolating beyond suboptimal demon-strations via inverse reinforcement learning from observations. 
In International conference onmachine learning , pages 783–792. PMLR, 2019.[40] L. Chen, R. Paleja, and M. Gombolay. Learning from suboptimal demonstration via self-supervised reward regression. In Conference on robot learning , pages 1262–1277. PMLR,2021.[41] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learninglatent dynamics for planning from pixels. In Proceedings of the International Conference onMachine Learning , pages 2555–2565. PMLR, 2019.[42] A. Lee, A. Nagabandi, P. Abbeel, and S. Levine. Stochastic latent actor-critic: Deep rein-forcement learning with a latent variable model. Advances in Neural Information ProcessingSystems , 33:741–752, 2020.[43] R. Sutton and A. Barto. Reinforcement learning: An introduction . MIT press, 2018.[44] E. Biyik and D. Sadigh. Batch active preference-based learning of reward functions. In Pro-ceedings of the Conference on Robot Learning , pages 519–528. PMLR, 2018.[45] E. Biyik, N. Huynh, M. Kochenderfer, and D. Sadigh. Active preference-based gaussian pro-cess regression for reward learning. In Proceedings of the Robotics: Science and Systems ,2020.[46] R. Bradley and M. Terry. Rank analysis of incomplete block designs: I. The method of pairedcomparisons. Biometrika , 39(3/4):324–345, 1952.[47] X. Chen and K. He. Exploring simple Siamese representation learning. In Proceedings of theConference on Computer Vision and Pattern Recognition , pages 15750–15758, 2021.[48] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learn-ing of visual representations. In Proceedings of the International Conference on MachineLearning , pages 1597–1607. PMLR, 2020.[49] A. Oord, Y . Li, and O. Vinyals. Representation learning with contrastive predictive coding.arXiv preprint arXiv:1807.03748 , 2018.[50] B. Mazoure, R. Tachet des Combes, T. Doan, P. Bachman, and R. Hjelm. Deep reinforcementand InfoMax learning. Advances in Neural Information Processing Systems , 33:3686–3698,2020.[51] Y . Tassa, Y . Doron, A. Muldal, T. Erez, Y . Li, D. Casas, D. Budden, A. Abdolmaleki, J. Merel,A. Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690 , 2018.[52] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-World: Abenchmark and evaluation for multi-task and meta reinforcement learning. In Proceedings ofthe Conference on Robot Learning , pages 1094–1100. PMLR, 2020.11[53] D. Kingma and J. Ba. ADAM: A method for stochastic optimization. volume 3, 2015.[54] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin,N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani,S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style,high-performance deep learning library. Advances in neural information processing systems ,32, 2019.[55] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum en-tropy deep reinforcement learning with a stochastic actor. In Proceedings of the InternationalConference on Machine Learning , pages 1861–1870. PMLR, 2018.[56] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms, 2017.[57] K. Lee, L. Smith, A. Dragan, and P. Abbeel. B-Pref, 2021. URL https://github.com/rll-research/BPref .[58] Z. Mandi, F. Liu, K. Lee, and P. Abbeel. Towards more generalizable one-shot visual imitationlearning. 
In Proceedings of the International Conference on Robotics and Automation, pages 2434–2444. IEEE, 2022.
[59] P. Goyal, S. Niekum, and R. Mooney. PixL2R: Guiding reinforcement learning using natural language by mapping pixels to rewards. In Proceedings of the Conference on Robot Learning, pages 485–497. PMLR, 2021.

A Disagreement Sampling

For all experiments in this paper, disagreement sampling is used to select which trajectory pairs will be presented to the teacher for preference labels. Disagreement-based sampling selects trajectory pairs as follows: (1) N segments are sampled uniformly from the replay buffer; (2) the M pairs of segments with the largest variance in preference prediction across the reward network ensemble are sub-sampled. Disagreement-based sampling is used as it reliably resulted in the highest-performing policies compared to the other sampling methods discussed in Lee et al. [10].

B Labelling Strategies

An overview of the six labelling strategies is provided below, ordered from least to most noisy (see [10] for details and configuration specifics); a minimal sketch of the disagreement-based pair selection and of two of these labellers is given after the list:

1. oracle - prefers the trajectory segment with the larger return and equally prefers both segments when their returns are identical
2. skip - follows oracle, except randomly selects 10% of the M query pairs to discard from the preference dataset Dpref
3. myopic - follows oracle, except compares discounted returns (γ = 0.9), placing more weight on transitions at the end of the trajectory
4. equal - follows oracle, except marks trajectory segments as equally preferable when the difference in returns is less than 0.5% of the average ground truth returns observed during the last K policy training steps
5. mistake - follows oracle, except randomly selects 10% of the M query pairs and assigns incorrect labels in a structured way (e.g., a preference for segment two becomes a preference for segment one)
6. noisy - randomly assigns labels with probability proportional to the relative returns associated with the pair, but labels the segments as equally preferred when they have identical returns
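The following is a minimal sketch, not the implementation used in the paper, of the disagreement-based selection described in Appendix A together with the oracle and mistake labellers from the list above. The ensemble interface, function names, and the random (rather than structured) label flipping are illustrative assumptions:

import numpy as np

def sample_segment_pairs(replay_buffer, num_candidates, rng):
    """Step (1) of Appendix A: draw candidate segment pairs uniformly from the
    replay buffer (treated here as a simple sequence of segments)."""
    idx = rng.integers(0, len(replay_buffer), size=(num_candidates, 2))
    return [(replay_buffer[i], replay_buffer[j]) for i, j in idx]

def select_by_disagreement(pairs, ensemble_probs, num_queries):
    """Step (2): keep the pairs whose predicted preference probability has the
    largest variance across the reward-network ensemble.
    ensemble_probs: (num_ensemble_members, num_pairs) array of each member's
    P(first segment preferred)."""
    disagreement = ensemble_probs.var(axis=0)
    keep = np.argsort(-disagreement)[:num_queries]
    return [pairs[i] for i in keep]

def oracle_label(return_1, return_2):
    """'oracle': prefer the higher-return segment; 0.5 denotes equal preference."""
    if return_1 == return_2:
        return 0.5
    return 1.0 if return_1 > return_2 else 0.0

def mistake_label(return_1, return_2, rng, flip_fraction=0.1):
    """'mistake': follow the oracle but corrupt a fraction of labels. The flip
    here is random, whereas [10] applies it in a structured way."""
    label = oracle_label(return_1, return_2)
    if label != 0.5 and rng.random() < flip_fraction:
        label = 1.0 - label
    return label

For example, sample_segment_pairs(buffer, N, np.random.default_rng(0)) followed by select_by_disagreement(pairs, probs, M) mirrors the two-step procedure above, with the selected pairs then labelled by one of the teachers.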
C REED Algorithm

The REED task is specified in Algorithm 1 in the context of the PEBBLE preference-learning algorithm. The main components of the PEBBLE algorithm are included, with our modifications identified in the comments. For the original and complete specification of PEBBLE, please see [4] - Algorithm 2.

Algorithm 1 PEBBLE + REED Training Procedure
1:  Given:
2:    K                      ▷ teacher feedback frequency
3:    M                      ▷ queries per feedback
4:  Initialize:
5:    Qθ                     ▷ parameters for Q-function
6:    r̂ψ                     ▷ learned reward function
7:    SPR(ψ,θ)               ▷ self-future consistency (ψ parameters shared with r̂ψ)
8:    Dpref ← ∅              ▷ preference dataset
9:    DSPR ← ∅               ▷ SPR dataset
10:   ▷ unsupervised policy training and exploration ◁
11:   B, πφ ← EXPLORE()      ▷ [4] - Algorithm 1
12:   ▷ joint policy and reward training ◁
13: for policy train step do
14:   if step % K = 0 then
15:     DSPR ← DSPR ∪ B      ▷ update SPR dataset
16:     for each SPR gradient step do
17:       {(st, at, st+1)} ∼ DSPR                          ▷ sample minibatch
18:       {(ẑst+1, zst+1)} ← SFC_FORWARD({(st, at, st+1)}) ▷ Section D.2
19:       optimize LREED with respect to SPR(ψ,θ)          ▷ Equations (3) and (4)
20:     r̂ψ ← SPRψ            ▷ copy shared SPR parameters to reward model
21:     update Dpref, r̂ψ, and B   ▷ following [4] - Algorithm 2 [lines 9 - 18]
22:   update B, πφ, and Qθ        ▷ following [4] - Algorithm 2 [lines 20 - 27]

D Architectures

The network architectures are specified in PyTorch 1.13. For architecture hyper-parameters, e.g. hidden size and number of hidden layers, see Appendix E.2.

D.1 Self-Predictive Representations Network

The SPR network is implemented in PyTorch. The architectures for the next state projector and consistency predictor when image observations are used come from [58]. The image encoder architecture comes from [59]. The SPR network is initialized as follows:

def build_spr_network(
    self,
    state_size: int,
    state_embed_size: int,
    action_size: int,
    action_embed_size: int,
    hidden_size: int,
    consistency_projection_size: int,
    consistency_comparison_hidden_size: int,
    with_consistency_prediction_head: bool,
    num_layers: int,
    image_observations: bool,
    image_hidden_num_channels: int,
):
    """The network architecture and build logic to complete the REED
    self-supervised temporal consistency task based on SPR.

    Args:
        state_size: number of features defining the agent's state space
        state_embed_size: number of dimensions in the state embedding
        action_size: number of features defining the agent's actions
        action_embed_size: number of dimensions in the action embedding
        hidden_size: number of dimensions in the hidden layers of the
            state-action embedding network
        consistency_projection_size: number of units used to compare the
            predicted and target latent next state
        consistency_comparison_hidden_size: number of units in the hidden
            layers of the next_state_projector and the consistency_predictor
        with_consistency_prediction_head: the consistency prediction head is
            used with the distillation objective, but not with the
            contrastive objective
        num_layers: number of hidden layers used to embed the state-action
            representation
        image_observations: whether image observations are used; if not,
            state-space observations are used
        image_hidden_num_channels: the number of channels to use in the
            image encoder's hidden layers
    """
    # build the network that will encode the state features
    if image_observations:
        state_conv_encoder = nn.Sequential(
            nn.Conv2d(state_size[0], image_hidden_num_channels, 3, stride=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(image_hidden_num_channels, image_hidden_num_channels, 3, stride=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(image_hidden_num_channels, image_hidden_num_channels, 3, stride=1),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
        )
        conv_out_size = torch.flatten(
            state_conv_encoder(torch.rand(size=[1] + list(state_size)))
        ).size()[0]
        self.state_encoder = nn.Sequential(
            state_conv_encoder,
            nn.Linear(conv_out_size, state_embed_size),
            nn.LeakyReLU(negative_slope=1e-2),
        )
    else:
        self.state_encoder = torch.nn.Sequential(
            torch.nn.Linear(state_size, state_embed_size),
            torch.nn.LeakyReLU(negative_slope=1e-2),
        )
    # build the network that will encode the action features
    self.action_encoder = torch.nn.Sequential(
        torch.nn.Linear(action_size, action_embed_size),
        torch.nn.LeakyReLU(negative_slope=1e-2),
    )
    # build the network that models the relationship between the
    # state and action embeddings
    state_action_encoder = []
    hidden_in_size = action_embed_size + state_embed_size
    for i in range(num_layers):
        state_action_encoder.append(torch.nn.Linear(hidden_in_size, hidden_size))
        state_action_encoder.append(torch.nn.LeakyReLU(negative_slope=1e-2))
        hidden_in_size = hidden_size
    self.state_action_encoder = torch.nn.Sequential(*state_action_encoder)
    # this is a single dense layer because we want to focus as much of
    # the useful semantic information as possible in the state-action
    # representation
    self.next_state_predictor = torch.nn.Linear(hidden_size, state_embed_size)
    if image_observations:
        self.next_state_projector = nn.Sequential(nn.
BatchNorm1d ( state_embed_size ),15nn. ReLU ( inplace = True ),nn. Linear (state_embed_size ,consistency_comparison_hidden_size),nn. ReLU ( inplace = True ),nn. Linear (consistency_comparison_hidden_size ,consistency_projection_size),nn. LayerNorm ( consistency_projection_size ))if with_consistency_prediction_head :self . consistency_predictor = nn. Sequential (nn. ReLU ( inplace = True ),nn. Linear (consistency_projection_size ,consistency_comparison_hidden_size),nn. ReLU ( inplace = True ),nn. Linear (consistency_comparison_hidden_size ,consistency_projection_size),nn. LayerNorm ( consistency_projection_size ))else :predictor = Noneelse :self . next_state_projector = torch .nn. Linear (state_embed_size ,consistency_projection_size)if with_consistency_prediction_head :self . consistency_predictor = nn. Linear (consistency_projection_size ,consistency_projection_size)else :self . consistency_predictor = NoneA forward pass through the SFC network is as follows:def spr_forward (self ,transitions : EnvironmentTransitionBatch ,with_consistency_prediction_head : bool ):"""The logic for a forward pass through the SPR network .Args :transitions : a batch of environment transitions composed ofstates , actions , and next stateswith_consistency_prediction_head : when using the contrastive ofobjective the consistencyprediction is not used , but iswhen using the distillationobjectiveReturns :predicted embedding of the next state - p in SimSiam papernext state embedding ( detached from graph ) - z in SimSiam paperdimensionality : (batch , time step )"""# encode the state , the action , and the state - action pair#st→zststates_embed = self . state_encoder ( transitions . states )16#at→zatactions_embed = self . action_encoder ( transitions . actions )#(st, at)→zsatstate_action_embeds = torch . concat ([ states_embed , actions_embed ], dim =-1)state_action_embed = self . state_action_encoder (state_action_embeds)# predict and project the representation of the next state#zsat→ˆzst+1next_state_pred = self . next_state_predictor ( state_action_embed )next_state_pred = self . next_state_projector ( next_state_pred )if with_consistency_prediction_head :next_state_pred = self . consistency_predictor ( next_state_pred )# we don ’t want gradients to back - propagate into the learned# parameters from anything we do with the next statewith torch . no_grad ():#st+1→zst+1# embed the next statenext_state_embed = self . state_encoder ( transitions . next_states )# project the next state embedding into a space where it can be# compared with the predicted next stateprojected_next_state_embed = self . next_state_projector (next_state_embed)# from the SimSiam paper , this is p and zreturn next_state_pred , projected_next_state_embedD.2 SAF Reward NetworkThe architecture of the SAF Reward Network is a subset of the SFC network with the addition of alinear to map the state-action representation to predicted rewards. 
The SFC network is implementedin PyTorch and is initialized following the below build method:def build_saf_network (self ,state_size : int ,state_embed_size : int ,action_size : int ,action_embed_size : int ,hidden_size : int ,num_layers : int ,final_activation_type : str ,image_observations : bool ,image_hidden_num_channels : int):"""Args :state_size : number of features defining the agent ’s state spacestate_embed_size : number of dimensions in the state embeddingaction_size : number of features defining the agent ’s actionsaction_embed_size : number of dimensions in the action embeddinghidden_size : number of dimensions in the hidden layers ofstate - action embedding networknum_layers : number of hidden layers used to embed thestate - action representationfinal_activation_type : the activation used on the final layerimage_observations : whether image observations are used . If imageobservations are not used , state - spaceobservations are used17image_hidden_num_channels : the number of channels to use in theimage encoder ’s hidden layers"""# build the network that will encode the state featuresif image_observations :state_conv_encoder = nn. Sequential (nn. Conv2d (state_size [0] ,image_hidden_num_channels ,3,stride =1),nn. ReLU (),nn. MaxPool2d (2, 2),nn. Conv2d (image_hidden_num_channels ,image_hidden_num_channels ,3,stride =1),nn. ReLU (),nn. MaxPool2d (2, 2),nn. Conv2d (image_hidden_num_channels ,image_hidden_num_channels ,3,stride =1),nn. ReLU (),nn. MaxPool2d (2, 2))conv_out_size = torch . flatten (_state_conv_encoder (torch . rand ( size =[1] + list ( state_size )))). size ()[0]self . state_encoder = nn. Sequential (_state_conv_encodernn. Linear ( conv_out_size , state_embed_size )nn. LeakyReLU ( negative_slope =1e -2))else :self . state_encoder = torch .nn. Sequential (torch .nn. Linear ( state_size , state_embed_size ),torch .nn. LeakyReLU ( negative_slope =1e -2) ,)# build the network that will encode the action featuresself . action_encoder = torch .nn. Sequential (torch .nn. Linear ( action_size , action_embed_size ),torch .nn. LeakyReLU ( negative_slope =1e -2) ,)# build the network that models the relationship between the# state and action embeddingsstate_action_encoder = []hidden_in_size = action_embed_size + state_embed_sizefor i in range ( num_layers ):state_action_encoder . append (torch .nn. Linear ( hidden_in_size , hidden_size ),)state_action_encoder . append (torch .nn. LeakyReLU ( negative_slope =1e -2) ,)18hidden_in_size = hidden_sizeself . state_action_encoder = torch .nn. Sequential (* state_action_encoder )# build the prediction head and select a final activationself . prediction_head = torch .nn. Linear ( hidden_size , 1)if final_activation_type == " tanh ":self . final_activation = torch .nn. Tanh ()elif final_activation_type == " sig":self . final_activation = torch .nn. Sigmoid ()else :self . final_activation_type = torch .nn. ReLU ()A forward pass through the SAF network is as follows:def saf_forward (self , transitions : EnvironmentTransitionBatch ):"""Args :transitions : a batch of environment transitions composed ofstates , actions , and next statesReturns :predicted embedding of the next state - p in SimSiam papernext state embedding ( detached from graph ) - z in SimSiam paperdimensionality : (batch , time step )"""# encode the state , the action , and the state - action pair#st→zststates_embed = self . state_encoder ( transitions . states )#at→zatactions_embed = self . action_encoder ( transitions . 
actions)
    # (s_t, a_t) -> z_sat
    state_action_embeds = torch.concat([states_embed, actions_embed], dim=-1)
    state_action_embed = self.state_action_encoder(state_action_embeds)
    return self.final_activation(self.prediction_head(state_action_embed))

E Hyper-parameters

E.1 Train Hyper-parameters

This section specifies the hyper-parameters (e.g., learning rate, batch size) used for the experiments and results (Section 6). The SAC, PPO, PEBBLE, and PrefPPO experiments all match those used in [55], [56], and [4] respectively. The SAC and PPO hyper-parameters are specified in Table 3, the PEBBLE and PrefPPO hyper-parameters are given in Table 4, and the hyper-parameters used to train on the REED task are in Table 5.

The image-space models were trained on images of size 50x50. For PEBBLE and REED on DMC tasks, color images were used, and for MetaWorld tasks, grayscale images were used. All pixel values were scaled to the range [0.0, 1.0].

For the image-based REED methods, we found that a larger value of k was important for the MetaWorld experiments compared to the DMC experiments due to slower environment dynamics. In MetaWorld, subsequent observations are far more similar to one another than in DMC. In the state-space, the mean cosine similarity between all observations accumulated in the replay buffer was 0.9. For the image-space observations, the mean cosine similarity was 0.7. Additionally, for sweep into, due to the similarity between MetaWorld observations, the slower environment dynamics, and the difficulty of the task, we found it beneficial to update on the REED objective only every 5th update to the reward model, in order to avoid over-fitting on the REED objective and reducing the accuracy of the r̂ψ preference predictions.
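The replay-buffer similarity numbers quoted above can be reproduced with a simple diagnostic; the following is a minimal sketch (illustrative, not the analysis code used here) that computes the mean pairwise cosine similarity between buffered observations, with image observations flattened first:

import torch
import torch.nn.functional as F

def mean_pairwise_cosine_similarity(observations: torch.Tensor) -> float:
    """observations: (num_observations, feature_dim); image observations
    would be flattened to feature vectors before calling this."""
    normed = F.normalize(observations.flatten(start_dim=1).float(), dim=1)
    sim = normed @ normed.t()                    # (N, N) cosine similarities
    n = sim.size(0)
    off_diag = sim.sum() - sim.diagonal().sum()  # exclude self-similarity
    return (off_diag / (n * (n - 1))).item()

# Example with random observations; buffers whose observations are highly
# similar push this value toward 1 and make the temporal-consistency task
# easier to over-fit.
obs = torch.rand(128, 39)
print(f"mean cosine similarity: {mean_pairwise_cosine_similarity(obs):.2f}")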
Training on the REED task occurred every Ksteps (specifiedin Table 4) prior to updating on the preference task. The SPR objective predicts future latent statesksteps in the future. While our hyper-parameter sweep evaluated multiple values for k, we foundthatk= 1vs.k >1had no real impact on learning quality for these state-action feature spaces. ForContrastive REED experiments (state-space and image-space observations), τ= 0.005. In general,the image-based experiments REED were less sensitive to the hyper-parameters than the state-spaceexperiments experiments.ENVIRONMENT LEARNING RATE EPOCHS PER UPDATE BATCH SIZE OPTIMIZER KSTATE -SPACE OBSERVATIONSWalker 1e-3 20 12 SGD 1Cheetah 1e-3 20 12 SGD 1Quadruped 1e-4 20 128 Adam [53] 1Button Press 1e-4 10 128 Adam [53] 1Sweep Into 5e-5 5 256 Adam [53] 1IMAGE -SPACE OBSERVATIONSWalker 1e-4 5 256 Adam [53] 1Cheetah 1e-4 5 256 Adam [53] 1Quadruped 1e-4 5 256 Adam [53] 1Button Press 1e-4 5 256 Adam [53] 5Sweep Into 1e-4 5 512 Adam [53] 5Drawer Open 1e-4 5 256 Adam [53] 5Drawer Close 1e-4 5 256 Adam [53] 5Window Open 1e-4 5 256 Adam [53] 5Door Open 1e-4 5 256 Adam [53] 521E.2 Architecture Hyper-parametersThe network hyper-parameters (e.g. hidden dimension, number of hidden layers, etc) used for theexperiments and results (Section 6) are specified in Table 6.Table 6: Architecture hyper-parameters for SAC [55], PPO [56], the base reward model (used forPEBBLE [4] and PrePPO [3, 4]), the SAF reward model (Section 4.1), and the SPR model (Section4.1). The hyper-parameters reported here are intended to inform the values to used to initialize thearchitectures in Appendix D. Hyper-parameters not relevant to a model are indicated with “N/A”.The SPR model is what REED uses to construct the self-supervised temporal consistency task. Thebase reward model is used with PEBBLE and PrefPPO in Lee et al. [4] and [10]. The SAF rewardnetwork is used for all REED conditions in Section 6. The “Final Activation” refers to the activationfunction used just prior to predicting the reward for a given state action pair. The action embeddingsizes are the same for the state-space and image-space observations.HYPER -PARAMETER SAC PPO B ASE REWARD SAF R EWARD SPR N ETSTATE -SPACE OBSERVATIONSState embed size N/A N/A N/A20(walker), 20(walker),17(cheetah), 17(cheetah),78(quadruped), 78(quadruped),30(MetaWorld) 30(MetaWorld)Action embed size N/A N/A N/A10(walker), 10(walker),6(cheetah), 6(cheetah),12(quadruped), 12(quadruped),4(MetaWorld) 4(MetaWorld)Comparison units N/A N/A N/A N/A5(walker),4(cheetah),10(quadruped),5(MetaWorld)Num. hidden2(DMC),3 3 3 33(MetaWorld)Units per layer1024 (DMC),256 256 256 256256(MetaWorld)Final activation N/A N/A tanh tanh N/AIMAGE -SPACE OBSERVATIONSState embed size N/A N/A N/A20(walker), 20(walker),17(cheetah), 17(cheetah),78(quadruped), 78(quadruped),30(MetaWorld) 30(MetaWorld)Comparison units N/A N/A N/A N/A 128Num. 
hidden2(DMC),3 3 3 33(MetaWorld)Units per layer1024 (DMC),256 256 256 256256(MetaWorld)Final activation N/A N/A tanh tanh N/A22F SAF Reward Net Ablation0.0 0.505001000ReturnsWalker Walk0 1Quadruped Walk0.0 0.5050100Success RateButton Press0 1Sweep Into0.0 0.505001000Walker Walk0 1Quadruped Walksac pebble pebble+saf pebble+simsiam pebble+contr.Figure 5: Ablation of the SAF reward net for walker-walk, quadruped-walk, sweep into, and buttonpress with 500 (walker and quadruped) and 5k (sweep into and button press) teacher-labelled querieswith disagreement-based sampling and the oracle labelling strategy.We present results ablating the impact of our modified SAF reward network architecture in Table 7,see Section 4.1, State-Action Fusion Reward Network, for details. In our ablation, we replace theoriginal PEBBLE reward network architecture from [4] with our SAF network and then evaluateon the joint experimental condition with no other changes to reward function learning. We evaluatethe impact of the SAF reward network on the walker-walk, quadruped-walk, sweep into, and buttonpress tasks. Policy and reward function learning is evaluated across feedback amounts and labellingstyles. All hyper-parameters match those used in all other experiments in the paper (see AppendixE). We compare PEBBLE with the SAF reward network architecture (PEBBLE + SAF) against SACtrained on the ground truth reward, PEBBLE with the original architecture (PEBBLE), PEBBLEwith Distillation REED (PEBBLE+Dist.), and PEBBLE with Contrastive REED (PEBBLE+Contr.).The inclusion of the SAF reward network architecture does not meaningfully impact policy perfor-mance. In general, across domains and experimental conditions, PEBBLE + SAF performs on parwith or slightly worse than PEBBLE. The lack of performance improvements suggest that the per-formance improvements observed when the auxiliary temporal consistency objective are due to theauxiliary objective and not the change in network architecture.23Table 7: The impact of the SAF reward network is ablated. Ratio of policy performance on learnedversus ground truth rewards for walker-walk ,quadruped-walk ,sweep into , and button pressacross preference learning methods, labelling methods and feedback amounts (with disagreementsampling).FEEDBACK METHOD ORACLE MISTAKE EQUAL SKIP MYOPIC NOISY MEANWALKER -WALK1KPEBBLE 0.85 (0.17) 0.76 (0.21) 0.88 (0.16) 0.85 (0.17) 0.79 (0.18) 0.81 (0.18) 0.83+SAF 0.81 (0.19) 0.62 (0.18) 0.88 (0.16) 0.81 (0.19) 0.74 (0.17) 0.81 (0.19) 0.78+DIST. 0.9 (0.16) 0.77 (0.2) 0.91 (0.12) 0.89 (0.16) 0.8 (0.17) 0.88 (0.17) 0.86+CONTR . 0.9 (0.16) 0.77 (0.2) 0.91 (0.12) 0.89 (0.16) 0.8 (0.17) 0.88 (0.17) 0.86500PEBBLE 0.74 (0.18) 0.61 (0.17) 0.84 (0.19) 0.75 (0.19) 0.67 (0.19) 0.69 (0.19) 0.72+SAF 0.68 (0.17) 0.51 (0.13) 0.76 (0.17) 0.68 (0.17) 0.56 (0.15) 0.68 (0.17) 0.65+DIST. 0.86 (0.2) 0.71 (0.2) 0.87 (0.2) 0.87 (0.2) 0.82 (0.22) 0.84 (0.2) 0.83+CONTR . 0.9 (0.17) 0.81 (0.19) 0.9 (0.14) 0.9 (0.17) 0.88 (0.16) 0.88 (0.18) 0.88250PEBBLE 0.59 (0.17) 0.41 (0.12) 0.67 (0.2) 0.56 (0.17) 0.43 (0.13) 0.51 (0.13) 0.53+SAF 0.53 (0.16) 0.41 (0.15) 0.59 (0.18) 0.53 (0.16) 0.36 (0.1) 0.48 (0.14) 0.48+DIST. 0.8 (0.23) 0.6 (0.16) 0.85 (0.21) 0.8 (0.24) 0.75 (0.26) 0.8 (0.24) 0.77+CONTR . 0.85 (0.19) 0.73 (0.23) 0.85 (0.19) 0.85 (0.2) 0.79 (0.2) 0.85 (0.22) 0.82QUADRUPED -WALK2KPEBBLE 0.94 (0.15) 0.55 (0.19) 1.1 (0.26) 1.0 (0.16) 0.93 (0.13) 0.56 (0.19) 0.86+SAF 0.97 (0.15) 0.45 (0.17) 1.2 (0.22) 0.87 (0.19) 0.76 (0.13) 0.59 (0.14) 0.81+DIST. 
1.3 (0.31) 0.47 (0.19) 1.4 (0.37) 1.3 (0.26) 1.2 (0.18) 0.96 (0.15) 1.09+CONTR . 1.3 (0.25) 0.7 (0.16) 1.2 (0.24) 1.3 (0.29) 1.3 (0.28) 1.0 (0.16) 1.131KPEBBLE 0.86 (0.15) 0.53 (0.19) 0.88 (0.15) 0.91 (0.14) 0.73 (0.18) 0.48 (0.25) 0.73+SAF 0.79 (0.16) 0.44 (0.19) 0.99 (0.23) 0.9 (0.19) 0.63 (0.15) 0.6 (0.2) 0.72+DIST. 1.1 (0.19) 0.59 (0.14) 1.2 (0.22) 1.3 (0.3) 1.1 (0.21) 1.0 (0.15) 1.04+CONTR . 1.1 (0.19) 0.63 (0.16) 1.2 (0.29) 1.1 (0.19) 1.1 (0.19) 0.83 (0.14) 0.99500PEBBLE 0.56 (0.21) 0.48 (0.21) 0.66 (0.2) 0.64 (0.15) 0.47 (0.22) 0.48 (0.23) 0.55+SAF 0.63 (0.16) 0.4 (0.22) 0.85 (0.14) 0.75 (0.19) 0.56 (0.18) 0.5 (0.19) 0.61+DIST. 1.1 (0.21) 0.58 (0.16) 1.2 (0.24) 1.0 (0.22) 1.0 (0.19) 0.68 (0.16) 0.93+CONTR . 1.1 (0.21) 0.64 (0.11) 1.1 (0.22) 1.1 (0.17) 1.0 (0.17) 0.85 (0.14) 0.97250PEBBLE 0.53 (0.18) 0.36 (0.23) 0.64 (0.15) 0.62 (0.16) 0.46 (0.22) 0.47 (0.21) 0.51+SAF 0.51 (0.2) 0.36 (0.22) 0.73 (0.18) 0.53 (0.17) 0.53 (0.19) 0.45 (0.24) 0.52+DIST. 0.98 (0.15) 0.58 (0.18) 1.0 (0.19) 0.79 (0.12) 0.9 (0.18) 0.77 (0.16) 0.84+CONTR . 0.98 (0.15) 0.58 (0.18) 1.0 (0.19) 0.79 (0.12) 0.9 (0.18) 0.77 (0.16) 0.84BUTTON PRESS20KPEBBLE 0.72 (0.26) 0.57 (0.26) 0.77 (0.25) 0.75 (0.26) 0.68 (0.21) 0.72 (0.24) 0.70+SAF 0.77 (0.23) 0.72 (0.28) 0.84 (0.23) 0.75 (0.24) 0.78 (0.21) 0.77 (0.22) 0.77+CONTR . 0.65 (0.25) 0.61 (0.28) 0.67 (0.27) 0.67 (0.27) 0.67 (0.24) 0.69 (0.26) 0.6610KPEBBLE 0.66 (0.26) 0.47 (0.21) 0.67 (0.27) 0.63 (0.26) 0.67 (0.24) 0.6 (0.26) 0.62+SAF 0.7 (0.25) 0.66 (0.26) 0.74 (0.23) 0.71 (0.25) 0.67 (0.19) 0.71 (0.25) 0.70+CONTR . 0.65 (0.27) 0.61 (0.3) 0.66 (0.27) 0.62 (0.26) 0.6 (0.25) 0.68 (0.28) 0.645KPEBBLE 0.48 (0.21) 0.31 (0.12) 0.56 (0.25) 0.54 (0.24) 0.59 (0.23) 0.52 (0.23) 0.50+SAF 0.63 (0.25) 0.55 (0.24) 0.65 (0.26) 0.68 (0.24) 0.62 (0.21) 0.7 (0.24) 0.64+CONTR . 0.55 (0.24) 0.54 (0.26) 0.65 (0.27) 0.63 (0.26) 0.57 (0.24) 0.63 (0.28) 0.602.5KPEBBLE 0.37 (0.18) 0.21 (0.088) 0.44 (0.21) 0.34 (0.15) 0.4 (0.17) 0.34 (0.18) 0.35+SAF 0.58 (0.26) 0.38 (0.17) 0.61 (0.26) 0.54 (0.23) 0.52 (0.21) 0.54 (0.2) 0.53+CONTR . 0.49 (0.25) 0.42 (0.22) 0.52 (0.24) 0.5 (0.23) 0.44 (0.17) 0.45 (0.21) 0.47SWEEP INTO20KPEBBLE 0.53 (0.25) 0.26 (0.15) 0.51 (0.23) 0.52 (0.27) 0.47 (0.28) 0.47 (0.26) 0.46+SAF 0.5 (0.24) 0.36 (0.15) 0.47 (0.22) 0.39 (0.19) 0.49 (0.21) 0.6 (0.21) 0.47+CONTR . 0.5 (0.22) 0.36 (0.13) 0.41 (0.2) 0.6 (0.22) 0.54 (0.21) 0.61 (0.25) 0.5010KPEBBLE 0.28 (0.12) 0.22 (0.13) 0.45 (0.21) 0.33 (0.17) 0.47 (0.25) 0.51 (0.24) 0.38+SAF 0.41 (0.2) 0.32 (0.19) 0.48 (0.2) 0.47 (0.17) 0.46 (0.2) 0.57 (0.24) 0.45+CONTR . 0.47 (0.23) 0.3 (0.14) 0.45 (0.24) 0.32 (0.21) 0.42 (0.22) 0.44 (0.21) 0.405KPEBBLE 0.17 (0.099) 0.17 (0.089) 0.28 (0.19) 0.24 (0.15) 0.23 (0.13) 0.22 (0.12) 0.22+SAF 0.36 (0.15) 0.2 (0.13) 0.4 (0.23) 0.38 (0.17) 0.19 (0.11) 0.41 (0.2) 0.32+CONTR . 0.34 (0.14) 0.23 (0.19) 0.52 (0.24) 0.37 (0.2) 0.4 (0.24) 0.44 (0.18) 0.382.5KPEBBLE 0.15 (0.086) 0.13 (0.076) 0.16 (0.1) 0.16 (0.09) 0.18 (0.075) 0.25 (0.11) 0.17+SAF 0.33 (0.19) 0.12 (0.082) 0.32 (0.17) 0.18 (0.09) 0.27 (0.11) 0.22 (0.14) 0.25+CONTR . 0.21 (0.13) 0.19 (0.22) 0.29 (0.17) 0.17 (0.09) 0.25 (0.15) 0.28 (0.16) 0.2324G Image Aug. Task DetailsWe present results ablating the impact of environment dynamics on top of the PEBBLE modelto show how much of the REED gains come from encoding environment dynamics versus incor-porating an auxiliary task. 
In our ablation, we replace the REED auxiliary task with an image-augmentation-based self-supervised learning auxiliary task that compares a batch of image obser-vation states with augmented versions of the same observations using either LSS (Equation 3), orLC (Equation 4). We compare the impact of the SSL data augmentation auxiliary task (PEBBLE +Img. Aug.) with the impact of PEBBLE + REED on the image-based PEBBLE preference learningalgorithm using the walker-walk, quadruped-walk, and cheetah-run DMC tasks. Policy and rewardfunction learning is evaluated across feedback amounts and with the oracle labeler. All hyper-parameters match those specified in Appendix E, with the exception of those listed below (Table8).The Img. Aug. task learns representations of the state, not state-action pairs, as is done in REED,and so the Img. Aug. representations do not encode environment dynamics. To separate out statesand actions, the Img. Aug. task uses the SAF reward model architecture. The data augmentationsmatch those used in [58].The PEBBLE + Img. Aug. algorithm is the same as PEBBLE + REED (Algorithm C), except, in-stead of updating the SPR network using the temporal dynamics task, the SSL Image Augmentationnetwork (Appendix G.2) is updated using the image-augmentation task. The state encoder is sharedbetween the reward and the SSL Image Augmentation networks.The inclusion of an auxiliary task to improve the state encodings does improve the performance ofPEBBLE, but does not improve as much as with the encoded environment dynamics. This can beseen across feedback types and DMC environments (see Figure 6 and Table 12). The performanceimprovement against the PEBBLE baseline suggests that having the auxiliary task does have somebenefit. However, the performance improvements of REED against the image augmentation aux-iliary task suggest that the performance improvements observed when the auxiliary task encodesenvironment dynamics gives meaningful improvements.G.1 Data Augmentation ParametersThis section presents the PEBBLE + Img. Aug. method for creating the augmented image observa-tions. We evaluate the impact of both the “weak” and “strong” image augmentations used in [58].The augmentation parameters for both the weak and strong styles are given in Table 9.25Table 8: Image Augmentation hyper-parameters for the PEBBLE reward + SSL model that differfrom the parameters outlined in Appendix E.HYPER -PARAMETER VALUEREWARD MODELLearning Rate 1e-4Grayscale Images FalseNormalize Images TrueSSL D ATA AUGMENTATION MODELLearning Rate 5e-5Grayscale Images FalseNormalize Images TrueUse Strong Augmentations (Table 9) FalseBatch Size 256Loss DistillationTable 9: Data Augmentation hyper-parameters for PEBBLE+SSL as specified in MOSAIC [58].HYPER -PARAMETER VALUEWEAK AUGMENTATIONSRandom Jitter ( ρ) 0.01Normalization ( μ,σ) [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]Random Resize Crop (scale min/max, ratio) [0.7, 1.0], [1.8, 1.8]STRONG AUGMENTATIONSRandom Jitter ( ρ) 0.01Random Grayscale ( ρ) 0.01Random Horizontal Flip ( ρ) 0.01Normalization ( μ,σ) [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]Random Gaussian Blur ( σmin/max, ρ) [0.1, 2.0], 0.01Random Resize Crop (scale min/max, ratio) [0.6, 1.0], [1.8, 1.8]26G.2 SSL Data Augmentation ArchitectureThe SSL Image Augmentation network is implemented in PyTorch. The architecture for the con-sistency predictor comes from [58]. The image encoder architecture comes from [59]. The SSLnetwork is initialized as follows:def build_ssl_network (self ,state_size : t. 
List [int],state_embed_size : int ,consistency_projection_size : int ,consistency_comparison_hidden_size : int ,with_consistency_prediction_head : bool ,image_hidden_num_channels : int ):"""The network architecture and build logic to the complete the REEDself - supervised temporal consistency task based on SPR.Args :state_size : dimensionality of the statesstate_embed_size : number of dimensions in the state embeddingconsistency_projection_size : number of units used to compare thepredicted and target latent nextstateconsistency_comparison_hidden_size : number of units in the hiddenlayers of the next_state_projectorand the consistency_predictorwith_consistency_prediction_head : when using the contrastive ofobjective the consistencyprediction is not used , but iswhen using the distillationobjectiveimage_hidden_num_channels : the number of channels to use in theimage encoder ’s hidden layers"""# Build the network that will encode the state features .state_conv_encoder = nn. Sequential (nn. Conv2d ( state_size [0] , image_hidden_num_channels , 3, stride =1) ,nn. ReLU (),nn. MaxPool2d (2, 2),nn. Conv2d (image_hidden_num_channels ,image_hidden_num_channels ,3, stride =1) ,nn. ReLU (),nn. MaxPool2d (2, 2),nn. Conv2d (image_hidden_num_channels ,image_hidden_num_channels ,3, stride =1) ,nn. ReLU (),nn. MaxPool2d (2, 2))conv_out_size = torch . flatten ( state_conv_encoder ( torch . rand ( size =[1] + list ( state_size ))). size ()[0])self . state_encoder = nn. Sequential (state_conv_encoder ,nn. Linear ( conv_out_size , state_embed_size ),nn. LeakyReLU ( negative_slope =1e -2))self . consistency_projector = nn. Sequential (# Rearrange (’B T d H W -> (B T) d H W ’),nn. BatchNorm1d ( state_embed_size ), nn. ReLU ( inplace = True ),# Rearrange (’BT d H W -> BT (d H W)’),nn. Linear ( state_embed_size , consistency_comparison_hidden_size ), nn. ReLU ( inplace = True ),27nn. Linear ( consistency_comparison_hidden_size , consistency_projection_size ),nn. LayerNorm ( consistency_projection_size ))# from : https :// github .com/rll - research / mosaic / blob /561814 b40d33f853aeb93f1113a301508fd45274 / mosaic / models / rep_modules .py# L118if with_consistency_prediction_head :self . consistency_predictor = nn. Sequential (nn. ReLU ( inplace = True ),nn. Linear ( consistency_projection_size , consistency_comparison_hidden_size ),nn. ReLU ( inplace = True ),nn. Linear ( consistency_comparison_hidden_size , consistency_projection_size ),nn. LayerNorm ( consistency_projection_size ))else :self . consistency_predictor = NoneA forward pass through the SSL network is as follows:def ssl_forward (self ,observations : RawAugmentedObservationsBatch ,with_consistency_prediction_head : bool ):"""The logic for a forward pass through the SSL network .Args :observations : a batch of environment raw observations andaugmented observationswith_consistency_prediction_head : when using the contrastiveobjective the consistencyprediction is not used , but iswhen using the distillationobjectiveReturns :predicted embedding of the augmented state and the augmented state embedding ( detached from graph )dimensionality : (batch , time step )"""# Encode the observations .observations_embed = self . state_encoder ( observations . states )# Predict the augmented observations .if with_consistency_prediction_head :augmented_observation_pred = self . consistency_predictor ( self . state_projector ( observations_embed ))else :augmented_observation_pred = self . 
state_projector ( observations_embed )# we don ’t want gradients to back - propagate into the learned parameters from anything we do with the augmented observationwith torch . no_grad ():# embed the augmented observationaugmented_observation_embed = self . state_encoder ( observations . augmented_states )# project the augmented observation embedding into a space where it can be compared with the predicted augmented observationprojected_agumented_observation_embed = self . state_projector ( augmented_observation_embed )return augmented_observation_pred , projected_agumented_observation_embed28G.3 Image Aug. Auxiliary Task ResultsWe present results comparing the impact of REED’s temporal auxiliary task and the Image Aug.auxiliary task. Learning curves (Figure 6) and normalized returns (Table 10) are provided for image-space observation walker-walk, cheetah-run, and quadruped-walk tasks across different amounts offeedback. We compare the contributions of the Image Aug. auxiliary task to PEBBLE against SACtrained on the ground truth reward, PEBBLE, PEBBLE + Distillation REED (+Dist.), and PEBBLE+ Contrastive REED (+Contr.). Results are reported for the Image Aug. task using the distillationobjective (Equation 3) as +Dist.+Img. Aug.The inclusion of the Image Aug. auxiliary task improves performance relative to PEBBLE, but doesnot reach the level of performance achieved by REED. The gap policy performance between REEDand the Image Aug. auxiliary task suggests that encoding environment dynamics in the rewardfunction and not including an auxiliary task that trains on all policy experiences is the cause of theperformance gains observed from REED.0500100050Walker Walk Quadruped Walk Cheetah Run0500100010005001000250050010005000.0 0.5Steps (x1e6)0500100010000 1Steps (x1e6)0.0 0.5Steps (x1e6)0500100050Walker Walk Quadruped Walk Cheetah Run0500100010005001000250050010005000.0 0.5Steps (x1e6)0500100010000 1Steps (x1e6)0.0 0.5Steps (x1e6) sac pebble pebble+dist. pebble+contr. pebble_img.aug.Figure 6: Episode returns learning curves for walker walk ,quadruped walk , and cheetah runacross preference-based RL methods and feedback amounts for image-based observations. Theoracle labeller is used to generate preference feedback. Mean policy returns are plotted along they-axis with number of steps (in units of 1000) along the x-axis. There is one plot per environment(grid columns) and feedback amount (grid rows) with corresponding results per learning methods ineach plot. The learning methods evaluated are SAC trained on the ground truth reward, PEBBLE,PEBBLE with the Image Aug. auxiliary task (pebble+img. aug.), PEBBLE + Distillation REED(pebble+dist.), and PEBBLE + Contrastive REED (pebble+contr.). From top to bottom, the rowscorrespond to 2.5k, 5k, 10k, and 20k pieces of teacher feedback. From left to right, the columnscorrespond to walker walk, quadruped walk, and cheetah run.29Table 10: Ratio of policy performance on learned versus ground truth rewards for walker-walk ,cheetah-run , and quadruped-walk across feedback amounts (with disagreement sampling). Theresults are reported as means (standard deviations) over 10 random seeds.DMCTASK METHOD 50 100 250 500 1000 M EANWALKER -WALKPEBBLE 0.06 (0.02) 0.07 (0.02) 0.09 (0.02) 0.11 (0.03) 0.11 (0.03) 0.09+DIST. 0.18 (0.03) 0.33 (0.10) 0.40 (0.09) 0.57 (0.16) 0.68 (0.23) 0.43+CONTR . 0.28 (0.12) 0.35 (0.13) 0.46 (0.14) 0.58 (0.14) 0.61 (0.16) 0.46+DIST.+IMG.AUG. 
0.06 (0.02) 0.10 (0.02) 0.16 (0.02) 0.24 (0.03) 0.46 (0.11) 0.20QUADRUPED -WALKPEBBLE 0.28 (0.23) 0.23 (0.19) 0.23 (0.15) 0.23 (0.19) 0.47 (0.09) 0.29+DIST. 0.56 (0.15) 0.59 (0.16) 0.61 (0.12) 0.71 (0.14) 0.72 (0.19) 0.64+CONTR . 0.53 (0.16) 0.64 (0.10) 0.69 (0.17) 0.73 (0.22) 0.71 (0.16) 0.66+DIST.+IMG.AUG. 0.43 (0.19) 0.35 (0.17) 0.42 (0.17) 0.68 (0.17) 0.62 (0.16) 0.50CHEETAH -RUNPEBBLE 0.01 (0.01) 0.02 (0.01) 0.02 (0.02) 0.02 (0.03) 0.03 (0.02) 0.02+DIST. 0.06 (0.03) 0.07 (0.04) 0.07 (0.02) 0.12 (0.04) 0.18 (0.07) 0.10+CONTR . 0.05 (0.02) 0.14 (0.04) 0.14 (0.04) 0.23 (0.09) 0.16 (0.06) 0.14+DIST.+IMG.AUG. 0.02 (0.01) 0.05 (0.02) 0.14 (0.05) 0.11 (0.04) 0.12 (0.05) 0.0930H Complete Joint ResultsResults are presented for all tasks, feedback amounts, and teacher labelling styles. The benefits ofthe SPR rewards are greatest: 1) for increasingly more challenging tasks, 2) when there is limitedfeedback available, and 3) when the labels are increasingly noisy.H.1 State-space Observations Learning CurvesLearning curves are provided for walker-walk (Figure 7), cheetah-run (Figure 8), quadruped-walk(Figure 9), button press (Figure 10), and sweep into (Figure 11) across feedback amounts and allteacher labelling strategies for state-space observations.0500100050equal mistake myopic noisy oracle skip0500100010005001000200050010005000.0 0.5Steps (x1e6)0500100010000.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)sac ppo pebble pebble+simsiam pebble+contr. prefppo prefppo+simsiam prefppo+contr.Figure 7: Returns learning curves for walker-walk across preference-based RL methods, labellingstyle, and feedback amount for state-space observations. Mean policy returns are plotted along they-axis with number of steps (in units of 1000) along the x-axis. There is one plot per labelling style(grid columns) and feedback amount (grid rows) with corresponding results per learning methodsin each plot. From top to bottom, the rows correspond to 50, 100, 500, and 1000 pieces of teacherfeedback. From left to right, the columns correspond to equal, noisy, mistake, myopic, oracle, andskip labelling styles (see Appendix B for details).310500100050equal mistake myopic noisy oracle skip0500100010005001000200050010005000.0 0.5Steps (x1e6)0500100010000.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)sac ppo pebble pebble+simsiam pebble+contr. prefppo prefppo+simsiam prefppo+contr.Figure 8: Returns learning curves for cheetah-run across preference-based RL methods, labellingstyle, and feedback amount for state-space observations. Mean policy returns are plotted along they-axis with number of steps (in units of 1000) along the x-axis. There is one plot per labelling style(grid columns) and feedback amount (grid rows) with corresponding results per learning methodsin each plot. From top to bottom, the rows correspond to 50, 100, 500, and 1000 pieces of teacherfeedback. From left to right, the columns correspond to equal, noisy, mistake, myopic, oracle, andskip labelling styles (see Appendix B for details).320500100050equal mistake myopic noisy oracle skip0500100010005001000200050010005000500100010000 1Steps (x1e6)0500100020000 1Steps (x1e6)0 1Steps (x1e6)0 1Steps (x1e6)0 1Steps (x1e6)0 1Steps (x1e6)sac ppo pebble pebble+simsiam pebble+contr. prefppo prefppo+simsiam prefppo+contr.Figure 9: Returns learning curves for quadruped-walk across preference-based RL methods, la-belling style, and feedback amount for state-space observations. 
Mean policy returns are plottedalong the y-axis with number of steps (in units of 1e6) along the x-axis. There is one plot per la-belling style (grid columns) and feedback amount (grid rows) with corresponding results per learningmethods in each plot. From top to bottom, the rows correspond to 50, 100, 500, 1000, and 2000pieces of teacher feedback. From left to right, the columns correspond to equal, noisy, mistake,myopic, oracle, and skip labelling styles (see Appendix B for details).330501002500equal mistake myopic noisy oracle skip0501005000050100100000.0 0.5Steps (x1e6)050100200000.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)sac ppo pebble pebble+contr. prefppo prefppo+contr.Figure 10: Success rate learning curves for button press across preference-based RL methods, la-belling style, and feedback amount for state-space observations. Mean success rates are plottedalong the y-axis with number of steps (in units of 1000) along the x-axis. There is one plot per la-belling style (grid columns) and feedback amount (grid rows) with corresponding results per learningmethods in each plot. From top to bottom, the rows correspond to 2.5k, 5k, 10k, and 20k pieces ofteacher feedback. From left to right, the columns correspond to equal, noisy, mistake, myopic, ora-cle, and skip labelling styles (see Appendix B for details).340501002500equal mistake myopic noisy oracle skip0501005000050100100000 1Steps (x1e6)050100200000 1Steps (x1e6)0 1Steps (x1e6)0 1Steps (x1e6)0 1Steps (x1e6)0 1Steps (x1e6)sac ppo pebble pebble+contr. prefppo prefppo+contr.Figure 11: Success rate learning curves for sweep into across preference-based RL methods, la-belling style, and feedback amount for state-space observations. Mean success rates are plottedalong the y-axis with number of steps (in units of 1000) along the x-axis. There is one plot per la-belling style (grid columns) and feedback amount (grid rows) with corresponding results per learningmethods in each plot. From top to bottom, the rows correspond to 2.5k, 5k, 10k, and 20k pieces ofteacher feedback. From left to right, the columns correspond to equal, noisy, mistake, myopic, ora-cle, and skip labelling styles (see Appendix B for details).35H.2 State-space Normalized ReturnsThe normalized returns (see Section 6 and Equation 5) for walker-walk, quadruped-walk, cheetah-run, button press, and sweep into, across all teacher labelling styles and a larger range of feedbackamounts for state-space observations, are given in Table 11.36Table 11: Ratio of policy performance on learned versus ground truth rewards for walker-walk ,cheetah-run ,quadruped-walk ,sweep into , and button pressacross preference learning methods, labelling methods and feedback amounts (with disagreement sampling). The results are reported as means (standard deviations)over 10 random seeds.FEEDBACK METHOD ORACLE MISTAKE EQUAL SKIP MYOPIC NOISY MEANWALKER -WALK1KPEBBLE 0.85 (0.17) 0.76 (0.21) 0.88 (0.16) 0.85 (0.17) 0.79 (0.18) 0.81 (0.18) 0.83+DIST. 0.9 (0.16) 0.77 (0.2) 0.91 (0.12) 0.89 (0.16) 0.8 (0.17) 0.88 (0.17) 0.86+CONTR . 0.9 (0.16) 0.77 (0.2) 0.91 (0.12) 0.89 (0.16) 0.8 (0.17) 0.88 (0.17) 0.86PREFPPO 1.0 (0.034) 0.92 (0.056) 1.0 (0.029) 1.0 (0.031) 1.0 (0.044) 0.96 (0.056) 0.99+DIST. 1.1 (0.025) 0.92 (0.05) 1.0 (0.031) 1.0 (0.02) 0.99 (0.035) 0.96 (0.046) 1.00+CONTR . 1.0 (0.029) 0.93 (0.043) 1.1 (0.021) 1.0 (0.028) 0.98 (0.033) 0.95 (0.045) 1.00500PEBBLE 0.74 (0.18) 0.61 (0.17) 0.84 (0.19) 0.75 (0.19) 0.67 (0.19) 0.69 (0.19) 0.72+DIST. 
0.86 (0.2) 0.71 (0.2) 0.87 (0.2) 0.87 (0.2) 0.82 (0.22) 0.84 (0.2) 0.83+CONTR . 0.9 (0.17) 0.81 (0.19) 0.9 (0.14) 0.9 (0.17) 0.88 (0.16) 0.88 (0.18) 0.88PREFPPO 0.95 (0.052) 0.83 (0.087) 0.95 (0.044) 0.96 (0.058) 0.89 (0.076) 0.88 (0.069) 0.91+DIST. 0.88 (0.074) 0.81 (0.084) 0.98 (0.025) 0.95 (0.027) 0.9 (0.073) 0.9 (0.069) 0.90+CONTR . 0.93 (0.061) 0.85 (0.077) 0.95 (0.046) 0.92 (0.059) 0.82 (0.085) 0.77 (0.098) 0.88200PEBBLE 0.52 (0.17) 0.46 (0.15) 0.67 (0.2) 0.54 (0.15) 0.46 (0.13) 0.45 (0.14) 0.52+DIST. 0.74 (0.2) 0.65 (0.21) 0.79 (0.22) 0.74 (0.19) 0.64 (0.21) 0.8 (0.25) 0.73+CONTR . 0.84 (0.2) 0.69 (0.2) 0.84 (0.18) 0.84 (0.2) 0.75 (0.19) 0.84 (0.21) 0.80PREFPPO 0.93 (0.058) 0.83 (0.076) 0.93 (0.027) 0.88 (0.045) 0.87 (0.079) 0.82 (0.06) 0.88+DIST. 0.78 (0.087) 0.68 (0.11) 0.81 (0.062) 0.77 (0.081) 0.77 (0.077) 0.78 (0.053) 0.77+CONTR . 0.81 (0.088) 0.81 (0.079) 0.86 (0.044) 0.85 (0.055) 0.76 (0.13) 0.74 (0.1) 0.80100PEBBLE 0.34 (0.11) 0.31 (0.11) 0.37 (0.1) 0.37 (0.12) 0.29 (0.085) 0.41 (0.13) 0.35+DIST. 0.68 (0.23) 0.57 (0.2) 0.72 (0.25) 0.67 (0.23) 0.61 (0.19) 0.66 (0.24) 0.65+CONTR . 0.78 (0.21) 0.69 (0.22) 0.74 (0.22) 0.74 (0.19) 0.68 (0.19) 0.74 (0.22) 0.73PREFPPO 0.68 (0.08) 0.59 (0.093) 0.73 (0.065) 0.73 (0.065) 0.58 (0.11) 0.68 (0.072) 0.67+DIST. 0.67 (0.08) 0.63 (0.11) 0.71 (0.075) 0.71 (0.084) 0.63 (0.099) 0.63 (0.094) 0.66+CONTR . 0.72 (0.084) 0.49 (0.14) 0.71 (0.063) 0.65 (0.091) 0.64 (0.1) 0.58 (0.13) 0.6350PEBBLE 0.21 (0.1) 0.22 (0.12) 0.22 (0.12) 0.23 (0.14) 0.21 (0.11) 0.18 (0.11) 0.21+DIST. 0.66 (0.24) 0.44 (0.13) 0.6 (0.21) 0.64 (0.24) 0.44 (0.12) 0.48 (0.15) 0.54+CONTR . 0.62 (0.22) 0.44 (0.11) 0.72 (0.22) 0.62 (0.22) 0.54 (0.16) 0.54 (0.17) 0.58PREFPPO 0.51 (0.13) 0.41 (0.18) 0.57 (0.12) 0.59 (0.11) 0.51 (0.14) 0.51 (0.14) 0.52+DIST. 0.58 (0.13) 0.41 (0.17) 0.6 (0.13) 0.56 (0.13) 0.58 (0.12) 0.48 (0.13) 0.54+CONTR . 0.58 (0.12) 0.54 (0.15) 0.62 (0.12) 0.63 (0.11) 0.5 (0.13) 0.57 (0.12) 0.57Continues on next page...37TABLE 11–continued from previous pageFEEDBACK METHOD ORACLE MISTAKE EQUAL SKIP MYOPIC NOISY MEANCHEETAH -RUN1KPEBBLE 0.87 (0.18) 0.82 (0.18) 0.91 (0.2) 0.89 (0.17) 0.87 (0.16) 0.82 (0.18) 0.86+DIST. 0.93 (0.2) 0.83 (0.18) 0.88 (0.14) 0.92 (0.16) 1.0 (0.18) 0.86 (0.21) 0.90+CONTR . 0.9 (0.17) 0.84 (0.18) 0.89 (0.17) 0.96 (0.17) 1.0 (0.21) 0.85 (0.13) 0.91PREFPPO 0.71 (0.064) 0.72 (0.086) 0.76 (0.069) 0.71 (0.076) 0.75 (0.066) 0.69 (0.093) 0.72+DIST. 0.67 (0.054) 0.7 (0.076) 0.75 (0.08) 0.66 (0.064) 0.79 (0.084) 0.65 (0.081) 0.70+CONTR . 0.7 (0.069) 0.8 (0.11) 0.72 (0.07) 0.71 (0.065) 0.78 (0.089) 0.65 (0.083) 0.73500PEBBLE 0.86 (0.14) 0.84 (0.18) 0.86 (0.19) 0.71 (0.16) 0.79 (0.16) 0.71 (0.15) 0.79+DIST. 0.88 (0.22) 0.83 (0.18) 0.93 (0.15) 0.76 (0.15) 0.85 (0.14) 0.8 (0.19) 0.84+CONTR . 0.94 (0.21) 0.72 (0.14) 0.9 (0.21) 0.89 (0.18) 0.93 (0.18) 0.82 (0.16) 0.87PREFPPO 0.62 (0.043) 0.63 (0.047) 0.77 (0.089) 0.66 (0.06) 0.66 (0.04) 0.72 (0.09) 0.67+DIST. 0.67 (0.062) 0.74 (0.14) 0.7 (0.072) 0.63 (0.069) 0.67 (0.076) 0.68 (0.081) 0.68+CONTR . 0.66 (0.062) 0.61 (0.072) 0.73 (0.082) 0.69 (0.065) 0.61 (0.047) 0.67 (0.073) 0.66200PEBBLE 0.71 (0.23) 0.62 (0.22) 0.71 (0.24) 0.57 (0.2) 0.75 (0.18) 0.6 (0.22) 0.66+DIST. 0.77 (0.28) 0.61 (0.19) 0.77 (0.22) 0.79 (0.25) 0.76 (0.25) 0.65 (0.27) 0.72+CONTR . 0.73 (0.22) 0.67 (0.21) 0.83 (0.25) 0.8 (0.24) 0.83 (0.19) 0.76 (0.23) 0.77PREFPPO 0.57 (0.042) 0.73 (0.13) 0.66 (0.059) 0.54 (0.052) 0.64 (0.073) 0.56 (0.099) 0.62+DIST. 0.53 (0.066) 0.52 (0.047) 0.66 (0.066) 0.53 (0.049) 0.55 (0.054) 0.57 (0.085) 0.56+CONTR . 
0.62 (0.071) 0.49 (0.046) 0.61 (0.06) 0.63 (0.1) 0.56 (0.062) 0.58 (0.048) 0.58100PEBBLE 0.4 (0.14) 0.4 (0.13) 0.61 (0.22) 0.47 (0.2) 0.55 (0.21) 0.42 (0.14) 0.48+DIST. 0.69 (0.26) 0.59 (0.22) 0.79 (0.28) 0.65 (0.29) 0.67 (0.26) 0.53 (0.21) 0.65+CONTR . 0.64 (0.28) 0.65 (0.21) 0.78 (0.21) 0.7 (0.29) 0.81 (0.26) 0.72 (0.25) 0.72PREFPPO 0.46 (0.036) 0.51 (0.061) 0.58 (0.051) 0.58 (0.054) 0.6 (0.047) 0.59 (0.054) 0.55+DIST. 0.49 (0.036) 0.47 (0.088) 0.49 (0.046) 0.49 (0.038) 0.52 (0.04) 0.52 (0.081) 0.50+CONTR . 0.54 (0.037) 0.51 (0.06) 0.59 (0.085) 0.5 (0.059) 0.59 (0.061) 0.45 (0.039) 0.5350PEBBLE 0.35 (0.11) 0.26 (0.098) 0.39 (0.14) 0.39 (0.12) 0.4 (0.15) 0.24 (0.089) 0.34+DIST. 0.63 (0.23) 0.59 (0.27) 0.69 (0.25) 0.62 (0.23) 0.68 (0.31) 0.46 (0.22) 0.61+CONTR . 0.7 (0.28) 0.51 (0.21) 0.72 (0.28) 0.53 (0.2) 0.66 (0.28) 0.66 (0.28) 0.63PREFPPO 0.5 (0.066) 0.49 (0.07) 0.44 (0.076) 0.4 (0.038) 0.34 (0.062) 0.3 (0.066) 0.41+DIST. 0.44 (0.041) 0.38 (0.039) 0.44 (0.082) 0.3 (0.063) 0.44 (0.085) 0.47 (0.11) 0.41+CONTR . 0.47 (0.051) 0.42 (0.039) 0.47 (0.093) 0.43 (0.038) 0.46 (0.039) 0.31 (0.061) 0.43QUADRUPED -WALK2KPEBBLE 0.94 (0.15) 0.55 (0.19) 1.1 (0.26) 1.0 (0.16) 0.93 (0.13) 0.56 (0.19) 0.86+DIST. 1.3 (0.31) 0.47 (0.19) 1.4 (0.37) 1.3 (0.26) 1.2 (0.18) 0.96 (0.15) 1.09Continues on next page...38TABLE 11–continued from previous pageFEEDBACK METHOD ORACLE MISTAKE EQUAL SKIP MYOPIC NOISY MEAN2K+CONTR . 1.3 (0.25) 0.7 (0.16) 1.2 (0.24) 1.3 (0.29) 1.3 (0.28) 1.0 (0.16) 1.13PREFPPO 1.1 (0.18) 0.89 (0.18) 1.2 (0.22) 1.2 (0.17) 1.0 (0.25) 1.1 (0.18) 1.07+DIST. 1.1 (0.22) 1.0 (0.2) 1.2 (0.23) 1.1 (0.24) 1.1 (0.26) 0.91 (0.1) 1.06+CONTR . 1.0 (0.24) 0.9 (0.2) 1.4 (0.3) 1.2 (0.29) 1.1 (0.38) 1.6 (0.32) 1.281KPEBBLE 0.86 (0.15) 0.53 (0.19) 0.88 (0.15) 0.91 (0.14) 0.73 (0.18) 0.48 (0.25) 0.73+DIST. 1.1 (0.19) 0.59 (0.14) 1.2 (0.22) 1.3 (0.3) 1.1 (0.21) 1.0 (0.15) 1.04+CONTR . 1.1 (0.19) 0.63 (0.16) 1.2 (0.29) 1.1 (0.19) 1.1 (0.19) 0.83 (0.14) 0.99PREFPPO 0.9 (0.17) 0.88 (0.17) 1.1 (0.15) 0.98 (0.21) 0.89 (0.18) 0.83 (0.17) 0.92+DIST. 1.2 (0.21) 0.88 (0.23) 1.2 (0.27) 1.2 (0.23) 1.1 (0.26) 1.1 (0.16) 1.11+CONTR . 1.1 (0.19) 0.68 (0.28) 1.2 (0.25) 1.1 (0.2) 0.82 (0.31) 0.56 (0.25) 0.82500PEBBLE 0.56 (0.21) 0.48 (0.21) 0.66 (0.2) 0.64 (0.15) 0.47 (0.22) 0.48 (0.23) 0.55+DIST. 1.1 (0.21) 0.58 (0.16) 1.2 (0.24) 1.0 (0.22) 1.0 (0.19) 0.68 (0.16) 0.93+CONTR . 1.1 (0.21) 0.64 (0.11) 1.1 (0.22) 1.1 (0.17) 1.0 (0.17) 0.85 (0.14) 0.97PREFPPO 0.8 (0.18) 0.81 (0.22) 0.96 (0.12) 0.72 (0.18) 0.74 (0.24) 0.88 (0.17) 0.82+DIST. 1.1 (0.2) 0.76 (0.19) 1.0 (0.2) 1.1 (0.25) 0.89 (0.21) 0.81 (0.25) 0.95+CONTR . 1.1 (0.21) 0.63 (0.25) 0.9 (0.28) 0.89 (0.22) 0.88 (0.16) 1.5 (0.48) 0.95200PEBBLE 0.54 (0.19) 0.49 (0.22) 0.64 (0.15) 0.46 (0.2) 0.43 (0.22) 0.48 (0.23) 0.51+DIST. 0.9 (0.17) 0.57 (0.17) 0.77 (0.16) 0.89 (0.14) 0.76 (0.11) 0.68 (0.15) 0.76+CONTR . 0.95 (0.15) 0.53 (0.16) 0.86 (0.16) 0.88 (0.14) 0.77 (0.12) 0.74 (0.16) 0.79PREFPPO 0.7 (0.23) 0.59 (0.28) 0.82 (0.17) 0.65 (0.27) 0.8 (0.25) 0.82 (0.27) 0.73+DIST. 0.89 (0.15) 0.79 (0.23) 0.95 (0.18) 0.87 (0.16) 0.76 (0.26) 0.79 (0.17) 0.84+CONTR . 1.0 (0.33) 0.7 (0.15) 0.95 (0.21) 0.86 (0.18) 1.2 (0.43) 1.2 (0.24) 1.01100PEBBLE 0.38 (0.21) 0.47 (0.17) 0.64 (0.14) 0.42 (0.22) 0.46 (0.2) 0.44 (0.22) 0.47+DIST. 0.78 (0.16) 0.54 (0.2) 0.98 (0.19) 0.72 (0.15) 0.75 (0.16) 0.67 (0.18) 0.74+CONTR . 0.67 (0.18) 0.47 (0.2) 0.89 (0.14) 0.76 (0.15) 0.79 (0.17) 0.65 (0.19) 0.71PREFPPO 0.56 (0.31) 0.81 (0.31) 0.66 (0.22) 0.62 (0.28) 0.51 (0.31) 0.6 (0.29) 0.63+DIST. 
1.0 (0.24) 0.8 (0.23) 0.82 (0.19) 0.71 (0.23) 0.76 (0.17) 0.81 (0.26) 0.82+CONTR . 0.91 (0.19) 0.52 (0.42) 0.61 (0.32) 0.6 (0.27) 0.99 (0.21) 0.53 (0.37) 0.6950PEBBLE 0.38 (0.26) 0.4 (0.23) 0.49 (0.2) 0.42 (0.25) 0.42 (0.26) 0.36 (0.26) 0.41+DIST. 0.65 (0.16) 0.47 (0.24) 0.77 (0.14) 0.68 (0.18) 0.67 (0.2) 0.56 (0.19) 0.63+CONTR . 0.83 (0.12) 0.49 (0.23) 0.8 (0.14) 0.65 (0.18) 0.69 (0.16) 0.62 (0.19) 0.68PREFPPO 0.68 (0.3) 0.64 (0.28) 0.58 (0.28) 0.49 (0.26) 0.49 (0.3) 0.5 (0.31) 0.56+DIST. 0.9 (0.19) 0.71 (0.29) 0.9 (0.25) 0.83 (0.18) 0.68 (0.26) 0.77 (0.29) 0.80+CONTR . 1.2 (0.34) 0.58 (0.29) 0.9 (0.27) 0.82 (0.16) 0.47 (0.44) 1.1 (0.35) 0.85Continues on next page...39TABLE 11–continued from previous pageFEEDBACK METHOD ORACLE MISTAKE EQUAL SKIP MYOPIC NOISY MEANBUTTON -PRESS20KPEBBLE 0.72 (0.26) 0.57 (0.26) 0.77 (0.25) 0.75 (0.26) 0.68 (0.21) 0.72 (0.24) 0.70+CONTR . 0.65 (0.25) 0.61 (0.28) 0.67 (0.27) 0.67 (0.27) 0.67 (0.24) 0.69 (0.26) 0.66PREFPPO 0.18 (0.03) 0.18 (0.04) 0.21 (0.03) 0.18 (0.03) 0.17 (0.04) 0.17 (0.04) 0.18+CONTR . 0.22 (0.03) 0.17 (0.03) 0.22 (0.02) 0.17 (0.03) 0.19 (0.03) 0.17 (0.04) 0.1910KPEBBLE 0.66 (0.26) 0.47 (0.21) 0.67 (0.27) 0.63 (0.26) 0.67 (0.24) 0.6 (0.26) 0.62+CONTR . 0.65 (0.27) 0.61 (0.3) 0.66 (0.27) 0.62 (0.26) 0.6 (0.25) 0.68 (0.28) 0.64PREFPPO 0.18 (0.03) 0.14 (0.04) 0.19 (0.03) 0.17 (0.04) 0.18 (0.03) 0.17 (0.04) 0.17+CONTR . 0.15 (0.04) 0.12 (0.05) 0.18 (0.03) 0.17 (0.03) 0.17 (0.03) 0.16 (0.03) 0.165KPEBBLE 0.48 (0.21) 0.31 (0.12) 0.56 (0.25) 0.54 (0.24) 0.59 (0.23) 0.52 (0.23) 0.50+CONTR . 0.55 (0.24) 0.54 (0.26) 0.65 (0.27) 0.63 (0.26) 0.57 (0.24) 0.63 (0.28) 0.60PREFPPO 0.15 (0.04) 0.13 (0.05) 0.19 (0.03) 0.16 (0.04) 0.16 (0.04) 0.14 (0.04) 0.15+CONTR . 0.14 (0.04) 0.13 (0.05) 0.18 (0.03) 0.14 (0.04) 0.14 (0.04) 0.14 (0.03) 0.142.5KPEBBLE 0.37 (0.18) 0.21 (0.088) 0.44 (0.21) 0.34 (0.15) 0.4 (0.17) 0.34 (0.18) 0.35+CONTR . 0.49 (0.25) 0.42 (0.22) 0.52 (0.24) 0.5 (0.23) 0.44 (0.17) 0.45 (0.21) 0.47PREFPPO 0.14 (0.04) 0.12 (0.05) 0.13 (0.05) 0.13 (0.05) 0.13 (0.05) 0.14 (0.05) 0.13+CONTR . 0.14 (0.04) 0.11 (0.05) 0.15 (0.04) 0.11 (0.04) 0.14 (0.04) 0.13 (0.04) 0.13SWEEP -INTO20KPEBBLE 0.53 (0.25) 0.26 (0.15) 0.51 (0.23) 0.52 (0.27) 0.47 (0.28) 0.47 (0.26) 0.46+CONTR . 0.5 (0.22) 0.36 (0.13) 0.41 (0.2) 0.6 (0.22) 0.54 (0.21) 0.61 (0.25) 0.50PREFPPO 0.16 (0.046) 0.14 (0.047) 0.16 (0.069) 0.18 (0.065) 0.19 (0.063) 0.08 (0.026) 0.15+CONTR . 0.23 (0.064) 0.11 (0.042) 0.2 (0.058) 0.2 (0.051) 0.19 (0.054) 0.1 (0.034) 0.1710KPEBBLE 0.28 (0.12) 0.22 (0.13) 0.45 (0.21) 0.33 (0.17) 0.47 (0.25) 0.51 (0.24) 0.38+CONTR . 0.47 (0.23) 0.3 (0.14) 0.45 (0.24) 0.32 (0.21) 0.42 (0.22) 0.44 (0.21) 0.40PREFPPO 0.16 (0.048) 0.19 (0.064) 0.18 (0.05) 0.18 (0.049) 0.12 (0.055) 0.058 (0.024) 0.15+CONTR . 0.11 (0.034) 0.12 (0.046) 0.15 (0.046) 0.16 (0.056) 0.11 (0.052) 0.054 (0.029) 0.125KPEBBLE 0.17 (0.099) 0.17 (0.089) 0.28 (0.19) 0.24 (0.15) 0.23 (0.13) 0.22 (0.12) 0.22+CONTR . 0.34 (0.14) 0.23 (0.19) 0.52 (0.24) 0.37 (0.2) 0.4 (0.24) 0.44 (0.18) 0.38PREFPPO 0.1 (0.039) 0.078 (0.027) 0.1 (0.032) 0.092 (0.025) 0.1 (0.038) 0.051 (0.019) 0.09+CONTR . 0.14 (0.052) 0.097 (0.034) 0.12 (0.032) 0.14 (0.076) 0.1 (0.026) 0.043 (0.026) 0.112.5K PEBBLE 0.15 (0.086) 0.13 (0.076) 0.16 (0.1) 0.16 (0.088) 0.18 (0.075) 0.25 (0.11) 0.17Continues on next page...40TABLE 11–continued from previous pageFEEDBACK METHOD ORACLE MISTAKE EQUAL SKIP MYOPIC NOISY MEAN2.5K+CONTR . 
0.21 (0.13) 0.19 (0.22) 0.29 (0.17) 0.17 (0.092) 0.25 (0.15) 0.28 (0.16) 0.23PREFPPO 0.092 (0.032) 0.097 (0.044) 0.15 (0.051) 0.15 (0.049) 0.099 (0.035) 0.032 (0.022) 0.10+CONTR . 0.058 (0.019) 0.048 (0.018) 0.11 (0.032) 0.072 (0.035) 0.07 (0.016) 0.036 (0.017) 0.0741H.3 Image-space Observations Learning CurvesLearning curves are provided for the walker-walk, cheetah-run, and quadruped-walk DMC tasks(Figure 12), and for the button press, sweep into, drawer open, drawer close, window open, and doorclose MetaWorld tasks (Figure 13) across feedback amounts for image-space observations. For allimage-based results, the orable labeller is used to provide the preference feedback.0500100050Walker Walk Quadruped Walk Cheetah Run0500100010005001000250050010005000.0 0.5Steps (x1e6)0500100010000 1Steps (x1e6)0.0 0.5Steps (x1e6)0501002500Button Press Sweep Into Drawer Open Drawer Close Window Open Door Open0501005000050100100000.0 0.5Steps (x1e6)050100200000 1Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6) sac pebble pebble+dist. pebble+contr.Figure 12: Episode returns learning curves for walker walk ,quadruped walk , and cheetah runacross preference-based RL methods and feedback amounts for image-based observations. Theoracle labeller is used to generate preference feedback. Mean policy returns are plotted along they-axis with number of steps (in units of 1000) along the x-axis. There is one plot per environment(grid columns) and feedback amount (grid rows) with corresponding results per learning methodsin each plot. From top to bottom, the rows correspond to 2.5k, 5k, 10k, and 20k pieces of teacherfeedback. From left to right, the columns correspond to walker walk, quadruped walk, and cheetahrun.420501002500Button Press Sweep Into Drawer Open Drawer Close Window Open Door Open0501005000050100100000.0 0.5Steps (x1e6)050100200000 1Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0501002500Button Press Sweep Into Drawer Open Drawer Close Window Open Door Open0501005000050100100000.0 0.5Steps (x1e6)050100200000 1Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6)0.0 0.5Steps (x1e6) sac pebble pebble+dist. pebble+contr.Figure 13: Success rate learning curves for button press ,sweep into ,drawer open ,drawer close ,window open , and door close across preference-based RL methods and feedback amounts forimage-based observations. The oracle labeller is used to generate preference feedback. Mean suc-cess rates are plotted along the y-axis with number of steps (in units of 1000) along the x-axis. Thereis one plot per environment (grid columns) and feedback amount (grid rows) with corresponding re-sults per learning methods in each plot. From top to bottom, the rows correspond to 2.5k, 5k, 10k,and 20k pieces of teacher feedback. 
From left to right, the columns correspond to button press,sweep into, drawer open, drawer close, window open, and door close.43H.4 Image-space Observations Normalized ReturnsThe normalized returns (see Section ??and Equation 5) for walker-walk, quadruped-walk, cheetah-run, button press, sweep into, drawer open, drawer close, window open, and door close across alarger range of feedback amounts for image-space observations are given in Table 12.Table 12: Ratio of policy performance on learned versus ground truth rewards for the image-observations space walker-walk ,cheetah-run ,quadruped-walk ,sweep into ,button press ,drawer open ,drawer close ,window open , and door open tasks across preference learning meth-ods, labelling methods and feedback amounts (with disagreement sampling). The results are re-ported as means (standard deviations) over 10 random seeds.DMCTASK METHOD 50 100 250 500 1000 M EANWALKER -WALKPEBBLE 0.06 (0.02) 0.07 (0.02) 0.09 (0.02) 0.11 (0.03) 0.11 (0.03) 0.09+DIST. 0.18 (0.03) 0.33 (0.10) 0.40 (0.09) 0.57 (0.16) 0.68 (0.23) 0.43+CONTR . 0.28 (0.12) 0.35 (0.13) 0.46 (0.14) 0.58 (0.14) 0.61 (0.16) 0.46QUADRUPED -WALKPEBBLE 0.28 (0.23) 0.23 (0.19) 0.23 (0.15) 0.23 (0.19) 0.47 (0.09) 0.29+DIST. 0.56 (0.15) 0.59 (0.16) 0.61 (0.12) 0.71 (0.14) 0.72 (0.19) 0.64+CONTR . 0.53 (0.16) 0.64 (0.097) 0.69 (0.17) 0.73 (0.22) 0.71 (0.16) 0.66CHEETAH -RUNPEBBLE 0.01 (0.01) 0.02 (0.01) 0.02 (0.02) 0.02 (0.03) 0.03 (0.02) 0.02+DIST. 0.07 (0.03) 0.07 (0.03) 0.07 (0.02) 0.12 (0.04) 0.18 (0.07) 0.10+CONTR . 0.05 (0.02) 0.14 (0.04) 0.14 (0.04) 0.23 (0.09) 0.16 (0.06) 0.14METAWORLDTASK METHOD 2.5K 5K 10K 20K MEANBUTTON PRESSPEBBLE 0.16 (0.07) 0.20 (0.07) 0.27 (0.09) 0.33 (0.11) 0.24+DIST. 0.25 (0.04) 0.35 (0.09) 0.46 (0.16) 0.58 (0.23) 0.41+CONTR . 0.27 (0.04) 0.35 (0.10) 0.34 (0.07) 0.42 (0.11) 0.35SWEEP INTOPEBBLE 0.03 (0.02) 0.05 (0.03) 0.04 (0.04) 0.10 (0.08) 0.06+DIST. 0.06 (0.05) 0.10 (0.10) 0.10 (0.06) 0.10 (0.11) 0.09+CONTR . 0.10 (0.11) 0.12 (0.06) 0.11 (0.07) 0.12 (0.05) 0.11DRAWER OPENPEBBLE 0.39 (0.10) 0.50 (0.12) 0.55 (0.11) 0.60 (0.14) 0.51+DIST. 0.55 (0.07) 0.60 (0.11) 0.65 (0.08) 0.70 (0.08) 0.63+CONTR . 0.60 (0.11) 0.60 (0.12) 0.67 (0.08) 0.71 (0.08) 0.64DRAWER CLOSEPEBBLE 0.93 (0.22) 0.82 (0.16) 0.90 (0.17) 0.86 (0.17) 0.88+DIST. 0.95 (0.11) 0.97 (0.12) 0.93 (0.09) 0.97 (0.06) 0.96+CONTR . 0.97 (0.13) 0.93 (0.11) 0.93 (0.13) 0.95 (0.08) 0.95WINDOW OPENPEBBLE 0.23 (0.16) 0.18 (0.078) 0.22 (0.1) 0.35 (0.15) 0.25+DIST. 0.26 (0.13) 0.35 (0.17) 0.49 (0.19) 0.51 (0.19) 0.40+CONTR . 0.26 (0.12) 0.46 (0.20) 0.48 (0.19) 0.58 (0.19) 0.44DOOR OPENPEBBLE 0.17 (0.06) 0.26 (0.06) 0.23 (0.04) 0.26 (0.06) 0.23+DIST. 0.34 (0.07) 0.33 (0.08) 0.34 (0.09) 0.49 (0.13) 0.37+CONTR . 0.19 (0.03) 0.34 (0.15) 0.33 (0.07) 0.40 (0.05) 0.3244I Stability and Generalization BenefitsI.1 Reward Model Stability0.000.05WalkerWalk50 100 500equalmistake myopicnoisyoracleskip0.000.05QuadrupedWalkequalmistake myopicnoisyoracleskipequalmistake myopicnoisyoracleskippebble pebble+simsiamFigure 14: Variance in predicted ˆrψreward as ˆrψis learned and updated in conjunction with πφ.There is one plot per feedback amount (columns) and environment (rows) with corresponding re-sults per learning method and labelling strategy in each plot. The learning methods assessed arePEBBLE and PEBBLE with the SimSiam temporal consistency objective. 
The labelling strategies (see Appendix B for details) are marked along the x-axis.
We evaluate reward model stability across updates by computing the variance in predicted rewards for each transition in B across reward model updates. We expect lower variance to translate into more stability and less reward non-stationarity, resulting in better policy performance. Mean and standard deviation in predicted rewards are provided for each model update over 10 random seeds in Figure 14. A representative subset of the conditions (walker-walk and quadruped-walk) from Section 6 is used to evaluate reward model stability.
Figure 14 shows that for fewer feedback samples, the predicted rewards are more stable across reward updates for REED methods. For larger amounts of feedback (≥500), where REED vs. non-REED reward policy performance is closer, the amount of predicted reward variability does not differ greatly between REED and non-REED reward functions. Therefore, the benefits of REED methods are most pronounced when preference feedback is limited.
These reward stability results partially explain why REED leads to better policy performance. Next, we investigate whether performance differences are solely due to the interplay between reward and policy learning, or whether the difference is also due to differences in overall reward quality.
I.2 Reward Reuse
We assess reward reusability on a representative subset of the conditions from Section 6 by: (1) learning a preference-based reward function following [3] and [4]; (2) freezing the reward function; (3) training a SAC policy from scratch using the frozen reward function. Reward function reuse is evaluated by comparing policy performance to: (a) SAC trained on the ground truth reward, and (b) SAC learned jointly with the reward function.
Figure 15 shows that, when reusing a reward function, REED improves policy performance relative to non-REED methods. When environment dynamics are encoded in the reward function, performance closely matches or exceeds that of both policies trained on the ground truth reward and policies trained jointly with the reward function.
We see different trends when comparing the reused case to the joint case across environments. For walker-walk, the policy trained on the reused reward function typically slightly underperforms the policy trained in conjunction with the reward function (with the exception of feedback = 200 with the REED reward function), suggesting that the REED and non-REED reward functions are over-fitting to the transitions they are trained on. For quadruped-walk, policies trained on the reused REED reward function outperform the policy trained jointly with the REED reward function for feedback amounts >200 and match it for 200.
Figure 15: Preference-learned reward reuse policy learning curves for walker-walk and quadruped-walk, comparing SAC trained on the ground truth reward function, on the PEBBLE-learned reward function, and on the PEBBLE with SimSiam reward function, with the oracle labeller, across feedback amounts. Mean policy returns are plotted along the y-axis with number of steps along the x-axis.
To our surprise, SAC is able to learn faster on the REED reward function than on the ground truth reward function whenever the amount of feedback is >200. For the non-REED reward function, in contrast, the policy trained on the reused reward function underperforms, matches, or outperforms the policy learned jointly with the reward function depending on the amount of feedback. These results suggest that the REED methods are less prone to over-fitting.
I.2.1 Complete Reward Reuse Results
The complete reward reuse results are given for walker-walk (Figure 16), quadruped-walk (Figure 17), and cheetah-run (Figure 18), across feedback amounts and teacher labelling strategies.
Figure 16: Learning curves for walker-walk comparing joint reward and policy learning with policy learning using a previously learned reward function, across preference-based RL method, labelling style, and feedback amount. Mean policy returns are plotted along the y-axis with number of steps (in units of 1000) along the x-axis. There is one plot per labelling style (grid columns) and feedback amount (grid rows) with corresponding results per learning method in each plot. From top to bottom, the rows correspond to 50, 100, 200, 500, and 1000 pieces of teacher feedback. From left to right, the columns correspond to equal, noisy, mistake, myopic, oracle, and skip labelling styles (see Appendix B for details).
Figure 17: Learning curves for quadruped-walk comparing joint reward and policy learning with policy learning using a previously learned reward function, across preference-based RL method, labelling style, and feedback amount. Mean policy returns are plotted along the y-axis with number of steps (in units of 1000) along the x-axis. There is one plot per labelling style (grid columns) and feedback amount (grid rows) with corresponding results per learning method in each plot. From top to bottom, the rows correspond to 50, 100, 200, 500, 1000, and 2000 pieces of teacher feedback. From left to right, the columns correspond to equal, noisy, mistake, myopic, oracle, and skip labelling styles (see Appendix B for details).
Figure 18: Learning curves for cheetah-run comparing joint reward and policy learning with policy learning using a previously learned reward function, across preference-based RL method, labelling style, and feedback amount. Mean policy returns are plotted along the y-axis with number of steps (in units of 1000) along the x-axis. There is one plot per labelling style (grid columns) and feedback amount (grid rows) with corresponding results per learning method in each plot. From top to bottom, the rows correspond to 50, 100, 200, 500, and 1000 pieces of teacher feedback. From left to right, the columns correspond to equal, noisy, mistake, myopic, oracle, and skip labelling styles (see Appendix B for details).
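For concreteness, the reward reuse protocol of Appendix I.2 can be summarized in a short sketch. The snippet below is illustrative rather than the authors' implementation: the reward learner, policy learner, and evaluator are passed in as placeholder callables, and the plain return ratio is a stand-in for the normalized-return metric of Equation 5, which is not reproduced here.

```python
# Illustrative sketch of the reward-reuse evaluation in Appendix I.2 (not the
# authors' code). The callables are stand-ins for the components used in the
# experiments: a preference-based reward learner (e.g. PEBBLE, with or without
# a SimSiam/REED objective), an off-policy RL algorithm (SAC), and a policy
# evaluator that returns mean ground-truth episode return.
from typing import Callable, Dict, Sequence

RewardFn = Callable[..., float]

def reward_reuse_scores(
    train_reward: Callable[[int], RewardFn],     # feedback budget -> frozen learned reward
    train_policy: Callable[[RewardFn], object],  # reward function -> trained SAC policy
    evaluate: Callable[[object], float],         # policy -> mean ground-truth return
    ground_truth_reward: RewardFn,
    feedback_budgets: Sequence[int] = (50, 100, 200, 500, 1000),
) -> Dict[int, float]:
    """(1) Learn a preference-based reward, (2) freeze it, (3) train SAC from
    scratch on the frozen reward, and report performance relative to SAC
    trained on the ground-truth reward."""
    reference_return = evaluate(train_policy(ground_truth_reward))
    scores = {}
    for budget in feedback_budgets:
        frozen_reward = train_reward(budget)        # steps (1) and (2)
        reuse_policy = train_policy(frozen_reward)  # step (3)
        scores[budget] = evaluate(reuse_policy) / reference_return
    return scores
```

In the experiments, each such score would additionally be averaged over 10 random seeds and compared against the policy learned jointly with the reward function.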
UVARkqnsDd | ScalableMap: Scalable Map Learning for OnlineLong-Range Vectorized HD Map ConstructionJingyi YuSchool of Geodesy and GeomaticsWuhan Universityjingyiyu@whu.edu.cnZizhao ZhangElectronic Information SchoolWuhan University3zair1997@gmail.comShengfu XiaSchool of Geodesy and GeomaticsWuhan Universityxiashengfu@whu.edu.cnJizhang SangSchool of Geodesy and GeomaticsWuhan Universitysangjzh@whu.edu.cnAbstract: We propose a novel end-to-end pipeline for online long-range vector-ized high-definition (HD) map construction using on-board camera sensors. Thevectorized representation of HD maps, employing polylines and polygons to rep-resent map elements, is widely used by downstream tasks. However, previousschemes designed with reference to dynamic object detection overlook the struc-tural constraints within linear map elements, resulting in performance degradationin long-range scenarios. In this paper, we exploit the properties of map elementsto improve the performance of map construction. We extract more accurate bird’seye view (BEV) features guided by their linear structure, and then propose a hier-archical sparse map representation to further leverage the scalability of vectorizedmap elements, and design a progressive decoding mechanism and a supervisionstrategy based on this representation. Our approach, ScalableMap, demonstratessuperior performance on the nuScenes dataset, especially in long-range scenarios,surpassing previous state-of-the-art model by 6.5 mAP while achieving 18.3 FPS.Code is available at https://github.com/jingy1yu/ScalableMap.Keywords: Map Construction, Multi-view Perception, Long-range Perception1 IntroductionTo ensure the safety of autonomous vehicles on the road, downstream tasks such as trajectory pre-diction and motion planning typically rely on high-definition (HD) maps as prior information [1, 2],which provide centimeter-level location information for map elements. However, the production ofsuch HD maps is generally carried out offline, involving complex processes known for their highlabor and economic costs, making it difficult to construct maps that cover a wide area [3, 4].Recent researches aim to construct local maps in real-time using on-board sensors. More studiesshow the superiority of schemes based on bird’s-eye view (BEV) representation for unifying datafrom various sensors. Early attempts [5, 6, 7, 8, 9] regard map construction as a semantic segmen-tation task, using convolutional neural networks (CNN) to obtain occupancy grid map. However,these schemes can only generate rasterized maps, which lack instance and structure informationabout map elements and are therefore difficult to apply directly to downstream tasks [10].HDMapNet [11] uses time-consuming heuristic post-processing algorithms to generate vectorizedmaps. More recent approaches [12, 13] focus on constructing end-to-end networks similar to dy-namic object detection schemes, treating map elements as ordered sets of vertices. However, theproperties of map elements are different from those of dynamic objects. Map elements are typically7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.linear and often parallel to axes [14], which makes it difficult to define bounding boxes. Moreover, indense vehicle scenarios, the limited visibility of vertices with map elements in the image space hin-ders accurate map shape inference solely based on heatmaps. 
While a recent approach [15] proposeshierarchical query embeddings to better describe the arbitrary shape of an element by modeling eachvertex as a query, it requires dense points to ensure the shape of elements and need to predict a largenumber of vertices simultaneously without structural guidance. This poses challenges to the conver-gence speed and performance, particularly in long-range scenarios. Therefore, there is still a needfor an approach that can effectively capture the structural constraints within map elements to achievehigh accuracy in long-range HD map construction tasks.In this paper, we aim to exploit the structural properties of vectorized map elements to address thechallenges of accurately detecting map elements at longer ranges. First, we extract position-awareBEV features and instance-aware BEV features via two branches respectively and fuse them underthe guidance of linear structure to get hybrid BEV features. Next, we propose a hierarchical sparsemap representation (HSMR) to abstract map elements in a sparse but accurate manner. Integratingthis representation with cascaded decoding layers proposed by DETR [16], we design a progressivedecoder to enhance the constraints of structured information by exploiting the scalability of vector-ized map elements and a progressive supervision strategy to improve the accuracy of inference. Ourscheme, ScalableMap, dynamically increases the sampling density of map to get inference results atvarious scales, allowing us to obtain more accurate map information faster.Contributions. Our contributions are summarized as follows: (i) We propose ScalableMap, a firstend-to-end long-range vectorized map construction pipeline. We exploit the structural propertiesof map elements to extract more accurate BEV features, propose a HSMR based on the scalablevectorized elements, and design a progressive decoder and supervision strategy accordingly. Allof these result in superior long-range map perception. (ii) We evaluate the performance of Scal-ableMap on the nuScenes dataset [17] through extensive experiments. Our proposed approachachieves state-of-the-art results in long-range HD map learning, surpassing existing multimodalmethods by 6.5 mAP while achieving 18.3 FPS.2 Related workLane Detection The lane detection task has been a popular research topic for many years. Earlyapproaches to these tasks [18, 5] usually rely on segmentation schemes that require complex post-processing to obtain the final result. In order to obtain structured information, some schemes [19,20] aim to find a unified representation of curves, while others [21, 22, 23, 24] utilize anchor-based schemes to abstract map elements with open shapes. Compared with the above solutions, ourthinking is closer to HRAN [4], which directly outputs structured polylines. However, it relies ona recurrent network that is known to be computationally inefficient. Our ScalableMap is capableof handling real map elements with complex geometric structures, while the previously mentionedmethods can only handle a single type or regular shape.Boundary Extraction Boundary extraction aims to predict polygon boundaries for object in-stances on images. Polygon-RNN [25, 26] adopts recurrent structure to trace each boundary se-quentially, which is not suitable for scenarios with real-time requirements. Some works [22, 27, 28]achieve good results in boundary extraction, but they are generally designed for polygons in imagespace and are not suitable for map construction tasks. 
The closest to our proposed scheme is Bound-aryFormer [29], which uses queries to predict vertices of polygons to obtain vectorized polygonboundaries. However, the differentiable loss it defines for closed-shape elements in image spaceis not suitable for map element that is dominated by open-shape linear elements, as they have lessconcentrated features compared to dynamic objects.Vectorized HD Map Construction Recent work tries to infer vectorized HD maps directly fromon-board sensor data. HDMapNet [11] generates vectorized maps using a time-consuming heuristic2Figure 1: Overview of ScalableMap. (a) Structure-guided hybrid BEV feature extractor. (b) Hierar-chical sparse map representation & Progressive decoder. (c) Progressive supervision.post-processing method, while VectorMapNet [12] proposes a two-stage framework with an end-to-end pipeline using a slow auto-regressive decoder to recurrently predict vertices. InstaGraM [13]proposes a graph modeling approach based on vertex and edge heatmaps to reason about instance-vertex relations, which may be difficult to infer some vertices of a map element appeared in multipleviews. Given the challenge of dealing with arbitrary shapes and varying numbers of vertices inelements, MapTR [15] tackles this by employing a fixed number of interpolations to obtain a uni-form representation. But MapTR’s hierarchical query design primarily focuses on the structuralassociation of elements during the initialization phase, resulting in slow convergence and deterio-rating performance as the perception range increases. Only SuperFusion [30] is a relevant workfor long-range vectorized HD map construction, which also uses post-processing to obtain vector-ized results. Our model is the first end-to-end scheme that utilizes the structural properties of mapelements throughout the entire process to construct long-range vectorized maps.3 Methodology3.1 OverviewGiven a set of surround-view images {I1, ..., I k}captured from kon-board cameras, the goal ofScalableMap is to predict Mlocal map elements {L(j);j= 1, ..., M }within a certain rangein real-time, including lane dividers, road boundaries, and pedestrian crossings. Each map el-ement is represented by a sparse set of ordered vertices, which can be described as L(j)={(x0, y0, z0), ...,(xmj, ymj, zmj)}, where mjis the number of vertices of element L(j)and(x, y, z )are the coordinates of each vertex in a unified vehicle coordinate system.The architecture of ScalableMap is illustrated in Figure 1. We build the model with three com-ponents to construct long-range vectorized HD maps: (1) structure-guided hybrid BEV feature ex-tractor : transforming camera sensor data into BEV features with structure-guided fusion (Section3.2) ; (2) progressive decoder : layer by layer map element decoding based on the proposed HSMR(Section 3.3) ; (3) progressive supervision : bipartite matching and training for HSMR (Section 3.4).33.2 BEV Feature ExtractorThe ill-posed nature of 2D-3D transformation is exacerbated by the elongated and linear character-istics of map elements, leading to feature misalignment and discontinuity. To obtain hybrid BEVfeatures, we utilize one branch for extracting position-aware BEV features and another branch forextracting instance-aware BEV features. These branches are then fused together, guided by thestructural properties of map elements.Perspective View Converter. We start by extracting image features through ResNet. 
Methodproposed by BEVFormer [31] is adopted to obtain position-aware BEV features Fpbev, which utilizesdeformable attention [32] to enable spatial interaction between BEV queries and correspondingimage features based on a predefined 3D grid and calibration parameters. Additionally, we useseveral multi-layer perception (MLP) layers to obtain instance-aware BEV features Fibevsince theyare effective at preserving continuous features in image space [33]. kimage features are individuallyconverted to their respective top-views using kMLPs. To further improve feature continuity acrossviews, we use a linear layer to transform top-view features into a unified BEV feature.Structure-Guided Feature Fusion. To enhance the robustness of features for accurate map con-struction, we propose a mutual correction strategy that leverages information from two distinct fea-tures: Fpbevwith relatively precise positional data for select map vertices, and Fibevencompassingcomprehensive shape information for map elements. By directly summing these features, we pro-duce the updated Fi′bev. Additionally, we introduce a segmentation head to Fi′bev, guiding it to focuson the drivable area to learn the transformation scale. Subsequently, Fpbevis concatenated with therefined Fi′bev, and their fusion is executed through a convolutional layer. This fusion process correctsmisalignments in Fpbev, producing an hybrid BEV feature with enhanced richness and accuracy.3.3 Progressive DecoderThe varied shapes of vectorized map elements present challenges for conventional abstractionschemes like bounding box-based and anchor-based approaches. To address this, we introduce aHSMR as the core idea of our approach. HSMR provides a sparse and unified representation thataccurately describes the actual shape of elements while supporting fast inference. Building uponthis, we design a progressive decoder inspired by the DETR paradigm. Moreover, we incorporate amodule that generates structural queries first and then dynamically inserts queries, acting as a vitalbridge to connect maps of different densities.Hierarchical Sparse Map Representation. Polyline representations of map elements are typi-cally obtained by sampling points where the curvature exceeds a threshold, thus resulting in varyingnumbers of vertices for each element. We define the number of vertices forming each element as themap density to ensure a consistent representation. Based on this density, we employ uniform pointsampling for elements with an excessive number of vertices, while for elements with fewer verticesthan the desired density, we perform point subsampling based on distances between the original ver-tices. This approach allows us to obtain representations of the same element at arbitrary densities.By combining the iterative optimization idea of DETR paradigm with the dynamically adjustabledensity of the vectorized map, we hierarchically utilize a low-density map as an abstract representa-tion of the high-density map. The low-density map captures map element shapes adequately whilebeing sufficiently sparse. A visual depiction of HSMR and its performance is provided in Figure 4.Decoder Layers. We define query qn,mresponsible for the m-th vertex of the n-th element. Lever-aging the hierarchical sparse representation of map elements, a small number of queries are initiallygenerated to capture the approximate shape of each map element. Each query is formed by adding aninstance embedding qinsnand a position embedding qposn,m. 
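To illustrate the query construction just described, the sketch below builds the initial sparse query set as the sum of a per-element instance embedding and a per-vertex position embedding. This is a hypothetical reconstruction rather than the released implementation; in particular, the embedding dimension, the number of candidate elements, and sharing the position embeddings across elements are assumptions.

```python
# Hypothetical sketch of the initial query construction (not the released
# ScalableMap code): q_{n,m} = q_ins_n + q_pos_{n,m}, with only a few vertex
# queries per element at the sparsest density (e.g. 3, as in Table 3).
import torch
import torch.nn as nn

class SparseElementQueries(nn.Module):
    def __init__(self, num_elements: int = 50, init_vertices: int = 3, dim: int = 256):
        super().__init__()
        self.instance_embed = nn.Embedding(num_elements, dim)  # one embedding per element
        # Assumed shared across elements for simplicity; the paper indexes the
        # position embedding by both the element n and the vertex m.
        self.position_embed = nn.Embedding(init_vertices, dim)

    def forward(self) -> torch.Tensor:
        q_ins = self.instance_embed.weight[:, None, :]  # (N, 1, dim)
        q_pos = self.position_embed.weight[None, :, :]  # (1, M, dim)
        return q_ins + q_pos                            # (N, M, dim) vertex queries
```

Later decoder layers can then raise the per-element density by inserting new queries between adjacent ones, as described in the remainder of this section.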
Our progressive map element decoder iscomposed of multiple decoder layers, each containing two types of attention mechanisms. These at-tention mechanisms facilitate information exchange among vertices, and enable interaction between4Figure 2: Visualization of progressive polyline loss.each vertex and its corresponding BEV feature. The exchange between vertices is implemented us-ing multi-head self-attention [34], while the other is implemented using deformable attention [32].Structural Query Generation and Dynamic Query Insertion. To connect layers that handledifferent densities, we exploit the positional constraints among adjacent vertices within the sameelement to augment the map density. We introduce new queries by taking the mean value of twoadjacent queries that share an edge, and dynamically insert new queries between these two queries.Rather than employing traditional methods that initialize a large number of queries simultaneouslyand update them iteratively, we adopt a strategy of initializing each element with only a limited num-ber of queries and gradually increasing the map density layer by layer. This enables the module tofocus on the original sparse instance features and leverage the structural characteristics of vectorizedmap elements, ensuring robust long-range perceptual capabilities.3.4 Progressive SupervisionDuring training, we infers Nmap elements {ˆLi}Ni=1in each layer, and Nis set to be larger thanthe typical number of elements in a scene. Assume there are Mtargets {Li}Mi=1, which is paddedwith∅to form a set of size N. Following [16, 35], bipartite matching is employed to search for apermutation σ∈Σwith the lowest cost. Σincludes the equivalent permutation for each element, asmultiple vertex orders can represent the actual shape of an element in map construction task:σ∗=argminσ∈ΣNXi=1[− 1{ci̸=∅}ˆpσ(i)(ci) + 1{ci̸=∅}Lmatch (ˆLσ(i), Li)] (1)where ˆpσ(i)(ci)is the probability of class cifor the prediction with index σ(i)andLmatch is apair-wise polyline matching cost between prediction ˆLσ(i)and ground truth Li. We use Hungarianalgorithm [36] to find the optimal assignment σ∗. We employ focal loss to supervise the elementcategory and drivable area, and additional loss terms are incorporated in the following loss function:Lpolyline =λvLvertex +Ledge (2)Vertex Loss. Considering HSMR involves subsampling process, we differentiate the supervisionbetween original vertices and newly added vertices. A visual representation of the supervision mech-anism for the progressive polyline loss is shown in Figure 2. For each of predicted original vertexˆvσ∗(i),jassigned to vertex vi,j, we employ L1 distance to ensure prediction accuracy. With Nlvstanding for the number of original vertices in layer l, vertex loss of each element is formulated as:Lvertex =NXi=11{ci̸=∅}Nlv−1Xj=0∥ˆvσ∗(i),j−vi,j∥1 (3)Edge Loss. We use edge loss to supervise edge shape, which includes distance from newly addedvertices {ˆvσ(i),j,k}Njv−1k=1corresponding to the current edge ei,j, edge slope, and angle formed byadjacent edges. The distance component is supervised with L1 loss, while the slope and angle5Table 1: Results on nuScenes validation dataset. C denotes camera only and C+L denotes camera-LiDAR fusion. Range represents the perceived range along the Y-axis. FPS of ScalableMap is mea-sured on a single RTX 3090 GPU, with batch size 1 and GPU warm-up. 
The metrics of MapTR*are obtained by retraining the model while modifying only the perception range, following the of-ficial code and ensuring consistency with the claimed specifications. Metric values marked with †represents the AP value under a threshold of 1.0. Since SuperFusion only provides this metric, weconduct the same benchmark test for a fair comparison.Method Modality Range AP ped APdivider APboundary mAP FPSHDMapNet C [-30.0, 30.0] 14.4 21.7 33.0 23.0 0.8HDMapNet C+L [-30.0, 30.0] 16.3 29.6 46.7 31.0 0.5VectorMapNet C [-30.0, 30.0] 36.1 47.3 39.3 40.9 2.9VectorMapNet C+L [-30.0, 30.0] 37.6 50.5 47.5 45.2 -InstaGraM C [-30.0, 30.0] 47.2 33.8 44.0 41.7 17.6MapTR C [-30.0, 30.0] 56.2 59.8 60.1 58.7 11.2ScalableMap C [-30.0, 30.0] 57.3 60.9 63.8 60.6 18.3MapTR* C [-60.0, 60.0] 35.6 46.0 35.7 39.1 11.2ScalableMap C [-60.0, 60.0] 44.8 49.0 43.1 45.6 18.3SuperFusion C+L [0.0, 60.0] 22.3† 30.3† 53.4† 35.3† -ScalableMap C [0.0, 60.0] 51.0† 55.1† 48.4† 51.5†18.3components are supervised with cosine similarity. The edge loss of each element is formulated as:Le=NXi=11{ci̸=∅}{Nlv−1Xj=0[λpNv,j−1Xk=0d(ˆvσ∗(i),j,k, ei,j)+λsc(ˆeσ∗(i),j, ei,j)]+λaNv−2Xj=0c(ˆaσ∗(i),j, ai,j)}(4)where ˆeσ∗(i),jis the edge formed by two adjacent vertices, ˆaσ∗(i),jis the angle formed by twoadjacent edges, d(ˆv, e)denotes the distance from vertex vto edge e, and c(ˆa, a)denotes the cosinesimilarity between two edges. Nv,jis the number of added vertices corresponding to edge eσ∗(i),j.4 Experiments4.1 Experimental SettingsDataset and Metrics. We evaluate ScalableMap on the nuScenes dataset, which consists of 1000scenes. Each scene has a duration of approximately 20 seconds. The dataset provides a 360-degreefield of view around an ego-car, captured by six cameras. Following previous works [12, 13, 15], weuse the average precision(AP) metric to evaluate the performance, and chamfer distance to determinewhich positive matches the ground truth. We calculate the AP for the three categories, and for eachcategory, AP is computed under several thresholds {0.5,1.0,1.5}.Implementation Details. We train ScalableMap for 110 epochs on RTX 3090 GPUs withbatch size 32. The perception range for regular range test is [−15.0m,15.0m]along the X-axis and [−30.0m,30.0m]along the Y-axis, and the perception range for long-range test is ex-panded to [−60.0m,60.0m]along the Y-axis. To unify the representation of vertices, we use theRamer–Douglas–Peucker algorithm [37] to simplify the original polyline with a threshold of 0.05mbefore subsampling. For training, we set the loss scales λclsas2.0,λv, λpas5.0andλs, λaas5e−3respectively. The progressive decoder is composed of six decoder layers.4.2 Results.Comparison with Baselines. We evaluate the performance of ScalableMap by comparing it withthat of state-of-the-art methods on nuScenes validation test. As shown in Table 1, under cameramodality, ScalableMap performs slightly better than MapTR, achieving 1.9 higher mAP and faster6Figure 3: Visualization of qualitative results of ScalableMap in challenging scenes from nuScenesvalidation dataset. The left column is the surround views, the middle column is the inference resultsof the ScalableMap, the right column is corresponding ground truth. 
Green lines indicate boundaries,red lines indicate lane dividers, and blue lines indicate pedestrian crossings.inference speed within the conventional perception range of [−30.0m,30.0m]along the Y-axis.When the same models are directly applied to [−60.0m,60.0m]scenario, ScalableMap achieves45.6 mAP and 18.3 FPS, while MapTR’s corresponding values are 39.1 and 11.2. It is noted thatSuperFusion is the only method which publishes experiment results in this range. However, it is afusion model of lidar and single-view camera. The mAP achieved by our approach is higher than thatof SuperFusion by 16.2 under the same benchmark, demonstrating the superior performance evenin a multi-view camera modality with near real-time inference speed. The results demonstrate thatour scheme effectively meets the real-time requirements of online map construction tasks, deliveringsuperior accuracy in both conventional perception range tests and long-range tests.Qualitative Results Visualization. The visualization of qualitative results of ScalableMap onnuScenes validation dataset in long-range test is shown in Figure 3. More visualization resultsof challenging scenarios are presented in Appendix B for more visualization results of challengingscenarios. Our model still performs well even in curved roads, intersections, congested roads, andnight scenes. We further visualize three out of six decoder layers of MapTR* and ScalableMapin Figure 4. Our strategy demonstrates a faster ability to focus on the instance features, while theprogressive iteration yields more precise shapes of elements.7Figure 4: Visualization of prediction from three decoder layers of MapTR* and ScalableMap. Theperception range along the Y-axis is [−60.0m,60.0m]. The light-colored lines on the image repre-sent ground truth, while the dark-colored lines represent the inference results.4.3 Ablation StudiesWe conduct ablation experiments on nuScenes validation set to verify the effectiveness of the com-ponents of the proposed method and different design. Settings of all experiments are kept the sameas mentioned before. Additional ablation experiments are provided in Appendix A.Ablation of Proposed Components. Table 2 presentsexperimental results showcasing the impact of our pro-posed components. HSMR demonstrates effective per-formance in long-range perception with sparse repre-sentation. SQG&DQI enhances structural informationwithin map elements, while the SGFF module signifi-cantly enhances performance.Table 2: Ablations about modules.HSMR SQG&DQI SGFF mAP40.1✓ 39.7✓ ✓ 42.6✓ ✓ ✓ 45.6Ablation of Number of Vertices. Ablations of theeffect of number of vertices forming each element onlong-range perception in each decoder layer are pre-sented in Table 3. The experimental results show that,based on our proposed HSMR, the model performanceis quite stable with the number of vertices. We trade-offaccuracy and speed to select the appropriate parameters.Table 3: Ablations about vertex number.Number of Vertices mAP FPS2/3/5/9/9/9 43.6 19.23/5/9/17/17/17 45.6 18.34/7/13/25/25/25 44.2 17.65 DiscussionWe propose ScalableMap, an innovative pipeline for constructing long-range vectorized HD maps.We exploit the inherent structure of map elements to extract accurate BEV features, propose the con-cept of HSMR based on the scalable vectorized maps, and design progressive decoder and supervi-sion strategy accordingly to ensure fast convergence. Through these designs, our method effectivelycaptures information over long distances. 
Experiment results on nuScenes dataset demonstrate com-pelling performance, particularly in long-range scenarios, thus affirming its real-time applicabilityand effectiveness in real-world environments.Limitations. Our method relies solely on real-time camera sensor data, thus its performance de-pends on the visibility of the scenarios, which may be limited in situations like traffic congestionor extreme weather conditions. Additionally, accurate camera calibration parameters are assumed,which can pose a constraint in practical deployment. Future research can focus on reducing thereliance on calibration parameters by developing calibration-free approaches or incorporating on-line calibration methods. Exploring the integration of positional constraints among map elements orleveraging global coarse maps as prior knowledge may further enhance the robustness and accuracy.8AcknowledgmentsWe appreciate the reviewers for their comments and feedback.References[1] J. Gu, C. Sun, and H. Zhao. Densetnt: End-to-end trajectory prediction from dense goal sets.InProceedings of the IEEE/CVF International Conference on Computer Vision , pages 15303–15312, 2021.[2] W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, and R. Urtasun. End-to-end interpretableneural motion planner. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , pages 8660–8669, 2019.[3] N. Homayounfar, W.-C. Ma, J. Liang, X. Wu, J. Fan, and R. Urtasun. Dagmapper: Learning tomap by discovering lane topology. In Proceedings of the IEEE/CVF International Conferenceon Computer Vision , pages 2911–2920, 2019.[4] N. Homayounfar, W.-C. Ma, S. K. Lakshmikanth, and R. Urtasun. Hierarchical recurrentattention networks for structured online maps. Computer Vision and Pattern Recognition ,2018.[5] J. Philion and S. Fidler. Lift, splat, shoot: Encoding images from arbitrary camera rigs byimplicitly unprojecting to 3d. In Computer Vision–ECCV 2020: 16th European Conference,Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV 16 , pages 194–210. Springer, 2020.[6] T. Roddick and R. Cipolla. Predicting semantic map representations from images using pyra-mid occupancy networks. In Proceedings of the IEEE/CVF Conference on Computer Visionand Pattern Recognition , pages 11138–11147, 2020.[7] L. Reiher, B. Lampe, and L. Eckstein. A sim2real deep learning approach for the transfor-mation of images from multiple vehicle-mounted cameras to a semantically segmented imagein bird’s eye view. In 2020 IEEE 23rd International Conference on Intelligent TransportationSystems (ITSC) , pages 1–7. IEEE, 2020.[8] H. X. W. S. B. Z. J. M. Runsheng Xu, Zhengzhong Tu. Cobevt: Cooperative bird’s eye viewsemantic segmentation with sparse transformers. In Conference on Robot Learning (CoRL) ,2022.[9] B. Yang, M. Liang, and R. Urtasun. Hdnet: Exploiting hd maps for 3d object detection. InConference on Robot Learning , pages 146–155. PMLR, 2018.[10] J. Gao, C. Sun, H. Zhao, Y . Shen, D. Anguelov, C. Li, and C. Schmid. Vectornet: Encodinghd maps and agent dynamics from vectorized representation. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 11525–11533, 2020.[11] Q. Li, Y . Wang, Y . Wang, and H. Zhao. Hdmapnet: An online hd map construction andevaluation framework. In 2022 International Conference on Robotics and Automation (ICRA) ,2022.[12] Y . Liu, Y . Wang, Y . Wang, and H. Zhao. Vectormapnet: End-to-end vectorized hd map learning.arXiv preprint arXiv:2206.08920 , 2022.[13] J. Shin, F. Rameau, H. 
Jeong, and D. Kum. Instagram: Instance-level graph modeling forvectorized hd map learning. arXiv preprint arXiv:2301.04470 , 2023.[14] Y . Xu, W. Xu, D. Cheung, and Z. Tu. Line segment detection using transformers without edges.InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 4257–4266, 2021.9[15] B. Liao, S. Chen, X. Wang, T. Cheng, Q. Zhang, W. Liu, and C. Huang. Maptr: Structuredmodeling and learning for online vectorized hd map construction. In International Conferenceon Learning Representations , 2023.[16] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-end ob-ject detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference,Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16 , pages 213–229. Springer, 2020.[17] H. Caesar, V . Bankiti, A. H. Lang, S. V ora, V . E. Liong, Q. Xu, A. Krishnan, Y . Pan, G. Baldan,and O. Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings ofthe IEEE/CVF conference on computer vision and pattern recognition , pages 11621–11631,2020.[18] D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans, and L. Van Gool. Towards end-to-end lane detection: an instance segmentation approach. In 2018 IEEE intelligent vehiclessymposium (IV) , pages 286–291. IEEE, 2018.[19] Z. Feng, S. Guo, X. Tan, K. Xu, M. Wang, and L. Ma. Rethinking efficient lane detection viacurve modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 17062–17070, 2022.[20] W. Van Gansbeke, B. De Brabandere, D. Neven, M. Proesmans, and L. Van Gool. End-to-endlane detection through differentiable least-squares fitting. In Proceedings of the IEEE/CVFInternational Conference on Computer Vision Workshops , pages 0–0, 2019.[21] L. Tabelini, R. Berriel, T. M. Paixao, C. Badue, A. F. De Souza, and T. Oliveira-Santos.Keep your eyes on the lane: Real-time attention-guided lane detection. In Proceedings ofthe IEEE/CVF conference on computer vision and pattern recognition , pages 294–302, 2021.[22] S. Zhang, S. Zhang, X. Pan, Z. Wang, and C. Lu. Curve-gcn: Enhancing curve-based road ob-ject detection with graph convolutional networks. In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition (CVPR) , pages 1529–1538, 2020.[23] N. Garnett, R. Cohen, T. Pe’er, R. Lahav, and D. Levi. 3d-lanenet: end-to-end 3d multiple lanedetection. In Proceedings of the IEEE/CVF International Conference on Computer Vision ,pages 2921–2930, 2019.[24] L. Chen, C. Sima, Y . Li, Z. Zheng, J. Xu, X. Geng, H. Li, C. He, J. Shi, Y . Qiao, et al.Persformer: 3d lane detection via perspective transformer and the openlane benchmark. InComputer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27,2022, Proceedings, Part XXXVIII , pages 550–567. Springer, 2022.[25] L. Castrejon, K. Kundu, R. Urtasun, and S. Fidler. Annotating object instances with a polygon-rnn. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages5230–5238, 2017.[26] D. Acuna, H. Ling, A. Kar, and S. Fidler. Efficient interactive annotation of segmentationdatasets with polygon-rnn++. In Proceedings of the IEEE conference on Computer Vision andPattern Recognition , pages 859–868, 2018.[27] J. Liang, N. Homayounfar, W.-C. Ma, Y . Xiong, R. Hu, and R. Urtasun. Polytransform: Deeppolygon transformer for instance segmentation. 
In Proceedings of the IEEE/CVF conferenceon computer vision and pattern recognition , pages 9131–9140, 2020.[28] S. Zorzi, S. Bazrafkan, S. Habenschuss, and F. Fraundorfer. Polyworld: Polygonal buildingextraction with graph neural networks in satellite images. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 1848–1857, 2022.10[29] J. Lazarow, W. Xu, and Z. Tu. Instance segmentation with mask-supervised polygonal bound-ary transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 4382–4391, 2022.[30] H. Dong, X. Zhang, X. Jiang, J. Zhang, J. Xu, R. Ai, W. Gu, H. Lu, J. Kannala, and X. Chen.Superfusion: Multilevel lidar-camera fusion for long-range hd map generation and prediction.arXiv preprint arXiv:2211.15656 , 2022.[31] Z. Li, W. Wang, H. Li, E. Xie, C. Sima, T. Lu, Y . Qiao, and J. Dai. Bevformer: Learningbird’s-eye-view representation from multi-camera images via spatiotemporal transformers. InComputer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27,2022, Proceedings, Part IX , pages 1–18. Springer, 2022.[32] X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai. Deformable detr: Deformable transformersfor end-to-end object detection. arXiv preprint arXiv:2010.04159 , 2020.[33] J. Zou, J. Xiao, Z. Zhu, J. Huang, G. Huang, D. Du, and X. Wang. Hft: Lifting perspectiverepresentations via hybrid feature transformation. arXiv preprint arXiv:2204.05068 , 2022.[34] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polo-sukhin. Attention is all you need. Advances in neural information processing systems , 30,2017.[35] Y . Wang, V . Guizilini, T. Zhang, Y . Wang, H. Zhao, , and J. M. Solomon. Detr3d: 3d objectdetection from multi-view images via 3d-to-2d queries. In The Conference on Robot Learning(CoRL) , 2021.[36] H. W. Kuhn. The hungarian method for the assignment problem. Naval research logisticsquarterly , 2(1-2):83–97, 1955.[37] D. H. Douglas and T. K. Peucker. Algorithms for the reduction of the number of points requiredto represent a digitized line or its caricature. Cartographica: the international journal forgeographic information and geovisualization , 10(2):112–122, 1973.11A Ablation StudyA.1 The Way of Feature Fusion.Given that SGFF employs a mutual correction strategy, we conduct ablation experiments to validatethe efficacy of this feature fusion approach. Specifically, we consider two scenarios: first, withoutcorrecting the position-aware features, where the two features are directly combined for fusion;and second, without correcting the instance-aware features, where the two features are directly fedinto the convolutional layer for fusion. The result of these experiments robustly underscore theeffectiveness of SGFF.Table 4: Ablation studies on feature fusion approaches.Fusion Method Ped Crossing Divider Boundary mAPw/o position-aware feature correction 39.9 43.7 38.6 40.7w/o instance-aware feature correction 42.3 46.0 39.0 42.4SGFF 44.8 49.0 43.1 45.6A.2 Effectiveness of Edge LossEdge loss in ScalableMap comprises three crucial elements: the loss associated with newly intro-duced vertices concerning their corresponding edges, the loss concerning edge slopes, and the lossencompassing the angle formed by three consecutive vertices. 
The first element holds particular significance as it directly influences the shape regression of the map element, while the latter two elements exert their influence indirectly on the map element's shape. We conduct ablation experiments to underscore their efficacy in the context of map construction tasks.
Table 5: Ablation studies on edge loss.
Loss Item        Ped Crossing   Divider   Boundary   mAP
w/o edge loss    43.9           47.6      42.7       44.7
with edge loss   44.8           49.0      43.1       45.6
A.3 Influence of Vertex Count on Convergence Speed
Figure 5 illustrates the convergence curves of our model in three ablation experiments conducted under different vertex configurations for long-range tests. Excessive vertices can hinder convergence, whereas insufficient counts can compromise the accuracy of shape representation. By fine-tuning the number of vertices through these ablation experiments, we strike a balance between shape accuracy and convergence speed, consistent with the trade-off between accuracy and speed discussed in Section 4.3.
Figure 5: Visualization of convergence curves.
B Qualitative visualization
We present visual results of ScalableMap operating in adverse weather conditions and dealing with occlusion scenarios on the nuScenes validation set, as shown in Figure 6, Figure 7, and Figure 8.
Figure 6: Visualization of qualitative results of ScalableMap in rainy scenes from the nuScenes validation dataset. The left column is the surround views, the middle column is the inference results of ScalableMap, the right column is the corresponding ground truth. Green lines indicate boundaries, red lines indicate lane dividers, and blue lines indicate pedestrian crossings.
Figure 7: Visualization of qualitative results of ScalableMap in night scenes from the nuScenes validation dataset. The left column is the surround views, the middle column is the inference results of ScalableMap, the right column is the corresponding ground truth. Green lines indicate boundaries, red lines indicate lane dividers, and blue lines indicate pedestrian crossings.
Figure 8: Visualization of qualitative results of ScalableMap in occlusion scenes from the nuScenes validation dataset. The left column is the surround views, the middle column is the inference results of ScalableMap, the right column is the corresponding ground truth. Green lines indicate boundaries, red lines indicate lane dividers, and blue lines indicate pedestrian crossings.
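To complement the vertex-count ablation in A.3, the snippet below shows one simple way to produce the fixed-density polyline targets assumed by the hierarchical sparse map representation, for example the (3, 5, 9, 17, 17, 17) schedule used per decoder layer. It is an illustrative sketch and not the released code: plain uniform arc-length resampling is used here, whereas the paper combines uniform sampling with distance-based point insertion after Ramer-Douglas-Peucker simplification.

```python
# Illustrative sketch (not the authors' code) of producing multi-density
# polyline targets for the hierarchical sparse map representation, e.g. the
# per-layer densities (3, 5, 9, 17, 17, 17) from the vertex-count ablation.
# Uniform arc-length resampling is used for simplicity.
import numpy as np

def resample_polyline(vertices: np.ndarray, num_points: int) -> np.ndarray:
    """Resample an (N, 2) or (N, 3) polyline to exactly `num_points` vertices."""
    seg_lengths = np.linalg.norm(np.diff(vertices, axis=0), axis=1)
    cum_length = np.concatenate([[0.0], np.cumsum(seg_lengths)])
    targets = np.linspace(0.0, cum_length[-1], num_points)
    resampled = np.empty((num_points, vertices.shape[1]))
    for d in range(vertices.shape[1]):
        resampled[:, d] = np.interp(targets, cum_length, vertices[:, d])
    return resampled

def hierarchical_targets(vertices: np.ndarray,
                         densities=(3, 5, 9, 17, 17, 17)) -> list:
    """One target polyline per decoder layer, from sparse to dense."""
    return [resample_polyline(vertices, k) for k in densities]
```

For example, hierarchical_targets(np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 2.0]])) yields one supervision target per decoder layer for a simple three-vertex polyline.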
kOm3jWX8YN | Learning to Discern: Imitating HeterogeneousHuman Demonstrations with Preference andRepresentation LearningSachit Kuhar∗, Shuo Cheng, Shivang Chopra, Matthew Bronars, Danfei XuGeorgia Institute of Technology{kuhar, shuocheng, shivangchopra11, mbronars, danfei }@gatech.eduAbstract: Practical Imitation Learning (IL) systems rely on large human demon-stration datasets for successful policy learning. However, challenges lie in main-taining the quality of collected data and addressing the suboptimal nature of somedemonstrations, which can compromise the overall dataset quality and hence thelearning outcome. Furthermore, the intrinsic heterogeneity in human behavior canproduce equally successful but disparate demonstrations, further exacerbating thechallenge of discerning demonstration quality. To address these challenges, thispaper introduces Learning to Discern (L2D), an offline imitation learning frame-work for learning from demonstrations with diverse quality and style. Given asmall batch of demonstrations with sparse quality labels, we learn a latent rep-resentation for temporally embedded trajectory segments. Preference learning inthis latent space trains a quality evaluator that generalizes to new demonstratorsexhibiting different styles. Empirically, we show that L2D can effectively assessand learn from varying demonstrations, thereby leading to improved policy per-formance across a range of tasks in both simulations and on a physical robot.Keywords: Imitation Learning, Preference Learning, Manipulation1 IntroductionImitation Learning (IL) allows robots to learn complex manipulation skills from offline demonstra-tion datasets [1, 2, 3, 4, 5]. However, the quality of the demonstrations used for IL will significantlyinfluence the effectiveness of the learned policies [2, 6, 7]. Practical imitation learning systems mustamass demonstrations from a broad spectrum of human demonstrators [8, 9], ranging from novicesto experts in a given task [10, 11]. Moreover, even experts may complete a task differently, resultingin disparate demonstrations of equal quality [12, 13, 14]. Therefore, an effective IL algorithm mustlearn from demonstrations of varying quality, each further diversified by the unique skillset of theindividual demonstrators.However, mainstream IL algorithms often make the simplifying assumption that all demonstrationsare uniformly ideal [11]. As a result, policies trained with these algorithms may unknowingly learnfrom suboptimal or even contradictory supervisions [11, 14, 15]. Recent work attempts to estimateexpertise by developing unsupervised algorithms to quantify action distribution and variance [6] oractively soliciting new demonstrations that match with a known expert policy [16]. While thesemethods are general in principle, two critical limitations remain. First, estimating demonstratorexpertise without any external quality label is an inherently ill-posed problem. As noted above,demonstrated behaviors can vary drastically across demonstrators and even across different trialsfrom the same demonstrator. Neither action variance nor similarity to an expert is sufficient tocapture expertise in such settings. Second, these works only consider state-level features, whilebehaviors are better represented in temporal sequences.∗work done while at Georgia Institute of Technology.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Figure 1: L2D: Our framework proceeds in three primary stages during training. 
First, we aug-ment trajectory segments with temporal embeddings and employ contrastive learning to map thesesegments to a latent space. Next, we use preference learning in this latent space to train a qualitycritic on sparse preference labels. Finally, we train a Gaussian Mixture Model (GMM) on the critic’soutputs where the different modes represent demonstrator quality.To this end, we present Learning to Discern (L2D), a completely offline method for learning fromheterogeneous demonstrations. L2D can efficiently estimate expertise from limited demonstrationquality labels based on sequence-level latent features. By incorporating temporal embeddings intotrajectory segments, we are able to learn a more meaningful latent representation for long-horizonmanipulation tasks. Preference learning in this latent space trains a quality evaluator that generalizesacross diverse styles of task completion while only requiring a subset of coarsely ranked data. Wedemonstrate that our method can not only discern the quality of in-domain data but also identifyhigh-quality demonstrations from entirely new demonstrators with modes of expertise unseen intraining. We show this empirically in both simulated and real-world robot settings. With L2D, itbecomes feasible to filter high-quality demonstrations from the vast and diverse datasets that areneeded for data-driven IL.2 Related WorkImitation Learning with Suboptimal Data. Imitation Learning (IL) [17, 18, 19, 20, 21, 22] fromsub-optimal data is an active area of research in robotics. Various works show that near-optimalpolicies can be trained if demonstrations are ranked based on their quality [4, 11]. However, manualannotation of dense reward labels is costly and unscalable. CAIL [23] extrapolates demonstrationquality from a subset of labeled data D′, but still requires a dense ranking of D′. Other worksestimate demonstrator quality in an unsupervised manner (ILEED [6]) or compare demonstrationsto an expert policy (ELICIT [16]). Such unsupervised algorithms do not generalize to equal qualitydemonstrations from different modes of task completion. This motivates the need for an IL algorithmthat learns a generalizable quality estimator from a subset of coarsely ranked data.Preference Learning from Human Demonstrations. Preference learning is a form of supervisedlearning that learns rankings from pairwise comparisons [24, 25, 26, 27, 28, 29]. When appliedto human demonstration data, preference learning can effectively train a reward function based onrankings of trajectory segments [30]. Various frameworks utilize this reward function to train poli-cies through inverse reinforcement learning [31, 32]. To limit manual annotations, some methodssynthetically generate lower-quality data through noise injection [7]. These methods are hamperedby the fact that noise level does not directly translate to preference ranking of trajectories [33]. Ourmethod, L2D, ranks offline demonstrations using preference learning in a latent space. This allowsus to generalize across diverse styles and modes of task completion with sparse preference labels.Representation Learning for Data Retrieval. Representation learning, especially with a focus onrobotics, has been widely studied. 
This area typically explores learning from visual inputs [34], data2Figure 2: Filtering Unseen Demonstrations: When faced with unseen demonstrations, L2D parti-tions the trajectory into segments and augments each with its chronological ordering in the sequence.The segments are mapped to the latent space learned during training and ranked by the quality critic.After calculating the mean and variance of ranks in a full trajectory, the trained GMM is employedto predict a preference label for the unseen demonstration.augmentation [35], goal-aware prediction [36], and domain-specific information [37]. Recent worksuch as that by Du et al. [38] enhances the performance of robot manipulation tasks by first mappingoffline data to a latent space using contrastive learning and then retrieving offline data using thatlatent space with task-specific data as queries. Yet, these works do not account for the suboptimalityof given demonstrations in task-specific data and offline data.3 Problem SettingWe study the imitation learning problem in environments modeled as Markov Decision Processes(MDP), where SandAdenote the state space and the action space, respectively. P:S × A × S →[0,1]is the state transition function. The environment emits a binary reward upon task completion.The learning algorithm does not have access to this signal.We assume access to a small set of demonstration Dknown ={τi}Ni=1, where a trajectory τiis a se-quence of transitions with a variable length Lτigiven by (s0, a0, s1, a1, . . . , s Lτi−1, aLτi−1). Eachtrajectory in Dknown is associated with a quality label l∈ L, where Lis the set of possible labelsindicating the quality of a trajectory. We divide Dknown into sub-datasets i.e., {DA, DB, DC, . . .}of similar quality levels, based on the quality labels associated with each trajectory. If dataset Aisconsidered to be of superior quality compared to B, we denote it as A≻B. Similarly, we assumeτA≻τB, provided τA∈AandτB∈B.In this paper, we develop a framework for imitation learning that achieves state-of-the-art perfor-mance on sub-optimal data by estimating the quality of new demonstrations. More specifically, ouraim is to learn a representation that captures the multimodality from a large dataset of unknownquality Dunknown and can critique the quality of new demonstrations within the same context. Im-portantly, our approach operates in an offline learning setting where we do not have access to theenvironment, reward signals, or any successful completion of trajectories [6, 1]. This constraintfurther underscores the need for our method’s ability to discern and evaluate the quality of demon-strations without direct interaction or feedback from the environment.4 Learning to DiscernImitation Learning (IL) with suboptimal data often leads to inferior task performance. Evaluatingthe quality of a demonstrator without supervision is challenging, and manually assigning quality todemonstrations is impractical. The core component of our method is a preference network Qthatlearns to evaluate the quality of a demonstration τby estimating its ranking label l. As mentionedabove, the key challenges of learning a generalizable demonstration quality estimator are that (1)it is difficult to capture behavior-level features by assessing individual state-action pairs, especiallyfor long-horizon manipulation tasks and (2) the demonstrated behaviors are diverse even among3demonstrators of similar expertise. 
In this section, we describe each challenge in more detail and introduce the components of our method, L2D, that address them.

Extracting Behavior Features through Temporal Contrastive Learning

As mentioned earlier, estimating the quality of a demonstration by merely observing variance in states is a poorly posed problem. Instead, we propose learning a temporal latent space for quality critique. We utilize a neural network encoder E : R^(L1 x Obs.Dim) -> R^(Latent.Dim) that maps trajectory segments σj = {st : t = ij, ..., ij + L1}, for j = 1, ..., num_segments, into a d-dimensional latent space. Sampling triplets for this objective with a triplet margin loss [38] poses a challenge: naive sampling based on quality labels may degrade representation quality. Specifically, long-horizon tasks like Square [11] contain bottleneck states that are indifferent to demonstration quality, leading to task-specific regions that do not majorly impact overall quality. We use the sampled segments to create triplets of an anchor trajectory segment σa, a positive trajectory segment σp, and a negative trajectory segment σn, which are used in two contrastive learning strategies:

• Strategy 1: Arbitrary Sampling from Specific Quality Sets. We sample σa and σp from different trajectories of a demonstration set A. We then sample the negative trajectory segment σn from a different demonstration quality set B.

• Strategy 2: Specific Segment Sampling from Arbitrary Quality Sets. We leverage domain-specific knowledge that certain segments, such as initial or final segments, do not influence demonstration quality. For instance, in the case of the Square task with complete demonstrations, we know that all demonstrations conclude with the robot placing the square in the cylinder. Thus, we sample the final segments σa,final from τ1 and σp,final from τ2. The negative trajectory segment σn is sampled such that it is not from the same region, but it can be drawn from any quality subset, as the segmented regions themselves do not impact quality.

Position Encoding for Capturing Non-cyclic Behavior

Preference learning is effective for evaluating the quality of unknown and diverse demonstrations, but we find that it struggles with long-horizon manipulation tasks. We posit that this is due to the intrinsically non-cyclic nature of these tasks, which can lead to identical state-action pairs having different quality labels depending on the trajectory context. To navigate this inherent complexity, we introduce data augmentations such as a position encoding for each demonstration state with respect to the entire trajectory. We find that this provides the necessary context to the latent space for comprehensive task understanding. Let the original observation at any time step t be denoted ot ∈ R^(Obs.Dim). We introduce a position encoding, represented as a real number pt indicating the normalized time step (i.e., t/T, where T is the total number of time steps in the demonstration). We then form the augmented observation o't, a vector in R^(Obs.Dim + 1), defined as o't = [ot, pt].

Training the Quality Critic

After training the encoder E to learn a latent space that is cognizant of quality-sensitive regions of the demonstrations, our approach leverages a quality critic network Q, defined as a function Q : R^d -> R, that serves as a regressor mapping a d-dimensional trajectory segment embedding to a continuous scalar value indicative of demonstration quality. In the training phase, we utilize pairs of trajectory segments σA and σB, randomly sampled from datasets DA and DB, respectively; a compact sketch of the segment sampling and augmentation steps described above is given below.
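The following is a minimal Python/PyTorch sketch of the two sampling strategies and the position-encoding augmentation; it is illustrative rather than the released implementation. The demonstration layout (a dict with an "obs" array), the helper names sample_segment and contrastive_step, and the equal mixing of the two strategies are assumptions; the segment length of 48 follows the discriminator segment length listed in Table 6.

    import random
    import numpy as np
    import torch
    import torch.nn as nn

    SEG_LEN = 48  # segment length L1 (48 is the discriminator segment length in Table 6)

    def add_position_encoding(seg, start, total_len):
        # Append the normalized time step p_t = t / T to every observation in the segment.
        t = np.arange(start, start + len(seg)) / float(total_len)
        return np.concatenate([seg, t[:, None]], axis=1)      # shape (L1, Obs.Dim + 1)

    def sample_segment(demo, region=None):
        # demo["obs"]: (T, Obs.Dim) array with T >= 2 * SEG_LEN; region selects where to cut.
        T = len(demo["obs"])
        if region == "final":
            start = T - SEG_LEN
        elif region == "non_final":
            start = random.randint(0, T - 2 * SEG_LEN)
        else:
            start = random.randint(0, T - SEG_LEN)
        return add_position_encoding(demo["obs"][start:start + SEG_LEN], start, T)

    def strategy1_triplet(set_A, set_B):
        # Anchor and positive from two trajectories of the same quality set A; negative from set B.
        tau1, tau2 = random.sample(set_A, 2)
        return sample_segment(tau1), sample_segment(tau2), sample_segment(random.choice(set_B))

    def strategy2_triplet(all_demos):
        # Anchor and positive are final segments of two arbitrary demonstrations (a quality-agnostic
        # region); the negative is a non-final segment drawn from any quality subset.
        tau1, tau2 = random.sample(all_demos, 2)
        return (sample_segment(tau1, "final"),
                sample_segment(tau2, "final"),
                sample_segment(random.choice(all_demos), "non_final"))

    triplet_loss = nn.TripletMarginLoss(margin=1.0)

    def contrastive_step(encoder, optimizer, set_A, set_B, all_demos):
        # One gradient step on the segment encoder, mixing the two strategies with equal probability.
        a, p, n = strategy1_triplet(set_A, set_B) if random.random() < 0.5 else strategy2_triplet(all_demos)
        embed = lambda x: encoder(torch.as_tensor(x, dtype=torch.float32).unsqueeze(0))
        loss = triplet_loss(embed(a), embed(p), embed(n))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Strategy 2 relies only on a segment's position within its trajectory, which is why the negative is restricted to a non-final region but may come from any quality subset.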
Each segment in a sampled pair is first converted into an embedding and subsequently passed through Q to obtain a quality score. The pairwise ranking loss [39, 30, 31] capitalizes on these pairs to compare and learn the relative quality differences between segments. During the inference stage, the model computes the quality score of individual segments.

Representing Suboptimality as a Gaussian Mixture Model

Demonstrations of varying quality may exhibit mixtures of behaviors in different regions. By being receptive to individual demonstration regions, our approach allows us to estimate the quality of these mixed-quality demonstrations, as the overall quality of a demonstration can be viewed as a majority vote of its constituent parts. This concept can be naturally represented by a Gaussian distribution, where the mean of the Gaussian represents the average quality of segments in a demonstration. After learning the latent-space encoder E and quality critic Q, we train a Gaussian mixture model (GMM) G. We sample a sufficient number of segments (⌈(L/L1) x k⌉) from each demonstration in Dknown and pass each segment through the encoder E and quality critic Q to obtain a scalar value. These values represent the quality of different regions within the demonstration. We then map these scalar values into sets that correspond to demonstrations of different qualities (i.e., DA, DB, DC, etc.) and train the GMM, where different Gaussians correspond to different quality levels.

Estimating Quality of Unseen Demonstrations

Given a new unseen demonstration τ, we generate a set of scalar values using Q(E(sampler(τ))), with each value estimating the quality of a specific region within the demonstration. We then estimate the probability of each value with respect to the trained GMM and assign each scalar to the set with the maximum probability. Finally, we use a heuristic to determine whether the demonstration should be used for training the policy. As an example, in our Robomimic experiments, for an unseen demonstration, we count the percentage of segments assigned to the 'good' quality set. Demonstrations are then rated based on this count. We subsequently rank all demonstrations based on this score and select the top demonstrations for training the IL algorithm BC-RNN. This procedure ensures the inclusion of demonstrations that predominantly exhibit desirable behavior.

5 Experimental Evaluation

We empirically validate whether L2D can learn effective representations that enable accurate discrimination of high-quality demonstrations from the pool of available demonstrations, which vary in quality. We show with an ablation study that distinct components of the multi-component architecture of L2D significantly contribute to its effectiveness. We demonstrate that L2D can discern high-quality demonstrations from both seen and unseen demonstrators and compare against baselines. Finally, we collect human demonstrations for real and simulated manipulation tasks to test the method's applicability in more realistic scenarios.

Figure 3: Good (green) and Bad (red) demonstrations for the Robomimic Square task.

Experiment setup. We use the Robomimic benchmark [11] for all experiments, which includes a range of tasks such as Lifting, Square Nut Assembly, and Can Pick-and-Place, collected from multiple human operators of varying skill levels. We specifically focus on the Square Nut Assembly task, for which Robomimic provides 300 demonstrations, with 50 from each of the 6 operators.
These 300 demon-strations are categorized into 3 sets of 100 demonstrations each andare labeled as ’good’, ’ok’, or ’bad’ based on the execution quality.Using higher quality demonstrations for training leads to a highersuccess rate in robomimic tasks [11]. To evaluate the different fil-tering methods, we use RNN-based behavioral cloning (BC-RNN)to train a policy on selected demonstrations and report the successrate (SR) (see appendix D for details).We bifurcate the identification of high-quality demonstrations into two scenarios: (A) Familiardemonstrators. The demonstrations in each set originate from the same demonstrators, i.e., bothDknown andDunknown have demonstrations each for varying quality from the same demonstrators.(B) Unseen demonstrators. The demonstrations in each set originate from different demonstrators,i.e., both Dknown andDunknown have demonstrations each for varying quality, but they are providedexclusively from different demonstrators. We adapt the robomimic dataset for these scenarios as fol-lows: In both cases, we divide the 300 available demonstrations into two sets of Dknown andDunknown .In the first case, we perform this division uniformly such that all 6 operators provide an equal num-ber of demonstrations into either set. However, in the second case, the demonstrations in each setoriginate from different demonstrators, i.e., both Dknown andDunknown have 50 demonstrations eachfor varying quality, but they are provided exclusively from different demonstrators.5Baselines. We benchmark the performance of our proposed method, L2D, against two state-of-the-art methods for learning from mixed-quality demonstrations. ILEED [6] employs an unsupervisedexpertise estimation approach to identify good demonstrations. ELICIT [16] actively filters newdemonstrations based on whether the state-action pairs match the prediction of an expert policy. Wealso compare against adaptations of preference learning and contrastive learning methods. Prefer-ence Learning baseline is a best effort adaptation of TREX [31] reward learning method and usespairwise ranking loss [30, 40]. Contrastive Learning uses triplet margin loss samples from distinctquality sets [38] for training and performs filtering by using cosine-similarity between encodings oftrajectory segments. As additional baselines, we compare against a naive uniform sampling strategyfrom the dataset of unknown quality demonstrations, as well as an oracle baseline representing theideal scenario where the highest-quality demonstrations can be perfectly identified.5.1 Component AblationIn our ablation study, we dissect L2D to evaluate the individual contribution of each component. Bycomparing the Wasserstein distances between the distributions corresponding scores generated byQ, for sub-dataset with specific labels within Dunknown , we quantify the quality of the representa-tion learned. A higher Wasserstein distance indicates enhanced separability between trajectories ofdifferent qualities, making it easier to identify high-quality trajectories. In Robomimic tasks, betterquality demonstrations result in a higher SR when training a policy using BC methods [11]. There-fore, higher total Wasserstein distances indicate more effective learned representations for filtering.Table 1 and Figure 4 show our findings and the corresponding distribution visualizations, reveal thatL2D consistently outperforms the preference learning approach in unseen demonstrator experimentsetup 5.2. 
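Concretely, the per-pair separability values reported in Table 1 can be computed from the critic's segment scores with the one-dimensional Wasserstein distance; below is a minimal scipy-based sketch in which critic_scores (a hypothetical name) maps each quality label to the scores that Q assigns to segments from the corresponding subset of Dunknown.

    from itertools import combinations
    from scipy.stats import wasserstein_distance

    def separability(critic_scores):
        # critic_scores: dict mapping a quality label to the 1-D array of scores that Q assigns
        # to trajectory segments from that subset, e.g. {"good": [...], "okay": [...], "bad": [...]}.
        pairwise = {
            (a, b): wasserstein_distance(critic_scores[a], critic_scores[b])
            for a, b in combinations(sorted(critic_scores), 2)
        }
        return pairwise, sum(pairwise.values())   # per-pair distances and their sum

    # Example: pairs, total = separability({"good": g_scores, "okay": o_scores, "bad": b_scores})

The "Total Distance" column in Table 1 is then the sum of the three pairwise distances.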
The histograms in Figure 4 plot the distributions of trajectory segment scores output by Q for each quality subset in Dunknown. Adding the S1 Contrastive component and positional encoding enhances the quality of the learned latent space, confirming the necessity of contextual understanding for long-horizon, non-cyclic robot manipulation tasks. The final addition of the S2 Contrastive component underlines the benefits of using well-crafted positive and negative pairs for contrastive learning. These pairs, even from demonstrations of differing quality, lead to better representations, highlighting the importance of understanding that some regions of demonstrations do not influence the quality. This lends support to the idea of leveraging commonalities across all demonstrations to regularize our learning process, thereby avoiding overfitting specific demonstrator biases.

Method                 Good vs. Okay ↑   Good vs. Bad ↑   Okay vs. Bad ↑   Total Distance ↑
Preference             0.130             0.143            0.147            0.420
+ S1 Contrastive       0.337             0.213            0.414            0.964
+ Position Encoding    0.535             0.649            0.589            1.773
+ S2 Contrastive       0.604             0.848            0.417            1.869

Table 1: Wasserstein Distance Comparison for the Robomimic Square Task: the table showcases the capability of different methods, including the incremental introduction of contrastive components, to discern among demonstrations of different quality levels (good, okay, bad) from unseen demonstrators.

Figure 4: Histogram Distribution Visualization: comparing demonstration quality scores (good, okay, bad) for the Square task from unseen demonstrators. The left histogram represents a conventional preference learning approach, while the right highlights the efficacy of our method, L2D.

5.2 Main Results

L2D can identify unseen high-quality demonstrations from familiar demonstrators. We trained L2D on the known demonstrations Dknown and tested its ability to filter unseen demonstrations Dunknown from the same demonstrators. Our results, as presented in Table 2, indicate that L2D successfully identifies 43 of the top 50 high-quality demonstrations from the pool. We find that our method outperforms ILEED [6], reinforcing the idea that using preferences over an unsupervised approach leads to better representations. Finally, we observe that our method achieves an oracle-level success rate (SR) when training the policy on filtered data.

Method        Success Rate      Method        Good   Okay   Bad
Naive         0.44              Naive         68     16     16
ILEED         0.54              Our Method    93     4      3
Our Method    0.66              Oracle        100    0      0
Oracle        0.66

Table 2: Identifying Unseen High-Quality Demonstrations from Familiar Demonstrators: comparison of methods to identify high-quality demonstrations in an unseen dataset. The right table presents the quality categorization of the selected demonstrations together with the best 50 demonstrations from Dknown, while the left displays the imitation learning policy's success rate when trained on the combined set of best-known and selected demonstrations.

L2D can identify high-quality demonstrations from unseen demonstrators. In a practical situation, a policy may be trained on data provided by previously unseen demonstrators. This is challenging because each demonstrator may possess a unique style of execution and a diverse range of demonstration quality. We evaluated L2D in this setting by training it and the baselines on Dknown and then using them to filter the top 50 demonstrations from Dunknown. Table 3 shows that our proposed method L2D significantly outperforms both contrastive learning and preference learning methods, which in turn outperform naive sampling.
In comparison with the ELICIT method [16], which learns only from high-quality data, L2D demonstrates superior performance, suggesting that it is more adaptable to variation in demonstration quality and demonstrator style. Notably, the success rate achieved by our method is only marginally inferior to that of the oracle, further supporting our claim that L2D can effectively discern high-quality demonstrations from unfamiliar demonstrators.

Method        Success Rate      Method        Good    Okay    Bad
Naive         0.20              Naive         18/50   16/50   16/50
Contrastive   0.36              Contrastive   23/50   20/50   7/50
Preference    0.38              Preference    27/50   20/50   3/50
Elicit        0.38              Elicit        23/50   20/50   7/50
Our Method    0.44              Our Method    39/50   8/50    3/50
Oracle        0.46              Oracle        50/50   0/50    0/50

Table 3: Identifying Unseen High-Quality Demonstrations from New Demonstrators: a comparison of methods to discern top demonstrations from an unfamiliar source. The left table displays the imitation learning policy's success rates, while the right categorizes the quality of the demonstrations selected. L2D's performance approaches that of the oracle, highlighting its effectiveness in recognizing quality across unfamiliar demonstrator styles.

5.3 Learning from Real-World Demonstrations

We evaluate the ability of L2D to estimate the quality of demonstrations in realistic scenarios, as compared to controlled simulated environments. We gathered new human demonstrations for tasks such as Square Nut Assembly in simulated environments, while Lifting and Stacking were demonstrated in real-world environments using actual robots, all performed by a diverse group of operators. These operators performed independently, without leveraging knowledge from others' demonstrations. We then trained our method on a subset of these demonstrations and used it to filter the unseen dataset for high-quality demonstrations.

Figure 5: Demo Quality Estimation. We show the predicted quality for real-world demonstrations of the Stack task. Trajectory segments with less optimal behaviors (e.g., jittering or waving) are assigned lower scores (marked with red bounding boxes).
Additionally, our method assumes a clear distinction betweenhigh and low-quality demonstrations, but the line separating the two is often blurred, dependingon the complexity of the task and the specificity of the demonstrator’s style. Addressing theselimitations could lead to even better-learned representations.7 ConclusionWe presented Learning to Discern (L2D), a novel method for efficiently estimating expertise fromheterogeneous demonstration data in IL without access to the environment and reward signals. L2Demploys sequence-level latent features and temporal embeddings to learn a robust latent represen-tation of complex tasks. With a quality evaluator trained in this latent space, our method can gen-eralize across varied task completion styles and identify high-quality demonstrations, even fromnew, unseen modes of expertise. This study demonstrates the potential of L2D to filter high-qualitydemonstrations from vast and diverse datasets, marking a significant advancement in the field ofdata-centric robotics.8References[1] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review,and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.[2] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrationsfor robot manipulation. arXiv preprint arXiv:2108.03298, 2021.[3] S. Emmons, B. Eysenbach, I. Kostrikov, and S. Levine. Rvs: What is essential for offline rl viasupervised learning? arXiv preprint arXiv:2112.10751, 2021.[4] A. Kumar, J. Hong, A. Singh, and S. Levine. When should we prefer offline reinforcementlearning over behavioral cloning? arXiv preprint arXiv:2204.05618, 2022.[5] D. Ghosh, A. Ajay, P. Agrawal, and S. Levine. Offline rl policies should be trained to beadaptive. In International Conference onMachine Learning, pages 7513–7530. PMLR, 2022.[6] M. Beliaev, A. Shih, S. Ermon, D. Sadigh, and R. Pedarsani. Imitation learning by estimatingexpertise of demonstrators. In International Conference onMachine Learning, pages 1732–1748. PMLR, 2022.[7] D. S. Brown, W. Goo, and S. Niekum. Better-than-demonstrator imitation learning viaautomatically-ranked demonstrations. In Proceedings ofthe3rdConference onRobotLearning, 2019.[8] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXivpreprint arXiv:2212.06817, 2022.[9] S. Cabi, S. G. Colmenarejo, A. Novikov, K. Konyushkova, S. Reed, R. Jeong, K. Zolna, Y . Ay-tar, D. Budden, M. Vecerik, et al. Scaling data-driven robotics with reward sketching and batchreinforcement learning. arXiv preprint arXiv:1909.12200, 2019.[10] A. Mandlekar, Y . Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta,E. Orbay, et al. Roboturk: A crowdsourcing platform for robotic skill learning through imita-tion. In Conference onRobot Learning, pages 879–893. PMLR, 2018.[11] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Mart ́ın-Mart ́ın. What matters in learning from offline human demonstrationsfor robot manipulation. In Conference onRobot Learning (CoRL), 2021.[12] A. Mandlekar, D. Xu, R. Mart ́ın-Mart ́ın, S. Savarese, and L. Fei-Fei. Learning to generalizeacross long-horizon tasks from human demonstrations. Robotics: Science andSystems, 2020.[13] N. M. Shafiullah, Z. Cui, A. A. 
Altanzaya, and L. Pinto. Behavior transformers: Cloning kmodes with one stone. Advances inneural information processing systems, 35:22955–22968,2022.[14] C. Chi, S. Feng, Y . Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy:Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137, 2023.[15] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mor-datch, and J. Tompson. Implicit behavioral cloning. In Conference onRobot Learning, pages158–168. PMLR, 2022.[16] K. Gandhi, S. Karamcheti, M. Liao, and D. Sadigh. Eliciting compatible demonstrations formulti-human imitation learning. In K. Liu, D. Kulic, and J. Ichnowski, editors, ProceedingsofThe6thConference onRobot Learning, volume 205 of Proceedings ofMachine LearningResearch, pages 1981–1991. PMLR, 14–18 Dec 2023. URL https://proceedings.mlr.press/v205/gandhi23a.html .9[17] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning fromdemonstration. Robotics andautonomous systems, 57(5):469–483, 2009.[18] D. A. Pomerleau. Efficient training of artificial neural networks for autonomous navigation.Neural computation, 3(1):88–97, 1991.[19] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured predic-tion to no-regret online learning. In Proceedings ofthefourteenth international conference onartificial intelligence andstatistics, pages 627–635. JMLR Workshop and Conference Proceed-ings, 2011.[20] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control viapolicy optimization. In International conference onmachine learning, pages 49–58. PMLR,2016.[21] J. Ho and S. Ermon. Generative adversarial imitation learning. Advances inneural informationprocessing systems, 29, 2016.[22] Y . Ding, C. Florensa, P. Abbeel, and M. Phielipp. Goal-conditioned imitation learning.Advances inneural information processing systems, 32, 2019.[23] S. Zhang, Z. Cao, D. Sadigh, and Y . Sui. Confidence-aware imitation learning from demon-strations with varying optimality. Advances inNeural Information Processing Systems, 34:12340–12350, 2021.[24] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings oftheeighthACM SIGKDD international conference onKnowledge discovery anddatamining, pages 133–142, 2002.[25] J. F ̈urnkranz and E. H ̈ullermeier. Pairwise preference learning and ranking. In MachineLearning: ECML 2003: 14th European Conference onMachine Learning, Cavtat-Dubrovnik,Croatia, September 22-26, 2003. Proceedings 14, pages 145–156. Springer, 2003.[26] W. Chu and Z. Ghahramani. Preference learning with gaussian processes. In Proceedings ofthe22nd international conference onMachine learning, pages 137–144, 2005.[27] N. Houlsby, F. Husz ́ar, Z. Ghahramani, and M. Lengyel. Bayesian active learning for classifi-cation and preference learning. arXiv preprint arXiv:1112.5745, 2011.[28] K. Lee, L. Smith, and P. Abbeel. Pebble: Feedback-efficient interactive reinforcement learningvia relabeling experience and unsupervised pre-training. arXiv preprint arXiv:2106.05091,2021.[29] C. Kim, J. Park, J. Shin, H. Lee, P. Abbeel, and K. Lee. Preference transformer: Modelinghuman preferences using transformers for rl. arXiv preprint arXiv:2303.00957, 2023.[30] P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei. Deep reinforcementlearning from human preferences. Advances inneural information processing systems, 30,2017.[31] D. S. Brown, W. Goo, P. Nagarajan, and S. Niekum. 
Extrapolating beyond suboptimaldemonstrations via inverse reinforcement learning from observations. In K. Chaudhuri andR. Salakhutdinov, editors, Proceedings ofthe36th International Conference onMachineLearning, volume 97 of Proceedings ofMachine Learning Research, pages 783–792, LongBeach, California, USA, 09–15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/brown19a.html .[32] C. Wirth, R. Akrour, G. Neumann, J. F ̈urnkranz, et al. A survey of preference-based reinforce-ment learning methods. Journal ofMachine Learning Research, 18(136):1–46, 2017.10[33] L. Chen, R. Paleja, and M. Gombolay. Learning from suboptimal demonstration via self-supervised reward regression. In Conference onrobot learning, pages 1262–1277. PMLR,2021.[34] S. Nair, A. Rajeswaran, V . Kumar, C. Finn, and A. Gupta. R3m: A universal visual represen-tation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022.[35] M. Laskin, K. Lee, A. Stooke, L. Pinto, P. Abbeel, and A. Srinivas. Reinforcement learningwith augmented data. Advances inneural information processing systems, 33:19884–19895,2020.[36] S. Nair, S. Savarese, and C. Finn. Goal-aware prediction: Learning to model what matters. InInternational Conference onMachine Learning, pages 7207–7219. PMLR, 2020.[37] R. Jonschkowski and O. Brock. Learning state representations with robotic priors.Autonomous Robots, 39:407–428, 2015.[38] M. Du, S. Nair, D. Sadigh, and C. Finn. Behavior retrieval: Few-shot imitation learning byquerying unlabeled datasets. arXiv preprint arXiv:2304.08742, 2023.[39] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. the method ofpaired comparisons. Biometrika, 39(3/4):324–345, 1952.[40] B. Ibarz, J. Leike, T. Pohlen, G. Irving, S. Legg, and D. Amodei. Reward learning from humanpreferences and demonstrations in atari. Advances inneural information processing systems,31, 2018.[41] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980, 2014.11A Additional Ablation StudiesTo further investigate the effect of different architectural components and design choices in our pro-posed method, L2D, we focus specifically on the roles of S2 contrastive learning and data augmen-tation. The experimental setup is the same as described in Section 5: identification of high-qualitydemonstrations from unseen demonstrator experiment on the Square task. We use LSTM+Self At-tention architecture with CLS token for contrastive learning followed by a simple MLP encoderfor preference learning. The baseline for this experiment does not use S2 contrastive learning ordata augmentation. We study the following design choices of L2D: (1) S2.Initial: Initial trajectorysegment similarity; (2) S2.Final: Final trajectory segment similarity; and (3) Data augmentations:location-sensitive time warping for segments used in S2 contrastive. As done in Section 5.1, we sys-tematically examine the increase in separability of demonstration quality from new demonstratorsby measuring the Wasserstein distances between trajectories of different qualities under differentconfigurations. Higher distance indicates better separability and quality of learned representations.S2.Initial S2.Final Data Aug G vs O G vs B O vs B Sum (↑)No No No 0.12 0.24 0.36 0.72Yes No No 0.19 0.28 0.4 0.87Yes No Yes 0.16 0.4 0.36 0.92Yes Yes Yes 0.28 0.48 0.48 1.24Table 5: Impact of Trajectory Segment Similarity and Data Augmentation . 
The ablation studydemonstrates the value added by incorporating initial and final trajectory segment similarity usingS2 contrastive and location-sensitive time warping into our method. Each component enhances theseparability of demonstration quality from new demonstrators, leading to superior representationsand increased total Wasserstein distance.Table 5 shows that as we integrate each element of L2D, the total Wasserstein distance increases,indicating improved separability of demonstration quality. The full configuration with both formsof trajectory similarity and time-warping augmentation achieves the highest total distance. This val-idates the effectiveness of L2D’s design choices in improving the quality of learned representations.B Real Robot DemonstrationsFigure 6: Real Robot Execution. We show the rollout results of the policies trained with thedemonstrations selected by our method and naive sampling for the Stack task.We collected physical robot demonstrations for the Lift and Stack tasks and simulated demonstra-tions for the Square task to provide a more realistic setting for evaluating our method. The Lift taskhad 60 demonstrations in the training set Dknown and 60 demonstrations in the test set Dunknown . TheSquare and Stack tasks had 100 demonstrations each in Dknown andDunknown . The labeled demon-stration data had three quality types: good, okay, and bad. The real robot demonstration data washigh-dimensional, containing camera images, gripper position, end-effector pose, joint positions,and joint velocities of the robot arm. After selecting high-quality demonstrations using our L2Dapproach and baseline approaches, we trained an imitation learning (IL) policy on the best demon-strations from Dknown combined with the selected demonstrations from Dunknown . We report thesuccess rate of the trained IL policy on the real robot task.For the Stack task, we compare the performance of our method with naive sampling. Our resultsshow that policies learned from higher-quality demonstrations exhibit robust and efficient task com-pletion, achieving an 80% success rate over 10 trials. Conversely, policies learned from demon-12strations selected through naive sampling occasionally display peculiar behaviors, such as wavingaround the target position or dropping the grasped cube at unsafe heights. Consequently, thesebehaviors contribute to a lower success rate of 50%. Our results demonstrate that the quality ofdemonstrations plays a crucial role in the performance of learned policies on real robots. 
By lever-aging our L2D approach to identify high-quality demonstrations, we can train policies that achievemore robust and efficient task completion.C L2D HyperparametersHyperparameter Defaultdiscriminator learning rate 0.0001label noise forQ 0.1 [31]discriminator num segments 20000discriminator segment length 48discriminator batch size 128discriminator training steps 500000initial trajectory segment len 12final trajectory segment len 6useposencoding Truediscriminator embedding size 12Architecture [LSTM,2 MLPs,Flatten(), 2 MLPs][2MLPs]Table 6: Hyperparameters used for training L2D to perform Robomimic based ablations.D Imitation Learning Policy Training DetailsHyperparameter DefaultBatch Size 16Learning rate (LR) 1e-4Num Epoch 1000Train Seq Length 10MLP Dims [400, 400]Image Encoder - Wrist View ResNet-18Image Feature Dim 64GMM Num Modes 5GMM Min Std 0.01GMM Std Activation SoftplusTable 7: Hyperparameters for L2DWe choose the default BC-RNN model settingfrom Robomimic benchmark [11] for learningpolicies. For simulation tasks the agents onlyuse low-dimensional observation (e.g., robotproprioception, object poses), the epochs areset to 2000 with batch size 100, and we test theagents by running rollout with a maximal hori-zon of 500 in every 50 epochs. For real-worldtasks, we also include the RGB color observa-tions from the three cameras (two at the side,one at the robot’s wrist) and the epochs are setto 500. All networks are trained using Adamoptimizers [41].For the Robomimic Square task on Multi-Human Dataset experiments in Section 5, there is an equal 150-150 demonstration split. L2Dtrains on 150 known demonstrations and is tasked to select the demonstrations that belong to the“good” quality from the set of 150 unknown demonstrations. Familiar demonstrator experiment inTable 2, aims to test the IL policy’s performance when it has access to a combination of known andunknown demonstrations. Specifically, BC-RNN is provided with 100 demonstrations: the top 50fromDknown and another 50 chosen from Dunknown . For the unfamiliar demonstrator experiment inTable 3, BC-RNN trains exclusively on 50 demonstrations selected from Dunknown .Once we have filtered demonstrations using different methods (including proposed method and base-lines) and consistent heuristics across methods, we employ Behaviour Cloning for training. We useRNN-based behavioral cloning (BC-RNN) to evaluate algorithm performance using subsets basedon demonstrator quality as prior works show [6, 11, 16] it is effective at handling mixed quality data.13 |